title
listlengths 0
18
| author
listlengths 0
4.41k
| authoraffiliation
listlengths 0
6.45k
| venue
listlengths 0
9
| abstract
stringlengths 1
37.6k
| doi
stringlengths 10
114
⌀ | pdfurls
listlengths 1
3
⌀ | corpusid
int64 158
259M
| arxivid
stringlengths 9
16
| pdfsha
stringlengths 40
40
| text
stringlengths 66
715k
| github_urls
listlengths 0
36
|
---|---|---|---|---|---|---|---|---|---|---|---|
[
"Topic Ontologies for Arguments",
"Topic Ontologies for Arguments"
]
| [
"Yamen Ajjour [email protected] \nLebiniz University Hannover\nBauhaus-Universität Weimar\nBauhaus-Universität Weimar\nLeipzig University\n\n",
"Johannes Kiesel [email protected] \nLebiniz University Hannover\nBauhaus-Universität Weimar\nBauhaus-Universität Weimar\nLeipzig University\n\n",
"Benno Stein [email protected] \nLebiniz University Hannover\nBauhaus-Universität Weimar\nBauhaus-Universität Weimar\nLeipzig University\n\n",
"Martin Potthast [email protected] \nLebiniz University Hannover\nBauhaus-Universität Weimar\nBauhaus-Universität Weimar\nLeipzig University\n\n"
]
| [
"Lebiniz University Hannover\nBauhaus-Universität Weimar\nBauhaus-Universität Weimar\nLeipzig University\n",
"Lebiniz University Hannover\nBauhaus-Universität Weimar\nBauhaus-Universität Weimar\nLeipzig University\n",
"Lebiniz University Hannover\nBauhaus-Universität Weimar\nBauhaus-Universität Weimar\nLeipzig University\n",
"Lebiniz University Hannover\nBauhaus-Universität Weimar\nBauhaus-Universität Weimar\nLeipzig University\n"
]
| []
| Many computational argumentation tasks, like stance classification, are topic-dependent: the effectiveness of approaches to these tasks significantly depends on whether the approaches were trained on arguments from the same topics as those they are tested on. So, which are these topics that researchers train approaches on? This paper contributes the first comprehensive survey of topic coverage, assessing 45 argument corpora. For the assessment, we take the first step towards building an argument topic ontology, consulting three diverse authoritative sources: the World Economic Forum, the Wikipedia list of controversial topics, and Debatepedia. Comparing the topic sets between the authoritative sources and corpora, our analysis shows that the corpora topicswhich are mostly those frequently discussed in public online fora-are covered well by the sources. However, other topics from the sources are less extensively covered by the corpora of today, revealing interesting future directions for corpus construction. | 10.48550/arxiv.2301.09759 | [
"https://export.arxiv.org/pdf/2301.09759v1.pdf"
]
| 256,194,365 | 2301.09759 | cc1ff576441f6d090e91bb130175cb43e7cae0fd |
Topic Ontologies for Arguments
Yamen Ajjour [email protected]
Lebiniz University Hannover
Bauhaus-Universität Weimar
Bauhaus-Universität Weimar
Leipzig University
Johannes Kiesel [email protected]
Lebiniz University Hannover
Bauhaus-Universität Weimar
Bauhaus-Universität Weimar
Leipzig University
Benno Stein [email protected]
Lebiniz University Hannover
Bauhaus-Universität Weimar
Bauhaus-Universität Weimar
Leipzig University
Martin Potthast [email protected]
Lebiniz University Hannover
Bauhaus-Universität Weimar
Bauhaus-Universität Weimar
Leipzig University
Topic Ontologies for Arguments
Many computational argumentation tasks, like stance classification, are topic-dependent: the effectiveness of approaches to these tasks significantly depends on whether the approaches were trained on arguments from the same topics as those they are tested on. So, which are these topics that researchers train approaches on? This paper contributes the first comprehensive survey of topic coverage, assessing 45 argument corpora. For the assessment, we take the first step towards building an argument topic ontology, consulting three diverse authoritative sources: the World Economic Forum, the Wikipedia list of controversial topics, and Debatepedia. Comparing the topic sets between the authoritative sources and corpora, our analysis shows that the corpora topicswhich are mostly those frequently discussed in public online fora-are covered well by the sources. However, other topics from the sources are less extensively covered by the corpora of today, revealing interesting future directions for corpus construction.
Introduction
The term "topic" refers to a text's subject matter. A text can be about one or more topics; the relation underlying topics and texts is called "aboutness" (Yablo, 2014). Topics play a central role in argumentation, since they constrain or guide strategies and rhetorical devices by providing the accepted and expected universe of discourse. Also, the view of pragma-dialectics in argumentation emphasizes that argumentation is topic-dependent (van Eemeren, 2015): "The basic aspects of strategic maneuvering [. . . ] are making an expedient selection from the 'topical potential' available at a certain discussion." Though debaters often use commonplace arguments across topics , this is only possible for related topics: a black-market argument, for example, applies to topics like banning drugs or banning guns. When developing computational models to extract, analyze, or generate arguments, however, one should thus ensure a wide topic coverage in model training to improve the model's generalizability (e.g., as recently shown by Reuver et al., 2021).
A set of topics may be organized as a graph, sometimes called "topic space". Information theorists and library scientists map hierarchical topic relations within ontologies (Hjørland, 2001). Here, topics are labeled with a subject heading, i.e., a phrase from a controlled vocabulary which concisely and discriminatively describes a topic. Library ontologies are not designed with argumentation tasks in mind, but other ontology efforts specifically address argumentative topic spaces. We identified and harnessed three authoritative sources of ontologic knowledge that cover global issues, controversies, and popular debates: the World Economic Forum's "Strategic Intelligence" site, Wikipedia's list of controversial topics, and Debatepedia's debate classification system (cf. Section 4).
We contribute a comprehensive overview of argument corpora and their topic coverage as per the mentioned ontologies. The coverage of corpora that provide topic labels is manually assessed by aligning each label to the ontologies' topics, computing the proportion of ontology topics covered by a corpus, and the distribution of corpus arguments in an ontology. Our analyses show that ex-isting corpora focus on a subset of possible topics (cf. Section 5). For the corpora without topic labels, we categorize their argumentative texts by measuring the semantic relatedness of corpus documents to ontology topics. Given the 748 topic, this is a challenging classification, for which we achieve a remarkable F 1 of 0.59 (cf. Section 6). 1 Altogether, we lay the foundation for the study and systematic exploration of controversial topics within computational argumentation analysis. The identified authoritative resources already capture quite comprehensively their respective domains. Future work will have to extend our approach to other topic spaces, such as business, domestic, historic, and scientific argumentation spaces.
Related Work
Our review of related work focuses on the role of the variable "topic" in computational argumentation. Moreover, we briefly review topic ontologies and hierarchical topic classification.
Topics in Computational Argumentation
In computational argumentation, arguments are typically modeled as compositions of argument units, where an argument unit is represented as a span of text. Habernal and Gurevych (2016a) adopts Toulmin (1958)'s (1958) model, which defines six unit types, among which are "claim" and "data". Wachsmuth et al. (2017) employ a more basic model of two units, which defines an argument as a claim or conclusion supported by one or more premises. These models capture arguments without explicitly identifying the topic they address. consider claims to be topic-dependent and study their detection in the context of a random selection of 32 topics from idebate.org. This work raises the question why topic-dependence has not been addressed more urgently until now.
Key tasks for computational argumentation include the mining of arguments from natural language (Moens et al., 2007;Al-Khatib et al., 2016), classifying their stances with regard to a thesis (Bar-Haim et al., 2017), and analyzing which arguments are more persuasive (Tan et al., 2016;Habernal and Gurevych, 2016a). Current approaches to these tasks rely on supervised classification. Daxenberger et al. (2017) show that supervised classifiers fail to generalize across domains (∼ topics). More recently, Stab et al. (2018) tweak Bi-1 Anonymized data at https://zenodo.org/record/3928096. LSTM (Graves and Schmidhuberab, 2005) to integrate the topic while jointly detecting (1) whether a sentence is an argument and (2) its stance to the topic. The designed neural network outperforms BiLSTM without topic integration in both tasks; the approach gives further evidence for the topicdependence of argument mining and stance classification. Whether model transfer between more closely related topics works better is unknown. As a first step, Reuver et al. (2021) show that crosstopic stance-classification with BERT (Devlin et al., 2018) produces mixed results depending on the topics, but misses the relations between the topics. Gu et al. (2018) show that integrating the topic of an argument helps assessing its persuasiveness.
Topic plays a central role in argument retrieval and generation since it defines what arguments are relevant. Argument retrieval aims at delivering pro and con arguments on a given topic query. A major challenge in argument retrieval is the grouping of arguments that address common aspects of a topic. As shown by Reimers et al. (2019) and Ajjour et al. (2019a), integrating the topic is an important step while clustering arguments. For argument generation, introduce an approach that matches an input topic against a list of topics that are paired with sets of topic-adjustable commonplace arguments (e.g., black-market arguments). In a similar vein, Bar-Haim et al. (2019) identify consistent and contrastive topics for a given topic with the goal of expanding the topic in a new direction (e.g., fast food versus obesity). Both approaches show the merit of utilizing argument topic ontologies in argument generation. Perhaps only abstract argumentation can be conceived of as topic independent, since it studies the structure and relations among arguments more than their language.
Topic Ontologies
In information science, an ontology is defined as "an explicit specification of a conceptualization" (Gruber, 1993). Topic ontologies are a specific type of ontologies which specify topics as nodes of a directed acyclic graph. An edge in the graph then implies an "is part of"-relation between the topics (Xamena et al., 2017). The effort in creating topic ontologies ranges from ad-hoc decisions (e.g., tags for blog posts) to extensive classification schemes for libraries. The oldest classification scheme that is still used today in libraries is the Dewey Decimal Classification. It has been translated into over 30 languages, and it contains several tens of thousands of classes. Most topic ontologies focus on a specific domain, such as a the ACM Computing Classification System for computer science, or DMOZ for web pages. 2 The only topic ontology directly linked to arguments is that of Debatepedia.
Hierarchical Text Classification
Hierarchical text classification aims at classifying a document into a class hierarchy. Depending on how the hierarchical structure is exploited, classification can be done top-down (from higher classes downwards), bottom-up, or flat (ignoring hierarchical relations) (Silla and Freitas, 2011). Researchers usually train supervised classifiers for each class in the hierarchy (Sun and Lim, 2001).
Survey of Argument Corpora
To study arguments and computational argumentation tasks, researchers compile corpora with argumentative texts. To the best of our knowledge, Table 1 lists all corpora dedicated to argumentation to 2020. We review these corpora and their associated publications with regard to what are the sources of arguments, what is the granularity of the corpus, what is the size of the corpora in terms of their units, and which and how many different topics are covered in them. Reviewing all papers citing a corpus, we also analyzed how many experiments were carried out using them.
The most elaborate discussion of topic selection is given in Habernal and Gurevych (2016a), who chose six topics (homeschooling, public versus private schools, redshirting, prayers in schools, single sex education, mainstreaming) to focus on different education-related aspects. The broadest selection of topics is reported by the researchers of IBM Debater, 3 who obtain arguments from Wikipedia. The only other work mentioning their source of topics stems from Stab et al. (2018), who randomly select 8 topics from two lists of controversial topics that originate from an online library and the debate portal ProCon.org, respectively. Peldszus and Stede (2015) predefine a set of topics and give writers the freedom to choose which one to write about, but nothing is said about where the set of predefined topics originate from. Conard et al. (2012) and Hasan and Ng (2014) explicitly select topics (1 and 4, respectively). For all other corpora with topic labels, their authors do not argue on choosing topics, nor selection or sampling criteria. Neither do the authors of corpora without topic labels.
Altogether, it appears that the best practices in argumentation do not as of yet consider topic sampling as a prerequisite task to ensure coverage of a certain domain of interest, and diversity. Based on our review, we presume three basic topic selection directives are in use today: (1) Manual selection. Topics are manually defined or selected. Although the process may be random, when aiming for controversial topics, one may often end up with commonplace topics in Western culture (e.g., abortion, death penalty, gay marriage), despite them them being still relevant and important today.
(2) Sourcedriven (greedy within a time-span). A source of argument ground truth is either exploited in its entirety, or a maximum subset fulfilling desired properties is used. Since argument-related ground truth is hard to come by, it is understandable that all available sources are being exploited. (3) Sourcedriven (sampled). A source or argument ground truth is exploited and a subset is sampled. Here, it may be infeasible to exploit a source in its entirety. Al-Khatib et al. (2016b) randomly select 300 documents from three websites. Park and Cardie (2018) and do not mention anything about their sampling process. In general, both source-driven corpus construction approaches inevitably incurs the source's idiosyncracies of topic selection, both in terms of skew towards certain topics. Scaling up may or may not be a remedy for this problem.
We assess how many experiments have been reported on each of the corpora by collecting the publications referring to a corpus as per Google Scholar, focusing on conference and journal papers, but excluding books and web pages. We then check whether the cited corpus is mentioned in its data, experiment, or results section. As can be seen in Table 1, corpora with fewer topics tend to be used more often in experiments than those with larger amounts. In total, 82 experiments were carried out on argument corpora with no clearly defined topic selection directive. The skew towards smaller-scale experiments may affect generalizability.
Acquiring Argument Topic Ontologies
Topic ontologies provide for a knowledge organization principle, and, especially if widely accepted, also a standard. They are typically modeled as Table 1: Survey of argument corpora indicating data source, unit granularity, and size in terms of units and topics (if authors remarked on it). The unit granularity is the one in the corpus' files, using premises and conclusions as one unit each and the best context-preserving unit for corpora featuring multiple granularities. We presume these topic selection directives from the corpus description: either manual selection by the authors, or source-driven-i.e., the topics in the selected source(s)-from the units of a specific time-span or by random sampling. Experiments (Exp.) denotes the count of papers that use the corpus in an experiment among those papers that cite the corpus' paper. Global cooperation will be essential for developing the sort of resilience necessary to better deal with crises.
Multilateral cooperation will be necessary for a healthy global recovery.
In order to deliver on the Sustainable Development Goals, sufficient and stable tax revenue is necessary.
When properly done, abortion is one of the safest procedures in medicine People cannot know a God or prove the existence of a God.
World Economic Forum: "Strategic Intelligence" (excerpt) Wikipedia: "List of Controversial Issues" (excerpt)
Level 1
Level 2 Figure 1: Example for an assignment of arguments (bottom) to topics of a two-leveled ontology. Level 2 topics are subtopics of their linked Level 1 topics. Arguments linked to a Level 2 topic also pertain to its Level 1 upper-topics.
directed acyclic graphs, where nodes correspond to topics and edges indicate "is part of" relations; topics that are part of other topics are called their subtopics. A topic ontology is often displayed in levels, starting with the topics that are not subtopics of others, continuing recursively with each lower level of subtopics. Figure 1 shows an excerpt of a two-level topic ontology for arguments. The identification of the topics to be included in an argument topic ontology, as well as their relations, requires domain expertise. Building an all-encompassing ontology thus requires experts from every top-level domain where argumentation of scientific interest is expected. In the following, we suggest and outline three authoritative sources of relevant topic ontologies, which comprise a wide selection of important argumentative topics.
World Economic Forum (WEF)
The World Economic Forum is a not-for-profit foundation that coordinates organizations from both the public and the private sector to work on economical and societal issues. As part of their efforts, their "Strategic Intelligence" platform 4 strives to inform decision makers on domestic and global topics, specifically global issues (e.g., artificial intelligence and climate change), industries (e.g., healthcare delivery and private investors), and economies (e.g., Africa and ASEAN). Domain experts for each topic curate a stream of relevant news articles which they each tag with 4-9 subtopics of their topic (e.g., the continuous monitoring of mental health). Wikipedia Wikipedia strives for a neutral point of view, but many topics of public interest are discussed controversially. Some editors thus curate a list of such controversial articles to highlight where special care is needed, grouped into 14 toplevel topics (e.g., environment and philosophy) and 4-176 subtopics (e.g., creationism and pollution). 5 Omitted is the "People" topic and articles on countries; their controversiality is not universal. Debatepedia The debate portal's goal is to create an encyclopedia of debates which are organized as "pro" and "con" arguments. A list of 89 topics helps visitors to browse the debates. The debates are contributed by anonymous web users, which makes the covered topics easily accessible. Topics in Debatepedia tend to address issues of Western culture. For example, the topic "United States" covers 306 debates while "Third World" covers 12 debates. The project is no longer maintained, but can be accessed through the Wayback Machine. 6 The three ontologies are publicly accessible, and two of them are actively maintained and updated. Acquiring the ontologies is straightforward-not straightforward is to make use of them. A key task associated with every topic ontology is to categorize a given document. Having just a short string label describing a (potentially multifaceted) topic, such as "The Great Reset", renders this task exceedingly difficult. Fortunately, domain experts have been pre-categorizing documents into the aforementioned ontologies. In particular, regarding the WEF, invited domain experts categorize news articles for every topic, regarding Wikipedia, the text of the associated wiki articles is available, as are the associated debates for Debatepedia.
Articles that are categorized into Level 2 topics are propagated up to their respective Level 1 topics. Table 2 shows the large differences between the ontologies. The WEF ontology contains the most topics and links the most documents, which contain the most tokens overall. The topics at Wikipedia Level 2 are just linked to a single article each, so every topic's amount of text is smaller. The number of authors reflects the number of editors.
Topic Coverage
To assess the topic coverage of the argument corpora in light of the three ontologies, we map the topic labels of those corpora providing them to their matching ontology topics. Table 1 lists 31 argument corpora that provide topic labels. Altogether 2,117 different labels have been assigned. They are concise descriptions of the main issues of an argument and have been provided by the corpus authors. The labels follow the style of the text register of the respective corpus: In essays, for instance, topics are usually thesis statements, while Wikipedia-derived corpora use article titles, and the topics of debate corpora include clichés such as "This house should". Often, topic labels express a stance towards a target issue, e.g., "ban guns". Five types of topic labels can be distinguished: concept, comparison of concepts, conclusion (includes claim and thesis), question, and imperative. We normalize the topic labels by converting all concepts to singular form, removing clichés, and dropping stance-indicating words such as "legalize". Our normalization aims at retaining only the central target issue of a topic label and leads to 748 unique topic labels.
Topic Label Normalization
Mapping Topic Labels to Ontology Topics
Using the preprocessed topic labels as queries, we retrieve for each topic label the 50 top-most relevant topics in each level of the three ontologies. To facilitate the retrieval of ontology topics, we employ a BM25-weighted (Robertson et al., 2004) index of the concatenated documents for each topic. BM25 is a widely used modified version of TF-IDF (Croft et al., 2009). This enables us to narrow down the mapping of a topic label to a manageable size. Except for a handful of cases, 50 ontology topics can be retrieved for each topic label. The topic labels were then manually mapped to an ontology topic, if they form synonyms, or if the former is a subtopic of the latter-which thus indicates that all arguments in the corpus with that topic label are about the ontology topic. A topic label can thus be mapped to multiple ontology topics. For example, the topic label "plastic bottles" is mapped to "pollution" and "recycling" in Wikipedia Level 2. Table 2 shows general statistics of this mapping of corpora topic labels to ontology topics. Most of the topic labels (2,002 out of 2,117) are mapped to at least one Debatepedia topic while only 355 labels are mapped to WEF Level 2 topics. For Wikipedia Level 2, only 285 out of the 748 topics are actually covered by argument corpora. Already this first analysis suggests that existing argument corpora cover typically a small subset of possible argumentative topics that people are trained to debate. For those topic labels that can be mapped are mapped on average to 2.8 topics in Debatepedia, to 1.24 topics in Wikipedia Level 1, and to 1.48 topics in WEF Level 1. As discussed in Section 4, topics in Debatepedia focus on the Western culture and are easily accessible, whereas topics in WEF require deeper domain knowledge and have more global relevance. The broad coverage of Debatepedia's topics indicates that the studied argument corpora focus on common topics that are easily approachable while global issues or those that need domain knowledge lack coverage.
Analysis of Topic Coverage
For a more fine-grained analysis, Figure 2 illustrates the differences regarding the number of ontology topics covered by a corpus: while topics in Wikipedia Level 1 are covered well by some argument corpora, topics in Wikipedia and WEF Level 2 are covered only marginally. Note that topic coverage varies significantly between the corpora: the Claim Sentence Search dataset's topics cover 93% of the Wikipedia Level 1 topics, while the Ideological Debates Reasons dataset covers only 14%. The colors show the topic granularity of the corpus; especially the Record Debating Dataset 3 dataset is fine-grained: as the highest value, 36 of its topics are mapped to the Wikipedia Level 1 category "Politics and Economics". Figure 3 shows how the set of the units of the 31 labeled corpora distribute over the top matching topics in Debatepedia, Wikipedia Level 1, and WEF Level 1. Distributions over Level 2 are omitted for brevity and can be found in the supplementary material. The distribution is significantly skewed: while the top ten topics in Debatepdia are matched by 340k to 150k corpora units, the top ten topics in WEF Level 1 are matched by 340k to 20k corpora units. The comparison between the three ontologies supports our previous finding that argument corpora cover easily accessible topics (e.g., "Media and Entertainment" and "Society"). Figure 2: Proportion of ontology topics covered by at least n corpus topics (per ontology level and per corpus).
Unit Categorization
The previous analysis is done on those argument corpora which contain topic labels. About a third of the argument corpora are thus excluded from that analysis. As a step toward assessing their topic coverage, we map the ontology topics for a unit (cf. Table 1) in an argument corpus by treating the unit as a (long) query in a standard information retrieval setup, where ontology topics are the retrieval targets. The documents categorized into each topic have been concatenated and used as the topic's representation. Though the documents associated with a topic are not necessarily argumentative, they cover the salient aspects of the topic.
To retrieve topics for a corpus unit, we implement and evaluate the following approaches: Semantic Interpretation (SI) and SI with Text Embeddings (T2V-SI). The Semantic interpretation approach computes the semantic similarity of a unit and a topic as follows: it uses the cosine similarity of the TF-IDF vectors for the unit and the concatenated topic's documents. This corresponds to the semantic interpretation step that is at the core of the well-known ESA model (Gabrilovich and Markovitch, 2007). In Text2vec-SI, the similarity of topics and corpus units is calculated using BERT embeddings (Devlin et al., 2018). We follow the common approach to generate text embeddings, which is to take the dimension-wise average of the word embeddings for all tokens in the text. 7 As a baseline, we implement a direct match approach, which assigns a unit an ontology topic if the topic's text appears in the unit text (ignoring case).
For evaluation, we collect 34,638 pooled query relevance judgments (0.53 inter-annotator agreement as per Krippendorff's α) on 104 randomly selected argument units as queries from 26 corpora. The annotation process is detailed in the appendix.
Based on the similarity scores of the approach, we derive Boolean labels that indicate whether a unit is or is not about one of the ontologies' topics using two policies. The threshold policy labels a Table 2: Statistics for each topic ontology level: for topics and topic documents (Section 4), Count of mapped topic labels of the analyzed corpora for each ontology level, Count of all covered ontology topics by the topic labels and the min, max, and mean count of covered ontology topics per topic label (Section 5), and the effectiveness of the approaches and baseline in unit categorization (in terms of precision, recall, and F 1 -score) (Section 6).
unit as about a topic if their similarity is above a threshold θ. The top-k policy labels a unit as about a topic if the topic is among the top-k topics with the highest similarity to the unit. We report the parameter of policy with which the approach achieved the highest F 1 -score on the pooled judgments. Table 2 shows the results of this evaluation. The baseline produces different results across ontologies-it performs poorly for both the abstract topics in Wikipedia Level 1 and the specific topics in WEF Level 2. The semantic interpretation approach clearly outperforms the baseline for all ontologies in terms of the F 1 -score. The Text2vec approach outperforms the baseline and semantic interpretation on the most abstract topics (Wikipedia Level 1), but its performance is supbar to that of semantic interpretation on the other ontology levels.
Conclusion
The computational argumentation community faces a dilemma: Either carry on as before and risk topic bias, or go the extra mile to ensure topic represen-tativity and create extra work for corpus authors. The latter option is further complicated by the fact that the space of controversial topics is not well explored to date, and that there are no widely accepted argument topic ontologies as of yet. In this paper, we give a glimpse into a future in which the argument topic space has been mapped and is accessible to corpus construction and designing experiments: We identified three authoritative sources of ontologic knowledge with respect to argument topics. For each ontology, we reveal the topic coverage of 31 argument corpora that are provided with topic labels by aligning the topic labels of a corpus to the ontologies topics. To assess the topic coverage of non-labeled corpora, we introduce an approach that identifies the ontology topics of an argumentative text, reaching an F 1 of 0.59.
Our analyses show that the topic coverage of the studied argument corpora is both limited to only a subset of the ontologies topics and skewed. The majority of topics that require domain knowledge such as those on mental health, philosophy, or in-ternational security are marginally covered in the analyzed argument corpora. This renders existing argumentation technologies more suitable to teach people how to construct arguments than to support them taking decisions about complex topics. For a mature development of argumentation technology, a careful sampling and controlling of the topics should be employed while constructing corpora, designing experiments, or applying classifiers.
In future work, major tasks are to further explore the argument topic space regarding matters not yet covered by the existing topic ontologies and to unify the different ontologies. Besides "is part of"-relations between topics, other relation types may be considered as well, thereby inducing a topic knowledge base. However, already the work presented here can assist in selecting arguments for corpus construction and model training.
Figure 3 :
3Distribution of corpora units over the top matching topics in an ontology (31 labeled corpora).
https://dl.acm.org/ccs and https://dmoz-odp.org/ 3 https://www.research.ibm.com/haifa/dept/vst/debating_data.shtml
https://intelligence.weforum.org
https://en.wikipedia.org/wiki/Wikipedia:List_of_controversial_issues 6 https://web.archive.org/web/20180222051626/http://www.debatepedia.org/ en/index.php/Welcome_to_Debatepedia%21
For efficiency, we limited the embeddings to 10,000 randomly sampled sentences for the topics that had more sentences associated with them.
LimitationsThe three topic ontologies which we used to assess the topic coverage of argument corpora come from recognized sources and cover different domains. Nevertheless, these topic ontologies might not fully cover all possible controversial topics that are relevant to argumentation (e.g., those topics related to private life). Having said that, we do believe that our paper sets a cornerstone for studying topic bias in argument corpora, which researchers can extend.Another limitation of this study is the moderate effectiveness achieved by our approaches for unit categorization. Our approaches for unit categorization achieved moderate effectiveness because of the large space of controversial topics (about 742 for Wikipedia). Future research can improve upon our approach by utilizing the structure of the topic ontology using hierarchical classifiers. Hierarchical classifiers first map a document to one topic in the upper level and then consider only the subtopics of this topic for classification in the lower levels. In this way, the space of controversial topics in the lower levels can be largely reduced.AppendixCorpus Topic Labels Mapping to Level 2 TopicsFor completeness, weFigure 4show the two graphs that are omitted fromFigure 2of the paper as their fine-grained topics are less relevant for the discussion in Section 5.3.Annotation Procedure for Unit CategorizationIn order to assess the effectiveness of the approaches and baseline outlined in the paper, we employ a pooled evaluation, as it is standard for information retrieval evaluations, where there are too many instances for a complete manual annotation. We randomly sampled four units from 26 corpora, which were all annotated by three expert annotators. The annotators were instructed to label a topic as about the unit if they could imagine a discussion on the topic for which the unit would be relevant. For each unit, we annotated for aboutness only those topics which are among the five topics with the highest similarity to this unit according to at least one of the approaches. The employed assessment interface (seeFigure 5) shows the unit (top left), the current topic (top right), as well as all topics in the pool for that unit (bottom; the current topic is marked blue, whereas already annotated topics are marked green (about) and red (not about). The same interface has been used for the topic label annotations.To reduce biases, both the units and the topics were shown in a different and random order to each assessor. The annotation took about 40 hours. The annotation process resulted in an inter-annotator agreement of 0.53 in terms of Krippendorff's α and produced a total of 34,638 annotations of topic-unit pairs, about 2% of what would have been needed for a complete annotation.
Internet Argument Corpus 2.0: An SQL Schema for Dialogic Social Media and the Corpora to go with it. Rob Abbott, Brian Ecker, Pranav Anand, Marilyn Walker, Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16). the Tenth International Conference on Language Resources and Evaluation (LREC'16)European Language Resources Association (ELRARob Abbott, Brian Ecker, Pranav Anand, and Marilyn Walker. 2016. Internet Argument Corpus 2.0: An SQL Schema for Dialogic Social Media and the Cor- pora to go with it. In Proceedings of the Tenth In- ternational Conference on Language Resources and Evaluation (LREC'16), pages 4445-4452. European Language Resources Association (ELRA).
A Benchmark Dataset for Automatic Detection of Claims and Evidence in the Context of Controversial Topics. Ehud Aharoni, Anatoly Polnarov, Tamar Lavee, Daniel Hershcovich, Ran Levy, Ruty Rinott, Dan Gutfreund, Noam Slonim, Proceedings of the 2014 Workshop on Argumentation Mining. the 2014 Workshop on Argumentation MiningAssociation for Computational LinguisticsEhud Aharoni, Anatoly Polnarov, Tamar Lavee, Daniel Hershcovich, Ran Levy, Ruty Rinott, Dan Gut- freund, and Noam Slonim. 2014. A Benchmark Dataset for Automatic Detection of Claims and Ev- idence in the Context of Controversial Topics. In Proceedings of the 2014 Workshop on Argumenta- tion Mining (ArgMining 2014), pages 64-68. Asso- ciation for Computational Linguistics.
Modeling Frames in Argumentation. Yamen Ajjour, Milad Alshomary, Henning Wachsmuth, Benno Stein, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing. the 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language ProcessingACLYamen Ajjour, Milad Alshomary, Henning Wachsmuth, and Benno Stein. 2019a. Modeling Frames in Argu- mentation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Process- ing and 9th International Joint Conference on Natu- ral Language Processing (EMNLP 2019). ACL.
Data Acquisition for Argument Search: The args.me corpus. Yamen Ajjour, Henning Wachsmuth, Johannes Kiesel, Martin Potthast, Matthias Hagen, Benno Stein, 42nd German Conference on Artificial Intelligence. SpringerKI 2019Yamen Ajjour, Henning Wachsmuth, Johannes Kiesel, Martin Potthast, Matthias Hagen, and Benno Stein. 2019b. Data Acquisition for Argument Search: The args.me corpus. In 42nd German Conference on Ar- tificial Intelligence (KI 2019). Springer.
Cross-Domain Mining of Argumentative Text through Distant Supervision. Khalid Al-Khatib, Henning Wachsmuth, Matthias Hagen, Jonas Köhler, Benno Stein, 10.18653/v1/N16-1165Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2016). the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2016)Association for Computational LinguisticsKhalid Al-Khatib, Henning Wachsmuth, Matthias Ha- gen, Jonas Köhler, and Benno Stein. 2016. Cross- Domain Mining of Argumentative Text through Dis- tant Supervision. In Proceedings of the 2016 Con- ference of the North American Chapter of the As- sociation for Computational Linguistics: Human Language Technologies (NAACL 2016), pages 1395- 1404. Association for Computational Linguistics.
Cross-Domain Mining of Argumentative Text through Distant Supervision. Khalid Al-Khatib, Henning Wachsmuth, Matthias Hagen, Jonas Köhler, Benno Stein, Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2016). the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2016)Association for Computational LinguisticsKhalid Al-Khatib, Henning Wachsmuth, Matthias Ha- gen, Jonas Köhler, and Benno Stein. 2016a. Cross- Domain Mining of Argumentative Text through Dis- tant Supervision. In Proceedings of the 2016 Con- ference of the North American Chapter of the As- sociation for Computational Linguistics: Human Language Technologies (NAACL 2016), pages 1395- 1404. Association for Computational Linguistics.
A News Editorial Corpus for Mining Argumentation Strategies. Khalid Al-Khatib, Henning Wachsmuth, Johannes Kiesel, Matthias Hagen, Benno Stein, 26th International Conference on Computational Linguistics (COLING 2016). Association for Computational LinguisticsKhalid Al-Khatib, Henning Wachsmuth, Johannes Kiesel, Matthias Hagen, and Benno Stein. 2016b. A News Editorial Corpus for Mining Argumenta- tion Strategies. In 26th International Conference on Computational Linguistics (COLING 2016), pages 3433-3443. Association for Computational Linguis- tics.
Stance Classification of Context-Dependent Claims. Roy Bar-Haim, Indrajit Bhattacharya, Francesco Dinuzzo, Amrita Saha, Noam Slonim, Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2017). the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2017)Association for Computational LinguisticsRoy Bar-Haim, Indrajit Bhattacharya, Francesco Din- uzzo, Amrita Saha, and Noam Slonim. 2017. Stance Classification of Context-Dependent Claims. In Pro- ceedings of the 15th Conference of the European Chapter of the Association for Computational Lin- guistics (EACL 2017), pages 251-261. Association for Computational Linguistics.
From arguments to key points: Towards automatic argument summarization. Roy Bar-Haim, Lilach Eden, Roni Friedman, Yoav Kantor, Dan Lahav, Noam Slonim, 10.18653/v1/2020.acl-main.371Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL 2021). the 58th Annual Meeting of the Association for Computational Linguistics (ACL 2021)Association for Computational LinguisticsRoy Bar-Haim, Lilach Eden, Roni Friedman, Yoav Kantor, Dan Lahav, and Noam Slonim. 2020. From arguments to key points: Towards automatic argu- ment summarization. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics (ACL 2021), pages 4029-4039. Associa- tion for Computational Linguistics.
From Surrogacy to Adoption; From Bitcoin to Cryptocurrency: Debate Topic Expansion. Roy Bar-Haim, Dalia Krieger, Orith Toledo-Ronen, Lilach Edelstein, Yonatan Bilu, Alon Halfon, Yoav Katz, Amir Menczel, Ranit Aharonov, Noam Slonim, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019). the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019)Association for Computational LinguisticsRoy Bar-Haim, Dalia krieger, Orith Toledo-Ronen, Lilach Edelstein, Yonatan Bilu, Alon Halfon, Yoav Katz, Amir Menczel, Ranit Aharonov, and Noam Slonim. 2019. From Surrogacy to Adoption; From Bitcoin to Cryptocurrency: Debate Topic Expansion. In Proceedings of the 57th Annual Meeting of the As- sociation for Computational Linguistics (ACL 2019), pages 977-990. Association for Computational Lin- guistics.
Implementing the Argument Web. Floris Bex, John Lawrence, Mark Snaith, Chris Reed, Communications of the ACM. 56Floris Bex, John Lawrence, Mark Snaith, and Chris Reed. 2013. Implementing the Argument Web. Communications of the ACM, 56:66-73. Crawled in Jan, 2020.
Anael Malet, Assaf Gavron, and Noam Slonim. Yonatan Bilu, Ariel Gera, Danel Hershcovich, Benjamin Sznajder, Dan Lahav, Guy Moshkowich, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019). the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019)Association for Computational LinguisticsArgument Invention from First PrinciplesYonatan Bilu, Ariel Gera, Danel Hershcovich, Ben- jamin Sznajder, Dan Lahav, Guy Moshkowich, Anael Malet, Assaf Gavron, and Noam Slonim. 2019. Argument Invention from First Principles. In Proceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics (ACL 2019), pages 1013-1026. Association for Computational Linguistics.
Back up your stance: Recognizing arguments in online discussions. Filip Boltuzić, Jan Šnajder, Proceedings of the First Workshop on Argumentation Mining. the First Workshop on Argumentation MiningThe Association for Computational LinguisticsFilip Boltuzić and Jan Šnajder. 2014. Back up your stance: Recognizing arguments in online discus- sions. In Proceedings of the First Workshop on Ar- gumentation Mining, pages 49-58. The Association for Computational Linguistics.
Recognizing Arguing Subjectivity and Argument Tags. Alexander Conard, Janyce Wiebe, Rebecca Hwa, Proceedings of the Workshop on Extra-Propositional Aspects of Meaning in Computational Linguistics. the Workshop on Extra-Propositional Aspects of Meaning in Computational LinguisticsAlexander Conard, Janyce Wiebe, and Rebecca Hwa. 2012. Recognizing Arguing Subjectivity and Ar- gument Tags. In Proceedings of the Workshop on Extra-Propositional Aspects of Meaning in Compu- tational Linguistics (ExProM 2012), pages 80-88.
Bruce Croft, Donald Metzler, Trevor Strohman, Search Engines: Information Retrieval in Practice. USAAddison-Wesley1st editionBruce Croft, Donald Metzler, and Trevor Strohman. 2009. Search Engines: Information Retrieval in Practice, 1st edition. Addison-Wesley, USA.
What is the Essence of a Claim? Cross-Domain Claim Identification. Johannes Daxenberger, Steffen Eger, Ivan Habernal, Christian Stab, Iryna Gurevych, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. the 2017 Conference on Empirical Methods in Natural Language ProcessingAssociation for Computational LinguisticsJohannes Daxenberger, Steffen Eger, Ivan Habernal, Christian Stab, and Iryna Gurevych. 2017. What is the Essence of a Claim? Cross-Domain Claim Iden- tification. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017), pages 2045-2056. Association for Computational Linguistics.
BERT: pre-training of deep bidirectional transformers for language understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, abs/1810.04805CoRRJacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language under- standing. CoRR, abs/1810.04805.
Corpus wide argument mining -A working solution. Eyal Liat Ein-Dor, Lena Shnarch, Alon Dankin, Benjamin Halfon, Ariel Sznajder, Carlos Gera, Martin Alzate, Leshem Gleize, Yufang Choshen, Yonatan Hou, Ranit Bilu, Noam Aharonov, Slonim, The Thirty-Second Innovative Applications of Artificial Intelligence Conference. New York, NY, USA2020The Tenth AAAI Symposium on Educational Advances in Artificial IntelligenceLiat Ein-Dor, Eyal Shnarch, Lena Dankin, Alon Hal- fon, Benjamin Sznajder, Ariel Gera, Carlos Alzate, Martin Gleize, Leshem Choshen, Yufang Hou, Yonatan Bilu, Ranit Aharonov, and Noam Slonim. 2020. Corpus wide argument mining -A working solution. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty- Second Innovative Applications of Artificial Intelli- gence Conference, IAAI 2020, The Tenth AAAI Sym- posium on Educational Advances in Artificial Intel- ligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 7683-7691.
Unsupervised expressive rules provide explainability and assist human experts grasping new domains. Shnarch Eyal, Leshem Choshen, Guy Moshkowich, Ranit Aharonov, Noam Slonim, 10.18653/v1/2020.findings-emnlp.243Findings of the Association for Computational Linguistics: EMNLP 2020. Association for Computational LinguisticsShnarch Eyal, Leshem Choshen, Guy Moshkowich, Ranit Aharonov, and Noam Slonim. 2020. Unsuper- vised expressive rules provide explainability and as- sist human experts grasping new domains. In Find- ings of the Association for Computational Linguis- tics: EMNLP 2020, pages 2678-2697. Association for Computational Linguistics.
Computing semantic relatedness using wikipediabased explicit semantic analysis. Evgeniy Gabrilovich, Shaul Markovitch, IJCAI 2007, Proceedings of the 20th International Joint Conference on Artificial Intelligence. Hyderabad, IndiaEvgeniy Gabrilovich and Shaul Markovitch. 2007. Computing semantic relatedness using wikipedia- based explicit semantic analysis. In IJCAI 2007, Proceedings of the 20th International Joint Confer- ence on Artificial Intelligence, Hyderabad, India, January 6-12, 2007, pages 1606-1611.
Are You Convinced? Choosing the More Convincing Evidence with a Siamese Network. Martin Gleize, Eyal Shnarch, Leshem Choshen, Lena Dankin, Guy Moshkowich, Ranit Aharonov, Noam Slonim, Proceedings of the 2019 Annual Meeting of the Association for Computational Linguistics (ACL 2019). the 2019 Annual Meeting of the Association for Computational Linguistics (ACL 2019)Martin Gleize, Eyal Shnarch, Leshem Choshen, Lena Dankin, Guy Moshkowich, Ranit Aharonov, and Noam Slonim. 2019. Are You Convinced? Choos- ing the More Convincing Evidence with a Siamese Network. In Proceedings of the 2019 Annual Meet- ing of the Association for Computational Linguistics (ACL 2019), pages 967-976.
Framewise phoneme classification with bidirectional lstm and other neural network architectures. Alex Graves, Jürgen Schmidhuberab, Neural networks : the official journal of the International Neural Network Society. 18Alex Graves and Jürgen Schmidhuberab. 2005. Frame- wise phoneme classification with bidirectional lstm and other neural network architectures. Neural net- works : the official journal of the International Neu- ral Network Society, 18:602-10.
The workweek is the best time to start a family -a study of GPT-2 based claim generation. Shai Gretz, Yonatan Bilu, Edo Cohen-Karlik, Noam Slonim, 10.18653/v1/2020.findings-emnlp.47Findings of the Association for Computational Linguistics: EMNLP 2020. Online. Association for Computational LinguisticsShai Gretz, Yonatan Bilu, Edo Cohen-Karlik, and Noam Slonim. 2020. The workweek is the best time to start a family -a study of GPT-2 based claim gen- eration. In Findings of the Association for Compu- tational Linguistics: EMNLP 2020, pages 528-544, Online. Association for Computational Linguistics.
A translation approach to portable ontology specifications. Thomas R Gruber, Knowledge Acquisition. 5Thomas R. Gruber. 1993. A translation approach to portable ontology specifications. Knowledge Acqui- sition, 5:199-220.
Incorporating Topic Aspects for Online Comment Convincingness Evaluation. Yunfan Gu, Yhongyu Wei, Maoran Xu, Hao Fu, Yang Liu, Xuanjing Huang, Proceedings of the 5th Workshop on Argument Mining. the 5th Workshop on Argument MiningAssociation for Computational LinguisticsYunfan Gu, Yhongyu Wei, Maoran Xu, Hao Fu, Yang Liu, and Xuanjing Huang. 2018. Incorporating Topic Aspects for Online Comment Convincingness Evaluation. In Proceedings of the 5th Workshop on Argument Mining (ArgMining 2018), pages 97-104. Association for Computational Linguistics.
Argumentation Mining in User-Generated Web Discourse. Ivan Habernal, Iryna Gurevych, Computational Linguistics. Ivan Habernal and Iryna Gurevych. 2016a. Argumen- tation Mining in User-Generated Web Discourse. Computational Linguistics.
What makes a convincing argument? Empirical analysis and detecting attributes of convincingness in web argumentation. Ivan Habernal, Iryna Gurevych, Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. the 2016 Conference on Empirical Methods in Natural Language ProcessingAssociation for Computational LinguisticsIvan Habernal and Iryna Gurevych. 2016b. What makes a convincing argument? Empirical analysis and detecting attributes of convincingness in web argumentation. In Proceedings of the 2016 Con- ference on Empirical Methods in Natural Language Processing, pages 1214-1223. Association for Com- putational Linguistics.
Which argument is more convincing? Analyzing and predicting convincingnessof Web arguments using bidirectional LSTM. Ivan Habernal, Iryna Gurevych, Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016). the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016)Association for Computational LinguisticsIvan Habernal and Iryna Gurevych. 2016c. Which ar- gument is more convincing? Analyzing and predict- ing convincingnessof Web arguments using bidirec- tional LSTM. In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics (ACL 2016), pages 1589-1599. Association for Computational Linguistics.
Yes, we can! Mining Arguments in 50 Years of US Presidential Campaign Debates. Shohreh Haddadan, Elena Cabrio, Serena Villata, Proceedings of the 2019 Annual Meeting of the Association for Computational Linguistics (ACL 2019). the 2019 Annual Meeting of the Association for Computational Linguistics (ACL 2019)Shohreh Haddadan, Elena Cabrio, and Serena Villata. 2019. Yes, we can! Mining Arguments in 50 Years of US Presidential Campaign Debates. In Proceed- ings of the 2019 Annual Meeting of the Association for Computational Linguistics (ACL 2019), pages 4684-4690.
Why are You Taking this Stance? Identifying and Classifying Reasons in Ideological Debates. Saidul Kazi, Vincent Hasan, Ng, Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. the 2014 Conference on Empirical Methods in Natural Language ProcessingKazi Saidul Hasan and Vincent Ng. 2014. Why are You Taking this Stance? Identifying and Classifying Rea- sons in Ideological Debates. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), pages 751- 762.
Towards a theory of aboutness, subject, topicality, theme, domain, field, content . . . and relevance. Birger Hjørland, 10.1002/asi.1131Journal of the American Society for Information Science and Technology. 529Birger Hjørland. 2001. Towards a theory of aboutness, subject, topicality, theme, domain, field, content . . . and relevance. Journal of the American Society for Information Science and Technology, 52(9):774- 778.
An argument-annotated corpus of scientific publications. Anne Lauscher, Goran Glavaš, Simone Paolo Ponzetto, Proceedings of the 5th Workshop on Argument Mining. the 5th Workshop on Argument MiningAssociation for Computational LinguisticsAnne Lauscher, Goran Glavaš, and Simone Paolo Ponzetto. 2018. An argument-annotated corpus of scientific publications. In Proceedings of the 5th Workshop on Argument Mining, pages 40-46. Asso- ciation for Computational Linguistics.
Unveiling the formation of the massive DR21 ridge

L. Bonne (1), S. Bontemps (2), N. Schneider (3), R. Simon (3), S. D. Clarke (4), T. Csengeri (2), E. Chambers (1), U. Graf (3), J. M. Jackson (1,5), R. Klein (1), Y. Okada (3), A. G. G. M. Tielens (6,7), M. Tiwari (6)

(1) SOFIA Science Center, NASA Ames Research Center, Moffett Field, CA 94045, USA
(2) Laboratoire d'Astrophysique de Bordeaux, Université de Bordeaux, CNRS, B18N, allée Geoffroy Saint-Hilaire, 33615 Pessac, France
(3) I. Physikalisches Institut, Universität zu Köln, Zülpicher Str. 77, 50937 Köln
(4) Institute of Astronomy and Astrophysics, Academia Sinica, Taipei, Taiwan
(5) Green Bank Observatory, PO Box 2, Green Bank, WV 24944, USA
(6) Department of Astronomy, University of Maryland, College Park, MD 20742-2421, USA
(7) Leiden Observatory, PO Box 9513, 2300 RA Leiden, The Netherlands

Draft version May 16, 2023. Typeset using LaTeX twocolumn style in AASTeX631.

Keywords: ISM: individual objects: DR21 - ISM: kinematics and dynamics - ISM: clouds - stars: massive - stars: formation
We present new 13 CO(1-0), C 18 O(1-0), HCO + (1-0) and H 13 CO + (1-0) maps from the IRAM 30m telescope, and a spectrally resolved [C II] 158 µm map observed with the SOFIA telescope, towards the massive DR21 cloud. This traces the kinematics from low- to high-density gas in the cloud, which allows us to constrain the formation scenario of the high-mass star forming DR21 ridge. The molecular line data reveal that the sub-filaments are systematically redshifted relative to the dense ridge. We demonstrate that [C II] unveils the surrounding CO-poor gas of the dense filaments in the DR21 cloud. We also show that this surrounding gas is organized in a flattened cloud with curved, redshifted dynamics perpendicular to the ridge. The sub-filaments thus form in this curved and flattened mass reservoir. A virial analysis of the different lines indicates that self-gravity should drive the evolution of the ridge and the surrounding cloud. Combining all results, we propose that bending of the magnetic field, due to the interaction with a mostly atomic colliding cloud, explains the velocity field and the resulting mass accretion on the ridge. This is remarkably similar to what was found for at least two nearby low-mass filaments. We tentatively propose that this scenario might be a widespread mechanism to initiate star formation in the Milky Way. However, in contrast to low-mass clouds, gravitational collapse plays a role on the pc scale of the DR21 ridge because of the higher density. This allows more effective mass collection at the centers of collapse and should facilitate massive cluster formation.
INTRODUCTION
The formation of massive stars is still a matter of intense debate (e.g. Zinnecker & Yorke 2007; Tan et al. 2014; Motte et al. 2018a). The very short, or potentially non-existent, massive prestellar core phase (e.g. Motte et al. 2007, 2010; Russeil et al. 2010; Csengeri et al. 2014; Tigé et al. 2017; Sanhueza et al. 2019) suggests that high-mass stars might not form from the quasistatic evolution toward collapse of individual prestellar cores. This opens the question of how massive stars can form. From observations of molecular clouds that form massive stars it was proposed that some large-scale dynamics, partly driven by self-gravity, can provide the fast concentration of mass necessary to form massive stars (e.g. Peretto et al. 2006, 2013, 2014; Hartmann & Burkert 2007; Schneider et al. 2010, 2015; Csengeri et al. 2011a,b; Galván-Madrid et al. 2010; Wyrowski et al. 2012, 2016; Beuther et al. 2015; Williams et al. 2018; Jackson et al. 2019; Bonne et al. 2022a). This probable coupling to the molecular cloud dynamics is also put forward by several theoretical models. In the competitive accretion model (Bonnell et al. 2001, 2004; Bonnell & Bate 2006; Wang et al. 2010), the gravitational pull of low-mass protostellar cores drives the mass accretion from the surrounding cloud to determine their final mass. The fragmentation-induced starvation scenario proposes that massive stars form through gravitational fragmentation at the center of dense filamentary accretion flows (Peters et al. 2010, 2011; Girichidis et al. 2012), where the fragmentation in the inflowing filaments sets a limit on the accretion of the most massive stars at the center of the accretion flow. Another variant proposes that gravitational collapse on multiple scales after the thermal instability (Vázquez-Semadeni et al. 2009, 2019) drives the required dynamics and mass concentration responsible for the formation of massive stars, which is then halted by stellar feedback. From simulations it was also proposed that compression by the collision of atomic flows or fully developed molecular clouds at high velocities can form dense filamentary structures that host massive star formation (e.g. Dobbs et al. 2012, 2020; Inoue & Fukui 2013; Wu et al. 2015; Balfour et al. 2017; Bisbas et al. 2017). The velocity and density of the H I flows or molecular clouds would then be decisive to determine whether massive stars can form. Typically, collision velocities >10 km s −1 would be needed to form massive stars (Haworth et al. 2015; Dobbs et al. 2020). Based on observed bridging with CO lines between separated velocity components in several clouds, it has been proposed that cloud-cloud collisions (CCCs) might play a role in high-mass star formation (e.g. Bisbas et al. 2018; Fukui et al. 2021; Lim et al. 2021). From simulations, it was also proposed that an oblique shock associated with collision velocities above 7 km s −1 could bend the magnetic field around the filaments and drive subsequent mass inflow that enables high-mass star formation (Inoue et al. 2018; Abe et al. 2021).

Figure 1. [...] (Hennemann et al. 2012) of the DR21 cloud (ridge and surrounding sub-filaments). The yellow and red polygons indicate the area covered by the two different backend setups of the IRAM 30m observations, i.e. setup 1 (yellow) and setup 2 (red). The southern sub-filament is only covered by the first setup, which contains HCO + (1-0) and H 13 CO + (1-0). The crosses indicate the location of the spectra displayed in the bottom and top panels. The white circles are centered on the DR21(OH) clump and indicate radii of 0.5, 1.5, 3 and 4 pc at a distance of 1.4 kpc, which will provide reference for the virial analysis presented in Sect. 4. Middle right: The red box outlines the area that we use from the available SOFIA [C II] observations, which cover a significantly larger region in Cygnus-X North (Schneider et al. 2023). The nomenclature indicated for the sub-filaments is adopted from Schneider et al. (2010); Hennemann et al. (2012): N (north), F1N, F1S, F3N, F3S, SW (south-west), S (south). Bottom and top: 13 CO(1-0), C 18 O(1-0), HCO + (1-0), H 13 CO + (1-0) and [C II] spectra from the locations indicated in the left middle panel.
Analyzing observations of the low-mass Musca filament and the surrounding Chamaeleon-Musca regions, Bonne et al. (2020a,b) concluded that continuous mass accretion on the Musca filament was driven by such bending of the magnetic field due to a 50 pc scale collision at ∼ 7 km s −1 between H I clouds in the Chamaeleon-Musca complex. This mechanism was proposed for the formation of high-mass star forming filaments (Inoue et al. 2018) but might also be applicable to low-mass star forming filaments. Specifically, the Musca filament would be the result of a turbulent overdensity that is compressed and guided by the bent magnetic field during the interaction with more diffuse gas in the colliding cloud. Interestingly, Faraday rotation measurements in the radio domain, which trace the magnetic field properties in the interstellar medium (ISM), unveiled curved magnetic fields around several nearby low- to intermediate-mass star forming regions (Tahani et al. 2019). They also unveiled a correlation of the cold (CNM) and lukewarm (LNM) neutral medium with the magnetic field structure in the diffuse ISM (Bracco et al. 2020). These results would have a straightforward explanation in the proposed scenario of colliding H I clouds that bend the magnetic field. Furthermore, recent observations of the massive star forming cloud NGC 6334 found kinematic structure resembling the results in Musca (Arzoumanian et al. 2022).

Table 1. [...] For the atomic and molecular lines, the line transition wavelength (λ) and frequency (ν) are given in columns 3 and 4, respectively. The effective velocity resolution is indicated in column 5 and the angular resolution of the original data is displayed in column 6.
In this paper, we present a multi-wavelength study of the DR21 cloud in molecular and atomic lines and dust continuum to follow up on the link between molecular cloud evolution and the formation of dense filamentary structures. Specifically, after focusing on Musca, we now focus on a region where massive stars are presently forming. The DR21 cloud is located in the north of the Cygnus-X molecular cloud complex (Reipurth & Schneider 2008) at an estimated distance of ∼ 1.4 kpc (Rygl et al. 2012). This cloud hosts the DR21 ridge, a filamentary structure with a length of ∼ 4 pc and a mass of ∼ 10 4 M⊙ that is typically defined by high column densities, i.e., N H2 > 10 23 cm −2 (Hennemann et al. 2012). The DR21 ridge is the densest and most massive filament in the entire Cygnus-X complex and one of the most active high-mass star forming regions within 2 kpc from the Sun (Kumar et al. 2007; Bontemps et al. 2010; Beerer et al. 2010; Csengeri et al. 2011a,b; Duarte-Cabral et al. 2013). In this paper, we will use the term ridge for the inner, high-column density part (N H2 > 10 23 cm −2 ) of the molecular cloud (Hill et al. 2011, 2012), sub-filaments for the lower density filaments connected to the ridge, and cloud for the parsec scale surrounding cloud that includes the ridge.
Detailed kinematic studies of the DR21 region by Schneider et al. (2010) demonstrated that the ridge is experiencing large scale gravitational collapse and mass accretion by sub-filaments that run parallel to the magnetic field (Vallée & Fiege 2006; Ching et al. 2022). Figure 1 shows the DR21 cloud and ridge (N H2 > 10 23 cm −2 ) and the sub-filaments defined in Schneider et al. (2010) and Hennemann et al. (2012). It is also proposed that the DR21 cloud is part of the Cygnus-X complex, which forms a single region with multiple velocity components (Schneider et al. 2006, 2007). This was later confirmed by Rygl et al. (2012) and provided arguments that DR21 is the result of a collision between two molecular clouds with velocity components at ∼ -3 km s −1 and ∼ 9 km s −1 (Dickel et al. 1978; Dobashi et al. 2019). This collision is revisited and discussed in Schneider et al. (2023) using new insight from the SOFIA [C II] data.
In this paper, we aim to establish the main processes, i.e. gravity, magnetic field and turbulence, that drive the evolution of this massive star forming molecular cloud from low-density (n H2 ≲ 10 3 cm −3 ) gas in the surrounding cloud to high-density (n H2 > 10 5 cm −3 ) gas in the ridge. In Sect. 2, we give observational details. Section 3 presents the observational results of the DR21 cloud, followed by the inflow and virial analysis of these observational results in Sect. 4. Lastly, in Sect. 5 we put these results into context to propose a scenario for the DR21 cloud evolution and discuss how it compares to low-mass star forming clouds.
OBSERVATIONS
The observations tracing different density regimes in the DR21 cloud are described in the following subsections and are summarized in Table 1.
IRAM 30m observations
Observations with the IRAM 30m telescope of the DR21 region (surrounding cloud, ridge and sub-filaments) were carried out in February 2015. These observations were performed with two different spectral setups using the FTS50 configuration of the EMIR receiver. Setup 1, which contains HCO + (1-0) and H 13 CO + (1-0), covers the spectral ranges ∼ 85.2 GHz - 87.2 GHz, 88.5 GHz - 90.5 GHz, 101.0 GHz - 102.8 GHz and 104.2 GHz - 106.0 GHz. Setup 2, which contains 13 CO(1-0) and C 18 O(1-0), covers the spectral ranges ∼ 89.6 GHz [...]. These two setups allow us to observe a variety of molecular lines (e.g. C 18 O(1-0), N 2 H + (1-0), HCN(1-0), HCO + (1-0), NH 2 D[1(1,1)0-1(0,1)0], etc.), which makes it possible to follow the kinematic and chemical evolution of the dense gas in the DR21 ridge and the connected sub-filaments. The obtained antenna temperature (T A *) noise rms within a spectral resolution of 0.2 km s −1 is ∼ 0.14 K around 90 GHz and ∼ 0.20 K around 110 GHz, respectively. A main beam efficiency of 0.81 for frequencies around 90 GHz and 0.78 for frequencies around 110 GHz was used (Kramer et al. 2013). The region on the sky that is covered by the two different setups is displayed in Fig. 1. The mapping was performed in the On-the-Fly (OTF) observing mode, and the resulting data cubes have a spatial resolution between ∼ 23″ (110 GHz) and ∼ 29″ (90 GHz), see Tab. 1. To produce the data cubes, a first-order baseline was fitted to the data with CLASS in GILDAS 1 , as the baselines are generally well behaved at the observed frequencies. A full analysis of all the lines that are covered will be the topic of a future paper. Here, the first results from 13 CO(1-0), C 18 O(1-0), HCO + (1-0), and H 13 CO + (1-0) will be presented.
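For illustration, the effect of such a first-order baseline fit can be sketched in a few lines of Python. This is a generic numpy illustration (the emission window and all variable names are our assumptions), not the GILDAS/CLASS implementation:

```python
import numpy as np

def subtract_baseline(velocity, spectrum, window=(-8.0, 1.0)):
    """Fit a first-order (linear) baseline to line-free channels and subtract it.

    Channels inside `window` (km/s), where the DR21 emission sits,
    are excluded from the polynomial fit.
    """
    line_free = (velocity < window[0]) | (velocity > window[1])
    coeffs = np.polyfit(velocity[line_free], spectrum[line_free], deg=1)
    return spectrum - np.polyval(coeffs, velocity)

# Synthetic test: a Gaussian line at -3.5 km/s on a tilted baseline,
# with ~0.14 K noise rms comparable to the values quoted above.
v = np.linspace(-30.0, 30.0, 600)
spec = 0.05 * v + 2.0 * np.exp(-0.5 * ((v + 3.5) / 1.0) ** 2)
spec += np.random.normal(0.0, 0.14, v.size)
corrected = subtract_baseline(v, spec)
```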
SOFIA [C II] observations with upGREAT
To obtain a more complete view on the kinematics of the DR21 cloud, the IRAM observations are combined with observations of the [C II] 158 µm line by the Stratospheric Observatory for Infrared Astronomy (SOFIA; Young et al. 2012). The [C II] observations at 158 µm were taken as part of the SOFIA Legacy program FEEDBACK 2 , which maps 11 Galactic high-mass star-forming regions (Schneider et al. 2020), including a part of the Cygnus-X north region, in the [C II] 158 µm line and the [O I] 63 µm line. The mapping of Cygnus-X north with FEEDBACK is now completed 3 and covers the DR21 cloud, the Diamond Ring (Marston et al. 2004) and the W75N molecular cloud. The data can be found on the IRSA archive with project number 07 0077 4 . We use data that covers the DR21 cloud, see Fig. 1. The observations were carried out with the dual-frequency heterodyne array upGREAT receiver (Risacher et al. 2018) in OTF mapping mode and calibrated with the GREAT pipeline (Guan et al. 2012). We used an emission-free reference position at RA(2000) = 20h39m48.34s, Dec(2000) = 42°57′39.11″. The 2×7 pixel LFA array was tuned to the [C II] 158 µm line and the 7 pixel high-frequency array (HFA) was tuned to the [O I] 63 µm line (data not used here). The beam size at 158 (63) µm is 14.1″ (6.3″). Here we employ [C II] data smoothed to an angular resolution of 20″ and a velocity resolution of 0.5 km s −1 , which results in a typical noise rms of ∼ 1 K per beam. In order to improve the baseline removal and reduce striping in the data cube, we employ the Principal Component Analysis (PCA) method described in Tiwari et al. (2021); Kabanovic et al. (2022); Schneider et al. (2023). More details on the SOFIA FEEDBACK observational scheme and data reduction are found in Schneider et al. (2020).

Figure 2. Top left: Integrated brightness map of 13 CO(1-0) from -8 to 1 km s −1 towards the DR21 cloud. Overlaid on the map are the integrated intensity contours starting at 2 K km s −1 with increments of 10 K km s −1 . The locations of the DR21 radio continuum source (Downes & Rinehart 1966), and the DR21(OH) and DR21-N massive clumps (e.g. Motte et al. 2007) are indicated with black triangles. The sub-filaments in the cloud are indicated with their names. Top right: The same for HCO + (1-0), with increments of 5 K km s −1 . The triangles indicating DR21, DR21(OH) and DR21-N help to compare with the 13 CO and C 18 O maps, from setup 2, which have a different map size. Bottom left: The same for C 18 O(1-0), with contour increments of 2 K km s −1 . Bottom right: The same for H 13 CO + (1-0), with contour increments of 2 K km s −1 .
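The PCA step can be illustrated schematically. The sketch below is a generic rendition of the idea (fit the leading principal components on emission-blanked spectra, then subtract the reconstructed systematics); it is not the FEEDBACK pipeline of Tiwari et al. (2021) and Kabanovic et al. (2022), and the number of components and the handling of the emission channels are our assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_baseline_correction(cube, line_channels, n_components=5):
    """Remove correlated baseline wiggles from a (nchan, ny, nx) cube.

    Emission channels are interpolated over before the PCA fit, so the
    leading components capture instrumental structure rather than the line.
    The reconstruction (including the mean spectrum) is then subtracted.
    """
    nchan, ny, nx = cube.shape
    spectra = cube.reshape(nchan, -1).T.copy()   # shape (npix, nchan)

    chans = np.arange(nchan)
    line_free = np.setdiff1d(chans, line_channels)
    for spec in spectra:                          # blank the emission window
        spec[line_channels] = np.interp(line_channels, line_free, spec[line_free])

    pca = PCA(n_components=n_components)
    systematics = pca.inverse_transform(pca.fit_transform(spectra))

    corrected = cube.reshape(nchan, -1).T - systematics
    return corrected.T.reshape(nchan, ny, nx)
```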
JCMT observations
With the 15 pixel HARP instrument on the JCMT telescope, a 12 CO(3-2) mapping of the Cygnus-X north and south areas was carried out between 2007 and 2009 within the observing programs M08AU018 (PI N. Schneider) and M07BU019 (PI R. Simon). The observation method and reduction of this data is the same as the one described in Gottschalk et al. (2012) for the pilot study, and this full dataset will be presented in more detail in a forthcoming paper. The observations have a spatial resolution of 15″ and a spectral resolution of 0.42 km s −1 , with a noise rms of ∼ 0.25 K.

Figure 3. The [C II] emission shows that there is extended emission around the ridge and sub-filaments. The white contours indicate the Herschel column density map at N H2 = 10 22 cm −2 , 3×10 22 cm −2 , 10 23 cm −2 and 4×10 23 cm −2 . The red box outlines the area used to make the position-velocity (PV) diagram perpendicular to the DR21 ridge in Fig. 10. The vertical dashed black line defines the center (r = 0 pc) used for the PV diagram. The locations of the DR21 radio continuum source (Downes & Rinehart 1966), and the DR21(OH) and DR21-N massive clumps (e.g. Motte et al. 2007) are indicated.

Figure 4. Average spectrum of the DR21 ridge (excluding the DR21 continuum source) displaying multiple velocity components between -10 and 20 km s −1 . The focus of this paper is on the emission between -8 and 1 km s −1 alone, indicated by the dashed vertical lines, which is the velocity range associated with the DR21 ridge (Schneider et al. 2006).
Herschel observations
We employ Herschel column density maps of the Cygnus-X region that were taken as part of the HOBYS 5 program. Low angular resolution (36″) column density maps are presented in Schneider et al. (2016a). The procedure for obtaining these maps is described in Hill et al. (2011, 2012). We here use column density maps at a higher spatial resolution of 18″ that make use of the method described in Palmeirim et al. (2013). The concept is to employ a multi-scale decomposition of the flux maps and assume a constant line-of-sight temperature. The final map is then constructed from the difference maps of the convolved maps at 500 µm, 350 µm, and 250 µm, and the temperature information from the color temperature derived from the 160 µm to 250 µm ratio.
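The color temperature entering this procedure follows from inverting the ratio of two modified blackbodies at 160 and 250 µm. A minimal sketch, assuming a dust emissivity index β = 2 (a common choice and our assumption here; the exact opacity law of the HOBYS maps is given in the cited papers):

```python
import numpy as np
from scipy.optimize import brentq

H = 6.62607e-34    # Planck constant [J s]
K_B = 1.38065e-23  # Boltzmann constant [J/K]
C = 2.99792e8      # speed of light [m/s]
BETA = 2.0         # assumed dust emissivity index

def modified_bb(nu, temp):
    """Modified blackbody nu^beta * B_nu(T), up to a constant factor."""
    return nu ** (3.0 + BETA) / np.expm1(H * nu / (K_B * temp))

def color_temperature(ratio_160_250, t_min=5.0, t_max=100.0):
    """Dust temperature from the 160/250 micron flux-density ratio."""
    nu160, nu250 = C / 160e-6, C / 250e-6
    func = lambda t: modified_bb(nu160, t) / modified_bb(nu250, t) - ratio_160_250
    return brentq(func, t_min, t_max)

print(round(color_temperature(2.0), 1))  # a flux ratio of 2 corresponds to ~22 K
```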
RESULTS
Spectral line integrated maps
From the large variety of lines detected with the IRAM 30m telescope we selected a subset to present in this paper, i.e. the 13 CO(1-0), C 18 O(1-0), HCO + (1-0) and H 13 CO + (1-0) lines, that trace both the ridge and sub-filaments. A full analysis of all lines detected with the IRAM 30m telescope is beyond the scope of this paper. Figure 2 displays the velocity integrated maps of these 4 lines. The overall emission distribution of the DR21 cloud is similar to what was shown in Schneider et al. (2010), but the new maps are significantly larger. As a result, they better cover the extent of the dense sub-filaments, which are best visible to the west of the DR21 ridge. Note that the coverage is not the same for all lines, which is due to the different setups for the observations. From Fig. 2 we see that the 13 CO(1-0) and HCO + (1-0) emission remains well detected outside the ridge, while the C 18 O(1-0) and H 13 CO + (1-0) lines are mostly detected towards the ridge, which indicates a large concentration of mass.

The [C II] line traces gas in which carbon is found in the ionized phase (Hollenbach & Tielens 1999). It thus nicely complements the CO/HCO + observations as it should trace the lower-density gas. The [C II] line integrated intensity map in Fig. 3 reveals interesting differences compared to the IRAM molecular line data. Emission is detected toward the dense sub-filaments that are also seen in 13 CO(1-0), HCO + (1-0) and 12 CO(3-2) (see App. A), which have typical critical densities of ∼2×10 3 cm −3 , 5×10 4 cm −3 and 3×10 4 cm −3 , respectively (Shirley 2015). However, [C II] additionally traces the lower density gas surrounding the DR21 ridge and dense sub-filaments. The [C II] observations thus unveil a significant mass reservoir that is not located in the ridge and the sub-filaments. This gas has a typical Herschel column density (N H2 ) between 5×10 21 cm −2 and 3×10 22 cm −2 after correction for a ∼5×10 21 cm −2 background (Schneider et al. 2016a). Even though [C II] is detected towards these higher column densities (N H2 > 10 22 cm −2 ), these also start to be detected in CO lines, see App. B, which indicates that [C II] does not trace the full column density range there. Assuming the cloud has a thickness 6 of 1-3 pc indicates that [C II] in the DR21 cloud might trace a density range between n H2 = 0.5-1.6×10 3 cm −3 and n H2 = 0.3-1.0×10 4 cm −3 . In a later paragraph we will further constrain the typical density traced by [C II]. Only at the location of the DR21 radio continuum source is there a strong peak of [C II] emission in the ridge. DR21 is the most evolved region in the ridge and a site of massive star formation with several compact H II regions and a number of possible O-stars, including a prominent outflow source (Marston et al. 2004). Stellar winds (Cyganowski et al. 2003; Immer et al. 2014) probably dominate the dynamics of that region. As the [C II] emission in the DR21 source is strongly affected by the local feedback from the embedded O stars, this region is mostly left out of the work presented in this paper. Lastly, we also note from Fig. 3 that the [C II] emission unveils filamentary structures to the east of the ridge that are not detected with Herschel or in molecular line data. This is probably because of the low contrast with the significant Herschel column density background and because these filamentary structures are not visible in molecular line data.
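The density range quoted above for the [C II]-traced gas follows from dividing the background-corrected column density by the assumed line-of-sight depth. A quick numerical check (1 pc = 3.086×10 18 cm; values taken from the text):

```python
PC = 3.086e18  # cm per parsec

def mean_density(column_cm2, depth_pc):
    """Mean H2 volume density for a given column density and depth."""
    return column_cm2 / (depth_pc * PC)

# N_H2 = 5e21 cm^-2 over 1-3 pc gives ~5.4e2 to ~1.6e3 cm^-3:
print(mean_density(5e21, 3.0), mean_density(5e21, 1.0))
# N_H2 = 3e22 cm^-2 over 1-3 pc gives ~3.2e3 to ~9.7e3 cm^-3:
print(mean_density(3e22, 3.0), mean_density(3e22, 1.0))
```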
Spectra and channel maps
The average spectrum for the entire DR21 ridge is presented in Fig. 4, which shows multiple velocity components in the velocity range between -10 km s −1 and 20 km s −1 . In this paper we will exclusively focus on the emission between -8 and 1 km s −1 , highlighted in Fig. 4, which is the emission originating from the DR21 cloud (Schneider et al. 2006). The components of the full spectrum are discussed in Schneider et al. (2023). The channel maps of 13 CO(1-0) emission, shown in Fig. 5, resolve the velocity structure in the DR21 cloud. The emission in the velocity range from -5 km s −1 to -2.5 km s −1 is organized in a north-south elongated coherent structure mostly inside the N H2 = 10 23 cm −2 dust column density contour and forms the ridge. In this velocity range, the channel maps show a north-south velocity gradient in the ridge. At higher and lower velocities inside the ridge, the emission is more clumpy and concentrates on the DR21(OH) and DR21 locations. At velocities between -4 and -2.5 km s −1 the sub-filament F1S becomes visible, and between -3 km s −1 and -1 km s −1 all the other western sub-filaments become prominent. Inspecting the 13 CO(1-0), C 18 O(1-0), HCO + (1-0), H 13 CO + (1-0) and [C II] spectra over the map in Fig. 1, we find that the spectra show skewed profiles. This is also the case for H 13 CO + (1-0) and C 18 O(1-0), which are optically thin for excitation temperatures in the DR21 ridge down to 10 K. This suggests the presence of multiple components, which we will address later on. The selected positions in Fig. 1 represent typical lines in the ridge (positions 1, 2, 3 & 5) and in the sub-filaments (positions 4 & 6). Note that the HCO + (1-0) spectra in the ridge display strong self-absorption centered on the velocities of peak emission for C 18 O(1-0) and H 13 CO + (1-0), see Schneider et al. (2010) for more discussion. Furthermore, it becomes obvious in Fig. 1 that HCO + (1-0) has high-velocity wing emission towards DR21 and DR21(OH), tracing the prominent molecular outflows in these regions. The [C II] spectra also show flat-topped and skewed line profiles in Fig. 1. Since some bright [C II] regions were found to experience [C II] self-absorption and optical depth effects (e.g. Guevara et al. 2020; Kabanovic et al. 2022), we verified whether this could be the case for the DR21 ridge. The [C II] spectra do not show a noteworthy dip, as seen for HCO + (1-0), which argues against self-absorption. In App. C, the optical depth of the [C II] line is calculated using the [ 13 C II] noise level, which indicates a maximal optical depth of τ = 1.7. In addition, we performed calculations with RADEX (van der Tak et al. 2007), which indicate that the optical depth for [C II] typically is below 1 for the DR21 cloud. The [C II] line shape is thus not caused by optical depth effects but rather by multiple velocity components around the DR21 ridge. This also fits with the observation in Fig. 4 that the peak [C II] emission is displaced from the peak molecular line emission. Therefore, we fit the spectra with multiple Gaussian velocity components (see App. D) over the full map and analyze the goodness of the fit with the BTS fitting algorithm (Clarke et al. 2018). We find that the [C II] emission in the range of −8 to 1 km s −1 consists of either one or two velocity components that are typically separated by ∼ 2-3 km s −1 . The first component is typically around −4.5 km s −1 and the second one around −2.5 km s −1 (see Fig. 21 of App. D).
The fitting results in Fig. 19 of App. D also show that the regions with multiple velocity components are concentrated in the close vicinity of the DR21 ridge.
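In essence, this decomposition amounts to fitting one- and two-component Gaussian models and keeping the statistically preferred one. The sketch below is a generic scipy version with an AIC-based model choice (our simplification), not the actual BTS algorithm of Clarke et al. (2018):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussians(v, *params):
    """Sum of Gaussians; params = (amplitude, centre, sigma) per component."""
    model = np.zeros_like(v)
    for amp, cen, sig in zip(params[0::3], params[1::3], params[2::3]):
        model += amp * np.exp(-0.5 * ((v - cen) / sig) ** 2)
    return model

def fit_and_score(v, spec, rms, guesses):
    """Least-squares fit; returns the parameters and the Akaike criterion."""
    params, _ = curve_fit(gaussians, v, spec, p0=guesses)
    chi2 = np.sum(((spec - gaussians(v, *params)) / rms) ** 2)
    return params, chi2 + 2 * len(params)

# One component near -3.5 km/s versus two near -4.5 and -2.5 km/s
# (the typical component velocities quoted above); keep the lower AIC:
# p1, aic1 = fit_and_score(v, spec, 1.0, [3.0, -3.5, 1.0])
# p2, aic2 = fit_and_score(v, spec, 1.0, [2.0, -4.5, 0.8, 2.0, -2.5, 0.8])
# best = p1 if aic1 < aic2 else p2
```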
The velocity field of the DR21 cloud
Even though the spectra are slightly more complex than a single Gaussian in the -8 to 1 km s −1 velocity range, we performed a single Gaussian line fitting on the data sets to obtain information on the global velocity distribution and linewidth for the different tracers. A fit was accepted when the brightness was higher than 3× the noise rms in the different tracers. Note that moment maps might be better suited to give a view on the global dynamics in the region. However, in large regions of the observed map the signal-to-noise ratio (S/N) of the data is limited, while excellent S/N is required for producing trustworthy moment maps (e.g. Teague 2019). As a result, the produced first and second moment maps have large uncertainties in the sub-filaments and do not provide a good visualization of the velocity field there. The resulting velocity maps for the molecular lines are presented in Fig. 6 and show similar velocity gradients over the ridge as the ones presented in Schneider et al. (2010) using only N 2 H + (1-0). We confirm that there is not a single gradient perpendicular to the ridge, but that the velocity pattern over the ridge shows organized velocity gradients with alternating orientation. These alternating velocity gradients are mixed with a clear north-south velocity gradient from -4.5 to -2.5 km s −1 . Remarkably, the velocities of the sub-filaments (N, F1N, F3N, F3S and SW) in all lines are redshifted (v LSR ∼ -2.5 to -1 km s −1 ) with respect to the ridge (bulk emission around -3.5 km s −1 ). Only the F1S sub-filament, at velocities around -3.5 km s −1 , is not significantly redshifted with respect to the ridge, but it is not blueshifted either. Lastly, the southern sub-filament appears slightly redshifted close to the ridge, but more to the south this filament becomes less redshifted with respect to the ridge. This observation of dominantly redshifted sub-filaments and a complete lack of a truly blueshifted sub-filament with respect to the ridge is noteworthy and confirms the observations in the channel maps of 13 CO(1-0) in Fig. 5. The [C II] observations trace the lower density gas kinematics in the cloud, and thus provide a complementary view on the surrounding gas kinematics in the cloud off the dense filamentary structures. The velocity field is shown in Fig. 7. There is, similar to the molecular line observations, mostly systematically redshifted emission (v ∼ -3 to -1 km s −1 ) in the cloud around the ridge. This redshifted gas becomes increasingly prominent further away from the ridge. In addition, Apps. D & E show that there is an excellent correspondence in velocity between the sub-filaments and this redshifted [C II] emission, indicating that the sub-filaments are directly embedded in this mass reservoir seen with [C II]. However, the [C II] velocity field also shows additional features. In particular, blueshifted velocities down to v = -5 km s −1 are found in localized regions directly east and north-west of the ridge. No blueshifted gas is found further away from the ridge and the F1S filament. This fits with the observation in Fig. 4 that the average [C II] emission has an offset from the molecular line emission and that it is associated with the presence of the two [C II] velocity components (at -4.5 and -2.5 km s −1 ) established in the previous section. Lastly, from Fig. 20 in App. D we also note that these regions with more blueshifted velocities overlap with the regions that have two fitted velocity components.
In these regions, the blueshifted velocity component is thus the dominant source of emission, which affects the observed velocity field.
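The velocity maps just described (and the FWHM maps discussed in the next subsection) can be built by looping a single-Gaussian fit over the cube and accepting a pixel only when the peak brightness exceeds 3× the noise rms, as stated above. A schematic version, in which the array layout and names are our assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(v, amp, cen, sig):
    return amp * np.exp(-0.5 * ((v - cen) / sig) ** 2)

def velocity_field(v, cube, rms):
    """Centroid-velocity and FWHM maps from a (nchan, ny, nx) cube."""
    ny, nx = cube.shape[1:]
    vmap = np.full((ny, nx), np.nan)
    wmap = np.full((ny, nx), np.nan)
    for j in range(ny):
        for i in range(nx):
            spec = cube[:, j, i]
            if spec.max() < 3.0 * rms:       # acceptance threshold from the text
                continue
            p0 = [spec.max(), v[np.argmax(spec)], 1.0]
            try:
                (amp, cen, sig), _ = curve_fit(gauss, v, spec, p0=p0)
            except RuntimeError:              # fit did not converge
                continue
            vmap[j, i] = cen
            wmap[j, i] = 2.355 * abs(sig)     # FWHM = 2 sqrt(2 ln 2) sigma
    return vmap, wmap
```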
Linewidths
The Gaussian line fitting also provides maps of the FWHM for the spectra. The results for the optically thinner lines C 18 O(1-0) and H 13 CO + (1-0) are presented in Fig. 8. The C 18 O(1-0) map indicates, where detected, that the sub-filaments have a lower FWHM than the ridge, typically ≲ 1 km s −1 . In the ridge, a FWHM between 1 and 2.5 km s −1 is observed (which typically has two velocity components, see App. E). Noteworthy is the strong local increase of the FWHM, up to 4 km s −1 , at locations where the sub-filaments connect to the ridge. This behaviour is also observed in the velocity dispersion maps of N 2 H + (1-0) and NH 3 (Keown et al. 2019). One would expect this to be the result of overlapping velocity components, associated with the sub-filament and the ridge, at this connection. However, App. E shows that these regions are best fitted with a single Gaussian component. This suggests either that the overlapping velocity components are closely blended together or that there is a rapid change in the line-of-sight velocity field at the edge of the ridge, for example due to accretion shocks. Comparing the FWHM maps for the molecular lines in Fig. 8 with those of the [C II] line, we observe three clear differences. Firstly, the [C II] line has a significantly larger linewidth (typically > 4-5 km s −1 ) than the molecular lines over the full map. Note that the largest FWHMs are associated with the DR21 outflow, which is not the focus of our study. Secondly, the highest [C II] FWHM values, excluding the DR21 outflow, are found outside of the dense DR21 ridge, mostly in the eastern region without dense molecular sub-filaments. This east-west asymmetry with respect to the ridge for the [C II] linewidth is remarkable and seems to be correlated with the lack of dense sub-filaments east of the ridge. Thirdly, examining the map in more detail, it is also observed that the [C II] emission directly surrounding the sub-filaments has a significantly larger linewidth than the emission towards the sub-filaments. In Fig. 1, it is observed that this broader [C II] linewidth is the result of the flat-topped spectra, which are due to more blueshifted [C II] emission with respect to the molecular lines. This more blueshifted [C II] emission is the result of the prominent blueshifted [C II] velocity component found in App. D and thus is not necessarily the result of higher turbulent support. This demonstrates that the molecular lines miss an important part of the picture needed to understand the full DR21 molecular cloud dynamics and evolution.
ANALYSIS

The C + column density and excitation

In Sect. 3.1, it was estimated that [C II] traces gas between 5×10 2 and 10 4 cm −3 . Here, we will explore the excitation conditions and regions traced by the [C II] emission in more detail. To determine the typical temperature of the C + gas, we consider the heating from FUV photons. From the FUV field map calculated in Schneider et al. (2016b), we find that the DR21 cloud is located in an FUV field up to G 0 = 200-400 Habing due to heating from local UV sources in the region (see also Schneider et al. 2023). This fits with the typical line integrated [C II] intensity towards the surrounding cloud of ∼ 50 K km s −1 , which is expected for an FUV field of G 0 = 200 Habing in the PDR Toolbox (Kaufman et al. 2006; Pound & Wolfire 2008) at densities between 5×10 2 and 10 4 cm −3 . For this FUV-field strength and density range, the PDR Toolbox predicts temperatures of 100-200 K for the [C II] emitting gas of the photodissociation region (PDR). With these excitation conditions, it is possible to produce the C + column density map over the full extent of the FEEDBACK map using the equation from Goldsmith et al. (2012):

\Delta T_A = 3.43\times10^{-16}\,\left[1 + 0.5\,e^{91.25/T_{kin}}\left(1 + \frac{2.4\times10^{-6}}{C_{ul}}\right)\right]^{-1}\,\frac{N(\mathrm{C}^+)}{\delta v} \quad (1)

with ΔT_A the brightness temperature for a uniform source that fills the beam, T_kin the kinetic temperature, C_ul the collisional de-excitation rate, N(C + ) the C + column density, and δv the linewidth. C_ul is given by
C_{ul} = n \times R_{ul} \quad (2)
where R_ul is the de-excitation rate coefficient given by
R_{ul} = 3.8\times10^{-10}\,\mathrm{cm^3\,s^{-1}}\,(T_{kin}/100)^{0.14} \quad (3)
Using T_kin = 100 K and n H2 = 5×10 3 cm −3 then gives C + column densities N(C + ) ≈ 0.4-1.0×10 18 cm −2 over the extent of the cloud, which is shown in Fig. 9. However, note that these assumed excitation conditions are not valid for the DR21 radio continuum region, as it is a dense and strongly irradiated PDR region. Comparing this C + column density map with the H 2 column density map deduced from the Herschel data also allows us to estimate the [C + ]/[H 2 ] abundance ratio over the map. This shows that we find a C + abundance around 10 −4 in the outer parts of the cloud around the ridge and sub-filaments, which is of the order of the elemental abundance ratio of carbon (χ C ) in the ISM (e.g. Glover & Clark 2012). Basically all carbon is thus found in the ionized state (C + ) in these regions. Towards the sub-filaments, we find under these conditions that ∼ 10% of the gas is traced by [C II], and towards the ridge this drops even lower, to values between 0.1 and 1%. We note, however, that the adopted density and temperature are important assumptions. A higher temperature, up to 200 K, can reduce the C + column density by ∼ 50%, while a lower temperature can increase the C + column density by a factor of 2. A lower typical density of 10 3 cm −3 can increase the column density by an additional factor of 2. However, such an N(C + ) increase due to a low density would result in a C + abundance above the elemental abundance. This suggests that a significant part of the [C II] emission originates from regions with a typical density of n H2 ∼ 5×10 3 cm −3 . Combining this density with the Herschel column density for the surrounding cloud results in a maximal line-of-sight depth of 2 pc. This implies that the surrounding cloud of the DR21 ridge has a sheet-like morphology. The same analysis is done for 13 CO(1-0) in App. B. This analysis indicates that the 13 CO abundance peaks in some sub-filaments and at the edges of the DR21 ridge. The abundance then decreases in the ridge and in the outer cloud. Both in the ridge and the outer part of the cloud, 13 CO(1-0) appears to trace only a fraction of the total gas mass. Combined, this provides compelling evidence that [C II] indeed traces most of the surrounding gas in the DR21 cloud. The regions surrounding the ridge and sub-filaments are thus CO-poor gas that lights up in [C II]. In combination with the molecular line data, [C II] thus provides a global view of the cloud.
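Equations (1)-(3) can be inverted directly for N(C + ). A small numerical check, using the values adopted in the text (T_kin = 100 K, n H2 = 5×10 3 cm −3 ) and taking the typical line integrated intensity of ∼ 50 K km s −1 as ΔT_A δv (the 2.4×10 −6 s −1 factor in Eq. 1 is the [C II] spontaneous emission rate):

```python
import numpy as np

A_UL = 2.4e-6  # s^-1, the [C II] spontaneous emission rate from Eq. (1)

def n_cplus(intensity, t_kin, n_h2):
    """Column density N(C+) in cm^-2 from Goldsmith et al. (2012), Eqs. (1)-(3).

    `intensity` is the line integrated brightness Delta T_A * dv in K km/s.
    """
    r_ul = 3.8e-10 * (t_kin / 100.0) ** 0.14   # Eq. (3), cm^3 s^-1
    c_ul = n_h2 * r_ul                          # Eq. (2), s^-1
    correction = 1.0 + 0.5 * np.exp(91.25 / t_kin) * (1.0 + A_UL / c_ul)
    return intensity * correction / 3.43e-16    # inverted Eq. (1)

n = n_cplus(50.0, 100.0, 5e3)
print(f"N(C+) = {n:.2e} cm^-2")     # ~5.6e17, within the quoted 0.4-1.0e18 range
print(f"[C+]/[H2] ~ {n / 5e21:.1e}")  # ~1e-4 for N_H2 = 5e21 cm^-2
```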
4.2. Position-velocity diagram of the ridge
The position-velocity (PV) diagram perpendicular to the DR21 ridge gives an additional view on the cloud kinematics with respect to the ridge. Figure 10 shows PV diagrams for the optically thick (HCO⁺(1-0) & ¹²CO(3-2)) and optically thin (H¹³CO⁺(1-0) and C¹⁸O(1-0)) molecular lines observed with the IRAM 30m and JCMT (see the red box in Fig. 3 for the covered region). It confirms that the emission from sub-filaments outside the ridge is at more redshifted velocities with respect to the densest gas in the ridge. This gives rise to a V-like shape in the PV diagram, similar to what was observed for the Musca filament (Bonne et al. 2020a), which has about the same size as the DR21 ridge but a two orders of magnitude lower mass. For the Musca filament this was proposed to be the result of magnetic field bending by the interaction of an overdensity with a more diffuse region in the colliding H I cloud. As for this previous study of the low-mass Musca filament, the DR21 ridge is located at the apex of this V-like shape. However, the V-shape for DR21 in molecular lines is not as clear as the ones observed in Musca. This can be related to the rapid linewidth increase within 1 pc of the DR21 ridge, while the Musca filament is transsonic (Hacar et al. 2016; Bonne et al. 2020a). Such a rapid linewidth increase in the densest regions is predicted to be the result of pc-scale gravitational acceleration in massive clouds (e.g. Peretto et al. 2006, 2007; Hartmann & Burkert 2007; Gómez & Vázquez-Semadeni 2014; Watkins et al. 2019) and could also be the result of accretion-driven turbulence (Klessen & Hennebelle 2010). A gravitational collapse scenario is also supported by the strong blue asymmetry self-absorption of HCO⁺(1-0) that covers the same velocity interval as the perpendicular velocity gradients over the ridge.

Figure 9. Top: C⁺ column density map between -8 and 1 km s⁻¹ assuming a kinetic temperature of 100 K and n_H2 = 5×10³ cm⁻³ for the excitation conditions. Note that these excitation conditions, which should be representative for most of the map, are most likely not valid towards the peak around the DR21 radiocontinuum region in the south of the map as this region experiences strong internal heating. Bottom: The resulting [C⁺]/[H₂] abundance ratio deduced from the Herschel column density map over the DR21 cloud on a log scale.

Figure 10 also presents the [C II] PV diagram which traces the surrounding gas in the DR21 cloud. The PV diagram shows that the surrounding gas forms a redshifted V-shape perpendicular to the DR21 ridge on large scales (|r| > 1 pc). The Pearson correlation coefficients for the central [C II] velocity as a function of radius are -0.76 and 0.89 on the east and west side of the ridge, respectively. With bootstrapping we find that these coefficients are far outside the 95% intervals for the null hypothesis that there is no correlation, i.e. [-0.19, 0.20] (east) and [-0.18, 0.17] (west), and thus demonstrate a clearly organized velocity field perpendicular to the ridge. As the cloud appears to be organized in a sheet, this points at a curved sheet. However, in [C II] it is also observed that there are deviations from a pure V-shape due to blueshifted emission in the vicinity of the ridge at |r| ∼ 1 pc. This deviation from the V-shape is particularly visualized in the PV diagram by the sharp peak velocity transition at |r| ∼ 1 pc and might explain why the Pearson correlation coefficients are not equal to one. We propose that this traces two distinct but converging velocity components at -4.5 and -2.5 km s⁻¹ in the DR21 cloud that drive continuous mass inflow to the DR21 ridge, which is located at the intersection of these flows.
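As an illustration of this significance test, the sketch below builds a bootstrap null distribution for the Pearson coefficient; drawing independent resamples of the two quantities is one plausible realization of the null hypothesis, and the variable names are hypothetical.

import numpy as np

rng = np.random.default_rng(42)

def pearson_null_interval(radius, v_cent, n_boot=10_000):
    """95% interval of the Pearson coefficient under the null hypothesis
    of no correlation, from independent bootstrap resamples."""
    r_null = np.empty(n_boot)
    for i in range(n_boot):
        x = rng.choice(radius, size=radius.size, replace=True)
        y = rng.choice(v_cent, size=v_cent.size, replace=True)
        r_null[i] = np.corrcoef(x, y)[0, 1]
    return np.percentile(r_null, [2.5, 97.5])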
4.3. Convergence of blue- and redshifted gas in the DR21 cloud
The molecular line observations demonstrated the existence of an organized velocity field over the ridge, similar to the results presented in Schneider et al. (2010). This internal velocity field is proposed to be connected to the inflowing sub-filaments in the surrounding cloud, and is typically considered to be tracing inflow. Using 8 µm observations from the Spitzer Space Telescope (Hora et al. 2009), we attempt to portray a 3D configuration of the ridge and sub-filaments. Inspecting the 8 µm map of the DR21 cloud (Fig. 11), it becomes obvious that heavily reduced 8 µm emission ('IR-dark') traces regions inside the ridge, except for the bright compact sources at DR21 and DR21(OH), which show internal regions heated by stellar/YSO feedback. Consequently, this indicates that the gas associated with strong 8 µm extinction is located at the front side of the DR21 ridge with N_H2 > 10²³ cm⁻², as this is a foreground layer absorbing 8 µm emission. The region of the ridge with more extended 8 µm emission is then located towards the back of the DR21 ridge. Towards the sub-filaments, similar but lower-contrast 8 µm extinction is observed. The overlay of the 8 µm map with contours of molecular and [C II] line emission at v = -4.6 km s⁻¹ and -1.6 km s⁻¹ indicates that the most blueshifted molecular line emission corresponds to the extended 8 µm emission features, while the redshifted line emission corresponds to the darkest 8 µm regions. This clearly indicates that the redshifted gas is located in front of the blueshifted gas from our point of view, which confirms that the flows in the ridge and cloud are indeed converging. The red- and blueshifted velocity components that make up the V-shape in the PV diagram are thus accreted onto the ridge. Note that there are several contaminants (DR21, protostellar objects, ...), a lower contrast and a more complex contour distribution that make the [C II] map less straightforward to evaluate. Nonetheless, there is a clear association at several locations between the distribution of the redshifted gas and 8 µm extinction in the surrounding cloud that is traced by this line.
4.4. Virial analysis of the DR21 cloud
In Schneider et al. (2010) it was found that the DR21 ridge is gravitationally collapsing. With the newly obtained data sets, we can further investigate the global stability of the molecular cloud as a function of its radius, centered on the column density peak in DR21(OH). This is done with the different tracers by estimating the gravitational potential energy, the turbulent energy and the magnetic energy. In this analysis, the density profile of the cloud might affect the results obtained with the simple equations below, which correspond to a sphere with uniform density. The same is valid for the more sheet-like morphology that we propose for the surrounding DR21 cloud to maintain a plausible C⁺ abundance. However, the impact of these deviations from a uniform sphere is predicted to be relatively small (≲ 50% for typical clump density profiles and a not too flattened ellipsoid cloud, based on Bertoldi & McKee 1992). Therefore, considering the uncertainties, we think the analysis below is reasonable. The thermal and turbulent energy is calculated using
E_T = (3/2) M σ²    (4)
with M the mass in the region determined from the Herschel dust column density map and σ the velocity dispersion of the studied region (which we approximate here with the observed linewidth as there is no direct measure of the kinetic temperature). The gravitational energy is determined by
E_G = -(3/5) GM²/R    (5)
where G is the gravitational constant and R the radius. The magnetic energy is calculated using
E_mag = (1/2) M V_A²    (6)
with V_A the Alfvén speed, which is given by V_A = B/√(μ₀ρ),
with μ₀ the vacuum permeability, ρ the density, and B the magnetic field strength. The evolution of mass as a function of radius from DR21(OH) is shown in Fig. 12. The increase in mass flattens as a function of radius at r > 1.5 pc, which is expected with mass concentration in the ridge. For the error on the mass, we assume an uncertainty of 15% for the derived column density as this corresponds to typical differences when creating the column density map with different methods (e.g. Peretto et al. 2016). To estimate the thermal and turbulent internal support of the cloud, we assume that the linewidths of the observed lines, i.e. ¹³CO(1-0), C¹⁸O(1-0), H¹³CO⁺(1-0) and [C II], are a good proxy. This turns out to be a good approximation as their linewidth is dominated by non-thermal motion. Note that this approach ignores that a significant part of the linewidth might be associated with organized convergent flows driven by gravitational collapse or inertial motion instead of internal support (Traficante et al. 2018a,b, 2020), and that we have demonstrated the presence of multiple velocity components associated with inflow. The linewidth as a function of radius for the three considered molecules is presented in Fig. 12. It shows an increasing linewidth towards small radii and a fairly constant linewidth at r > 1.5 pc. The increasing linewidth towards the ridge can fit with gravitationally driven inflow. This would also fit with predictions by Traficante et al. (2020) since most of the mass in the DR21 cloud has column densities ≳ 0.1 g cm⁻² (i.e. N_H2 ≳ 2.6×10²² cm⁻²). The estimated kinetic support based on the linewidth is thus an upper limit on the turbulent support, and might significantly overestimate it. We do have to note that the current IRAM 30m maps do not entirely cover the studied radii, but the [C II] data showed that these regions are highly CO-poor.
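For illustration, a minimal sketch of how Eqs. (4)-(6) translate into numbers is given below; the mass, radius, velocity dispersion, density and field strength in the example call are placeholders rather than the measured profiles of Fig. 12, and the mean molecular weight μ = 2.33 is our assumption.

import numpy as np
from astropy import units as u, constants as const

def energy_terms(mass, radius, sigma_v, b_field, n_h2, mu=2.33):
    """Thermal+turbulent (Eq. 4), gravitational (Eq. 5) and magnetic
    (Eq. 6) energy of a uniform sphere of given mass and radius."""
    e_t = 1.5 * mass * sigma_v**2                          # Eq. (4)
    e_g = -0.6 * const.G * mass**2 / radius                # Eq. (5)
    rho = mu * const.m_p * n_h2                            # mass density
    v_alfven = b_field.to(u.T) / np.sqrt(const.mu0 * rho)  # Alfven speed
    e_mag = 0.5 * mass * v_alfven**2                       # Eq. (6)
    return e_t.to(u.erg), e_g.to(u.erg), e_mag.to(u.erg)

# Placeholder numbers, for illustration only:
e_t, e_g, e_mag = energy_terms(2e4 * u.M_sun, 2 * u.pc, 1.5 * u.km / u.s,
                               1e-4 * u.G, 1e3 * u.cm**-3)
print(e_t / abs(e_g), e_mag / abs(e_g))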
The magnetic field strength in the cloud as a function of its size is the most uncertain quantity because there are only a few dust polarization observations of the magnetic field in the region. Based on these observations, it was proposed in Ching et al. (2017) that the magnetic field strength in the DR21 ridge is 0.94 mG. Therefore, we used two approaches to estimate the evolution of the magnetic field strength in the cloud. First, we make use of the magnetic field strength relation from Crutcher et al. (2010), which is given by
B = 0.01 mG                     for n_H < 300 cm⁻³
B = 0.01 (n_H/300)^0.65 mG      for n_H > 300 cm⁻³    (7)
Secondly, we use B ∝ n_H^k and start from the constraint that the magnetic field strength is 0.94 mG at the density of the ridge. To extrapolate the relation, we use two values: k = 0.5 and k = 0.67. These exponents represent two asymptotic cases: k = 0.67 is generally considered to describe the case where the magnetic field is dominated by gravitational collapse, and k = 0.5 the case where the magnetic field plays an important role in support against gravitational collapse (e.g. Basu 1997; Hennebelle et al. 2011). The resulting magnetic field strengths as a function of radius for the different estimates are shown in Fig. 12. Over the full cloud there is an uncertainty up to a factor 4 for the magnetic field strength based on the different relations used. This will thus have a significant impact on the magnetic energy, as will be shown in the next paragraph. The calculated energy terms are shown in Figs. 13 & 14, compared to the gravitational energy. The thermal and kinetic energy, estimated from the velocity dispersion, is very similar for the different molecules and has values < 20% of the gravitational energy. In addition, it was pointed out that these values likely are an upper limit. The relation for [C II], tracing the lower density gas, is different as it reaches up to 40-60% of the gravitational energy. This seems to suggest that in the lower density gas, which is particularly found at r > 2-3 pc, there might be significant thermal or turbulent support. However, it has to be taken into account that we found multiple velocity components which do not necessarily contribute to support against collapse in the region. The magnetic field energy depends on its assumed field strength evolution. Assuming k = 0.5 indicates that the cloud can experience some support from the magnetic field on large scales (∼ 3-4 pc), but gradually this magnetic support decreases, leading to an increasing importance of gravitational collapse when approaching the ridge (r < 3 pc). Assuming k = 0.67, the magnetic field provides little support at low densities while the higher density regions towards the ridge (r < 1 pc) experience an increased importance of the magnetic field. Yet, in both cases the values still indicate that the gravitational energy dominates over the cloud. This becomes particularly evident in Fig. 14 where the total support terms over the gravitational terms are plotted. This shows that only considering k = 0.5 and the velocity dispersion of the [C II] emission allows for support against gravitational collapse, but this ignores that the [C II] velocity dispersion might be associated with mass inflow rather than turbulent support.
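The two magnetic field parametrizations can be written compactly as below; the ridge density of 10⁵ cm⁻³ used to anchor the power laws is an approximate value, consistent with the average ridge density used later in Eq. (8) for μ = 2.33.

import numpy as np

def b_crutcher(n_h):
    """Magnetic field strength relation of Crutcher et al. (2010),
    Eq. (7); n_h in cm^-3, B returned in mG."""
    n_h = np.asarray(n_h, dtype=float)
    return np.where(n_h < 300.0, 0.01, 0.01 * (n_h / 300.0) ** 0.65)

def b_powerlaw(n_h, n_ridge=1e5, b_ridge=0.94, k=0.5):
    """B ∝ n_H^k, anchored at B = 0.94 mG at the (assumed) ridge density."""
    return b_ridge * (np.asarray(n_h, dtype=float) / n_ridge) ** k

for n in [3e2, 1e3, 1e4, 1e5]:
    print(f"n={n:.0e}: Crutcher {b_crutcher(n):.3f} mG, "
          f"k=0.5 {b_powerlaw(n):.3f} mG, k=0.67 {b_powerlaw(n, k=0.67):.3f} mG")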
5. DISCUSSION
Combining the results and analysis from Sects. 3 and 4, we will now discuss the cloud evolution leading to high-mass star formation in the DR21 ridge.
5.1. Magnetic field bending followed by gravitational collapse in the DR21 ridge
From the fitted velocity maps and PV diagrams we found indications of an accelerating velocity field, based on the rapid linewidth increase in the ridge and a blueshifted asymmetry in the self-absorbed molecular lines towards the ridge. Additionally, the observations showed that basically all sub-filaments and the surrounding cloud appear to be organized in a flattened structure that gives rise to a redshifted V-shape perpendicular to the ridge. [C II] also displays a second blueshifted velocity component that is concentrated in the vicinity of the ridge and looks like a more localized blueshifted V-shape. The accelerating velocity field can be the result of the proposed gravitationally driven inflow (e.g. Peretto et al. 2006; Hartmann & Burkert 2007). However, to explain the observed redshifted V-like velocity field in the surrounding cloud, as well as the systematic redshift of the sub-filaments, we have to consider the apparent flattened geometry of the molecular cloud. If the flattened geometry is curved, this can provide a straightforward explanation for the observed velocity field if this curved sheet geometry is associated with magnetic field bending in a collision, as predicted by Hartmann et al. (2001), Inoue & Fukui (2013), Vaidya et al. (2013), Inoue et al. (2018), and Abe et al. (2021). The presence of a strong magnetic field in the compressed sheet might then even initially prevent gravitational acceleration and significantly affect the velocity field until the massive ridge is reached. We note that this is the same mechanism that was also proposed to explain the observations toward the Musca cloud, which has a very similar kinematic V-shape perpendicular to the filament (Bonne et al. 2020b). There, it was proposed that Musca formed at the apex of a bent magnetic field as the result of an overdensity interacting with diffuse H I gas in a colliding flow. In fact, the DR21 region is proposed to be in a high-velocity (v_col ∼ 20 km s⁻¹) collision with the diffuse, mostly atomic, regions of the other velocity component between v_LSR = 9 and 15 km s⁻¹ in the Cygnus-X region (Schneider et al. 2023). This points to the same formation mechanism for the DR21 cloud as for the Musca cloud. However, the Cygnus-X region is more than an order of magnitude more massive than the Chamaeleon-Musca region (e.g. Reipurth & Schneider 2008) and the collision velocity is higher, which can explain why the DR21 ridge is forming massive stars (e.g. Dobbs et al. 2020). The second [C II] velocity component we found in the vicinity of the DR21 ridge at v_LSR ∼ -4.5 km s⁻¹ is then likely gas associated with the 20 km s⁻¹ collision, responsible for the formation of the DR21 ridge, that leads to accretion on the ridge from the blueshifted side. One could consider the scenario that the DR21 cloud is a swept-up shell created by expansion from a local bubble. However, there is no evident massive cluster candidate near the DR21 cloud that could drive the expansion of such a massive region (Maia et al. 2016) and the study of the multiple velocity components by Schneider et al. (2023) does not unveil a typical expanding shell morphology in the region.

Figure 12. Top: Radial profile of the average density in the DR21 cloud (in black). The average density within each radius was estimated based on the mass within the considered radius. The mass within the radius r is shown on the axis on the right (in red). Middle: The same for the velocity dispersion of H¹³CO⁺(1-0) (in blue), ¹³CO(1-0) (in red), C¹⁸O(1-0) (in green) and [C II] (in cyan). The molecular lines show very similar behaviour while the [C II] line has a significantly higher linewidth. However, note that at the outer radii (r ∼ 3-4 pc) the cloud is not fully covered by the IRAM 30m map (Figs. 1 and 2). Bottom: Estimated characteristic magnetic field strength for the gas within a radius for the three different considered profiles. The green curve shows the predictions by the Crutcher et al. (2010) relation. The red and blue curves use a magnetic field strength of 0.94 mG at the density of the ridge and k = 0.5 and 0.67, respectively, in B ∝ n_H^k.

Figure 13. Ratio of the magnetic (left) and thermal + kinetic energy (right) over the gravitational energy as a function of the radius in the DR21 cloud. The radius is centered on DR21(OH). For the thermal + kinetic energy, the linewidths of H¹³CO⁺(1-0), ¹³CO(1-0), C¹⁸O(1-0) and [C II] are used. It appears that the regions of the cloud traced by [C II] experience more support against gravitational collapse than the molecular regions. However, the presence of multiple components has to be kept in mind, which makes this an upper limit. The three approaches to estimate the magnetic field support are displayed on the left. It shows that the magnetic support is always smaller than the gravitational energy, but that the importance strongly depends on the assumptions for the magnetic field strength evolution.
In summary, we propose that the DR21 ridge was formed as the result of a high-velocity collision in the Cygnus-X region which bends the magnetic field around the ridge. This bending is at the origin of the inflow seen in the form of a kinematic V-shape in the flattened surrounding cloud. However, due to the high density in the compressed cloud, gravity increasingly starts to dominate the curved kinematics driven by the magnetic field bending. This explains the observed accelerated motion in the ridge and the results from the virial analysis.
The proposed scenario thus provides a comprehensive explanation for the rich set of observations that trace the density range of the DR21 cloud. Observations that trace the line-of-sight magnetic field morphology, and thus the proposed bending, would be invaluable to further test this scenario. Lastly, we note that gravity did not take over the cloud kinematics in Musca (Bonne et al. 2020a). This is an important difference because the increasing importance of gravity on larger scales in the DR21 cloud allows more effective mass provision to the center(s) of collapse, e.g. the DR21(OH) and DR21-N clumps.
5.2. Mass accretion on the DR21 ridge
The mass inflow towards the ridge guided by the sub-filaments can be estimated using Ṁ_acc = N πR² v_inf n (e.g. Peretto et al. 2013), with N the number of inflowing filaments, R the radius, n the density and v_inf the velocity of the infalling filaments. Working with 7 molecular inflowing sub-filaments with n = 10⁵ cm⁻³, R = 0.1 pc (Hennemann et al. 2012) and an inflow velocity of 1-2 km s⁻¹, we get an estimated mass accretion rate of 1.3-2.6×10⁻³ M⊙ yr⁻¹, similar to the estimate of Schneider et al. (2010). In Schneider et al. (2010), it was also shown that the F3 sub-filament continues its flow towards the DR21(OH) clump in the ridge and may provide inflow to multiple MDCs. However, this mass inflow rate is roughly an order of magnitude lower than the required mass accretion rate (∼ 10⁻² M⊙ yr⁻¹) to replenish the DR21 ridge within 1 Myr.
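This estimate can be verified with a few lines of Python; the mean molecular weight μ = 2.33 used here to convert the number density into a mass density is our assumption, since the expression quoted above leaves the conversion implicit.

import numpy as np
from astropy import units as u, constants as const

MU = 2.33  # assumed mean molecular weight per H2 molecule

def mdot_filaments(n_fil, radius, n_h2, v_inf):
    """Mass inflow rate along n_fil cylindrical sub-filaments:
    Mdot = N * pi * R^2 * v_inf * rho, with rho = MU * m_p * n_H2."""
    rho = MU * const.m_p * n_h2
    return (n_fil * np.pi * radius**2 * v_inf * rho).to(u.M_sun / u.yr)

for v in [1, 2] * u.km / u.s:
    print(mdot_filaments(7, 0.1 * u.pc, 1e5 * u.cm**-3, v))  # ~1.3 and 2.6e-3 M_sun/yr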
Figure 14. The ratio of the energy terms that provide support, i.e. E_th+kin and E_B, over the gravitational energy as a function of radius in the DR21 cloud. This is shown for [C II] (top), which has the largest linewidth, and C¹⁸O(1-0) (bottom), which has the smallest linewidth. Only when considering [C II] and a k = 0.5 evolution of the magnetic field do the support terms get larger than the gravitational energy at r > 3 pc.
Taking into account that the mass inflow probably also happens off the sub-filaments, we can calculate this contribution of mass inflow to the ridge. We consider inflow over the full surface of the ridge, for which we assume a cylindrical geometry with R = 0.36 pc and L = 4.15 pc (Hennemann et al. 2012). For the inflow, we work with a velocity of ∼ 1-2 km s⁻¹ and n = 10⁴ cm⁻³, which is justified by N_H2 ∼ 3×10²² cm⁻² within the 1 pc vicinity of the ridge. This density is at the high range of densities in the ambient gas, which can be expected if the gas is gravitationally concentrated when approaching the ridge. These values give an estimated mass accretion rate of 0.56-1.1×10⁻² M⊙ yr⁻¹, which can replenish the DR21 ridge in 1-2 Myr. Note that this might be an overestimate up to a factor ∼ 2 as the inflow is predominantly sideways in the sheet and not uniform. 1-2 Myr is an expected lifetime for the DR21 ridge when taking the expression from Clarke & Whitworth (2015) for the timescale of a free-fall longitudinal collapse of a filament

τ_ff = (0.49 + 0.26 A) (G ρ)^(-1/2)    (8)
where A (= 5.8) is the initial aspect ratio of the filament half-length over the filament radius and ρ (= 3.9×10⁻¹⁹ g cm⁻³) the average density of the ridge. This results in a τ_ff of 0.39 Myr. Considering support against longitudinal collapse from the magnetic field, which is perpendicular to the DR21 ridge at the center (Vallée & Fiege 2006; Matthews et al. 2009; Ching et al. 2022), 1-2 Myr likely is an appropriate timescale to replenish the DR21 ridge. We thus propose that the DR21 ridge can be continuously replenished by mass inflow from the surrounding cloud. Only when feedback from the newly formed massive stars disperses the dense ridge and prevents further mass accretion will active high-mass star formation end. This dispersal of the ridge by stellar feedback, which is proposed to happen on relatively short timescales (i.e. ≲ 3 Myr) after the first massive stars form (Hollyhead et al. 2015; Luisi et al. 2021; Chevance et al. 2022; Bonne et al. 2022b), might thus be important to maintain a low star formation efficiency (SFE). Even though stellar feedback might halt mass provision to the ridge, it is observed around the DR21 radiocontinuum source that regions of dense gas a bit further out, in particular the nearby sub-filaments (S & SW), remain present. This might be able to maintain continued lower-mass star formation while stellar feedback is already dispersing the ridge. For example, low-mass cores were detected in the sub-filament F3 (Hennemann et al. 2012).
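The surface inflow estimate and the free-fall timescale of Eq. (8) can be checked in the same way; μ = 2.33 is again an assumed conversion factor.

import numpy as np
from astropy import units as u, constants as const

MU = 2.33  # assumed mean molecular weight per H2 molecule

def mdot_ridge_surface(radius, length, n_h2, v_inf):
    """Inflow through the side surface (2 pi R L) of a cylindrical ridge."""
    rho = MU * const.m_p * n_h2
    return (2 * np.pi * radius * length * v_inf * rho).to(u.M_sun / u.yr)

def tau_ff_filament(aspect_ratio, rho):
    """Longitudinal free-fall collapse time of a filament, Eq. (8)
    (Clarke & Whitworth 2015)."""
    return ((0.49 + 0.26 * aspect_ratio) / np.sqrt(const.G * rho)).to(u.Myr)

print(mdot_ridge_surface(0.36 * u.pc, 4.15 * u.pc, 1e4 * u.cm**-3, 1 * u.km / u.s))
print(tau_ff_filament(5.8, 3.9e-19 * u.g / u.cm**3))   # ~0.39 Myr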
5.3. The link with high-mass star formation
The above scenario implies that high-mass star formation might only occur over an interval of the total star formation time in a cloud. In the DR21 cloud, the formation of high-mass stars would be directly related to the lifetime of the massive ridge. This potentially shorter phase of high-mass star formation might fit with the observed shallow slope of the core mass function (CMF) in high-mass star forming regions (Motte et al. 2018b; Liu et al. 2018; Pouteau et al. 2022) compared to the power law tail of the initial mass function, as well as the slightly shallower initial mass functions (IMF) in young clusters (< 4 Myr) of Cygnus-X (Maia et al. 2016). However, further work with e.g. the ALMA-IMF sample will be needed to better understand the evolution of these top-heavy CMFs in high-mass star forming regions and how it connects with the IMF of young stellar clusters, as well as the evolution of high-mass star forming clouds and filaments (Pouteau et al. 2022).
6. CONCLUSION
We have presented new molecular line data of ¹³CO(1-0), C¹⁸O(1-0), HCO⁺(1-0) and H¹³CO⁺(1-0) from the IRAM 30m telescope and a [C II] map from the FEEDBACK SOFIA legacy survey of the DR21 cloud. In this study, we exclusively focus on the emission between -8 km s⁻¹ and 1 km s⁻¹, which is associated with the DR21 cloud, for all lines. The molecular lines trace the dense ridge (N_H2 > 10²³ cm⁻²) and sub-filaments while [C II] traces the surrounding molecular cloud (i.e. N_H2 ≲ 10²² cm⁻²) that embeds these filamentary structures. We demonstrate that this surrounding cloud seen with [C II] is mostly CO-poor. Both in the molecular and [C II] spectra we find indications of more than one velocity component within the studied velocity range. We also note there is an offset for the peak velocity in the cloud between the [C II] and molecular lines, confirming they trace different regions of the cloud. Inside the ridge, the molecular lines confirm the results from Schneider et al. (2010), while the observations of the sub-filaments show they are systematically redshifted with respect to the ridge. The [C II] kinematics show a redshifted V-like shape, between -3 and -1 km s⁻¹, perpendicular to the ridge that intersects with a more blueshifted velocity component (at v_LSR ∼ -5 km s⁻¹) at the location of the ridge. We confirm that these two intersecting velocity components converge, and that there is continuous mass accretion on the DR21 ridge. The V-shape velocity field embeds all the redshifted sub-filaments, and density requirements to explain the observed [C II] emission indicate that the V-shape has a flattened morphology. Tracing the CO-poor gas, seen in [C II] towards the DR21 cloud, is thus essential for a comprehensive view on cloud evolution and filament formation. Performing a virial analysis, including estimates for the magnetic field support, we find that the cloud should be gravitationally collapsing on pc scales. The surrounding gas traced by [C II] appears to have more support against gravitational collapse, but this does not take into account the organized velocity field and the presence of two velocity components that are not associated with turbulent support.
We propose that there is continuous mass accretion onto the DR21 ridge. This accretion is initiated by the bending of the magnetic field around the ridge due to a large-scale collision. In this collision, an overdensity is compressed into a dense filament/ridge during the interaction with diffuse atomic regions of the colliding cloud. However, because of the high density in the DR21 cloud, gravity is taking over the cloud dynamics, which facilitates rapid mass provision to a (few) center(s) of collapse where massive clusters are forming. We found that the continuous mass inflow, visible thanks to [C II], can replenish the ridge. Therefore we propose that the DR21 ridge will remain present until stellar feedback disperses it and halts massive star formation. Lastly, we note that this scenario is the same as proposed for the low-mass Musca cloud (Bonne et al. 2020a), except for the increasing importance of gravitational collapse in the massive DR21 cloud. This hints that the same scenario might be at the origin of filament formation in both low- and high-mass star forming regions. Furthermore, recent magnetic field and velocity observations of some nearby low- and high-mass star forming regions (e.g. Tahani 2022; Arzoumanian et al. 2022) appear to have a straightforward explanation in this scenario. This leads us to tentatively propose that these types of collisions are a widespread mechanism to initiate a wide variety of star formation activity in the Milky Way.
We thank W. Lim and F. Comeron for insightful comments on an early draft of the paper. Based in part on observations made with the NASA/DLR Stratospheric Observatory for Infrared Astronomy (SOFIA). SOFIA is jointly operated by the Universities Space Research Association, Inc. (USRA), under NASA contract NAS2-97001, and the Deutsches SOFIA Institut (DSI) under DLR contract 50 OK 0901 to the University of Stuttgart. This work was supported by the Agence National de Recherche (ANR/France) and the Deutsche Forschungsgemeinschaft (DFG/Germany) through the project "GENESIS" (ANR-16-CE92-0035-01/DFG1591/2-1) and the DFG project number SFB 956. The FEEDBACK project is supported by the Federal Ministry of Economics and Energy (BMWI) via DLR, Projekt Number 50 OR 2217 (FEEDBACKplus). Financial support for the SOFIA Legacy Program, FEEDBACK, at the University of Maryland was provided by NASA through award SOFO070077 issued by USRA.

Facilities: SOFIA (upGREAT), IRAM 30m, Herschel, JCMT

Software: astropy (Astropy Collaboration et al. 2013)
A. [C II] AND JCMT ¹²CO(3-2) OBSERVATIONS

In Fig. 15 we present a comparison of the [C II] integrated intensity and the JCMT ¹²CO(3-2) integrated intensity. This JCMT data over the full Cygnus-X region will be the topic of a forthcoming paper. Because of its larger column density, the ¹²CO emission is detected further into the cloud than ¹³CO (see Fig. 2). However, ¹²CO is still particularly sensitive to the massive ridge and surrounding sub-filaments and has a clearly distinct morphology compared to [C II]. This confirms that [C II] provides a unique view on the lower-column density gas that is not accessible with molecular lines.
B. THE ¹³CO ABUNDANCE
The calculation of the ¹³CO column density map allows us to estimate the ¹³CO abundance based on the background-subtracted Herschel column density map (Schneider et al. 2016a). This then allows us to explore which regions of the cloud are traced by the observed ¹³CO(1-0) transition. The calculation of the ¹³CO column density follows the calculations presented in Mangum & Shirley (2015). Below we recapitulate the equations and values used. Note that these equations assume that the line is optically thin. This assumption is not necessarily valid for ¹³CO(1-0) over the full map, in particular at the ridge. However, due to the high density in the ridge, which will result in freeze-out of the molecule, the opacity should not be excessive even in these regions of very high column density. When studying the outer regions of the ridge and the ambient gas, C¹⁸O is barely detected, which implies that the ¹³CO opacity will not be significantly above 1 for standard isotopologue ratios. Nonetheless, it has to be kept in mind that this assumption is a limitation of the calculations below. The total column density is then calculated from
N [cm⁻²] = f(T_ex) ∫ T_mb dv    (B1)
Here T_mb is the main beam brightness temperature in Kelvin, v the velocity in km s⁻¹, and f(T_ex) is given by
f(T_ex) = [3hZ / (8π³ μ² J_t)] × exp(hν/kT_ex) / {[1 − exp(−hν/kT_ex)] [J(T_ex) − J(T_BG)]}    (B2)
In this equation, Z is the partition function

Z = 2kT_ex/(hν) + 1/3    (B3)

with k the Boltzmann constant, h the Planck constant, ν the transition frequency (for ¹³CO(1-0) this is 110.201 GHz) and T_ex the excitation temperature, for which we assume a value of 18 K (see also Schneider et al. 2010). Further in the equation for f(T_ex), μ is the dipole moment (for ¹³CO(1-0) this is 0.112 Debye), J_t is the upper value of the rotational quantum number (i.e. 1) and J(T_ex) is given by
J(T_ex) = (hν/k) / [exp(hν/kT_ex) − 1]    (B4)
For the background (BG), a temperature of 2.73 K is assumed based on the cosmic microwave background (CMB).
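For completeness, the sketch below implements Eqs. (B1)-(B4) as written; constants are in CGS units and the dipole moment is the quoted 0.112 Debye (1.12×10⁻¹⁹ esu cm).

import numpy as np

H = 6.626e-27    # Planck constant [erg s]
K_B = 1.381e-16  # Boltzmann constant [erg/K]

def n_13co(int_intensity, t_ex=18.0, nu=110.201e9, mu_d=0.112e-18, j_t=1):
    """13CO column density [cm^-2] from the integrated 13CO(1-0)
    intensity [K km/s], implementing Eqs. (B1)-(B4)."""
    def j_planck(t):                               # Eq. (B4)
        return (H * nu / K_B) / (np.exp(H * nu / (K_B * t)) - 1.0)
    z = 2.0 * K_B * t_ex / (H * nu) + 1.0 / 3.0    # Eq. (B3)
    f = (3.0 * H * z / (8.0 * np.pi**3 * mu_d**2 * j_t)
         * np.exp(H * nu / (K_B * t_ex))
         / ((1.0 - np.exp(-H * nu / (K_B * t_ex)))
            * (j_planck(t_ex) - j_planck(2.73))))  # Eq. (B2), T_BG = 2.73 K
    return f * int_intensity * 1e5                 # Eq. (B1); km/s -> cm/s

print(f"{n_13co(10.0):.2e} cm^-2")  # ~1.6e16 cm^-2 for 10 K km/s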
The estimated ¹³CO abundance map, based on the Herschel column density map, is presented in Fig. 16. Using the assumptions above, ¹³CO is most abundant at the edges of the ridge and in the sub-filaments (F3N, F3S and SW in particular). Inside the ridge, there is a significant decrease with increasing column density, see also Fig. 17, which is expected because of CO depletion at densities n_H2 > 10⁴ cm⁻³. To trace the DR21 ridge kinematics, one thus needs to rely on tracers like HCO⁺, N₂H⁺ and deuterated molecules to probe the very dense gas of the ridge. In the surrounding cloud, at N_H2 < 1.5×10²² cm⁻² where ¹³CO is detected, a lower ¹³CO abundance is also observed.

Figure 16. Map of the ¹³CO abundance on a log scale over the DR21 cloud. This is deduced from the ¹³CO(1-0) intensity, a kinetic temperature of 18 K and the Herschel column density map. The white contours indicate the Herschel column density map at N = 10²² cm⁻², 3×10²² cm⁻², 10²³ cm⁻² and 4×10²³ cm⁻².
Indeed, Fig. 17 shows that the ¹³CO abundance starts decreasing again towards lower column densities. This is the result of an increasing CO-dark gas fraction in the DR21 cloud below these column densities. From Fig. 17, it is also observed that at N_H2 < 1.5×10²² cm⁻² the [C II] abundance continues to increase up to the typical carbon abundance in the ISM. This indicates that [C II] allows us to trace the CO-dark gas in the surrounding DR21 cloud.
C. THE [C II] OPTICAL DEPTH
Figure 17. Top: ¹³CO abundance as a function of the Herschel column density for all pixels in the map. Bottom: The same for the C⁺ abundance.

In order to constrain the [C II] optical depth towards the DR21 cloud, and in particular the ridge, we work with the average spectrum over the ridge in order to reduce the noise rms. This decrease in noise rms is important in order to try and detect the [¹³C II] hyperfine structure lines that are located at -65.2 km s⁻¹ (F = 1-0), 11.7 km s⁻¹ (F = 2-1) and 62.4 km s⁻¹ (F = 1-1) with respect to the [C II] line. The hyperfine structure lines have a relative strength of 0.25 (F = 1-0), 0.625 (F = 2-1) and 0.125 (F = 1-1). Ideally one thus works with the F = 2-1 transition, but this line is very close to the actual [C II] emission and contaminated by the observed bridging with a higher velocity component (Schneider et al. 2023). At the location of the F = 2-1 transition no indication of additional [¹³C II] emission is found, see Fig. 18, but we will not work with this transition because of the confusion from the actual [C II] emission. Therefore we focus on the second brightest transition (F = 1-0) at -65.2 km s⁻¹. Fig. 18 shows that this transition is not detected at a noise rms of 7.6×10⁻¹ K within 0.5 km s⁻¹ for the averaged spectrum. Although not detected, we still try to obtain an idea of the optical depth associated with this noise level. This can be done using the equation
T_mb,12 / T_mb,13 = α (1 − e^(−τ)) / τ    (C5)
with τ the optical depth and α the local ¹²C/¹³C abundance, for which we take a value of 60 (Wilson & Rood 1994), and a correction factor of 0.25 for the relative strength of the F = 1-0 transition. Then using the noise rms value (7.6×10⁻¹ K) results in an optical depth of τ = 1.7. Since the F = 1-0 transition is not detected, τ = 1.7 is rather an upper limit, which indicates that the [C II] line is optically thin or at most marginally optically thick towards the DR21 ridge.
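The inversion of Eq. (C5) is a one-line root-finding problem. In the sketch below, the example [C II] brightness t12 = 88 K is a hypothetical value chosen only so that the quoted noise rms reproduces τ ≈ 1.7, since the actual ridge-averaged brightness is not listed here.

import numpy as np
from scipy.optimize import brentq

def tau_cii(t12, t13, alpha=60.0, strength=0.25):
    """Solve Eq. (C5) for tau, scaling the observed [13CII] F=1-0
    brightness t13 by its relative strength within the multiplet."""
    target = (t12 / t13) * strength / alpha   # = (1 - exp(-tau)) / tau
    return brentq(lambda tau: (1.0 - np.exp(-tau)) / tau - target, 1e-8, 1e3)

# Hypothetical ridge-averaged [CII] peak brightness; t13 is the quoted rms.
print(f"tau = {tau_cii(t12=88.0, t13=0.76):.2f}")   # ~1.7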
To further investigate the potential optical depth of the [C II] line, we also ran RADEX simulations (van der Tak et al. 2007) using input values obtained in this paper for the [C II] emitting region towards the DR21 ridge. These calculations are presented in Tab. 2, which shows that the predicted optical depth is typically below unity. Note that the highest obtained optical depths, which are still only of the order of unity and occur at T_kin ≈ 30 K, correspond to line brightnesses that are too low to be representative of the observed emission. This is in agreement with the results from Sect. 4.1, which indicate that the kinetic temperature in the [C II] emitting gas is likely of the order of 100 K (for which we obtain lower optical depths).
D. [C II] MULTI-COMPONENT FIT
As the [C II] emission likely is not optically thick, we examine here whether more than one Gaussian component is required to explain the observed [C II] spectral line profiles. To fit all spectra over the full map within the -8 to 1 km s⁻¹ velocity range, we employ the Beyond The Spectrum (BTS) fitting tool, which is updated with the Akaike Information Criterion (AIC) (Clarke et al. 2018, 2019). However, we do exclude the DR21 radiocontinuum source and associated outflow from the analysis. As the [C II] noise rms is still relatively high at the native resolution, which can mask multiple velocity components, we smoothed the data to a spatial resolution of 30″. This value was taken to balance significantly reducing the noise rms while avoiding the inclusion of large velocity gradients in a single beam, which would give rise to additional velocity components. Note that the 30″ data is also used in Schneider et al. (2023). In the fitting iteration procedure, when a fit has been done with n velocity peaks, the fit is repeated with n-1 and n+1 velocity peaks and the corrected AIC is calculated for each new fit. Then the fit with the smallest number of velocity peaks is chosen, unless the corrected AIC of a fit with a higher number of velocity peaks is smaller by ∆AIC > 10 (see e.g. Clarke et al. 2019 for the equations); a minimal sketch of this selection logic is given at the end of App. E. This process is iterated until the number of peaks does not change anymore. The defined ∆AIC parameter was found to be robust against overfitting noisy spectra, in particular for large spectral cubes (Clarke et al. in prep.). The resulting number of velocity components over the map in the -8 to 1 km s⁻¹ velocity range is presented in Fig. 19. This demonstrates that the regions with two velocity components are in particular located at the edge of the DR21 ridge. To test this result, we also fitted the [C II] data cube between -8 and 1 km s⁻¹ at the 20″ resolution. This demonstrated a similar spatial distribution of the two velocity components. Only, because of the higher noise rms, the procedure found fewer pixels with two velocity components. Comparing the number of velocity components with the [C II] velocity field in Fig. 20, we find that regions with two velocity components have a close spatial correlation with the regions that have a blueshifted velocity in [C II]. This confirms that the location of the DR21 ridge is associated with the intersection of the proposed V-shape accretion flow with an additional velocity component. Lastly, Fig. 21 shows the velocity distribution of the two fitted velocity components, while Fig. 22 shows the associated maps for the velocity and linewidth. The more blueshifted velocity component is typically found within the range of -6 km s⁻¹ to -4 km s⁻¹ while the redshifted component varies between -4 and -1 km s⁻¹. The typical velocity difference of the two converging components is between 2 and 3 km s⁻¹.

E. C¹⁸O(1-0) AND H¹³CO⁺(1-0) MULTI-COMPONENT FIT

We also performed a multi-component fit with the BTS fitting tool to the C¹⁸O(1-0) and H¹³CO⁺(1-0) molecular lines between -8 and 1 km s⁻¹, as they are not affected by strong opacity or self-absorption effects. This shows that there are significant regions that host two resolved velocity components in both lines, see Figs. 23 & 26. These regions are particularly in and at the edge of the DR21 ridge. It is noteworthy when inspecting these figures that the regions where the sub-filaments connect to the ridge, which are associated with a high linewidth (FWHM > 3 km s⁻¹), are fitted with a single velocity component.
Either this suggests that several velocity components blend closely together there, or that there might be a rapid reorientation in the line-of-sight velocity of the inflow or accretion shocks. From the velocity field maps, it can be observed that these regions are associated with significant changes in velocity, see Figs. 25 & 28. Inspecting the velocity distribution for the regions with two fitted components, see Figs. 24 & 27, it is found that the interval for the redshifted velocity component is very similar for [C II] and the molecular lines. This tends to confirm that the redshifted filaments are embedded in the redshifted inflowing mass reservoir. However, for the blueshifted velocity components, it appears that the [C II] emission is shifted towards more blueshifted velocities with respect to the emission in the molecular lines. This shift for the blueshifted [C II] velocity component fits with the fact that there are no blueshifted sub-filaments, and thus that the blueshifted envelope of the DR21 ridge has a different morphology than the redshifted envelope.
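To make the selection procedure explicit, here is a minimal, self-contained sketch of AIC-based Gaussian component selection; it is illustrative only, is not the BTS implementation, and all function names and starting values are our own choices.

import numpy as np
from scipy.optimize import curve_fit

def multi_gauss(v, *p):
    """Sum of Gaussians; p holds (amplitude, centroid, sigma) triples."""
    model = np.zeros_like(v, dtype=float)
    for amp, v0, sig in zip(p[0::3], p[1::3], p[2::3]):
        model += amp * np.exp(-0.5 * ((v - v0) / sig) ** 2)
    return model

def aicc(residuals, n_par):
    """Corrected Akaike Information Criterion for a least-squares fit."""
    n = residuals.size
    aic = n * np.log(np.sum(residuals**2) / n) + 2 * n_par
    return aic + 2 * n_par * (n_par + 1) / (n - n_par - 1)

def select_components(v, spec, n_max=3, delta_aic=10.0):
    """Prefer the fit with fewer Gaussian components unless adding one
    lowers the corrected AIC by more than delta_aic."""
    best_n, best_aic = 0, np.inf
    for n in range(1, n_max + 1):
        # spread the initial centroids over the fitted velocity range
        p0 = []
        for v0 in np.linspace(v.min() + 1, v.max() - 1, n):
            p0 += [spec.max() / n, v0, 1.0]
        try:
            popt, _ = curve_fit(multi_gauss, v, spec, p0=p0, maxfev=10000)
        except RuntimeError:
            break
        crit = aicc(spec - multi_gauss(v, *popt), 3 * n)
        if crit < best_aic - delta_aic:
            best_n, best_aic = n, crit
        else:
            break
    return best_n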
Figure 1. Middle left: Herschel column density map.
Figure 3. Line integrated emission from -8 to 1 km s⁻¹ for [C II].
Figure 4. Average spectrum of the DR21 ridge (excluding the DR21 continuum source) displaying multiple velocity components between -10 and 20 km s⁻¹. The focus of this paper is on the emission between -8 and 1 km s⁻¹ alone, indicated by the dashed vertical lines, which is the velocity range associated with the DR21 ridge (Schneider et al. 2006, 2010).
Figure 3 presents the SOFIA [C II] observations. The critical density of the [C II] 158 µm line is typically only a few 10³ cm⁻³ for collisions with atomic and molecular hydrogen (i.e. H & H₂; Goldsmith et al. 2012), and it predominantly traces the photodissociated layers at the interface between the molecular cloud and ionized gas.

Footnote 5: Herschel imaging survey of OB young stellar objects (Motte et al. 2010).
Figure 5. ¹³CO(1-0) channel maps (IRAM) between -6.0 and -0.4 km s⁻¹ of the DR21 cloud. The grey contours trace the Herschel column density at N_H2 = 10²², 2×10²², 5×10²², 10²³ and 5×10²³ cm⁻², to highlight the ridge and sub-filaments. Most sub-filaments become particularly visible at v_lsr > -3 km s⁻¹.
Figure 6. Top left: Velocity map of ¹³CO(1-0) over the DR21 ridge and sub-filaments, obtained from fitting a single Gaussian to the spectra. The uncertainty of the centroid velocity is below 0.15 km s⁻¹ over the entire map. Overplotted on the map are the Herschel column density contours at N_H2 = 10²², 2×10²², 5×10²², 10²³ and 5×10²³ cm⁻², which indicate the ridge as well as the surrounding sub-filaments. The sub-filaments are indicated with their name from Hennemann et al. (2012). Top right: The same for HCO⁺(1-0), without fitting inside the DR21 ridge because of the HCO⁺(1-0) self-absorption there. Bottom left: The same for C¹⁸O(1-0). Bottom right: The same for H¹³CO⁺.
Figure 7. Left: Velocity field observed with [C II]. The black contours indicate the Herschel column density contours at N_H2 = 10²², 2×10²², 5×10²², 10²³ and 5×10²³ cm⁻². The sub-filaments are indicated with their name. Right: The same for the [C II] FWHM.
Figure 8. Left: FWHM map of C¹⁸O(1-0) over the DR21 ridge and sub-filaments. The uncertainty of the FWHM is generally below 0.1 km s⁻¹ and always below 0.4 km s⁻¹ over the entire map. Overplotted on the map are the Herschel column density contours at N_H2 = 10²², 2×10²², 5×10²², 10²³ and 5×10²³ cm⁻². Right: The same for H¹³CO⁺(1-0).
Figure 10. Left: HCO⁺(1-0) PV diagram with the green and red contours indicating the emission from C¹⁸O(1-0) and H¹³CO⁺(1-0), respectively. The PV diagram was constructed perpendicular to the filament from the region indicated by the red box in Fig. 3 by averaging the intensities along the declination axis in this box. The black triangles follow the peak brightness velocity of ¹²CO(3-2) as a function of the distance (r) from the center of the ridge. This indicates that the gas is redshifted at both sides of the ridge, which is at a velocity of ∼ -3 km s⁻¹ (indicated by the dashed horizontal line). At the ridge, there is a large increase in linewidth, associated with the velocity gradients observed over the ridge in Fig. 6. This fits with the observed HCO⁺(1-0) blue asymmetry in the PV diagram due to self-absorption in a collapsing cloud. Right: The [C II] PV diagram perpendicular to the DR21 ridge over the same region with the blue line following the peak brightness velocity. On large scales (|r| > 1 pc), the [C II] emission shows a redshifted V-shape followed by a sharp transition to more blueshifted emission at |r| < 1 pc, which makes the velocity profile deviate from a pure V-shape. To highlight this, we fitted a V-shape to the PV diagram, which is indicated in magenta. We interpret this to be due to a second, more spatially concentrated velocity component that is associated with the formation of the DR21 ridge. The Pearson coefficients for the central velocity as a function of radius are -0.76 and 0.89 at the east and west of the ridge, respectively. This points to a relatively well organized, but not perfect, correlation for the central velocity with radius.
Figure 11. Spitzer 8 µm image of the DR21 cloud overlaid with velocity contours of C¹⁸O(1-0) (left) and [C II] (right) at -1.6 km s⁻¹ and -4.6 km s⁻¹, and overlaid with the Herschel column density (middle). The northern and southern part of the ridge shows 8 µm extinction, i.e. is 'IR-dark', while in the middle there is strong emission at the locations of DR21 and DR21(OH). Some protostellar sources show up as localized emission spots. Outside of the ridge, the 8 µm emission is extended, rather diffuse, with some feather-like features. The contours of C¹⁸O(1-0) indicate that the more blueshifted emission is located towards the region with IR emission in the ridge, while the redshifted emission is concentrated towards the strong 8 µm extinction. The structure of the [C II] emission is more complex as it is more diffuse and also affected by the feedback in DR21 and local protostellar sources (e.g. around DR21(OH)). However, there is overall a correspondence on the large scales of the blueshifted gas to bright Spitzer 8 µm regions and redshifted gas to 8 µm extinction. The middle panel indicates the location of DR21, DR21(OH) and DR21-N for reference.
Figure 15. The [C II] integrated intensity map from -8 to 1 km s⁻¹ as in Fig. 3. Here, the contours indicate the JCMT ¹²CO(3-2) emission starting at 20 K km s⁻¹ with incremental steps of 20 K km s⁻¹.
Figure 18. Left: [¹³C II] F = 1-0 transition shifted to the same velocity as the [C II] line. There is no detected counterpart for this [¹³C II] transition at the velocities of the DR21 ridge component. Middle: The same for the [¹³C II] F = 1-1 transition. Right: The same for the [¹³C II] F = 2-1 transition. For this transition, there is confusion from the higher velocity components (Schneider et al. 2023). Therefore, we do not use this transition here.
Figure 19. Number of fitted velocity components within the -8 to 1 km s⁻¹ velocity range. The white box indicates the region around the DR21 radio continuum source and its associated outflow, which artificially increases the number of fitted velocity components. The white contours indicate the Herschel column density at N_H2 = 10²², 3×10²², 10²³ and 4×10²³ cm⁻².
Figure 20. Left: [C II] velocity field from Fig. 7 with the black contours indicating the regions that are fitted with two velocity components between v_LSR = -8 and 1 km s⁻¹. Right: The same for the [C II] FWHM map.
Figure 21. Peak velocity distribution of the blueshifted (blue) and redshifted (red) velocity components of the double Gaussian fit to the data.
Figure 22. Top: (left) The [C II] velocity field of the DR21 cloud when taking into account all emission between -8 and 1 km s⁻¹. The black contours indicate the Herschel column density at N_H2 = 10²², 3×10²², 10²³ and 4×10²³ cm⁻². (middle left) The [C II] velocity field in the pixels that have only one velocity component. (middle right) The [C II] velocity field of the blueshifted component in the pixels that were fitted with two Gaussian components. (right) The [C II] velocity field of the redshifted component in the pixels that were fitted with two Gaussian components. Bottom: The same as above for the FWHM.
Figure 23. Left: The same as Fig. 19 for C¹⁸O(1-0), with the column density contours in black. Middle & right: The same as Fig. 20 for C¹⁸O(1-0).
Figure 24. The same as Fig. 21 for C¹⁸O(1-0). The grey histogram indicates the peak velocity distribution for the pixels with a single fitted velocity component.
Figure 26. The same as Fig. 23 for H¹³CO⁺(1-0).

Figure 27. The same as Fig. 24 for H¹³CO⁺(1-0).
Table 1. Summary of the observational datasets.

Instrument       Species        λ [µm]   ν [GHz]   Δv [km/s]   Θ ["]
IRAM EMIR        ¹³CO(1-0)      2720.4   110.20    0.2         23
IRAM EMIR        C¹⁸O(1-0)      2730.8   109.78    0.2         23
IRAM EMIR        HCO⁺(1-0)      3361.3   89.19     0.2         28
IRAM EMIR        H¹³CO⁺(1-0)    3455.7   86.75     0.2         29
SOFIA upGREAT    [C II]         157.74   1900.54   0.5         14
JCMT HARP        ¹²CO(3→2)      869.0    345.80    0.42        15
Table 2. Results produced with RADEX for conditions (N(C⁺), n_H2, T_kin, FWHM) that could be representative of the [C II] emitting gas. T_R is the radiation temperature, which is equivalent to the main beam temperature under the assumption that the beam filling is unity (this should be a reasonable assumption seeing the extended [C II] emission). The optical depth (τ) predictions are basically always below unity, which suggests relatively optically thin [C II] emission. Columns: N(C⁺) (10¹⁷ cm⁻²), T_kin (K), n_H2 (cm⁻³), FWHM (km s⁻¹), T_R, τ.
This is similar to the size in the plane of the sky or assumes it has a slightly flattened morphology.
REFERENCES

Abe, D., Inoue, T., Inutsuka, S.-i., & Matsumoto, T. 2021, ApJ, 916, 83, doi: 10.3847/1538-4357/ac07a1
Arzoumanian, D., Russeil, D., Zavagno, A., et al. 2022, A&A, 660, A56, doi: 10.1051/0004-6361/202141699
Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33, doi: 10.1051/0004-6361/201322068
Balfour, S. K., Whitworth, A. P., & Hubber, D. A. 2017, MNRAS, 465, 3483, doi: 10.1093/mnras/stw2956
Basu, S. 1997, ApJ, 485, 240, doi: 10.1086/304420
Beerer, I. M., Koenig, X. P., Hora, J. L., et al. 2010, ApJ, 720, 679, doi: 10.1088/0004-637X/720/1/679
Bertoldi, F., & McKee, C. F. 1992, ApJ, 395, 140, doi: 10.1086/171638
Beuther, H., Henning, T., Linz, H., et al. 2015, A&A, 581, A119, doi: 10.1051/0004-6361/201526759
Bisbas, T. G., Tanaka, K. E. I., Tan, J. C., Wu, B., & Nakamura, F. 2017, ApJ, 850, 23, doi: 10.3847/1538-4357/aa94c5
Bisbas, T. G., Tan, J. C., Csengeri, T., et al. 2018, MNRAS, 478, L54, doi: 10.1093/mnrasl/sly039
Bonne, L., Peretto, N., Duarte-Cabral, A., et al. 2022a, A&A, 665, A22, doi: 10.1051/0004-6361/202142154
Bonne, L., Bontemps, S., Schneider, N., et al. 2020a, A&A, 644, A27, doi: 10.1051/0004-6361/202038281
Bonne, L., Schneider, N., Bontemps, S., et al. 2020b, A&A, 641, A17, doi: 10.1051/0004-6361/201937104
Bonne, L., Schneider, N., García, P., et al. 2022b, ApJ, 935, 171, doi: 10.3847/1538-4357/ac8052
Bonnell, I. A., & Bate, M. R. 2006, MNRAS, 370, 488, doi: 10.1111/j.1365-2966.2006.10495.x
Bonnell, I. A., Bate, M. R., Clarke, C. J., & Pringle, J. E. 2001, MNRAS, 323, 785, doi: 10.1046/j.1365-8711.2001.04270.x
Bonnell, I. A., Vine, S. G., & Bate, M. R. 2004, MNRAS, 349, 735, doi: 10.1111/j.1365-2966.2004.07543.x
Bontemps, S., Motte, F., Csengeri, T., & Schneider, N. 2010, A&A, 524, A18, doi: 10.1051/0004-6361/200913286
Bracco, A., Jelić, V., Marchal, A., et al. 2020, A&A, 644, L3, doi: 10.1051/0004-6361/202039283
Chevance, M., Kruijssen, J. M. D., Krumholz, M. R., et al. 2022, MNRAS, 509, 272, doi: 10.1093/mnras/stab2938
Ching, T.-C., Lai, S.-P., Zhang, Q., et al. 2017, ApJ, 838, 121, doi: 10.3847/1538-4357/aa65cc
Ching, T.-C., Qiu, K., Li, D., et al. 2022, ApJ, 941, 122, doi: 10.3847/1538-4357/ac9dfb
Clarke, S. D., & Whitworth, A. P. 2015, MNRAS, 449, 1819, doi: 10.1093/mnras/stv393
Clarke, S. D., Whitworth, A. P., Spowage, R. L., et al. 2018, MNRAS, 479, 1722, doi: 10.1093/mnras/sty1675
Clarke, S. D., Williams, G. M., Ibáñez-Mejía, J. C., & Walch, S. 2019, MNRAS, 484, 4024, doi: 10.1093/mnras/stz248
Crutcher, R. M., Wandelt, B., Heiles, C., Falgarone, E., & Troland, T. H. 2010, ApJ, 725, 466, doi: 10.1088/0004-637X/725/1/466
Csengeri, T., Bontemps, S., Schneider, N., Motte, F., & Dib, S. 2011a, A&A, 527, A135, doi: 10.1051/0004-6361/201014984
Csengeri, T., Bontemps, S., Schneider, N., et al. 2011b, ApJL, 740, L5, doi: 10.1088/2041-8205/740/1/L5
Figure 28. The same as Fig. 22 for H¹³CO⁺(1-0).
Csengeri, T., Urquhart, J. S., Schuller, F., et al. 2014, A&A, 565, A75, doi: 10.1051/0004-6361/201322434
Cyganowski, C. J., Reid, M. J., Fish, V. L., & Ho, P. T. P. 2003, ApJ, 596, 344, doi: 10.1086/377688
Dickel, J. R., Dickel, H. R., & Wilson, W. J. 1978, ApJ, 223, 840, doi: 10.1086/156317
Dobashi, K., Shimoikura, T., Katakura, S., Nakamura, F., & Shimajiri, Y. 2019, PASJ, 71, S12, doi: 10.1093/pasj/psz041
Dobbs, C. L., Liow, K. Y., & Rieder, S. 2020, MNRAS, 496, L1, doi: 10.1093/mnrasl/slaa072
Dobbs, C. L., Pringle, J. E., & Burkert, A. 2012, MNRAS, 425, 2157, doi: 10.1111/j.1365-2966.2012.21558.x
Downes, D., & Rinehart, R. 1966, ApJ, 144, 937, doi: 10.1086/148691
Duarte-Cabral, A., Bontemps, S., Motte, F., et al. 2014, A&A, 570, A1, doi: 10.1051/0004-6361/201423677
—. 2013, A&A, 558, A125, doi: 10.1051/0004-6361/201321393
Fukui, Y., Habe, A., Inoue, T., Enokiya, R., & Tachihara, K. 2021, PASJ, 73, S1, doi: 10.1093/pasj/psaa103
Galván-Madrid, R., Zhang, Q., Keto, E., et al. 2010, ApJ, 725, 17, doi: 10.1088/0004-637X/725/1/17
Girichidis, P., Federrath, C., Banerjee, R., & Klessen, R. S. 2012, MNRAS, 420, 613, doi: 10.1111/j.1365-2966.2011.20073.x
Glover, S. C. O., & Clark, P. C. 2012, MNRAS, 421, 9, doi: 10.1111/j.1365-2966.2011.19648.x
Goldsmith, P. F., Langer, W. D., Pineda, J. L., & Velusamy, T. 2012, ApJS, 203, 13, doi: 10.1088/0067-0049/203/1/13
Gómez, G. C., & Vázquez-Semadeni, E. 2014, ApJ, 791, 124, doi: 10.1088/0004-637X/791/2/124
Gottschalk, M., Kothes, R., Matthews, H. E., Landecker, T. L., & Dent, W. R. F. 2012, A&A, 541, A79, doi: 10.1051/0004-6361/201118600
Graf, U. U., Simon, R., Stutzki, J., et al. 2012, A&A, 542, L16, doi: 10.1051/0004-6361/201218930
Guan, X., Stutzki, J., Graf, U. U., et al. 2012, A&A, 542, L4, doi: 10.1051/0004-6361/201218925
Guevara, C., Stutzki, J., Ossenkopf-Okada, V., et al. 2020, A&A, 636, A16, doi: 10.1051/0004-6361/201834380
Hacar, A., Kainulainen, J., Tafalla, M., Beuther, H., & Alves, J. 2016, A&A, 587, A97, doi: 10.1051/0004-6361/201526015
Hartmann, L., Ballesteros-Paredes, J., & Bergin, E. A. 2001, ApJ, 562, 852, doi: 10.1086/323863
Hartmann, L., & Burkert, A. 2007, ApJ, 654, 988, doi: 10.1086/509321
Haworth, T. J., Tasker, E. J., Fukui, Y., et al. 2015, MNRAS, 450, 10, doi: 10.1093/mnras/stv639
Hennebelle, P., Commerçon, B., Joos, M., et al. 2011, A&A, 528, A72, doi: 10.1051/0004-6361/201016052
Hennemann, M., Motte, F., Schneider, N., et al. 2012, A&A, 543, L3, doi: 10.1051/0004-6361/201219429
Hill, T., Motte, F., Didelon, P., et al. 2011, A&A, 533, A94, doi: 10.1051/0004-6361/201117315
—. 2012, A&A, 542, A114, doi: 10.1051/0004-6361/201219009
Hollenbach, D. J., & Tielens, A. G. G. M. 1999, Reviews of Modern Physics, 71, 173, doi: 10.1103/RevModPhys.71.173
Hollyhead, K., Bastian, N., Adamo, A., et al. 2015, MNRAS, 449, 1106, doi: 10.1093/mnras/stv331
Hora, J. L., Bontemps, S., Megeath, S. T., et al. 2009, in American Astronomical Society Meeting Abstracts, Vol. 213, 356.01
Immer, K., Cyganowski, C., Reid, M. J., & Menten, K. M. 2014, A&A, 563, A39, doi: 10.1051/0004-6361/201321736
Inoue, T., & Fukui, Y. 2013, ApJL, 774, L31, doi: 10.1088/2041-8205/774/2/L31
Inoue, T., Hennebelle, P., Fukui, Y., et al. 2018, PASJ, 70, S53, doi: 10.1093/pasj/psx089
Jackson, J. M., Whitaker, J. S., Rathborne, J. M., et al. 2019, ApJ, 870, 5, doi: 10.3847/1538-4357/aaef84
Kabanovic, S., Schneider, N., Ossenkopf-Okada, V., et al. 2022, A&A, 659, A36, doi: 10.1051/0004-6361/202142575
. M J Kaufman, M G Wolfire, D J Hollenbach, 10.1086/503596ApJ. 644283Kaufman, M. J., Wolfire, M. G., & Hollenbach, D. J. 2006, ApJ, 644, 283, doi: 10.1086/503596
. J Keown, J Di Francesco, E Rosolowsky, 10.3847/1538-4357/ab3e76ApJ. 8844Keown, J., Di Francesco, J., Rosolowsky, E., et al. 2019, ApJ, 884, 4, doi: 10.3847/1538-4357/ab3e76
. R S Klessen, P Hennebelle, 10.1051/0004-6361/200913780A&A. 52017Klessen, R. S., & Hennebelle, P. 2010, A&A, 520, A17, doi: 10.1051/0004-6361/200913780
. C Kramer, J Penalver, A Greve, Kramer, C., Penalver, J., & Greve, A. 2013
. M S N Kumar, C J Davis, J M C Grave, B Ferreira, D Froebrich, 10.1111/j.1365-2966.2006.11145.xMNRAS. 37454Kumar, M. S. N., Davis, C. J., Grave, J. M. C., Ferreira, B., & Froebrich, D. 2007, MNRAS, 374, 54, doi: 10.1111/j.1365-2966.2006.11145.x
. W Lim, F Nakamura, B Wu, 10.1093/pasj/psaa035PASJ. 73239Lim, W., Nakamura, F., Wu, B., et al. 2021, PASJ, 73, S239, doi: 10.1093/pasj/psaa035
. K Y Liow, C L Dobbs, 10.1093/mnras/staa2857MNRAS. 4991099Liow, K. Y., & Dobbs, C. L. 2020, MNRAS, 499, 1099, doi: 10.1093/mnras/staa2857
. M Liu, J C Tan, Y Cheng, S Kong, 10.3847/1538-4357/aacb7cApJ. 862105Liu, M., Tan, J. C., Cheng, Y., & Kong, S. 2018, ApJ, 862, 105, doi: 10.3847/1538-4357/aacb7c
. M Luisi, L D Anderson, N Schneider, 10.1126/sciadv.abe9511Science Advances. 79511Luisi, M., Anderson, L. D., Schneider, N., et al. 2021, Science Advances, 7, eabe9511, doi: 10.1126/sciadv.abe9511
. F F S Maia, E Moraux, I Joncour, 10.1093/mnras/stw450MNRAS. 4583027Maia, F. F. S., Moraux, E., & Joncour, I. 2016, MNRAS, 458, 3027, doi: 10.1093/mnras/stw450
. J G Mangum, Y L Shirley, 10.1086/680323PASP. 127Mangum, J. G., & Shirley, Y. L. 2015, PASP, 127, 266, doi: 10.1086/680323
. A P Marston, W T Reach, A Noriega-Crespo, 10.1086/422817ApJS. 154333Marston, A. P., Reach, W. T., Noriega-Crespo, A., et al. 2004, ApJS, 154, 333, doi: 10.1086/422817
. B C Matthews, C A Mcphee, L M Fissel, R L Curran, 10.1088/0067-0049/182/1/143ApJS. 182143Matthews, B. C., McPhee, C. A., Fissel, L. M., & Curran, R. L. 2009, ApJS, 182, 143, doi: 10.1088/0067-0049/182/1/143
. F Motte, S Bontemps, F Louvet, 10.1146/annurev-astro-091916-055235ARA&A. 5641Motte, F., Bontemps, S., & Louvet, F. 2018a, ARA&A, 56, 41, doi: 10.1146/annurev-astro-091916-055235
. F Motte, S Bontemps, P Schilke, 10.1051/0004-6361:20077843A&A. 4761243Motte, F., Bontemps, S., Schilke, P., et al. 2007, A&A, 476, 1243, doi: 10.1051/0004-6361:20077843
. F Motte, A Zavagno, S Bontemps, 10.1051/0004-6361/201014690A&A. 51877Motte, F., Zavagno, A., Bontemps, S., et al. 2010, A&A, 518, L77, doi: 10.1051/0004-6361/201014690
. F Motte, T Nony, F Louvet, 10.1038/s41550-018-0452-xNature Astronomy. 2478Motte, F., Nony, T., Louvet, F., et al. 2018b, Nature Astronomy, 2, 478, doi: 10.1038/s41550-018-0452-x
. F Motte, S Bontemps, T Csengeri, 10.1051/0004-6361/202141677A&A. 6628Motte, F., Bontemps, S., Csengeri, T., et al. 2022, A&A, 662, A8, doi: 10.1051/0004-6361/202141677
. P Palmeirim, P André, J Kirk, 10.1051/0004-6361/201220500A&A. 55038Palmeirim, P., André, P., Kirk, J., et al. 2013, A&A, 550, A38, doi: 10.1051/0004-6361/201220500
. N Peretto, P André, A Belloche, 10.1051/0004-6361:20053324A&A. 445979Peretto, N., André, P., & Belloche, A. 2006, A&A, 445, 979, doi: 10.1051/0004-6361:20053324
. N Peretto, P Hennebelle, P André, 10.1051/0004-6361:20065653A&A. 464983Peretto, N., Hennebelle, P., & André, P. 2007, A&A, 464, 983, doi: 10.1051/0004-6361:20065653
. N Peretto, C Lenfestey, G A Fuller, 10.1051/0004-6361/201527064A&A. 59072Peretto, N., Lenfestey, C., Fuller, G. A., et al. 2016, A&A, 590, A72, doi: 10.1051/0004-6361/201527064
. N Peretto, G A Fuller, A Duarte-Cabral, 10.1051/0004-6361/201321318A&A. 555112Peretto, N., Fuller, G. A., Duarte-Cabral, A., et al. 2013, A&A, 555, A112, doi: 10.1051/0004-6361/201321318
. N Peretto, G A Fuller, P André, 10.1051/0004-6361/201322172A&A. 56183Peretto, N., Fuller, G. A., André, P., et al. 2014, A&A, 561, A83, doi: 10.1051/0004-6361/201322172
. T Peters, R Banerjee, R S Klessen, M.-M Mac Low, 10.1088/0004-637X/729/1/72ApJ. 72972Peters, T., Banerjee, R., Klessen, R. S., & Mac Low, M.-M. 2011, ApJ, 729, 72, doi: 10.1088/0004-637X/729/1/72
. T Peters, R S Klessen, M.-M Mac Low, R Banerjee, 10.1088/0004-637X/725/1/134ApJ. 725134Peters, T., Klessen, R. S., Mac Low, M.-M., & Banerjee, R. 2010, ApJ, 725, 134, doi: 10.1088/0004-637X/725/1/134
M W Pound, M G Wolfire, Astronomical Society of the Pacific Conference Series. R. W. Argyle, P. S. Bunclark, & J. R. Lewis394654Astronomical Data Analysis Software and Systems XVIIPound, M. W., & Wolfire, M. G. 2008, in Astronomical Society of the Pacific Conference Series, Vol. 394, Astronomical Data Analysis Software and Systems XVII, ed. R. W. Argyle, P. S. Bunclark, & J. R. Lewis, 654
. M W Pound, M G Wolfire, 10.3847/1538-3881/ac9b1fAJ. 16525Pound, M. W., & Wolfire, M. G. 2023, AJ, 165, 25, doi: 10.3847/1538-3881/ac9b1f
. Y Pouteau, F Motte, T Nony, 10.1051/0004-6361/202142951A&A. 66426Pouteau, Y., Motte, F., Nony, T., et al. 2022, A&A, 664, A26, doi: 10.1051/0004-6361/202142951
Star Formation and Young Clusters in Cygnus. B Reipurth, N Schneider, B. Reipurth436Reipurth, B., & Schneider, N. 2008, Star Formation and Young Clusters in Cygnus, ed. B. Reipurth, Vol. 4, 36
. C Risacher, R Güsten, J Stutzki, 10.1142/S2251171718400147Journal of Astronomical Instrumentation. 71840014Risacher, C., Güsten, R., Stutzki, J., et al. 2018, Journal of Astronomical Instrumentation, 7, 1840014, doi: 10.1142/S2251171718400147
. D Russeil, A Zavagno, F Motte, 10.1051/0004-6361/200913632A&A. 51555Russeil, D., Zavagno, A., Motte, F., et al. 2010, A&A, 515, A55, doi: 10.1051/0004-6361/200913632
. K L J Rygl, A Brunthaler, A Sanna, 10.1051/0004-6361/201118211A&A. 53979Rygl, K. L. J., Brunthaler, A., Sanna, A., et al. 2012, A&A, 539, A79, doi: 10.1051/0004-6361/201118211
. P Sanhueza, Y Contreras, B Wu, 10.3847/1538-4357/ab45e9ApJ. 886102Sanhueza, P., Contreras, Y., Wu, B., et al. 2019, ApJ, 886, 102, doi: 10.3847/1538-4357/ab45e9
. N Schneider, S Bontemps, R Simon, 10.1051/0004-6361:20065088A&A. 458Schneider, N., Bontemps, S., Simon, R., et al. 2006, A&A, 458, 855, doi: 10.1051/0004-6361:20065088
. N Schneider, T Csengeri, S Bontemps, 10.1051/0004-6361/201014481A&A. 52049Schneider, N., Csengeri, T., Bontemps, S., et al. 2010, A&A, 520, A49, doi: 10.1051/0004-6361/201014481
. N Schneider, R Simon, S Bontemps, F Comerón, F Motte, 10.1051/0004-6361:20077540A&A. 474Schneider, N., Simon, R., Bontemps, S., Comerón, F., & Motte, F. 2007, A&A, 474, 873, doi: 10.1051/0004-6361:20077540
. N Schneider, T Csengeri, R S Klessen, 10.1051/0004-6361/201424375A&A. 57829Schneider, N., Csengeri, T., Klessen, R. S., et al. 2015, A&A, 578, A29, doi: 10.1051/0004-6361/201424375
. N Schneider, S Bontemps, F Motte, 10.1051/0004-6361/201628328doi: 10.1051/0004-6361/201628328A&A. 58740A&ASchneider, N., Bontemps, S., Motte, F., et al. 2016a, A&A, 587, A74, doi: 10.1051/0004-6361/201527144 -. 2016b, A&A, 591, A40, doi: 10.1051/0004-6361/201628328
. N Schneider, R Simon, C Guevara, 10.1088/1538-3873/aba840PASP. 132104301Schneider, N., Simon, R., Guevara, C., et al. 2020, PASP, 132, 104301, doi: 10.1088/1538-3873/aba840
. N Schneider, V Ossenkopf-Okada, S Clarke, 10.1051/0004-6361/202039610A&A. 666165Schneider, N., Ossenkopf-Okada, V., Clarke, S., et al. 2022, A&A, 666, A165, doi: 10.1051/0004-6361/202039610
. N Schneider, L Bonne, S Bontemps, 10.1038/s41550-023-01901-5Nature Astronomy. Schneider, N., Bonne, L., Bontemps, S., et al. 2023, Nature Astronomy, doi: 10.1038/s41550-023-01901-5
. Y L Shirley, 10.1086/680342PASP. 127299Shirley, Y. L. 2015, PASP, 127, 299, doi: 10.1086/680342
. M Tahani, 10.3389/fspas.2022.940027Frontiers in Astronomy and Space Sciences. 9940027Tahani, M. 2022, Frontiers in Astronomy and Space Sciences, 9, 940027, doi: 10.3389/fspas.2022.940027
. M Tahani, R Plume, J C Brown, J D Soler, J Kainulainen, 10.1051/0004-6361/201936280A&A. 63268Tahani, M., Plume, R., Brown, J. C., Soler, J. D., & Kainulainen, J. 2019, A&A, 632, A68, doi: 10.1051/0004-6361/201936280
. M Tahani, W Lupypciw, J Glover, 10.1051/0004-6361/202141170A&A. 660Tahani, M., Lupypciw, W., Glover, J., et al. 2022, A&A, 660, A97, doi: 10.1051/0004-6361/202141170
J C Tan, M T Beltrán, P Caselli, 10.2458/azu_uapress_9780816531240-ch007Protostars and Planets VI. H. Beuther, R. S. Klessen, C. P. Dullemond, & T. Henning149Tan, J. C., Beltrán, M. T., Caselli, P., et al. 2014, in Protostars and Planets VI, ed. H. Beuther, R. S. Klessen, C. P. Dullemond, & T. Henning, 149, doi: 10.2458/azu uapress 9780816531240-ch007
. R Teague, 10.3847/2515-5172/ab2125Research Notes of the American Astronomical Society. 374Teague, R. 2019, Research Notes of the American Astronomical Society, 3, 74, doi: 10.3847/2515-5172/ab2125
. J Tigé, F Motte, D Russeil, 10.1051/0004-6361/201628989A&A. 60277Tigé, J., Motte, F., Russeil, D., et al. 2017, A&A, 602, A77, doi: 10.1051/0004-6361/201628989
. M Tiwari, R Karim, M W Pound, 10.3847/1538-4357/abf6ceApJ. 914117Tiwari, M., Karim, R., Pound, M. W., et al. 2021, ApJ, 914, 117, doi: 10.3847/1538-4357/abf6ce
. A Traficante, G A Fuller, A Duarte-Cabral, 10.1093/mnras/stz3344MNRAS. 4914310Traficante, A., Fuller, G. A., Duarte-Cabral, A., et al. 2020, MNRAS, 491, 4310, doi: 10.1093/mnras/stz3344
. A Traficante, G A Fuller, R J Smith, 10.1093/mnras/stx2672MNRAS. 4734975Traficante, A., Fuller, G. A., Smith, R. J., et al. 2018a, MNRAS, 473, 4975, doi: 10.1093/mnras/stx2672
. A Traficante, Y N Lee, P Hennebelle, 10.1051/0004-6361/201833513A&A. 6197Traficante, A., Lee, Y. N., Hennebelle, P., et al. 2018b, A&A, 619, L7, doi: 10.1051/0004-6361/201833513
. B Vaidya, T W Hartquist, S A E G Falle, 10.1093/mnras/stt800MNRAS. 4331258Vaidya, B., Hartquist, T. W., & Falle, S. A. E. G. 2013, MNRAS, 433, 1258, doi: 10.1093/mnras/stt800
. J P Vallée, J D Fiege, 10.1086/497957ApJ. 636332Vallée, J. P., & Fiege, J. D. 2006, ApJ, 636, 332, doi: 10.1086/497957
. F F S Van Der Tak, J H Black, F L Schöier, D J Jansen, E F Van Dishoeck, 10.1051/0004-6361:20066820A&A. 468627van der Tak, F. F. S., Black, J. H., Schöier, F. L., Jansen, D. J., & van Dishoeck, E. F. 2007, A&A, 468, 627, doi: 10.1051/0004-6361:20066820
. E Vázquez-Semadeni, G C Gómez, A K Jappsen, J Ballesteros-Paredes, R S Klessen, 10.1088/0004-637X/707/2/1023ApJ. 7071023Vázquez-Semadeni, E., Gómez, G. C., Jappsen, A. K., Ballesteros-Paredes, J., & Klessen, R. S. 2009, ApJ, 707, 1023, doi: 10.1088/0004-637X/707/2/1023
. E Vázquez-Semadeni, A González-Samaniego, P Colín, 10.1093/mnras/stw3229MNRAS. 4671313Vázquez-Semadeni, E., González-Samaniego, A., & Colín, P. 2017, MNRAS, 467, 1313, doi: 10.1093/mnras/stw3229
. E Vázquez-Semadeni, A Palau, J Ballesteros-Paredes, G C Gómez, M Zamora-Avilés, 10.1093/mnras/stz2736MNRAS. 4903061Vázquez-Semadeni, E., Palau, A., Ballesteros-Paredes, J., Gómez, G. C., & Zamora-Avilés, M. 2019, MNRAS, 490, 3061, doi: 10.1093/mnras/stz2736
. P Wang, Z.-Y Li, T Abel, F Nakamura, 10.1088/0004-637X/709/1/27ApJ. 70927Wang, P., Li, Z.-Y., Abel, T., & Nakamura, F. 2010, ApJ, 709, 27, doi: 10.1088/0004-637X/709/1/27
. E J Watkins, N Peretto, K Marsh, G A Fuller, 10.1051/0004-6361/201935277A&A. 62821Watkins, E. J., Peretto, N., Marsh, K., & Fuller, G. A. 2019, A&A, 628, A21, doi: 10.1051/0004-6361/201935277
. G M Williams, N Peretto, A Avison, A Duarte-Cabral, G A Fuller, 10.1051/0004-6361/201731587A&A. 61311Williams, G. M., Peretto, N., Avison, A., Duarte-Cabral, A., & Fuller, G. A. 2018, A&A, 613, A11, doi: 10.1051/0004-6361/201731587
. T L Wilson, R Rood, 10.1146/annurev.aa.32.090194.001203ARA&A. 32191Wilson, T. L., & Rood, R. 1994, ARA&A, 32, 191, doi: 10.1146/annurev.aa.32.090194.001203
. B Wu, S Van Loo, J C Tan, S Bruderer, 10.1088/0004-637X/811/1/56ApJ. 81156Wu, B., Van Loo, S., Tan, J. C., & Bruderer, S. 2015, ApJ, 811, 56, doi: 10.1088/0004-637X/811/1/56
. F Wyrowski, R Güsten, K M Menten, H Wiesemeyer, B Klein, 10.1051/0004-6361/201218927A&A. 54215Wyrowski, F., Güsten, R., Menten, K. M., Wiesemeyer, H., & Klein, B. 2012, A&A, 542, L15, doi: 10.1051/0004-6361/201218927
. F Wyrowski, R Güsten, K M Menten, 10.1051/0004-6361/201526361A&A. 585149Wyrowski, F., Güsten, R., Menten, K. M., et al. 2016, A&A, 585, A149, doi: 10.1051/0004-6361/201526361
. E T Young, E E Becklin, P M Marcum, 10.1088/2041-8205/749/2/L17ApJL. 74917Young, E. T., Becklin, E. E., Marcum, P. M., et al. 2012, ApJL, 749, L17, doi: 10.1088/2041-8205/749/2/L17
. H Zinnecker, H W Yorke, 10.1146/annurev.astro.44.051905.092549ARA&A. 45481Zinnecker, H., & Yorke, H. W. 2007, ARA&A, 45, 481, doi: 10.1146/annurev.astro.44.051905.092549
| []
|
[
"European 5G Security in the Wild: Reality versus Expectations Gines Garcia-Aviles [email protected] i2CAT Foundation",
"European 5G Security in the Wild: Reality versus Expectations Gines Garcia-Aviles [email protected] i2CAT Foundation"
]
| [
"Oscar Lasierra [email protected] \ni2CAT Foundation\ni2CAT Foundation\ni2CAT Foundation and ICREA NEC Laboratories Europe\nUniversity of Murcia\n\n",
"Esteban Municio [email protected] \ni2CAT Foundation\ni2CAT Foundation\ni2CAT Foundation and ICREA NEC Laboratories Europe\nUniversity of Murcia\n\n",
"Antonio Skarmeta [email protected] \ni2CAT Foundation\ni2CAT Foundation\ni2CAT Foundation and ICREA NEC Laboratories Europe\nUniversity of Murcia\n\n",
"Xavier Costa-Pérez [email protected] \ni2CAT Foundation\ni2CAT Foundation\ni2CAT Foundation and ICREA NEC Laboratories Europe\nUniversity of Murcia\n\n"
]
| [
"i2CAT Foundation\ni2CAT Foundation\ni2CAT Foundation and ICREA NEC Laboratories Europe\nUniversity of Murcia\n",
"i2CAT Foundation\ni2CAT Foundation\ni2CAT Foundation and ICREA NEC Laboratories Europe\nUniversity of Murcia\n",
"i2CAT Foundation\ni2CAT Foundation\ni2CAT Foundation and ICREA NEC Laboratories Europe\nUniversity of Murcia\n",
"i2CAT Foundation\ni2CAT Foundation\ni2CAT Foundation and ICREA NEC Laboratories Europe\nUniversity of Murcia\n"
]
| []
| 5G cellular systems are slowly being deployed worldwide, delivering the promised unprecedented levels of throughput and latency to hundreds of millions of users. At such a scale, security is crucial, and consequently, the 5G standard includes a new series of features to improve the security of its predecessors (i.e., 3G and 4G). In this work, we evaluate the actual deployment in practice of the promised 5G security features by analysing current commercial 5G networks from several European operators. By collecting 5G signalling traffic in the wild in several cities in Spain, we i) fact-check which 5G security enhancements are actually implemented in current deployments, ii) provide a rich overview of the implementation status of each 5G security feature in a wide range of 5G commercial networks in Europe and compare it with previous results in China, iii) analyse the implications of optional features not being deployed, and iv) discuss the still-remaining 4G-inherited vulnerabilities. Our results show that in European 5G commercial networks, the deployment of the 5G security features is still in the works. This is well aligned with results previously reported from China [16] and keeps these networks vulnerable to some 4G attacks during their migration period from 4G to 5G. CCS CONCEPTS: • Security and privacy → Mobile and wireless security. | 10.48550/arxiv.2305.08635 | [
"https://export.arxiv.org/pdf/2305.08635v1.pdf"
]
| 258,686,746 | 2305.08635 | ec1ec9185d48f5fc87eda1ac1a213bfd20c54237 |
European 5G Security in the Wild: Reality versus Expectations Gines Garcia-Aviles [email protected] i2CAT Foundation
Oscar Lasierra [email protected]
i2CAT Foundation
i2CAT Foundation
i2CAT Foundation and ICREA NEC Laboratories Europe
University of Murcia
Esteban Municio [email protected]
i2CAT Foundation
i2CAT Foundation
i2CAT Foundation and ICREA NEC Laboratories Europe
University of Murcia
Antonio Skarmeta [email protected]
i2CAT Foundation
i2CAT Foundation
i2CAT Foundation and ICREA NEC Laboratories Europe
University of Murcia
Xavier Costa-Pérez [email protected]
i2CAT Foundation
i2CAT Foundation
i2CAT Foundation and ICREA NEC Laboratories Europe
University of Murcia
European 5G Security in the Wild: Reality versus Expectations Gines Garcia-Aviles [email protected] i2CAT Foundation
10.1145/xxxxxxx.xxxxxxx 5G, security, subscriber anonymity, subscriber privacy, experimental data collection
5G cellular systems are slowly being deployed worldwide, delivering the promised unprecedented levels of throughput and latency to hundreds of millions of users. At such a scale, security is crucial, and consequently, the 5G standard includes a new series of features to improve the security of its predecessors (i.e., 3G and 4G). In this work, we evaluate the actual deployment in practice of the promised 5G security features by analysing current commercial 5G networks from several European operators. By collecting 5G signalling traffic in the wild in several cities in Spain, we i) fact-check which 5G security enhancements are actually implemented in current deployments, ii) provide a rich overview of the implementation status of each 5G security feature in a wide range of 5G commercial networks in Europe and compare it with previous results in China, iii) analyse the implications of optional features not being deployed, and iv) discuss the still-remaining 4G-inherited vulnerabilities. Our results show that in European 5G commercial networks, the deployment of the 5G security features is still in the works. This is well aligned with results previously reported from China [16] and keeps these networks vulnerable to some 4G attacks during their migration period from 4G to 5G. CCS CONCEPTS: • Security and privacy → Mobile and wireless security.
INTRODUCTION
The arrival of the fifth generation of mobile networks (5G) is substantially changing the way networks are designed and deployed. From the subscribers' perspective, 5G effectively provides improved performance compared with its predecessors, increasing available bandwidth (e.g., to provide on-demand high-quality video services) and reducing end-to-end latency (e.g., to provide real-time augmented/virtual reality applications). By the end of 2021, more than 176 commercial 5G networks had been deployed worldwide, of which only 22 were already 5G Stand Alone (SA) networks [11]. Unfortunately, such growing figures also bring greater risks in terms of security.
However, unlike previous mobile generations such as 3G/4G, which are subject to a number of known attacks [13,15,21,22], 5G provides security enhancements through a series of new generation specifications defined by the 3rd Generation Partnership Project (3GPP), including TS 33.501 [3] and TS 33.511 [1]. Despite this, while current real-world 5G deployments follow the same architectural security framework reference, not all of them implement the same 5G security mechanisms enabled by the new specifications, nor do they do it in the same way. This is usually caused by the optionality of some mechanisms and by the operators' inherent constraints (cost, compatibility, or performance) [16].
In this work, we report a hands-on security analysis of currently deployed 5G networks, fact-checking security mechanisms compliance and identifying still existing vulnerabilities in current 5G deployments. For those non-compliant or partially-compliant deployments, we identify and provide an in-depth characterisation of the attacks they are vulnerable to.
In order to perform such analysis, we collect and study signalling messages between various 5G networks and the User Equipment (UE) through commercial cellular traffic sniffers focusing on currently deployed 5G networks from different network operators in urban and suburban areas of various cities in the east coast of Spain. These traces include information about the User Plane (UP) and Control Plane (CP) security activation, the subscriber identifiers exchanged and the Authentication procedures performed for accessing the network.
Our measurements show that although commercial deployments do not implement all user authentication mechanisms specified in the standard, the confidentiality and integrity implementation at the UE does always seem to comply with the standard. However, unlike what was previously reported in [16] for Chinese 5G deployments in Beijing, the majority of the observed networks are still exposed to 4G-inherited vulnerabilities such as identity and user data leakage and Denial of Service (DoS) attacks because of the still-general absence of Standalone (SA) 5G network deployments. Note that this is as expected due to practical deployment reasons, which is aligned with the roadmap specified by operators towards the adoption of 5G not just in Europe but worldwide. GSMA forecasts a 44% average adoption of 5G within Europe by 2025 [10]. So the migration path from 4G to 5G is in the works but will still take some years to be completed.
Therefore, the main contributions of this work are i) a comprehensive compliance analysis for different 5G networks deployed in Spain in order to fact-check and evaluate the actual security and privacy mechanisms implemented by vendors and operators in a typical European 5G network deployment 1 ; and ii) a study of the available security vulnerabilities in current commercial 5G networks.
The rest of this work is structured as follows. In Section 2 we briefly provide the necessary background on 5G NR. Section 3 describes the corresponding security mechanisms included in the 5G standard and identifies the most common security threats. In Section 4 we detail the methodology followed for data collection and its subsequent analysis and in Section 5 we report the results, extensively discussing the capabilities, standard compliance, and vulnerabilities observed in the different 5G networks. Finally, Section 6 concludes this work.
BACKGROUND
2.1 5G Outline
The architecture of 5G cellular networks can be logically separated into three main components: the User Equipment (UE), the Radio Access Network (RAN), and the mobile Core Network (CN). The UEs establish a wireless connection with the RAN to be able to reach the CN, which acts as i) an authentication entity, allowing/denying devices access to the network; and ii) an ingress/egress point for the traffic generated from/to the internet.
Within the 5G context, UEs are essentially defined as a combination of two components. First, the Universal Subscriber Identity Module (USIM) card, which is used to store user identification data, such as the public/private keys and the Subscriber Permanent Identifier (SUPI), known as the International Mobile Subscriber Identity (IMSI) in 4G. Second, the Mobile Equipment (ME) hardware itself, which is identified by the International Mobile Equipment Identity (IMEI).
The RAN manages the wireless connectivity through the 5G base stations (gNBs), replacing or coexisting with legacy 4G base stations (eNBs). LTE/NR coexistence is ensured through the 5G Non-standalone (NSA) mode, or E-UTRA-NR Dual Connectivity (EN-DC), which allows UEs to configure a 5G secondary node for data plane transmissions. This mode keeps 4G eNBs as master nodes, which are in charge of carrying control plane traffic. In contrast, 5G Standalone (SA) mode adopts the gNB as the master node of the connection to jointly manage both data and control plane traffic. The interaction between the UE and the RAN is one of the most vulnerable parts of the network, and therefore, the main security features imposed by the 5G standard come to solve some of the major risks and pitfalls in the wireless domain [5,8,17,22].
Similarly to 4G, the 5G core network (CN) provides the UEs with external packet data network connectivity. It consists of various network functions to manage different fundamental processes such as session control (SMF), authentication (AUSF and SEAF), access and mobility (AMF), etc.
Critical NR Procedures: Initial Attachment and Registration
The initial procedures performed by 5G NR carry essential information required to establish a stable and secure communication through the RAN. These processes are based on the exchange of information between parties: UE-gNB for the radio link and UE-gNB-CN for a higher-level communication layer. Both processes must be performed in a way that preserves security and confidentiality, preventing third-party observers from gathering the exchanged information and hence avoiding security leaks in subsequent communications.
Figure 1: 5G NR Initial Registration Procedure
1 The network operators considered in this work operate in about 70% of the countries in the EU with similar 5G deployments.
Broadcast Channel and Random Access:
To effectively establish a connection, the UE must perform a set of interactions with the gNB before starting with the registration procedure itself: the Cell Search and the Random Access Channel (RACH) procedures.
The Cell Search procedure allows the UE to acquire time and frequency synchronisation within cells with the goal of retrieving cell parameters and system information from the Master Information Block (MIB) and the System Information Block (SIB). Synchronisation is obtained by detecting the Synchronisation Signal Block (SSB) and decoding the Primary and Secondary Synchronisation Signals (PSS and SSS) located on the synchronisation raster. Then, the MIB is decoded from the Physical Broadcast Channel (PBCH) on the same synchronisation raster to subsequently configure the Control resource set zero (CORESET0) and SearchSpace. With this information, the UE can perform the blind decoding of the Physical Downlink Control Channel (PDCCH) and configure the remaining parameters to find and decode SIB1 in the Physical Downlink Shared Channel (PDSCH) (full procedure defined in 3GPP 38.104 [2]).
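To make the raster arithmetic concrete, the following sketch (illustrative only, not the measurement tooling used in this work) maps a Global Synchronisation Channel Number (GSCN) to the SSB reference frequency SS_ref that a UE sweeps during Cell Search; the constants are our reading of the raster formulas in the cited TS 38.104 Section 5.4.3 [2], not data from this paper.

```python
def gscn_to_ssref_mhz(gscn: int) -> float:
    """SS_ref frequency in MHz for a given GSCN (per TS 38.104 Sec. 5.4.3)."""
    if 2 <= gscn <= 7498:                    # 0 .. 3000 MHz range
        n = round(gscn / 3)                  # GSCN = 3N + (M - 3) / 2
        m = 3 + 2 * (gscn - 3 * n)           # M in {1, 3, 5}
        return n * 1.2 + m * 0.05            # SS_ref = N*1200 kHz + M*50 kHz
    if 7499 <= gscn <= 22255:                # 3000 .. 24250 MHz range
        return 3000.0 + (gscn - 7499) * 1.44
    if 22256 <= gscn <= 26639:               # above 24250 MHz
        return 24250.08 + (gscn - 22256) * 17.28
    raise ValueError("GSCN out of range")

# Example: a mid-band carrier candidate around 3.5 GHz
print(gscn_to_ssref_mhz(7846))   # -> 3499.68 (MHz)
```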
RACH procedure allows the UE to configure UL synchronisation and obtain an identifier for the radio communication. If BeamForming is supported, the UE shall detect, choose and synchronise with the best beam to start the communication with the gNB.
Radio Resource Control:
After the Random Access procedure, if the UE is not attached to the network, it has to initiate the registration procedure. Otherwise, the UE initiates the tracking area update if it has changed since the last update. For initiating any NAS procedure, the UE needs to establish a Radio Resource Control (RRC) connection with the gNB. The main purpose of this procedure is to establish an active connection with the gNB, enabling the acquisition of radio resources for the communication. RRC connection establishment involves the creation of the Signalling Radio Bearer one (SRB1) for the RRC messages exchange.
The last message of this process can carry the initial Non-Access Stratum (NAS) message from the UE to the Access and Mobility Management Function (AMF) via the gNB (Mobility Management Entity (MME) via the eNB for NSA deployments).
Non-Access-Stratum:
To get Non-Access Stratum (NAS)-level services (e.g., internet connectivity), NAS nodes in the network need to know about the UE. To facilitate this, the UE has to initiate the Attach Procedure, which is mandatory to be performed by the UE at boot time (or by setting Airplane mode off). Once the attach procedure succeeds, a context is established for the UE in the core, and a default bearer is established between the UE and the Packet Data Network Gateway (PDN GW). This results in the allocation of an available IP address to the UE, enabling IP-based internet services in the 5G device.
SECURITY IN 5G NR
Security in cellular networks has been evolving during the different mobile generations in order to address the open threats identified during their operation. The enhancements brought by the 5G standard [3] are depicted in Figure 1 and summarised next.
UE credentials and identifiers
One of the major enhancements introduced in 5G SA networks is the concealment of the SUPI. In previous generations, subscriber permanent identifiers (which in some cases contain relevant information such as the phone number) were sent in clear text and thus, attackers could retrieve this information and perform impersonation attacks [15,21]. In 5G, the UE sends a concealed version of the SUPI, called the Subscription Concealed Identifier (SUCI), generated by using asymmetric cryptography (the private key is securely stored in the USIM). However, 5G SUCI-catching attacks are still possible, as reported in [7]. In this sense, in order to avoid sending the subscriber identifier over the radio link, temporal identifiers were added in 4G (the Globally Unique Temporary Identity, GUTI, and the Temporary Mobile Subscriber Identity, TMSI), but their refresh rate was sub-optimal, failing to provide confidentiality and anonymity to users. 5G networks also use temporary identifiers, but 3GPP imposes specific guidelines dealing with the aforementioned vulnerabilities.
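As a hedged sketch of how a captured identity could be triaged along these lines (the field names are illustrative, not a real parser API), the check reduces to inspecting the parsed 5GS mobile identity element:

```python
# Minimal triage of a parsed "5GS mobile identity" IE. Protection scheme 0
# is the null scheme, i.e. the identifier is effectively unencrypted;
# non-null schemes correspond to the ECIES-based concealment of TS 33.501.
def classify_identity(ie: dict) -> str:
    id_type = ie.get("type_of_identity")
    if id_type == "SUCI":
        if ie.get("protection_scheme_id", 0) == 0:
            return "SUCI with null scheme: identifier effectively in clear"
        return "SUCI concealed: compliant with TS 33.501"
    if id_type in ("SUPI", "IMSI"):
        return "permanent identifier sent in clear: IMSI-catching possible"
    return f"other identity type: {id_type}"

print(classify_identity({"type_of_identity": "IMSI"}))
```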
Enhanced Authentication and Privacy
The 5G Authentication and Key Agreement (5G-AKA) protocol is a security protocol introduced in the 5G standard to provide mutual authentication between all the components in the communication, i.e., UE, Serving Network (SN) and Home Network (HN), with privacy-preserving policies (i.e., providing user ID confidentiality and preventing user tracking). Similarly to the AKA protocols of previous generations, 5G-AKA allows two end-points to establish the root keys from which new keys in subsequent security procedures will be derived [19]. However, previous authentication protocols, such as 4G-AKA, failed to provide anonymity to the user because a) the IMSI of the user was sent in plain text, and b) the temporary identifiers replacing the user ID were usually static and persistent, hence predictable, as studied in [12]. 5G-AKA adopts the use of the 5G-GUTI to address this issue. Additionally, 5G-AKA enables Non-3GPP accesses (authentication is no longer tied to one specific access technology) and allows the Serving Network and the Home Network to mutually authenticate themselves by cross-verification (i.e., by the AMF and SEAF in the Serving Network and by the AUSF in the Home Network) [14]. Table 1 summarizes the 5G-AKA security enhancements along with the new 5G UE identifiers, in contrast with the previous 4G-AKA protocol.
Improved Confidentiality and Integrity
Previous generations of cellular networks failed to provide confidentiality/integrity protection on some pre-authentication signalling messages, allowing attackers to exploit multiple vulnerabilities [20]. For that reason, 5G introduces novel protection mechanisms specifically designed for signalling and user data. Besides increasing the length of the key algorithms (with 256-bit keys expected for future 3GPP releases), 5G forces mandatory integrity support of the user plane, and extends confidentiality and integrity protection to the initial NAS messages. Table 3 summarises, in the Standard column, the requirements in terms of confidentiality and integrity protection as defined in [3]. 5G also secures the UE network capabilities, a field within the initial NAS message which allows UEs to report to the AMF the integrity and encryption algorithms they support.
In addition to backward compatibility, 5G UEs shall implement New Radio Encryption Algorithm (NEA) 0, 128-NEA1 and 128-NEA2 for confidentiality protection and New Radio Integrity Algorithm (NIA) 0, 128-NIA1 and 128-NIA2 for integrity protection. However, the implementation of 128-NEA3 and 128-NIA3 is optional [3]. In 4G, the UE security capabilities are exchanged with integrity protection only when the UE has already established a security context. An attacker entity could capture this message and gain substantial information, e.g., the technologies supported by the UE or, in the best-case scenario, the device model. In order to prevent this, 5G includes both integrity and confidentiality protection in the initial registration NAS message to protect the UE capability field. However, for both 4G and 5G, if the UE does not have an established security context (i.e., the first registration attempt), the UE capability field is sent in clear. This information allows an attacker to read/modify the exchanged information and perform multiple attacks (e.g., user identification and power drain) [23].
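The support requirements just stated can be condensed into a small lookup table; the following sketch is our own restatement (algorithm-name spellings are illustrative) and flags mandatory algorithms missing from a UE's advertised capability set.

```python
# Mandatory-vs-optional algorithm matrix per the paragraph above (TS 33.501 [3]).
UE_ALGORITHM_REQUIREMENTS = {
    "NEA0": "mandatory", "128-NEA1": "mandatory", "128-NEA2": "mandatory",
    "128-NEA3": "optional",
    "NIA0": "mandatory", "128-NIA1": "mandatory", "128-NIA2": "mandatory",
    "128-NIA3": "optional",
}

def missing_mandatory(advertised: set[str]) -> set[str]:
    """Mandatory algorithms absent from a UE's capability field."""
    return {alg for alg, req in UE_ALGORITHM_REQUIREMENTS.items()
            if req == "mandatory" and alg not in advertised}

# A UE advertising only this subset would be non-compliant:
print(missing_mandatory({"NEA0", "128-NEA2", "128-NIA2"}))
```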
UE Radio Capabilities transfer
Before establishing the connection, the UE needs to provide the gNB its capabilities for radio access (e.g., supported frequency bands, EN-DC support, etc.). In previous generations, this information was sent without establishing CP security directives, and hence, an adversary could hijack this information and perform bidding down attacks [23]. 5G ensures its protection by sending it in the RRC UE Capability Information message after enabling security directives.
METRICS IN THE WILD
4.1 Data Collection Methodology
In order to characterise 5G commercial deployments we have used a commercial protocol analyser with two different SIM cards from two different network operators 2 . The commercial protocol analyser is a Keysight NEMO Handy Handheld 3 which includes a debugging tool used for wireless diagnostics. We have collected data traces from six Spanish cities: Barcelona (B), Tarragona (T), Castellón de la Plana (C), Valencia (V), Alicante (A) and Murcia (M) (see Table 2). Then, to homogenise the data collection process, we have defined an experimentation methodology consisting of the following steps:
2 For anonymity reasons, we will refer to them as Operator A and Operator B.
3 https://www.keysight.com/us/en/product/NTH00000B/nemo-handy-handheld-measurement-solution.html
(1) Airplane mode ON: The terminal will always start with airplane mode activated.
(2) Start data collection: Once the airplane mode of the terminal is active, we start the data collection tool at the device.
(3) Airplane mode OFF: Disabling the airplane mode will allow the device to initiate the registration process to establish an active session with the mobile operator.
(4) Initial registration: At this phase, we wait until the registration process is complete.
(5) Traffic generation: This phase consists of the generation of ICMP traffic to check the connectivity status and to force a possible reconfiguration of the radio channel.
(6) Stop data collection: Finally, we stop the data collection tool as well as the collected data of the experiment.
Finally, to effectively study the temporary identifiers, we replace the Traffic Generation step with an ON-OFF Switch step, where airplane mode is activated and deactivated during the traffic gathering. Both types of experiments were performed on each geographical place with an average duration of 15 minutes. It is important to highlight the non-intrusive nature of the data collection process, where we only collect data transmitted openly over the air in a passive manner, i.e., without performing any interaction with the network or users. (A toy automation sketch of this sequence follows below.)
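The sketch below is a purely hypothetical automation of the six steps above for an Android UE reachable over adb; the actual campaign used a Keysight NEMO Handy handheld, and toggling airplane mode from the shell may require root on recent Android releases.

```python
import subprocess
import time

def adb(*args: str) -> None:
    # Run a command on the attached device via adb shell.
    subprocess.run(["adb", "shell", *args], check=True)

def set_airplane_mode(on: bool) -> None:
    adb("settings", "put", "global", "airplane_mode_on", "1" if on else "0")
    # Broadcasting the state change is restricted on modern Android
    # releases and typically needs a rooted device.
    adb("am", "broadcast", "-a", "android.intent.action.AIRPLANE_MODE",
        "--ez", "state", "true" if on else "false")

def run_experiment(on_off_switch: bool = False) -> None:
    set_airplane_mode(True)              # (1) airplane mode ON
    print("start capture tool here")     # (2) start data collection (placeholder)
    set_airplane_mode(False)             # (3) airplane mode OFF -> registration
    time.sleep(60)                       # (4) wait for initial registration
    if on_off_switch:                    # ON-OFF variant for temporary-ID study
        for _ in range(3):
            set_airplane_mode(True);  time.sleep(30)
            set_airplane_mode(False); time.sleep(60)
    else:                                # (5) traffic generation (ICMP)
        adb("ping", "-c", "20", "8.8.8.8")
    print("stop capture tool here")      # (6) stop data collection (placeholder)
```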
4.2 Data Evaluation Methodology
Traces extracted from the communication process contain all the information required for the evaluation of the 5G network security features introduced in Section 3. We look into the RRC and NAS messages to identify the status of the security enhancements.
Deployment type identification. The first step in the evaluation process is to identify the type of deployment to which the user was connecting. The incremental approach followed by operators towards the deployment of 5G networks results in two different types of deployments: i) 5G NSA and ii) 5G SA (see Sec. 2). The distinction between NSA and SA will be made using the Information Elements (IEs) carried by the MIB. More specifically, in 5G SA deployments the gNB will include the pdcch-ConfigSIB1, ssb-SubcarrierOffset or dmrs-TypeA-Position IEs, which will not be present in a 5G NSA deployment.
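Following the criterion just described, a minimal classifier might look as follows; here `mib` stands for a decoded MIB as a dict of IE names, so this is a sketch over pre-parsed input, not a full ASN.1 decoder.

```python
# IEs whose presence in the decoded MIB marks a cell as 5G SA
# (per the criterion stated in the text above).
SA_MARKER_IES = {"pdcch-ConfigSIB1", "ssb-SubcarrierOffset", "dmrs-TypeA-Position"}

def classify_deployment(mib: dict) -> str:
    """Label a cell 5G SA if any marker IE is present, else 5G NSA."""
    return "5G SA" if SA_MARKER_IES & mib.keys() else "5G NSA"

print(classify_deployment({"systemFrameNumber": 412}))  # -> "5G NSA"
```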
Authentication procedure. The evaluation of the authentication procedure will be performed after the RRC connection establishment. Apart from the message exchange differing from that of other authentication procedures, there are other indicators within the messages that allow the proper identification of 5G-AKA. For example, after the RRCSetupComplete message, the UE sends a NAS RegistrationRequest initiating the authentication procedure and hence disclosing the underlying authentication procedure (e.g., the "5GS registration type" field, 5G-GUTI as TypeOfIdentity, or the inclusion of the 5G-TMSI).
Privacy and Anonymity. Privacy and anonymity of terminals depend on whether the UE identity is accessible by third-party observers or not. There are two types of parameters devoted to identifying UEs within the authentication process: i) the permanent subscriber identifiers, which must be securely transmitted; and ii) the temporal subscriber identifiers, which must be periodically updated in order to avoid their correlation with UEs. The permanent subscriber identifier can be found in the NAS RegistrationRequest (within the 5GS Mobile Identity IE) when the UE starts a registration procedure, or in the NAS IdentityResponse after receiving a NAS IdentityRequest from the network. Then, we focus on measuring the refresh rate of the temporal identifiers (5G-GUTI and 5G-TMSI). Their values must be updated after each registration procedure within the NAS Registration Accept message and after a NAS Service Request in the subsequent RRC Connection Request message, where a new value for the 5G-TMSI shall be assigned by the gNB.
In order to assess the implementation of the new 5G security features, we will check whether the permanent identifiers are protected and whether the refresh period is applied to the temporal subscriber identifiers by checking their values within the aforementioned messages.
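A hedged sketch of this refresh check, assuming the trace has already been parsed into chronologically ordered (message, temporary-identifier) pairs (the event names and values are illustrative):

```python
# Flag refresh points at which the temporary identifier (5G-TMSI/m-TMSI)
# keeps its previous value: reused values make UE tracking possible.
def tmsi_reuse(events: list[tuple[str, str]]) -> list[int]:
    """Indices of refresh points where the temporary identifier was reused."""
    reuse, last = [], None
    for i, (msg, tmsi) in enumerate(events):
        if msg in ("RegistrationAccept", "AttachAccept", "ServiceRequest"):
            if tmsi == last:
                reuse.append(i)
            last = tmsi
    return reuse

# The second event reuses the identifier, so index 1 is flagged.
print(tmsi_reuse([("AttachAccept", "c3a1"), ("ServiceRequest", "c3a1")]))  # [1]
```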
Confidentiality and Integrity. To assess confidentiality and integrity in the Control Plane, we need to look into the RRC and NAS SecurityModeCommand messages, where the algorithms to provide protection are selected and activated. For the UP, we have first located the NAS Service Request and the subsequent RRC SecurityModeCommand messages, which activate the Data Radio Bearer (DRB) and the algorithms. However, the UP security is established with the RRCReconfiguration message, which carries information about the algorithms used for the service to provide integrity and confidentiality protection per DRB.
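The per-bearer assessment can be sketched as below, with illustrative lowercase spellings for the algorithm identifiers extracted from SecurityModeCommand/RRCReconfiguration (nea*/nia* for 5G, eea*/eia* for legacy 4G); this mirrors the classification criteria used in the evaluation, not the authors' tooling.

```python
# Classify the protection configured on a bearer from its extracted
# ciphering and integrity algorithm identifiers (strings or None).
def assess_bearer(ciphering, integrity):
    def family(alg):
        if alg is None:
            return "absent"
        if alg.startswith("nea"):
            return "5G confidentiality algorithm"
        if alg.startswith("nia"):
            return "5G integrity algorithm"
        if alg.startswith(("eea", "eia")):
            return "legacy 4G algorithm"
        return "unknown"
    return {"ciphering": family(ciphering),
            "integrity": family(integrity),
            "null_protection": ciphering in ("nea0", "eea0")}

# e.g. the configuration observed in most traces: NEA2 ciphering, eia2 integrity
print(assess_bearer("nea2", "eia2"))
```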
UE Supported Capabilities. UE network capabilities are always sent in the NAS Registration Request message within the UE network capability and UE additional security capability fields. In both these fields, all the security algorithms supported by the UE for each mobile technology are sent to the base station. In the case of NSA deployments, this information is carried by the UECapabilityInformation message sent by the UE.
UE Radio Capabilities Transfer. UE radio capabilities are sent in the RRC UE Capability Information message. Following the registration procedure time events in the traces, we verify that in some networks this message is sent before the RRC Security Mode Command, without confidentiality or integrity protection.
EVALUATION
5.1 Reality Check: Is Current 5G Really Improving Security?
The results of the analysis following the methodology introduced in Section 4.2 are summarised in Table 3. Each row of the table represents a different security feature under study, and the columns give the standard view of each feature, the results obtained for the two operators, and the results obtained in [16], respectively. The first result to highlight is the complete absence of 5G SA deployments. Both operators are offering 5G coverage by means of 5G NSA deployments, which essentially rely on existing 4G infrastructures. Hence, there is no enhancement on the Authentication and Key Agreement process.
Ciphering of Permanent Identifiers: We have checked that no concealment of permanent identifiers has been done by capturing the permanent IMSI and IMEI values which are sent without protection within the NAS Identity Response message.
Temporary Identifier and GUTI Refresh: We have verified along the different traces, after receiving the NAS Attach Accept and RRC Connection Request messages, the freshness of the m-TMSI value within the GUTI. The m-TMSI shall change its value after these messages; however, only during the Registration procedure is the temporary identifier updated.
Confidentiality and Integrity: In terms of confidentiality, on the one hand, the nr-RadioBearerConfig-r15 IE to establish the DRB points to the NEA2 algorithm in all the traces except for Operator B in the city of Tarragona. This algorithm indicates that User Data confidentiality is effectively met even if the standard marks it as optional. In contrast, Tarragona does not achieve confidentiality protection of user data due to the lack of a 5G DRB in this area. On the other hand, confidentiality protection for the initial NAS and RRC messages is not yet implemented using 5G NEA algorithms.
In contrast to confidentiality in data transmissions, integrity is a mandatory feature for signalling messages. Nevertheless, the configured data and signalling radio bearers do not show any of the mandatory algorithms in the IntegrityProtAlgorithm field within the IEs. Instead, they use algorithms from previous generations (i.e., eia2) which do not provide the required security level.
UE Network security capabilities: Moreover, we have verified the supported algorithms in the UE by checking the UE security capabilities within the NAS Attach Request message. Despite only using the 5G NEA algorithm for securing the UP, the UE supports both 5G NEA and NIA plus legacy 4G and 3G algorithms.
UE Radio capabilities: We found that, only in Operator B, four access networks send the radio capabilities before initialising the security environment for the CP messages.
Although there is a clear trend in the reported results, note that there might be other operators/deployments (not covered in this measurement campaign) exhibiting better security results if they are more advanced in their migration path from 4G to 5G.
5.2 Effective Attacks on current 5G Deployments
Subscriber credentials (identity attacks): Since none of the studied networks implement concealment of the permanent identifier, the legacy IMSI-catching attacks can still be deployed [9], as well as more sophisticated attacks that exploit subscriber credentials leakability [7,18]. Moreover, temporary identifiers can be found in all captures (updated every time the Registration Procedure is performed), enabling identity mapping and tracking attacks by correlating temporary identifiers with UEs. Authentication vulnerabilities (activity monitoring): Our previous section revealed the complete absence of the 5G-AKA protocol and hence, the presence of UEs and their consumed mobile services can be inferred. Authors in [4] propose novel privacy attacks against all variants of the AKA protocol which also affect the studied scenarios.
UP Confidentiality and Integrity: As highlighted in the evaluation section, confidentiality protection is enabled in most of the studied deployments while integrity protection is completely missing. This absence allows an adversary to perform data manipulation, identity mapping and impersonation attacks (i.e., MitM attacks) even if confidentiality is active [13,20,21].
UE Radio Capabilities: Transmitting radio capabilities information before the CP security activation (Security Mode Command message) could lead to Identification, Bidding Down and Battery Drain attacks [23]. Given the obtained results, most of the deployments enclosed by Operator B are susceptible to these attacks.
The implementation of active data collection methodologies (e.g. [6], [7]) would enrich the obtained results, allowing an in-depth analysis of the security features not only from a network subscriber perspective but from the view of an active attacker willing to exploit the available vulnerabilities.
CONCLUSIONS
5G networks are expected to significantly improve the security of mobile users, thanks to the newly introduced mandatory features which address identified 4G vulnerabilities. In this paper, we analysed the progress of current 5G European commercial network deployments with respect to the expected security features. In order to do so, we collected a dataset comprising 5G measurements from two different operators in Spain, six different cities and both urban and suburban scenarios. The two major network operators considered in our study operate in 70% of the European countries and, due to economies of scale, our results can be reasonably expected to be applicable to other European countries served by the same operators. Our results show that current 5G network deployments miss expectations on i) providing improved privacy and anonymity to subscriber identifiers (transmitting them in clear text), ii) refreshing temporal subscriber identifiers often enough (facilitating subscriber identification and tracking), iii) providing additional confidentiality protection (inheriting security vulnerabilities from previous generations) and iv) protecting UE radio capabilities, which are sometimes transferred without protection (enabling bidding down and battery drain attacks).
As already reported in [10], we are in the midst of the 4G to 5G migration, expected to be mature by 2025. Thus, as we get closer to this date, we expect operators to increasingly deploy 5G security features accordingly, covering the gaps identified by our work.
ACKNOWLEDGMENTS
This work has been supported by the Spanish Ministry of Economic Affairs and Digital Transformation and the European Union -NextGeneration EU, in the framework of the Recovery Plan, Transformation and Resilience (PRTR) (Call UNICO I+D 5G 2021, ref. number TSI-063000-2021-6-Open6G), and by the CERCA Programme from the Generalitat de Catalunya.
Table 1: 5G-AKA Security Enhancements
Table 2: Cities covered for data collection
Table 3: 5G Security mechanisms availability on current network deployments
Table 4: Overview of vulnerabilities and attacks
[1] 3GPP. 2021. TS 33.511: Security Assurance Specification (SCAS) for the next generation Node B (gNodeB) network product class (Release 16). https://www.3gpp.org/ftp/Specs/archive/33_series/33.511
[2] 3GPP. 2021. TS 38.104: Base Station (BS) radio transmission and reception; Section 5.4.3 Synchronization Raster (Release 15). https://www.etsi.org/deliver/etsi_ts/138100_138199/138104/15.14.00_60/ts_138104v151400p.pdf
[3] 3GPP. 2022. TS 33.501: Security architecture and procedures for 5G system (Release 17). https://www.3gpp.org/ftp/Specs/archive/33_series/33.501/
[4] Ravishankar Borgaonkar et al. 2019. New privacy threat on 3G, 4G, and upcoming 5G AKA protocols. Proceedings on Privacy Enhancing Technologies 2019, 3 (2019).
[5] Ya-Chu Cheng and Chung-An Shen. 2022. A New Tracking-Attack Scenario Based on the Vulnerability and Privacy Violation of 5G AKA Protocol. IEEE Access 10 (2022), 77679-77687.
[6] Merlin Chlosta et al. 2019. LTE security disabled: misconfiguration in commercial networks. In Proceedings of the 12th Conference on Security and Privacy in Wireless and Mobile Networks. 261-266.
[7] Merlin Chlosta et al. 2021. 5G SUCI-catchers: still catching them all? In Proceedings of the 14th ACM Conference on Security and Privacy in Wireless and Mobile Networks. 359-364.
[8] Zhiwei Cui et al. 2022. Security Threats to Voice Services in 5G Standalone Networks. Security and Communication Networks 2022 (2022).
[9] Adrian Dabrowski et al. 2014. IMSI-catch me if you can: IMSI-catcher-catchers. https://doi.org/10.1145/2664243.2664272
[10] GSMA. 2022. The future of 5G connectivity in Europe. https://www.gsma.com/gsmaeurope/news/the-future-of-5g-in-europe/
[11] GSMA. 2022. The Mobile Economy 2022. White Paper. https://data.gsmaintelligence.com/api-web/v2/research-file-download?id=69042315&file=280222-The-Mobile-Economy-2022.pdf
[12] Byeongdo Hong et al. 2018. GUTI Reallocation Demystified: Cellular Location Tracking with Changing Temporary Identifier. In NDSS.
[13] Katharina Kohls et al. 2019. Lost traffic encryption: fingerprinting LTE/4G traffic on layer two. In Proceedings of the 12th Conference on Security and Privacy in Wireless and Mobile Networks. 249-260.
[14] Adrien Koutsos. 2019. The 5G-AKA authentication protocol privacy. In 2019 IEEE European Symposium on Security and Privacy (EuroS&P). IEEE, 464-479.
[15] Stig F. Mjolsnes and Ruxandra F. Olimid. 2017. Easy 4G/LTE IMSI catchers for non-programmers. In International Conference on Mathematical Methods, Models, and Architectures for Computer Network Security. Springer, 235-246.
[16] Shiyue Nie et al. 2022. Measuring the Deployment of 5G Security Enhancement. In Proceedings of the 15th ACM Conference on Security and Privacy in Wireless and Mobile Networks. 169-174.
[17] Ivan Palamà et al. 2021. IMSI catchers in the wild: A real world 4G/5G assessment. Computer Networks 194 (2021), 108137.
[18] John Preuß Mattsson and Prajwol Kumar Nakarmi. 2021. Nori: Concealing the Concealed Identifier in 5G. In The 16th International Conference on Availability, Reliability and Security. 1-7.
[19] Stefan Rommer et al. 2020. Chapter 8 - Security. In 5G Core Networks. Academic Press, 171-201. https://doi.org/10.1016/B978-0-08-103009-7.00008-9
[20] David Rupprecht et al. 2019. Breaking LTE on Layer Two. In IEEE Symposium on Security & Privacy (SP). IEEE.
[21] David Rupprecht et al. 2020. IMP4GT: IMPersonation Attacks in 4G NeTworks. In NDSS.
[22] Altaf Shaik et al. 2015. Practical attacks against privacy and availability in 4G/LTE mobile communication systems. arXiv preprint arXiv:1510.07563 (2015).
[23] Altaf Shaik et al. 2019. New vulnerabilities in 4G and 5G cellular access network protocols: exposing device capabilities. In Proceedings of the 12th Conference on Security and Privacy in Wireless and Mobile Networks. 221-231.
| []
|
[
"A Natural Copula",
"A Natural Copula"
]
| [
"P B Lerner "
]
| []
| []
| Copulas are widely used in financial economics (Brigo 2010) as well as in other areas of applied mathematics. Yet, there is much arbitrariness in their choice. The author proposes "a natural copula" concept, which minimizes Wasserstein distance between distributions in some space, in which both these distributions are embedded. Transport properties and hydrodynamic interpretation are discussed with two examples of distributions of financial significance. A natural copula can be parsimoniously estimated by the methods of linear programming. | null | [
"https://export.arxiv.org/pdf/2304.06859v1.pdf"
]
| 258,170,507 | 2304.06859 | 7539d31f4f371f9919fb86a4e212da9a24664d49 |
A Natural Copula
P B Lerner
A Natural Copula
1
Copulas are widely used in financial economics (Brigo 2010) as well as in other areas of applied mathematics. Yet, there is much arbitrariness in their choice. The author proposes "a natural copula" concept, which minimizes Wasserstein distance between distributions in some space, in which both these distributions are embedded. Transport properties and hydrodynamic interpretation are discussed with two examples of distributions of financial significance. A natural copula can be parsimoniously estimated by the methods of linear programming.
Introduction
Since their invention by Sklar (Sklar, Fonctions de repartition a n dimensions et leur marges 1959), (Sklar, Random functions, joint distributions and copulas 1960), copulas became a valuable instrument in financial mathematics. Possible choices of copulas are numerous: Gaussian, Archimedean, etc. (Durante 2016). In this paper, we propose and motivate a concept of the "natural copula", namely the one, which minimizes Wasserstein distance (Villani 2003) in the space, in which they are mutually embedded. Some arbitrariness remains because the Wasserstein distance can be chosen in multiple ways, but this choice must correspond to the substantive nature of a problem.
We use a VAP (volume-at-price) distribution and an actual limit order book (LOB) to illustrate the concept. Our approximation method allows fast minimization of the Wasserstein distance by the methods of linear programming (Matousek 2007). Furthermore, our result admits an elegant interpretation in hydrodynamic terms, namely that copula corresponds to the flow of incompressible liquid (probability distributions that are normalized to unity) between two distributions viewed as a source and a drain of probability flow.
The paper is structured as follows. In section 2 we provide a literature review. In Section 3, the problem is formulated in a relevant language. In Section 4, a parsimonious linear programming method of estimating the copula is given. In Section 5, we provide two examples of natural copulas between 1) the Volume-at-Price distribution of IBM stock, and 2) a trader's LOB of the SDPR500 on an arbitrary trading day. In Section 6, we illustrate a natural copula through a hydrodynamic analogy. In Section 7, a measure of correlation similar to Kendall tau is defined.
Finally, we conclude our treatment in Section 8.
Literature review
The application of copulas is based on a celebrated Sklar (Sklar, Fonctions de repartition a n dimensions et leur marges 1959) (Sklar, Random functions, joint distributions and copulas 1960).
In financial mathematics, copulas are used to evaluate the distribution of the random variable X+Y when the marginal distributions for X and Y are known quantities. The distributions for X and Y can be arbitrary and mutually dependent (Cerubini 2016). Also, they are rarely Gaussian.
In particular, because of the additivity of money prices of derivative products must be linear functions of the components of the replicating portfolio. In the asset management business, a typical problem is modeling a portfolio of financial instruments, frequently, a credit portfolio with each instrument having its own distribution (Brigo 2010). While the expected payoff of each instrument can be a highly nonlinear function of the parameters, the distribution of the portfolio is a linear combination of the payoffs of its constituent parts.
Finally, in the market microstructure, the distributions of the quotes on the buy and sell side of the LOB are connected by the clearing condition. This requires that the volume of sold securities must be equal to the volume of bought securities and that the orders are filled at the best execution price (Hasbrouck 2007).
We want to present the last application as an example. It has an obvious connection with the transportation problem of Monge (Monge 1781). Gaspar Monge, an XVIII century mathematician, first posed a problem of optimal moving of remblais (rubble) into an accurate pile (deblais). This problem caused a whirlwind of literature in the XX century, related to programming and optimization, from pioneering work by Kantorovich (Kantorovich 1942) to our days (Brenier 2004).
Our definition of copula as "natural" is such that the distance to carry purchased securities to cover sales, or vice versa, is minimal. Since the 1960s, but mainly in the XXI century, the transport problems acquired an elegant formulation in hydrodynamic terms ( (Brenier 2004) and
op. cit). We demonstrate the hydrodynamic interpretation of the samples of one-dimensional distributions. The extension of the procedure to a multidimensional case is sometimes non-trivial, but there are algorithmic methods to build (n+1) dimensional copula from the family of ndimensional copulas (Shemyakin A. 2017).
Problem design
For our demonstration of the method, we chose the Wasserstein-2 metric as a criterion for the affinity of the distributions. The functional to minimize is thus the following:
= ∫ ( − ) 2 ( , )(1)
Where
( , 0) = ( ) (0, ) = ( )(1a)
Under constraints:
∫ ( ) = 1 ∫ ( ) = 1 ∫ ( , ) = 1 (1b)
Here pX(x) and pY(y) are the known distributions. To provide a practical solution to the problem, we use parametric approximations of the real financial distributions. A particular parametrization can be chosen in an infinite number of ways, and it is not an integral part of the method. To make my computations feasible for a laptop, I use the following parametrizations:
{ ( ) = ∑ 4 =1 ( ) − 2 2 ⁄ ∫ ∑ 4 =1 ( ) − 2 2 ⁄ 1 0 ⁄ ( ) = ∑ 4 =1 ( ) − 2 2 ⁄ ∫ ∑ 4 =1 ( ) − 2 2 ⁄ 1 0 ⁄ }(2)
In the system of Equations (2), Hi are the Hermite polynomials (Abramowitz 1964). We have chosen a rather small number n=4 for this problem to be executed on a standard laptop. We use a Gaussian cutoff factor in (2) to be able to extend integration from the finite sampling domain to
the real line [-∞,∞].
For the convenience of comparison with the actual volume-at-price distributions, the test functions of Equation (2) were rescaled into the following form:
{ ( ) = ∑ 4 =1 ( ( − ) ⁄ ) −( − ) 2 2 2 ⁄ ∫ ∑ 4 =1 ( ( − ) ⁄ ) −( − ) 2 2 2 ⁄ 1 0 ⁄ ( ) = ∑ 4 =1 ( ( − ) ⁄ ) −( − ) 2 2 2 ⁄ ∫ ∑ 4 =1 ( ( − ) ⁄ ) −( − ) 2 2 2 ⁄ 1 0 ⁄ } (2')
In Equations (2'), VB, S are the volume amplitudes for buy and sell distributions, pB, S are the centered prices, γ(1,2) are scaling factors for the polynomial powers, and σimplied width of the price distributions.
Parsimonious optimization
To minimize the functional (1) under constraints (1a-1b), we use the following approach. This approach is not an integral part of the selection of a natural copula but it is an efficient method of approximating copulas. First, we seek a solution in the form:
( , ) = ( ) ( ) ( , )(3)
Where f(x,y)=C + P(x,y). 1 Here C is a constant and P(x,y) is a polynomial such that P(x,0)=0 and P(0,y)=0. With this choice, the constraints (1a) are satisfied automatically. The first two constraints of the system (1b) are satisfied by a specific choice of approximation of the marginals pX(x) and
pY(y) of Equation (2). The integration of each monomial in a polynomial P(x,y) provides a certain 1 Note that the constant is uniquely determined by the normalization condition.
integral Ii. Subsequent integration of each monomial multiplied by a mutual distance, ( − ) 2 produces another set of definite integrals, ̃ . That reduces minimizing the functional (1) to the linear programming problem:
min { } ∑ =0̃(4)
under the following constraint:
∑ =0 ≥ 1(5)
Note, that we changed a unity normalization condition for the distribution by an inequality.
Because of the properties of a distance, all integrals ̃ are non-negative and, consequently, the minimum is always achieved by equality. Otherwise, we can always make an affine scaling of the coefficients so that Equation (5) has the sign of equality but the resulting value of the functional of Equation (4) is lower. With an approximation of each of the marginals by five Hermite functions and five-monomial polynomial f(x,y), the Mathematica© optimization takes a few seconds on a PC and, usually, only one or two coefficients in f(x,y) are significant.
Two examples of copulas
To provide examples of the method, we use (1) [Place Figures 1 and 2 here]
Hydrodynamic interpretation of the "Natural Copula"
The connection of the problems of optimal transport with hydrodynamics evolved from the 1960s work by V. I. Arnold (Arnold 1966) and was resuscitated in the 2000s through the work of Benamou and Brenier (see (Brenier 2004) We illustrate this interpretation by plotting the vector field associated with the derivatives of the copula function in Fig. 3. We interpret these derivatives as velocities:
{ = = − }(6)
Where V(x,y) is a potential function of the flow, which we identify with our copula. Then, as usual, we can define the following characteristics:
{ Γ = ∮ + Φ = ∮ − }(7)
which are called circulation and flux in hydrodynamics. We choose the contour of integration along the border of the sampling area. Because of the square integration region [0,1]×[0,1], the y-integrals over parts of the integration contour parallel to the x-axis and the xintegrals over contour intervals parallel to the y-axis are identically zero (see Fig. ?). So, only one component of flow velocity gives a contribution to the contour integral:
{ Γ = ∫ {1,0} {0,0} + ∫ {1,1} {1,0} + ∫ {0,1} {1,1} + ∫ {0,1} {1,1} Φ = − ∫ {1,0} {0,0} + ∫ {1,1} {1,0} − ∫ {0,1} {1,1} + ∫ {0,1} {1,1} }(8)
The results of our calculation in arbitrary units are provided in Table 2.
[Place Table 3] We tentatively identify circulation with the number of shares changing hands in the case of optimal unwinding of the LOB and fluxwith the market imbalance.
Measure of correlation
The measure of correlation, which is usually invoked in connection to copulas is the Kendall copula tau parameter (Shemyakin A. 2017). A natural extension of the concept of correlation to our method of estimation is the following expression:
= ∫ ( ) ( )( 2 ( , ) − 1)(9)
The estimates of our two model distributions are given in Table 4. For both selected distributions, the CT is on the order of 20%. This measure of correlation is independent of our copula method and is only the feature of the preferred method of estimation, which we outlined in Section 5.
[Place Table 3]
Conclusion
This paper demonstrates that it is possible to define a "natural" copula, which minimizes Wasserstein distance between distributions in a space, in which they are both embedded. Its estimation does not involve any arbitrariness other than a particular embedding space of the marginals, which usually appears naturally in the problem and the index of the Wasserstein distance.
We propose a parsimonious method of estimation of a natural copula through the linear programming algorithm. This method can be applied independently of any parametric form of the copula approximation.
The natural copula has a hydrodynamic interpretation. Namely, if we consider one distribution a drain, and another-a sink, the flow of probability between them obeys Euler equations. The incompressibility of the liquid corresponds to the preservation of the normalization for the distributions.
value-at-price (VAP) distribution of the trades in IBM stock on 05.03.2016 and (2) SPDR500 LOB distribution of one trader on or around 03.01.2014. Both distributions were smoothed down with MA(2) process before applying a parametric approximation. We estimated real distributions according to Equations (2'). After that, they were projected into [0,1]×[0,1] squares and the optimization of the copula according to Section 4 was obtained. The parameters for two examples (four one-dimensional distributions) are given in Tables 1 and 2. [Place Tables 1 and 2 here] Empirical distributions and their estimates are plotted in Figs. 1 and 2.
and op. cit.). 2 Approximately speaking, the Wasserstein distance between two shapes/distributions is minimized by the flow of incompressible fluid, which obeys Euler equations. Heretofore, we can assume that one marginal distribution is a hydrodynamic source and the other marginal distribution is a drain for an incompressible fluid. Incompressibility preserves the normalization of the distributions of probability flows. The properties of a natural copula are thus related to the global properties of probability flow.
Fig. 1 Fig. 2 .
12Volume-at-Price distribution (yellow dot) for the IBM stock on May 03, 2016. The solid red line indicates our approximant of offers, the solid green line is the approximant of bids and the blue curve is the sum of the total. Natural copula of the IBM VaP distribution in A) normalized coordinates, B) original dollar prices. The vertical scale is arbitrary.
Fig. 3 .Fig. 4 .
34Natural copula of a sample LOB of SPDR500 on an average trading day. Vector fields of hydrodynamic velocities (Equation(6)of the main text) for the copula distributions of Figs. 2 and 3. The coordinates x and y are the percentage scales of the embedding box [0,1]×[0,1].
Table 3 .
3Hydrodynamic flux Φ and circulation Γ (Equation(6)) for the sample copulas in arbitrary units. Turnover is significant, while the imbalance is small.Sample
VB
VS
σp, $
ξ
pB, $
pS, $
θ
IBM050316 3.5
6.0
0.0471
3.558
13.374
13.561
1
SDPR500
14.0
16.0
10.561
1.698
173.164 174.116 2
Table 2. Parametrization of the function ( , ) =
1+ 2
2
of Equation (3).
Sample
A
IBM
27.66
SDPR500
9.78
Sample
Φ
Γ
IBM
25.97
-0.285
SDPR500
61.03
0.045
Table 4. The measure of correlation
= ∫ ( ) ( )( 2 ( , ) − 1)
, Equation
(8) for the normalized copulas.
Sample
CT
IBM
0.2011
SDPR500
0.1861
For the modern bibliography, one can consult(Liu 2016) and(Chizat 2016).
M Abramowitz, I Stegun, Handbook of Mathematical Functions with Tables, Graphs and Mathematical Tables. WashingtonNBSAbramowitz, M. and I. Stegun. 1964. Handbook of Mathematical Functions with Tables, Graphs and Mathematical Tables . Washington : NBS.
Sur la geometrie differentielle des groupes de Lie de dimension infinie et ses applications a l'hydrodynamique des fluides parfaits. V Arnold, Ann. Inst. Fourier. Arnold, V. 1966. "Sur la geometrie differentielle des groupes de Lie de dimension infinie et ses applications a l'hydrodynamique des fluides parfaits." Ann. Inst. Fourier 319-361.
Transport Theory and Geometric Partial Differential Equations. Y Brenier, Brenier, Y. 2004. Transport Theory and Geometric Partial Differential Equations.
Credit models and the crisis: a journey into CDOs, copulas, correlations and dynamic models. D Brigo, A Pallavicini, R Torresetti, Brigo, D., A. Pallavicini and R. Torresetti. 2010. Credit models and the crisis: a journey into CDOs, copulas, correlations and dynamic models. .
. & Wiley, Sons, Wiley & Sons. .
U Cerubini, F Gobbi, S Mulinacci, Convolution copula econometrics. SpringerCerubini, U., F. Gobbi and S. Mulinacci. 2016. Convolution copula econometrics. Springer.
. L Chizat, G Peyre, B Schmitzer, F.-X Vialard, Chizat, L, G. Peyre, B. Schmitzer, F.-X. Vialard. 2016. arxiv.org. https://arxiv.org/1508.05216.
Principles of Copula Theory. F Durante, C Seupi, CRC PressDurante, F., C. Seupi. 2016. Principles of Copula Theory. CRC Press.
Empirical market microstructure: the institutions, economics and econometrics. J Hasbrouck, Oxford UPHasbrouck, J. 2007. Empirical market microstructure: the institutions, economics and econometrics . Oxford UP.
On the translocation of masses. L Kantorovich, C. R. Ac. Sci. URSS. 37Kantorovich, L. 1942. "On the translocation of masses." C. R. Ac. Sci. URSS 37: 199-201.
Least action principles for incompressible flows and optimal transport between shapes. J.-G Liu, R L Pego, D Slepcev, Liu, J.-G., R. L. Pego and D. Slepcev. 2016. "Least action principles for incompressible flows and optimal transport between shapes." arxiv.org . https://arxiv.org/1604.03387.
Understanding and Using Linear Programming. J Matousek, B Gartner, SpringerHeidelbergMatousek, J. and B. Gartner. 2007. Understanding and Using Linear Programming. Heidelberg: Springer .
Memoire sur la theorie des deblais et des remblais. C Monge, Histoire de l'Academie Royale des Sciences de ParisMonge, C. 1781. Memoire sur la theorie des deblais et des remblais . Histoire de l'Academie Royale des Sciences de Paris.
An introduction to copulas. R B Nelsen, SpringerNelsen, R. B. 2006. An introduction to copulas. Springer.
. A Shemyakin, A , Shemyakin A., and A..
In Intorduction to Bayesian estimation and copula model of dependence, by and A. Knyazev, Knyazev Shemyakin A. Wiley & CoChapter 6Knyazev. 2017. "Chapter 6." In Intorduction to Bayesian estimation and copula model of dependence, by and A.. Knyazev Shemyakin A. Wiley & Co.
Random functions, joint distributions and copulas. A Sklar, Kybernetica. Sklar, A. 1960. "Random functions, joint distributions and copulas." Kybernetica, 449-460.
Fonctions de repartition a n dimensions et leur marges. Paris, 8Publications de l'Institute de Statistique de l'Universite de-. 1959. "Fonctions de repartition a n dimensions et leur marges." Publications de l'Institute de Statistique de l'Universite de Paris, 8.
The Wasserstein distances. Topics in optimal transportation. C Villani, American Mathematical SocietyVillani, C. 2003. The Wasserstein distances. Topics in optimal transportation. . American Mathematical Society.
| []
|
[]
| [
"Alessandro Pegoraro \nSystem Security Lab\nTechnical University of Darmstadt\nGermany\n",
"Kavita Kumari \nSystem Security Lab\nTechnical University of Darmstadt\nGermany\n",
"Hossein Fereidooni \nSystem Security Lab\nTechnical University of Darmstadt\nGermany\n",
"Ahmad-Reza Sadeghi \nSystem Security Lab\nTechnical University of Darmstadt\nGermany\n"
]
| [
"System Security Lab\nTechnical University of Darmstadt\nGermany",
"System Security Lab\nTechnical University of Darmstadt\nGermany",
"System Security Lab\nTechnical University of Darmstadt\nGermany",
"System Security Lab\nTechnical University of Darmstadt\nGermany"
]
| []
| ChatGPT has become a global sensation. As Chat-GPT and other Large Language Models (LLMs) emerge, concerns of misusing them in various ways increase, such as disseminating fake news, plagiarism, manipulating public opinion, cheating, and fraud. Hence, distinguishing AI-generated from human-generated becomes increasingly essential. Researchers have proposed various detection methodologies, ranging from basic binary classifiers to more complex deep-learning models. Some detection techniques rely on statistical characteristics or syntactic patterns, while others incorporate semantic or contextual information to improve accuracy. The primary objective of this study is to provide a comprehensive and contemporary assessment of the most recent techniques in ChatGPT detection. Additionally, we evaluated other AIgenerated text detection tools that do not specifically claim to detect ChatGPT-generated content to assess their performance in detecting ChatGPT-generated content. For our evaluation we have curated a benchmark dataset consisting of prompts from ChatGPT and humans, including diverse questions from medical, open Q&A, and finance domains and user-generated responses from popular social networking platforms. The dataset serves as a reference to assess the performance of various techniques in detecting ChatGPT-generated content. Our evaluation results demonstrate that none of the existing methods can effectively detect ChatGPT-generated content. | 10.48550/arxiv.2304.01487 | [
"https://export.arxiv.org/pdf/2304.01487v2.pdf"
]
| 257,921,485 | 2304.01487 | 756b8fb9d6dec949f236705476e83026f67abe78 |
5 Apr 2023
Alessandro Pegoraro
System Security Lab
Technical University of Darmstadt
Germany
Kavita Kumari
System Security Lab
Technical University of Darmstadt
Germany
Hossein Fereidooni
System Security Lab
Technical University of Darmstadt
Germany
Ahmad-Reza Sadeghi
System Security Lab
Technical University of Darmstadt
Germany
5 Apr 2023To ChatGPT, or not to ChatGPT: That is the question!
ChatGPT has become a global sensation. As Chat-GPT and other Large Language Models (LLMs) emerge, concerns of misusing them in various ways increase, such as disseminating fake news, plagiarism, manipulating public opinion, cheating, and fraud. Hence, distinguishing AI-generated from human-generated becomes increasingly essential. Researchers have proposed various detection methodologies, ranging from basic binary classifiers to more complex deep-learning models. Some detection techniques rely on statistical characteristics or syntactic patterns, while others incorporate semantic or contextual information to improve accuracy. The primary objective of this study is to provide a comprehensive and contemporary assessment of the most recent techniques in ChatGPT detection. Additionally, we evaluated other AIgenerated text detection tools that do not specifically claim to detect ChatGPT-generated content to assess their performance in detecting ChatGPT-generated content. For our evaluation we have curated a benchmark dataset consisting of prompts from ChatGPT and humans, including diverse questions from medical, open Q&A, and finance domains and user-generated responses from popular social networking platforms. The dataset serves as a reference to assess the performance of various techniques in detecting ChatGPT-generated content. Our evaluation results demonstrate that none of the existing methods can effectively detect ChatGPT-generated content.
I. INTRODUCTION
ChatGPT developed by OpenAI has garnered significant attention and sparked extensive discourse in the Natural Language Processing (NLP) community and several other fields. Chat-GPT is an AI chatbot introduced by OpenAI in November 2022. It utilizes the power of OpenAI's LLMs belonging to the GPT-3. 5 and GPT-4 families. However, ChatGPT is not a simple extension of these models. Instead, it has undergone a fine-tuning process utilizing supervised and reinforcement learning techniques based on Human Feedback [7], [23]. This approach to transfer learning has allowed ChatGPT to learn from existing data and optimize its performance for conversational applications. It has also facilitated ChatGPT's exceptional performance in various challenging NLP tasks [5], [11], [27], [34], [38]. The media's promotion of ChatGPT has resulted in a chain of reactions, with news and media companies utilizing it for optimal content creation, teachers and academia using it to prepare course objectives and goals, and individuals using it to translate content between languages. Unfortunately, as is often the case with such technologies, misuse has also ensued. Students are employing it to generate their projects and coding assignments [6], [8], while scholars are utilizing it to produce papers [15]. Malicious actors use it to propagate fake news on social media platforms [19], [9], and educational institutions are employing it to provide mental health education to students without their consent [2], among other uses. Furthermore, ChatGPT has the potential to generate seemingly realistic stories that could deceive unsuspecting readers [4], [32]. Hence, developing an efficient detection algorithm capable of distinguishing AI-generated text, particularly ChatGPT, from human-generated text has attracted many researchers. In general, detecting AI-generated text using machine learning concerns two types: black-box and white-box detection. Blackbox detection relies on API-level access to language models, limiting its capability to detect synthetic texts [35]. This type involves data collection, feature extraction, and building a classifier for detection. It includes simple classifiers such as binary logistic regression [33]. In contrast, white-box detection has full access to language models, enabling control of the model's behavior and traceable results [35]. It includes zero-shot detection [25], [33], [39] that leverages pre-trained generative models like GPT-2 [33] or Grover [39] and pretrained language models fine-tuned for the task. A large body of research is attributed to building detectors for the text generated by AI bots [8], [15], [16], [17], [20], [21], [22], [25], [29], [33], [39], [40]. Furthermore, some claim that their AI-text detector can distinguish the ChatGPTgenerated text from the human-generated text [1], [3], [6], [10], [12], [13], [14], [18], [26], [28], [36], [37]. On that account, our motivation is to test all the tools (generalized AI-text detectors plus detectors targeting ChatGPT-generated text) against a benchmark dataset (Section III-A), comprising of ChatGPT prompts and human responses, spanning different domains. We will elaborate on each tool and its functionality in the following section.
Our goals in this paper are as follows:
• We explore the research conducted on AI-generated text detection focusing on ChatGPT detection. We outline different white-box and black-box detection schemes proposed in the literature. We also explore detection schemes in education, academic/scientific writing, and the detection tools available online.
• We evaluate the effectiveness of various tools that aim to distinguish ChatGPT-generated from human-generated responses and compare their accuracy and reliability. Our assessment includes tools that claim to detect ChatGPT prompts and other AI-generated text detection tools that do not target ChatGPT-generated content. This evaluation's primary objective is to gauge these tools' effectiveness in detecting ChatGPT-generated content. Our analysis reveals that the most effective online tool for detecting generated text can only achieve a success rate of less than 50%, as depicted in Table I.
• Our research aims to inspire further inquiry into this critical area of study and promote the development of more effective and accurate detection methods for AI-generated text. Further, our findings underscore the importance of thorough testing and verification when assessing AI detection tools.
II. RELATED WORKS
This section provides an overview of current research on distinguishing AI-generated text from human-generated text.
To categorize most automated machine learning-based detection methods for synthetic texts, we followed OpenAI's classification [33], which divides these methods into three main categories, which are: i) Simple Classifiers [18], [33], ii) Zero-shot detection techniques [22], [25], [39], and iii) Fine-tuning based detection [26]. Simple Classifiers fall under the category of black-box detection techniques, whereas zeroshot and fine-tuning-based detection techniques come under the umbrella of white-box detection techniques. There exists other approaches that do not fit into these three categories; however, they are still significant and merit consideration. These alternative methods include testing ChatGPT-generated text against various plagiarism tools [20], designing a Deep Neural Network-based AI detection tool [6], a samplingbased approach [16], and online detection tools [1], [3], [10], [13], [17], [28], [29], [36], [40]. In the following sections, we will analyze the existing approaches belonging to the aforementioned categories and alternative methods, focusing on their effectiveness in detecting AI-generated text.
A. Simple Classifiers
OpenAI [33], an artificial intelligence research company, analyzed the human detection and automated ML-based detection of synthetic texts. For the human detection of the synthetic datasets, authors showed that the models trained on the GPT-2 datasets tend to increase the perceived humanness of GPT-2 generated text. Hence, they tested a simple logistic regression model, zero-shot detection model (explained in Section II-B), and fine-tune-based detection model (described Section in II-C). The simple logistic regression model was trained on TF-IDF (Term Frequency-Inverse Document Frequency), unigram, and bigram features and later analyzed at different generation strategies and model parameters. It was found that the simple classifiers can work correctly up to an accuracy of 97%. However, detecting shorter outputs is more complicated than detecting more extended outputs for these models. Guo et al. [18] conducted human evaluations and compared datasets generated by ChatGPT and human experts, analyzing linguistic and stylistic characteristics of their responses and highlighting differences between them. The authors then attempted to detect whether the text was ChatGPT-generated or humangenerated by deploying: first, a simple logistic regression model on GLTR Test-2 features and, second, a pre-trained deep classifier model based on RoBERTa [24] for single-text and Q&A detection (as explained in Section II-C). However, the proposed detection models are ineffective in detecting ChatGPT-generated text from human-generated text due to the highly unbalanced corpus of datasets, which did not capture all the text-generating styles of ChatGPT. Another study by Kushnareva et al. [22] utilized Topological Data Analysis (TDA) to extract three types of interpretable topological features, such as the number of connected components, the number of edges, and the number of cycles present in the graph, for artificial text recognition. The authors then trained a logistic regression classifier with these features and tested the approach on datasets from WebText & GPT-2, Amazon Reviews & GPT-2, and RealNews & GROVER [39]. However, this approach is unlikely to be effective for ChatGPT, as it was not tested on that specific model.
B. Zero-shot detection techniques
OpenAI [33] has also developed a GPT-2 detector using a 1.5 billion parameter GPT-2 model that can identify the top 40 generated outputs with an accuracy of 83% to 85%. However, when the model was fine-tuned to the Amazon reviews dataset, the accuracy dropped to 76%. In a different study [25], the authors explored the Zero-shot detection of AI-generated text and deployed an online detection tool (DetectGPT) to distinguish GPT-2 generated text from the human-generated text. They used the generative model's log probabilities to achieve this. The authors experimented and demonstrated that AI-generated text occupies the negative curvature regions of the model's log probability function. However, it should be noted that the authors assumed that one could evaluate the log probabilities of the model(s) under consideration, which may not always be possible. Moreover, as mentioned by the authors, this approach is only practical for GPT-2 prompts. Zellers et al. [39] utilized a transformer identical to the one used for GPT-2, except that they used nucleus sampling instead of top-k sampling to select the next word during text generation. The model they developed, known as Grover, can generate text such as fake news and detect its own generated text. It is also available online. Authors used Grover, GPT-2 (124M or 355M ), BERT (BERT-Base or BERT-Large), and FastText verification tools to classify the news articles generated by Grover. They proved that Grover is the best among the previously mentioned detector to verify its selfgenerated fake news. However, it's not visible that it will work for the text generated by the GPT models. Also, it has been shown that the bi-directional transformer model RoBERTa outperforms Grover models with equivalent parameter size in detecting GPT-2 texts [33].
C. Fine-tuning based detection
In [33], the authors conducted experiments to fine-tune pretrained language models for detecting AI-generated texts by basing the classifiers on RoBERTa BASE and RoBERTa LARGE . They found that fine-tuning RoBERTa consistently outperformed fine-tuning an equivalent capacity GPT-2 model. However, the approach could not detect text generated by ChatGPT, as demonstrated in [31]. In a separate study, Mitrovic et al. [26] investigated the feasibility of training an ML model to distinguish between queries generated by ChatGPT and those generated by humans. The authors attempted to detect ChatGPT-generated two-line restaurant reviews using a framework based on DistilBERT, a lightweight model trained using BERT, which was fine-tuned using a Transformer-based model. Additionally, the predictions made by the model were explained using the SHAP method. The authors concluded that an ML model could not successfully identify texts generated by ChatGPT. Guo at al. [18] also developed a pre-trained deep classifier model based on RoBERTa for single-text and Q&A detection. The limitations are the same as described in Section II-A.
D. Other approaches proposed in literature
Extensive academic research has investigated the adverse impact of ChatGPT on education, which has shown that students and scholars can use ChatGPT to engage in plagiarism. To address this issue, several existing tools, such as RoBERTa, Grover, or GPT-2, have been utilized to check the uniqueness of educational content against ChatGPT-generated text. In [6], the authors proposed a transformer-based model named AICheatCheck, a web-based AI detection tool designed to identify whether a human or ChatGPT generated a given text. AICheatCheck examines a sentence or group of sentences for patterns to determine their origin. The authors used the data collected by Guo et al. [18] (with its limitations) and from the education field. Also, it is not specified in the paper on what basis or on what features AICheatCheck can achieve high accuracy. The study in [20], evaluates the effectiveness of two popular plagiarism-detection tools, iThenticate and Turnitin, in detecting plagiarism concerning 50 essays generated by ChatGPT. The authors also compared ChatGPT's performance against itself and found it more effective than traditional plagiarism detection tools. Another study by Gao et al. [15] aimed to compare ChatGPT-generated academic paper abstracts using a GPT-2 Output Detector RoBERTa, a plagiarism checker, and human review. The authors collected ten research abstracts from five high-impact medical journals and then used Chat-GPT to output research abstracts based on their titles and journals. However, the tool used in the study is not available online for verification. In recent work, Cotton et al. [8] investigate the pros and cons of using ChatGPT in the academic field, particularly concerning plagiarism. In a different work, authors in [16] utilized the distributional properties of the underlying text used for the model. They deployed a tool called GLTR that highlights the input text in different colors to determine its authenticity. GLTR was tested on the prompts from GPT-2 1.5B parameter model [30] and human-generated articles present on social media platforms. The authors also conducted a human study, asking the students to identify fake news from real news.
E. Online tools
Below, we examine multiple online tools and delineate their brief functionality. [21]: This tool utilizes stylometric signals to examine the writing style of a text by identifying patterns and features that are unique to them. These signals are then extracted from the input text to enable sequence-based detection of AI-generated tweets. 2) ZeroGPT [40]: This tool is specifically developed to detect OpenAI text but has limited capabilities with shorter text. 3) OpenAI Text Classifier [28]: This fine-tuned GPT model developed by OpenAI predicts the likelihood of a text being AI-generated from various sources, including ChatGPT. However, the tool only works with 1000 characters or more and is less reliable in determining if a text was artificially generated. 4) GPTZero [29]: This classification model determines whether a document was written by an LLM, providing predictions at a sentence, paragraph, and document level. However, it mainly works for content in the English language and only allows text with a character count between 250 and 5000 characters. 5) Hugging Face [13]: This tool was released by Hugging Face for detecting text generated by ChatGPT. However, it tends to over-classify text as being ChatGPT-written. 6) Perplexity (PPL) [17]: The perplexity (PPL) is a widely employed metric for assessing the efficacy of Large language models (LLM). It is calculated as the exponential of the negative average log-likelihood of text under the LLM. A lower PPL value implies that the language model is more confident in its predictions. LLMs are trained on vast text corpora, enabling them to learn common language patterns and text structures. Consequently, PPL can be utilized to gauge how effectively a given text conforms to such typical characteristics. 7) Writefull GPT Detector [36]: It is primarily used for detecting plagiarism, this tool can identify if a piece of text is generated by GPT-3 or ChatGPT. However, the tool's percentage-based system for determining whether the text was created by AI has a degree of uncertainty for both samples generated by humans and those generated by ChatGPT. 8) Copyleaks [10]: This tool claims to detect if a text is generated by GPT-3, ChatGPT, humans, or a combination of humans and AI. The tool accepts only text with 150 or more characters. 9) Content at Scale [3]: This is an online tool available to detect text generated by ChatGPT. However, the tool can only analyze samples with 25000 characters or less. 10) Originality.ai [1]: This paid tool is designed to work with GPT-3, GPT 3.5 (DaVinci-003), and ChatGPT models. However, the tool only works with 100 words or more and is prone to classify ChatGPT-generated content as real. 11) Writer AI Content Detector [37]: This tool is designed to work with GPT-3 and ChatGPT models. However, its limitation restricts the amount of text that can be checked on each experiment to a maximum of 1500 characters. 12) Draft and Goal [12]: This tool is intended for detecting content generated by GPT-3 and ChatGPT models, and it is equipped to perform detection in both English and French. However, it has a requirement that the input text should be at least 600 characters or longer to work effectively.
1) Stylometric Detection of AI-generated Text
III. EVALUATION
This section evaluates publicly available literature, tools, or codes that can differentiate between AI-generated and humangenerated responses. Our primary focus is on the tools claiming to detect ChatGPT-generated content. However, we also evaluate (to the best of our abilities) the performance of other AI-generated text detection tools that do not make explicit claims about detecting ChatGPT-generated content on ChatGPT prompts. To assess the effectiveness of these tools, we employ a benchmark dataset (Section III-A) that comprises prompts from ChatGPT and humans. Then, we measure the detection capabilities of these tools on both ChatGPT-generated and human-generated content and present the results in Table I.
A. Benchmark Dataset
We utilized the inquiry prompts proposed by Guo et al. [18] through the OpenAI API 1 to generate a benchmark dataset. This dataset comprises 58,546 responses generated by humans and 72,966 responses generated by the ChatGPT model, resulting in 131,512 unique samples that address 24,322 distinct questions from various fields, including medicine, opendomain, and finance. Furthermore, the dataset incorporates responses from popular social networking platforms, which provide a wide range of user-generated perspectives. To assess the similarity between human-generated and ChatGPTgenerated responses, we employed the sentence transformer all-MiniLM-L6-v2 2 . Then, we selected responses with the highest and lowest levels of similarity to assemble a benchmark dataset, which was reduced to approximately 10% of the primary dataset that we generated. This benchmark dataset serves as a standardized reference for evaluating the ability of different techniques to detect ChatGPT-generated content.
B. Evaluation Metrics
To measure and compare the effectiveness of each approach, we utilized the following metrics:
•
C. Evaluated Tools and Algorithms
We evaluate several tools and algorithms summarized in Section II. Table I outlines the detection capability of these tools for the ChatGPT-generated content in terms of TPR and TNR. We can observe that none of the evaluated approaches can consistently detect the ChatGPT-generated text. Analysis reveals that the most effective online tool for detecting generated text can only achieve a success rate of less than 50%, as depicted in Table I.
IV. CONCLUSION
This study delved into the various methods employed for detecting ChatGPT-generated text. Through a comprehensive review of the literature and an examination of existing approaches, we assess the ability of these techniques to differentiate between responses generated by ChatGPT and those produced by humans. Furthermore, our study includes testing and validating online detection tools and algorithms utilizing a benchmark dataset that covers various topics, such as finance and medicine, and user-generated responses from popular social networking platforms. Our experiments highlight Chat-GPT's exceptional ability to deceive detectors and further indicate that most of the analyzed detectors are prone to classifying any text as human-written, with a general high T NR of 90% and low T PR. These findings have significant implications for enhancing the quality and credibility of online discussions. Ultimately, our results underscore the need for continued efforts to improve the accuracy and robustness of text detection techniques in the face of increasingly sophisticated AI-generated content.
TABLE I :
ISummary of analyzed papers
Approach
Published in
Target Model
Publicly
Available
Free/Paid
ChatGPT detc.
Capability (TPR%)
Human-text detc.
Capability (TNR%)
Grover GPT-2 GPT-3 ChatGPT *
Kumarage et al. [21]
2023
Free
23.3
94.7
Bleumink et al. [6]
2023
Paid
13.4
95.4
ZeroGPT [40]
2023
Paid
45.7
92.2
OpenAI Classifier [28]
2023
Free
31.9
91.8
Mitchell et al. [25]
2023
Free
18.1
80.0
GPTZero [29]
2023
Paid
27.3
93.5
Hugging Face [13]
2023
Free
10.7
62.9
Guo et al. [18]
2023
Free
47.3
98.0
Perplexity (PPL) [17]
2023
Free
44.4
98.3
Writefull GPT [36]
2023
Paid
21.6
99.3
Copyleaks [10]
2023
Paid
22.9
92.1
Cotton et al. [8]
2023
×
-
-
-
Khalil et al. [20]
2023
×
-
-
-
Mitrovic et al. [26]
2023
×
-
-
-
Content at Scale [3]
2022
Paid
38.4
79.8
Orignality.ai [1]
2022
×
Paid
7.6
95.0
Writer AI Detector [37]
2022
Paid
6.9
94.5
Draft and Goal [12]
2022
Free
23.7
91.1
Gao et al. [15]
2022
×
-
-
-
Fröhling et al. [14]
2021
Free
27.8
89.2
Kushnareva et al. [22]
2021
Free
25.1
96.3
Solaiman et al. [33]
2019
Free
7.2
96.4
Gehrmann et al. [16]
2019
Free
32.0
98.4
Zellers et al. [39]
2019
Free
43.1
91.3
* GPT 3.5 and above.
True Positive Rate (TPR): This metric represents the tool's sensitivity in detecting text that ChatGPT generates. True Positive (T P) is the total number of correctly identified samples, while we consider False Negative (FN) the number of samples not classified as generated text or incorrectly identified as human text. Therefore, T PR = T P T P+FN . • True Negative Rate (TNR): This metric indicates the tool's specificity in detecting human-generated texts. True Negatives (T N) is the total number of correctly identified samples, while False Positives (FP) is the number of samples incorrectly classified as being produced by ChatGPT. Therefore, T NR = T N T N+FP .
https://openai.com/blog/introducing-chatgpt-and-whisper-apis 2 https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2
. Originality, 2022Online; accessed 28Originality.ai. https://originality.ai/, 2022. [Online; accessed 28-Mar- 2023].
Hidden use of chatgpt in online mental health counseling raises ethical concerns. Online; accessed 24-Mar-2023Hidden use of chatgpt in online mental health counseling raises ethical concerns. https://www.psychiatrist.com/news/hidden-use-of-chatgpt-in- online-mental-health-counseling-raises-ethical-concerns/, 2023. [On- line; accessed 24-Mar-2023].
A multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity. Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, arXiv:2302.04023arXiv preprintYejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, et al. A multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity. arXiv preprint arXiv:2302.04023, 2023.
Chatgpt and the future of medical writing. Som Biswas, Som Biswas. Chatgpt and the future of medical writing, 2023.
Keeping ai honest in education: Identifying gpt-generated text. Groot Arend, Aaron Bleumink, Shikhule, Arend Groot Bleumink and Aaron Shikhule. Keeping ai honest in education: Identifying gpt-generated text, 2023. https://www.aicheatcheck.com/.
Deep reinforcement learning from human preferences. Advances in neural information processing systems. Jan Paul F Christiano, Tom Leike, Miljan Brown, Shane Martic, Dario Legg, Amodei, 30Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human prefer- ences. Advances in neural information processing systems, 30, 2017.
Chatting and cheating: Ensuring academic integrity in the era of chatgpt. R E Debby, Cotton, A Peter, J Reuben Cotton, Shipway, Innovations in Education and Teaching International. Debby RE Cotton, Peter A Cotton, and J Reuben Shipway. Chatting and cheating: Ensuring academic integrity in the era of chatgpt. Innovations in Education and Teaching International, pages 1-12, 2023.
Chatgpt and the rise of large language models: The new ai-driven infodemic threat in public health. Luigi De Angelis, Francesco Baglivo, Guglielmo Arzilli, Pierpaolo Gaetano, Paolo Privitera, Alberto Eugenio Ferragina, Caterina Tozzi, Rizzo, Available at SSRN 4352931Luigi De Angelis, Francesco Baglivo, Guglielmo Arzilli, Gaetano Pier- paolo Privitera, Paolo Ferragina, Alberto Eugenio Tozzi, and Caterina Rizzo. Chatgpt and the rise of large language models: The new ai-driven infodemic threat in public health. Available at SSRN 4352931, 2023.
. Copyleaks AI Content Detector. Copyleaks. Online; accessed 23-Mar-2023Copyleaks AI Content Detector. Copyleaks. https://copyleaks.com/ai-content-detector, 2023. [Online; accessed 23-Mar-2023].
Chatgpt for (finance) research: The bananarama conjecture. Michael Dowling, Brian Lucey, Finance Research Letters. 103662Michael Dowling and Brian Lucey. Chatgpt for (finance) research: The bananarama conjecture. Finance Research Letters, page 103662, 2023.
Draft and Goal. ChatGPT -GPT3 Content Detector. Draft and Goal. ChatGPT -GPT3 Content Detector.
Hugging Face ChatGPT-Detection. Hugging FaceHugging Face. Hugging Face ChatGPT-Detection .
Feature-based detection of automated language models: tackling gpt-2, gpt-3 and grover. Leon Fröhling, Arkaitz Zubiaga, PeerJ Computer Science. 7443Leon Fröhling and Arkaitz Zubiaga. Feature-based detection of automated language models: tackling gpt-2, gpt-3 and grover. PeerJ Computer Science, 7:e443, April 2021. https://peerj.com/articles/cs-443/#supplementary-material.
Comparing scientific abstracts generated by chatgpt to original abstracts using an artificial intelligence output detector, plagiarism detector, and blinded human reviewers. bioRxiv. A Catherine, Gao, M Frederick, Howard, S Nikolay, Emma C Markov, Siddhi Dyer, Yuan Ramesh, Alexander T Luo, Pearson, Catherine A Gao, Frederick M Howard, Nikolay S Markov, Emma C Dyer, Siddhi Ramesh, Yuan Luo, and Alexander T Pearson. Comparing scientific abstracts generated by chatgpt to original abstracts using an artificial intelligence output detector, plagiarism detector, and blinded human reviewers. bioRxiv, pages 2022-12, 2022.
Gltr: Statistical detection and visualization of generated text. Sebastian Gehrmann, Hendrik Strobelt, Alexander M Rush, arXiv:1906.04043arXiv preprintSebastian Gehrmann, Hendrik Strobelt, and Alexander M Rush. Gltr: Statistical detection and visualization of generated text. arXiv preprint arXiv:1906.04043, 2019. http://gltr.io/.
. Biyang Guo, Xin Zhang, Ziyuan Wang, Minqi Jiang, Jinran Nie, Yuxuan Ding, Jianwei Yue, Yupeng Wu, Chatgpt Detector Using Linguistic Features. Biyang Guo, Xin Zhang, Ziyuan Wang, Minqi Jiang, Jinran Nie, Yuxuan Ding, Jianwei Yue, and Yupeng Wu. Chatgpt Detector Using Linguistic Features.
How close is chatgpt to human experts? comparison corpus, evaluation, and detection. Biyang Guo, Xin Zhang, Ziyuan Wang, Minqi Jiang, Jinran Nie, Yuxuan Ding, Jianwei Yue, Yupeng Wu, arXiv:2301.07597arXiv preprintBiyang Guo, Xin Zhang, Ziyuan Wang, Minqi Jiang, Jinran Nie, Yuxuan Ding, Jianwei Yue, and Yupeng Wu. How close is chatgpt to human experts? comparison corpus, evalu- ation, and detection. arXiv preprint arXiv:2301.07597, 2023. https://github.com/Hello-SimpleAI/chatgpt-comparison-detection.
Regulating chatgpt and other large generative ai models. Philipp Hacker, Andreas Engel, Marco Mauer, arXiv:2302.02337arXiv preprintPhilipp Hacker, Andreas Engel, and Marco Mauer. Regulating chatgpt and other large generative ai models. arXiv preprint arXiv:2302.02337, 2023.
Will chatgpt get you caught? rethinking of plagiarism detection. Mohammad Khalil, Erkan Er, arXiv:2302.04335arXiv preprintMohammad Khalil and Erkan Er. Will chatgpt get you caught? rethinking of plagiarism detection. arXiv preprint arXiv:2302.04335, 2023.
Stylometric detection of ai-generated text in twitter timelines. Tharindu Kumarage, Joshua Garland, Amrita Bhattacharjee, Kirill Trapeznikov, Scott Ruston, Huan Liu, arXiv:2303.03697arXiv preprintTharindu Kumarage, Joshua Garland, Amrita Bhat- tacharjee, Kirill Trapeznikov, Scott Ruston, and Huan Liu. Stylometric detection of ai-generated text in twit- ter timelines. arXiv preprint arXiv:2303.03697, 2023. https://github.com/TSKumarage/Stylo-Det-AI-Gen-Twitter-Timelines.
Artificial text detection via examining the topology of attention maps. Laida Kushnareva, Daniil Cherniavskii, Vladislav Mikhailov, Ekaterina Artemova, Serguei Barannikov, Alexander Bernstein, Irina Piontkovskaya, Dmitri Piontkovski, Evgeny Burnaev, arXiv:2109.04825arXiv preprintLaida Kushnareva, Daniil Cherniavskii, Vladislav Mikhailov, Ekate- rina Artemova, Serguei Barannikov, Alexander Bernstein, Irina Pio- ntkovskaya, Dmitri Piontkovski, and Evgeny Burnaev. Artificial text detection via examining the topology of attention maps. arXiv preprint arXiv:2109.04825, 2021. https://github.com/danchern97/tda4atd.
Illustrating reinforcement learning from human feedback (rlhf). Nathan Lambert, Louis Castricato, Alex Leandro Von Werra, Havrilla, Hugging Face BlogNathan Lambert, Louis Castricato, Leandro von Werra, and Alex Havrilla. Illustrating reinforcement learning from human feedback (rlhf). Hugging Face Blog, 2022. https://huggingface.co/blog/rlhf.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov, Roberta, arXiv:1907.11692A robustly optimized bert pretraining approach. arXiv preprintYinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
Detectgpt: Zero-shot machinegenerated text detection using probability curvature. Eric Mitchell, Yoonho Lee, Alexander Khazatsky, D Christopher, Chelsea Manning, Finn, arXiv:2301.11305arXiv preprintEric Mitchell, Yoonho Lee, Alexander Khazatsky, Christopher D Manning, and Chelsea Finn. Detectgpt: Zero-shot machine- generated text detection using probability curvature. arXiv preprint arXiv:2301.11305, 2023. https://github.com/eric-mitchell/detect-gpt https://detectgpt.ericmitchell.ai/.
Chatgpt or human? detect and explain. explaining decisions of machine learning model for detecting short chatgpt-generated text. Sandra Mitrović, Davide Andreoletti, Omran Ayoub, arXiv:2301.13852arXiv preprintSandra Mitrović, Davide Andreoletti, and Omran Ayoub. Chatgpt or human? detect and explain. explaining decisions of machine learn- ing model for detecting short chatgpt-generated text. arXiv preprint arXiv:2301.13852, 2023.
Chatgpt versus traditional question answering for knowledge graphs: Current status and future directions towards knowledge graph chatbots. Reham Omar, Omij Mangukiya, Panos Kalnis, Essam Mansour, arXiv:2302.06466arXiv preprintReham Omar, Omij Mangukiya, Panos Kalnis, and Essam Mansour. Chatgpt versus traditional question answering for knowledge graphs: Current status and future directions towards knowledge graph chatbots. arXiv preprint arXiv:2302.06466, 2023.
. Openai, OpenAI. https://beta.openai.com/ai-text-classifier, January 2023.
. Edward Tian, 2023PrincetonOnline; accessed 23-Mar-2023Edward Tian (Princeton). GPTZero. https://gptzero.me/, 2023. [Online; accessed 23-Mar-2023].
Language models are unsupervised multitask learners. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, OpenAI blog. 189Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
Chatgpt: Bullshit spewer or the end of traditional assessments in higher education. Jürgen Rudolph, Samson Tan, Shannon Tan, Journal of Applied Learning and Teaching. 612023Jürgen Rudolph, Samson Tan, and Shannon Tan. Chatgpt: Bullshit spewer or the end of traditional assessments in higher education? Journal of Applied Learning and Teaching, 6(1), 2023.
. Yiqiu Shen, Laura Heacock, Jonathan Elias, D Keith, Beatriu Hentel, George Reig, Linda Shih, Moy, Chatgpt and other large language models are double-edged swordsYiqiu Shen, Laura Heacock, Jonathan Elias, Keith D Hentel, Beatriu Reig, George Shih, and Linda Moy. Chatgpt and other large language models are double-edged swords, 2023.
Release strategies and the social impacts of language models. Irene Solaiman, Miles Brundage, Jack Clark, Amanda Askell, Ariel Herbert-Voss, Jeff Wu, Alec Radford, Gretchen Krueger, Jong Wook Kim, Sarah Kreps, arXiv:1908.09203arXiv preprintIrene Solaiman, Miles Brundage, Jack Clark, Amanda Askell, Ariel Herbert-Voss, Jeff Wu, Alec Radford, Gretchen Krueger, Jong Wook Kim, Sarah Kreps, et al. Release strategies and the social impacts of language models. arXiv preprint arXiv:1908.09203, 2019. https://openai-openai-detector.hf.space/ https://github.com/openai/gpt-2-output-dataset/tree/master/detector.
Applying bert and chatgpt for sentiment analysis of lyme disease in scientific literature. Teo Susnjak, arXiv:2302.06474arXiv preprintTeo Susnjak. Applying bert and chatgpt for sentiment analysis of lyme disease in scientific literature. arXiv preprint arXiv:2302.06474, 2023.
The science of detecting llm-generated texts. Ruixiang Tang, Yu-Neng Chuang, Xia Hu, arXiv:2303.07205arXiv preprintRuixiang Tang, Yu-Neng Chuang, and Xia Hu. The science of detecting llm-generated texts. arXiv preprint arXiv:2303.07205, 2023.
. Detector, Online; accessed 23-Mar-2023writefull. GPT Detector. https://x.writefull.com/gpt-detector, 2023. [Online; accessed 23-Mar-2023].
. Com Writer, Ai Content, Detector, 2023Online; accessed 23-Mar-2023Writer.com. AI Content Detector. https://writer.com/ai-content-detector/, 2023. [Online; accessed 23-Mar-2023].
Assessing the performance of chatgpt in answering questions regarding cirrhosis and hepatocellular carcinoma. medRxiv. Yee Hui Yeo, S Jamil, Wee Han Samaan, Peng-Sheng Ng, Hirsh Ting, Aarshi Trivedi, Walid Vipani, Ju Dong Ayoub, Omer Yang, Brennan Liran, Spiegel, Yee Hui Yeo, Jamil S Samaan, Wee Han Ng, Peng-Sheng Ting, Hirsh Trivedi, Aarshi Vipani, Walid Ayoub, Ju Dong Yang, Omer Liran, Brennan Spiegel, et al. Assessing the performance of chatgpt in answering questions regarding cirrhosis and hepatocellular carcinoma. medRxiv, pages 2023-02, 2023.
Defending against neural fake news. Advances in neural information processing systems. Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, Yejin Choi, 32Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. Defending against neural fake news. Advances in neural information processing systems, 32, 2019. https://rowanzellers.com/grover/.
. Zerogpt, ZeroGPT. https://www.zerogpt.com, January 2023.
| [
"https://github.com/Hello-SimpleAI/chatgpt-comparison-detection.",
"https://github.com/TSKumarage/Stylo-Det-AI-Gen-Twitter-Timelines.",
"https://github.com/danchern97/tda4atd.",
"https://github.com/eric-mitchell/detect-gpt",
"https://github.com/openai/gpt-2-output-dataset/tree/master/detector."
]
|
[
"The TOP counter and determination of bunch-crossing time at Belle II M. Starič, for the Belle II TOP group",
"The TOP counter and determination of bunch-crossing time at Belle II M. Starič, for the Belle II TOP group"
]
| [
"J Stefan \nInstitute\nLjubljanaSlovenia\n"
]
| [
"Institute\nLjubljanaSlovenia"
]
| []
| At the Belle II experiment a Time-of-Propagation (TOP) counter is used for particle identification in the barrel region. This novel type of particle identification device combines the Cherenkov ring imaging technique with the time-of-flight and therefore it relies on a precise knowledge of the time of collision in each triggered event. We discuss the performance of the counter and present a maximum likelihood based method for the determination of event collision time from the measured data. | null | [
"https://export.arxiv.org/pdf/2305.12890v1.pdf"
]
| 255,097,504 | 2305.12890 | 4a9146361479a35a1c5ee8c2f2e84d1c0c3fffff |
The TOP counter and determination of bunch-crossing time at Belle II M. Starič, for the Belle II TOP group
J Stefan
Institute
LjubljanaSlovenia
The TOP counter and determination of bunch-crossing time at Belle II M. Starič, for the Belle II TOP group
TOP counterparticle identificationcollision time determination
At the Belle II experiment a Time-of-Propagation (TOP) counter is used for particle identification in the barrel region. This novel type of particle identification device combines the Cherenkov ring imaging technique with the time-of-flight and therefore it relies on a precise knowledge of the time of collision in each triggered event. We discuss the performance of the counter and present a maximum likelihood based method for the determination of event collision time from the measured data.
Introduction
The Belle II experiment [1, 2] is a second generation of B Factory experiments aimed for the precise measurements in B, charm and τ physics as well as for the searches of physics beyond the standard model. The experiment is sited at KEK, Tsukuba, Japan. The upgraded KEKB collider, the Su-perKEKB provides collisions of 4 GeV positrons with 7 GeV electrons at or near the energy of Υ(4S ) resonance, which predominantly decays to a pair of B anti-B mesons. With similar cross-sections also pairs of cc and ττ are produced, enabling to study charm and τ-lepton physics with the same collected data. The SuperKEKB collider utilizes the so called nanobeam optics, with which it is possible to squeeze the beams at the interaction region to a sub-micron dimensions and hence to achieve much larger luminosity at similar beam currents. The SuperKEKB is targeting a 30-times the luminosity of its ancestor.
The Belle II detector is a general purpose spectrometer utilizing charged particle vertexing and tracking, neutral particle detection and particle identification (PID). It consists of the following components: a vertex detector made of two layers of DEPFET sensors (PXD) 1 and four layers of double-sided silicon detectors (SVD), a central drift chamber (CDC), a timeof-propagation counter (TOP) in the barrel and a proximity focusing aerogel RICH (ARICH) in the forward, both utilizing Cherenkov ring imaging technique, a CsI(Tl) based electromagnetic calorimeter (ECL), and a K L and muon detector system (KLM). The super-conducting solenoid coil provides a magnetic field of 1.5 T for the charged particle momentum measurements.
Except PXD all other detector components are involved in particle identification: SVD and CDC with energy loss measurements (dE/dx), TOP and ARICH exploit Cherenkov ring Email address: [email protected] (M. Starič, for the Belle II TOP group) 1 second DEPFET layer is not completely installed yet imaging, ECL is involved with energy deposit measurements and KLM with penetrating power measurements. The last two components mainly contribute to lepton identification, while the first four components contribute mainly to hadron identification. All these components provide log likelihoods for the six stable or long lived charged particles: electron, muon, pion, kaon, proton and deuteron. The log likelihoods are combined by summing over detector components,
log L h = det log L det h , h = {e, µ, π, K, p, d}.(1)
Particle selection is performed by either using a binary PID,
P h/h = L h L h + L h ,(2)
where h and h denote particles to be distinguished, or by a global PID,
P h = L h h L h .(3)
It is also possible to weight the likelihoods in Eq. 3 with the corresponding prior probabilities. The Belle II has started taking data in 2019. Since then we recorded 424 fb −1 , a data sample roughly equivalent to the BaBar or half of the Belle data sample, but compared to the target it represents roughly a 1% of the final goal. The luminosity has been steadily increasing during past data taking period, reaching a world record of 4.7 × 10 34 cm −2 s −1 in 2022. To achieve the target of 6 × 10 35 cm −2 s −1 an increase of the order of magnitude is still needed in the next years.
The TOP counter
The TOP counter is a variant of the DIRC detector [3]. Cherenkov photons emitted in a quartz plate by charged particles are transported to the photon detectors by means of total internal reflections. The two dimensional information about the Cherenkov ring is obtained by measuring the time-of-arrival and the position of photons at the photon detectors. The time-of-arrival is measured relative to the e + e − collision time and thus includes the time-of-flight of a particle. This kind of DIRC therefore combines time-of-flight measurement with Cherenkov ring imaging technique.
The Belle II TOP counter [4] is devoted to hadron ID in the barrel region between polar angles of 32 0 and 120 0 . It consists of sixteen modules positioned at a radius of 120 cm. The quartz optics of a module is composed of a 2.6 m long, 2 cm thick and 45 cm wide quartz plate and a 10 cm long expansion prism at backward side ( Fig. 1). At forward side the quartz plate is shaped to form a spherical mirror of radius-of-curvature of 6.5 m. The prism exit window is equipped with two rows of sixteen Hamamatsu R10754 micro channel plate photo multipliers (MCP-PMT) with NaKSbCs photocathode and 4 × 4 anode readout channels [5] forming an imaging plane of 512 pixels. These tubes are single-photon sensitive, have excellent time resolution ( Fig. 2) and can work in a strong magnetic field.
The readout electronics is based on a 8-channel waveform sampling ASIC developed by the University of Hawaii [6]. Each channel of the chip utilizes a switched-capacitor array with a sampling rate of 2.7 Gs/sec and a 11 µs long analog ring buffer for storing waveforms. Four ASIC chips are mounted on a carrier board together with a Xilinx Zynq 030-series FPGA which provides clocking and control for the ASICs. A set of four carrier boards and a data aggregator board (SCROD) equipped with a 045-series Xilinx Zynq FPGA form a front-end readout module that interfaces with the Belle II data acquisition system (DAQ). When a trigger is received, the ASIC chips digitize the relevant time interval of the waveforms for triggered channels using 12-bit Wilkinson-type ADC. The digitized data is then sent to the SCROD where the pedestal subtraction and feature extraction (time, amplitude and pulse width) are performed. The feature-extracted data are packed and sent via optical link to the DAQ system. Electronic time resolution of ∼50 ps has been obtained for single photon signals. tronic channels is calibrated with a precision better than 50 ps (r.m.s). This is performed by injecting double pulses of a constant time delay between the first and second pulse into the inputs. The calibration constants are determined with a minimization procedure described in Ref. [7]. The second step involves time alignment of channels within each module with a precision of at least 50 ps (r.m.s). This is done with a laser calibration system consisting of a pico-second pulsed laser source coupled to a light distribution system made of optical fibers and equipped at output with graded index micro lenses that illuminate MCP-PMT's uniformly as much as possible [8]. The last two steps are done with muons from e + e − → µ + µ − events, since particle identities are known in these events. These calibrations involve time alignment of modules and the calibration of bunch crossing time offset [7] with respect to the accelerator RF clock, with which the waveform-sampling electronics is synchronized; the precision is below 10 ps (r.m.s). Besides the timing calibrations we perform also masking of hot and dead channels; the masks are determined from the measured collision data. The first three calibrations are found to be very stable in time. They are performed at the beginning of each new running period and cross-checked several times during that period. The bunch crossing time offset depends on the accelerator conditions that can change on a daily basis. This calibration is performed continuously for every run.
Particle identification with TOP counter
Particle identification is based on an extended likelihood method with an analytical construction of the probability density functions (PDF) [9,10]. For a given charged particle hypothesis h (h = e, µ, π, K, p, d) the extended likelihood is defined as
log L h = N i=1 log N h S h (c i , t i ) + N B B(c i , t i ) N h + N B + log P N (N h + N B ),(4)
where N h and S h (c, t) are the expected signal yield and signal PDF for the hypothesis h, respectively, N B and B(c, t) are the expected background yield and background PDF, respectively, and c and t are the pixel number and arrival time of the detected photon, respectively. The second term in Eq. 4 is the Poisson probability to measure N photons while expecting N h + N B . The signal PDF for a given pixel c is parameterized as a sum of m c Gaussian PDF's:
S h (c, t) = m c k=1 n ck G(t − t ck ; σ ck ),(5)
where t ck and σ ck are the position and width, respectively, and n ck is the fraction of expected signal photons in the k-th peak.
Those as well as m c are determined analytically with the model described in Ref. [10]. The background PDF is modeled as a uniform distribution in a time window in which the photons are measured. The expected background yield N B is estimated event-by-event from the photon counts of other modules. PID performance of TOP counter is governed mainly by two parameters: the number of detected photons per charged particle and the single photon time resolution. Both have been studied with collision data using muons from e + e − → µ + µ − events. The momentum range of these muons is between 4 and 7 GeV/c, hence the Cherenkov angle is saturated in quartz. The number of detected photons per muon is measured in a time window of 0 to 75 ns, the same as used for the likelihood determination; a time window of -50 ns to 0 is taken to estimate background. Background subtracted photon yields as a function of muon polar angle are shown in Fig. 3. On average we detect 20 to 45 photons per muon. Strong polar angle dependence is due to several factors: muon trajectory length in the quartz (proportional to 1/ sin θ), a fraction of Cherenkov ring satisfying total internal reflection requirement, and the photon losses due to light absorption, quartz surface imperfections and mirror reflectivity. Photon losses are the largest for polar angles around cos θ ∼ 0.3 since the distance photons must travel is the longest. Enhancement at nearly perpendicular moun impact (cos θ ∼ 0) is due to the fact that the total internal reflection requirement is satisfied for the photons flying directly toward PMT's (direct photons) as well as for those flying toward the spherical mirror (reflected photons). With muons from e + e − → µ + µ − events we also measured the time resolution of single photons. The main contribution to the resolution comes from the dispersion of light in quartz (chromatic error) and is proportional to the photon time-ofpropagation [9]. We first assigned photons to the peaks of analytic PDF (Eq. 5) using sPlot technique [11]. The differences of measured photon times and the associated peak positions were then histogrammed in bins of photon propagation time and finally fitted with a convolution of TTS distribution (Fig. 2) and a Gaussian distribution, whose width σ is taken as a free parameter. The results are shown in Fig. 4. A linear dependence is clearly visible for the direct photons, while for the reflected ones an enhanced time resolution can be noticed; this enhancement is due to chromatic error corrections obtained by focusing photons with a spherical mirror. Performance of kaon identification has been studied with collision data using kinematically tagged kaons and pions from D 0 → K − π + decays with D 0 meson reconstructed in D * + → D 0 π + decay. The results for P K/π > 0.5 are shown in Fig. 5. Cherenkov threshold for kaon is at 0.5 GeV/c, while the minimal transverse momentum needed to reach the TOP counter is 0.27 GeV/c. Above the Cherenkov threshold and below 2 GeV/c the identification efficiency is between 90% and 93% with 4% to 8% pion mis-identification (Fig. 5a). Above 2 GeV/c the performance starts to degrade; at 3 GeV/c it reaches a broad plateau with ∼80% efficiency and ∼20% pion misidentification. Fig. 5b shows polar angle dependence. In the backward region (cos θ < 0) the performance is better than in the forward region primarily because of smaller particle momenta. The deep in efficiency at cos θ ∼ 0.3 coincides roughly with the minimum in photon yields shown in Fig. 3. 
For these photons also the chromatic error contribution is among the largest.
Determination of bunch-crossing time
The start for photon time-of-arrival measurements is given by level one trigger whose precision (about 8 ns r.m.s.) does not match the requirement for TOP counter (below 25 ps). This precision can be obtained by identifying a collision bunchcrossing in the off-line processing. The SuperKEKB collider orbits bunches of particles with a frequency of 508 MHz, which corresponds to about 2 ns spacing between RF buckets. The length of a single bunch is 6 mm (r.m.s.), which corresponds to a 14 ps (r.m.s) spread in collision time. If the collision bunchcrossing is uniquely identified, one can correct the measured photon times by the precise timing given with the RF clock and hence can obtain the required start time precision.
The method relies on maximizing the sum of log likelihoods (Eq. 4) of particles hitting the TOP counter against a common offset subtracted from the measured photon times. At least one particle in the event that emits enough Cherenkov photons is therefore needed. Particle identities are also required; they are determined from dE/dx measurements in CDC and SVD (the most likely ones are chosen). The result of maximization is then rounded to the nearest RF bucket time and used to correct photon arrival times.
The maximum is searched by scanning a selected time interval because local maxima are usually present. This search is performed in two steps. First, a coarse scan is performed in steps of 0.5 ns within a time interval of ±50 ns using a lookup table of time-projected PDF's. Then a fine scan is performed in a time interval of ±5 ns around the result of the coarse scan, divided into 200 equidistant steps, and using a complete 2-dimensional PDF's. Finally, the maximum is determined precisely by fitting a parabola to the three largest values.
Efficiency of finding the correct bunch-crossing depends on particle multiplicity and is found to be very sensitive to beam background. Monte Carlo simulations of generic BB events give the following efficiencies: 98.2% if beam background is absent, 97.4% with the present background level and 92.1% with the level expected at the SuperKEKB target luminosity. The inefficiency is found to be primarily due to false maxima caused by Cherenkov photons coming from beam background shower particles. These are not correlated with the collision time, therefore reducing the search interval should increase the efficiency. Recently, SVD can provide the collision time with ∼1 ns precision enabling to shorten the search interval. The improved method is using the collision time determined with SVD instead of the coarse scan. In addition, falsely reconstructed bunch-crossings are suppressed by requiring reconstructed bunch-crossing to be matched with a filled bucket. With these modifications the efficiency has been largely improved: 99.9% at present background level, and 99.5% at the target luminosity where we expect a background rate of 11 MHz per PMT. The method becomes also much less dependent on particle multiplicity, as shown in Fig. 6.
Figure 1 :Figure 2 :
12Quartz optics 3. Calibration of TOP counter Calibration of TOP counter involves several steps. At first, the time base of sampling electronics of each of the 8192 elec-Transition time spread (TTS) of Hamamtsu R10754.
photon yield [photons/muon]
Figure 3 :
3Number of detected photons per muon as a function of cosine of muon polar angle.
Figure 4 :
4Single photon time resolution except TTS as a function of photon propagation time.
Figure 5 :
5Kaon efficiency and pion mis-identification probability as a function of momentum (a) and cosine of polar angle (b) for P K/π > 0.5 as measured with collision data using D * + → D 0 (K − π + )π + decays.
Figure 6 :
6Efficiency of finding the correct bunch-crossing as a function of particle multiplicity. The average multiplicity of hadronic events is about 4 charged particles in the acceptance of TOP counter.
AcknowledgmentsWe thank the SuperKEKB group for the excellent operation of the accelerator; the KEK cryogenics group for the efficient operation of the solenoid; the KEK computer group for on-site computing support; and the raw-data centers at BNL, DESY, GridKa, IN2P3, and INFN for off-site computing support.
. T Abe, Kek Report, T. Abe et.al., KEK Report 2010-1 (2010).
. I Adam, Nucl. Instr. and Meth. A. 538281I. Adam et.al., Nucl. Instr. and Meth. A 538 (2005) 281.
. J Fast, Nucl. Instr. and Meth. A. 876145J. Fast, Nucl. Instr. and Meth. A 876 (2017) 145.
. S Hirose, Nucl. Instr. and Meth. A. 787293S. Hirose et. al., Nucl. Instr. and Meth. A 787 (2015) 293.
. D Kotchetkov, Nucl. Instr. and Meth. A. 941162342D. Kotchetkov et.al., Nucl. Instr. and Meth. A 941 (2019) 162342.
. M Starič, Nucl. Instr. and Meth. A. 876260M. Starič, Nucl. Instr. and Meth. A 876 (2017) 260.
. U Tamponi, Nucl. Instr. and Meth. A. 87659U. Tamponi, Nucl. Instr. and Meth. A 876 (2017) 59.
. M Starič, Nucl. Instr. and Meth. A. 595252M. Starič et.al., Nucl. Instr. and Meth. A 595 (2008) 252.
. M Starič, Nucl. Instr. and Meth. A. 639252M. Starič, Nucl. Instr. and Meth. A 639 (2011) 252.
. M Pivk, F R Le Diberder, Nucl. Instr. and Meth. A. 555356M. Pivk, F.R. Le Diberder, Nucl. Instr. and Meth. A 555 (2005) 356.
| []
|
[
"Hierarchical Beam Training for Extremely Large-Scale MIMO: From Far-Field to Near-Field",
"Hierarchical Beam Training for Extremely Large-Scale MIMO: From Far-Field to Near-Field"
]
| [
"Student Member, IEEEYu Lu \nDepartment of Electronic Engineering\nCenter for Information Science and Technology (BNRist)\nTsinghua University as well as Beijing National Research\n100084BeijingChina\n",
"Student Member, IEEEZijian Zhang \nDepartment of Electronic Engineering\nCenter for Information Science and Technology (BNRist)\nTsinghua University as well as Beijing National Research\n100084BeijingChina\n",
"Fellow, IEEELinglong Dai [email protected]. \nDepartment of Electronic Engineering\nCenter for Information Science and Technology (BNRist)\nTsinghua University as well as Beijing National Research\n100084BeijingChina\n"
]
| [
"Department of Electronic Engineering\nCenter for Information Science and Technology (BNRist)\nTsinghua University as well as Beijing National Research\n100084BeijingChina",
"Department of Electronic Engineering\nCenter for Information Science and Technology (BNRist)\nTsinghua University as well as Beijing National Research\n100084BeijingChina",
"Department of Electronic Engineering\nCenter for Information Science and Technology (BNRist)\nTsinghua University as well as Beijing National Research\n100084BeijingChina"
]
| []
| Extremely large-scale MIMO (XL-MIMO) is a promising technique for future 6G communications.The sharp increase in the number of antennas causes electromagnetic propagation to change from farfield to near-field. Due to the near-field effect, the exhaustive near-field beam training at all angles and distances requires very high overhead. The improved fast near-field beam training scheme based on time-delay structure can reduce the overhead, but it suffers from very high hardware costs and energy consumption caused by time-delay circuits. In this paper, we propose a near-field two dimension (2D) hierarchical beam training scheme to reduce the overhead without the need for extra hardware circuits.Specifically, we first formulate the multi-resolution near-field codewords design problem covering different angle and distance coverages. Next, inspired by phase retrieval problems in digital holography imaging technology, we propose a Gerchberg-Saxton (GS)-based algorithm to acquire the theoretical codeword by considering the ideal fully digital architecture. Based on the theoretical codeword, an alternating optimization algorithm is then proposed to acquire the practical codeword by considering the hybrid digital-analog architecture. Finally, with the help of multi-resolution codebooks, we propose a near-field 2D hierarchical beam training scheme to significantly reduce the training overhead, which is verified by extensive simulation results.Index TermsExtremely large-scale MIMO (XL-MIMO), extremely large-scale antenna array (ELAA), beam training, codebook design.All authors are with the | 10.48550/arxiv.2212.14705 | [
"https://export.arxiv.org/pdf/2212.14705v2.pdf"
]
| 255,340,541 | 2212.14705 | e52ed4df1edeb56b95f12b0b984b81117d76a8f9 |
Hierarchical Beam Training for Extremely Large-Scale MIMO: From Far-Field to Near-Field
24 May 2023
Student Member, IEEEYu Lu
Department of Electronic Engineering
Center for Information Science and Technology (BNRist)
Tsinghua University as well as Beijing National Research
100084BeijingChina
Student Member, IEEEZijian Zhang
Department of Electronic Engineering
Center for Information Science and Technology (BNRist)
Tsinghua University as well as Beijing National Research
100084BeijingChina
Fellow, IEEELinglong Dai [email protected].
Department of Electronic Engineering
Center for Information Science and Technology (BNRist)
Tsinghua University as well as Beijing National Research
100084BeijingChina
Hierarchical Beam Training for Extremely Large-Scale MIMO: From Far-Field to Near-Field
24 May 20231 2
Extremely large-scale MIMO (XL-MIMO) is a promising technique for future 6G communications.The sharp increase in the number of antennas causes electromagnetic propagation to change from farfield to near-field. Due to the near-field effect, the exhaustive near-field beam training at all angles and distances requires very high overhead. The improved fast near-field beam training scheme based on time-delay structure can reduce the overhead, but it suffers from very high hardware costs and energy consumption caused by time-delay circuits. In this paper, we propose a near-field two dimension (2D) hierarchical beam training scheme to reduce the overhead without the need for extra hardware circuits.Specifically, we first formulate the multi-resolution near-field codewords design problem covering different angle and distance coverages. Next, inspired by phase retrieval problems in digital holography imaging technology, we propose a Gerchberg-Saxton (GS)-based algorithm to acquire the theoretical codeword by considering the ideal fully digital architecture. Based on the theoretical codeword, an alternating optimization algorithm is then proposed to acquire the practical codeword by considering the hybrid digital-analog architecture. Finally, with the help of multi-resolution codebooks, we propose a near-field 2D hierarchical beam training scheme to significantly reduce the training overhead, which is verified by extensive simulation results.Index TermsExtremely large-scale MIMO (XL-MIMO), extremely large-scale antenna array (ELAA), beam training, codebook design.All authors are with the
I. INTRODUCTION
With the emergence of new applications such as digital twins, 6G is expected to achieve a 10-fold increase in spectrum efficiency than 5G [1], [2]. The extremely large-scale MIMO (XL-MIMO) is a promising technique for 6G to achieve ultra-high spectrum efficiency [3], [4]. In XL-MIMO systems, the base station (BS) usually deploys an extremely large-scale antenna array (ELAA), which consists of hundreds or even thousands of antennas. ELAA in the XL-MIMO system is expected to drastically improve spatial resolution to realize a high spatial multiplexing gain in 6G. In order to obtain spatial multiplexing gain, XL-MIMO should generate a directional beam with high array gain by beamforming. To support beamforming, beam training should be conducted to search the optimal beamforming vector, i.e., codeword, in the predefined codebook.
As the number of BS antennas in XL-MIMO systems is much larger than that of 5G systems, the high-dimensional XL-MIMO beam training overhead will be overwhelming.
A. Prior Works
There are two typical categories of beam training methods for MIMO, which are far-field beam training and near-field beam training respectively. For the first category, since the antenna number at BS is usually not very large in 3G-5G systems, the MIMO channel is modeled in the far-field region with the planar wave assumption, where the array response vector of the far-field channel is only related to the angle. In this case, the orthogonal Discrete Fourier Transform (DFT) codebook can be utilized in beam training to capture the physical angle information in the angle-domain of the channel paths [5], [6]. However, since the size of the DFT codebook is proportional to the number of antennas at BS, the DFT codebook suffers from very high training overhead when it comes to XL-MIMO systems. Thus, to reduce the beam training overhead, some hierarchical beam training schemes were proposed [7], [8]. The basic idea of the beam training is to search from the lowest-resolution codebook to the highest-resolution codebook layer by layer, where the angle range needed to be scanned reduces layer by layer gradually.
With the help of hierarchical beam training, the overhead becomes proportional to the logarithm of the antenna number at BS [8].
As the antenna number dramatically increases in 6G XL-MIMO systems, the near-field range expands by orders of magnitude, which can extend to several hundred meters [9]. Thus, the XL-MIMO channel should be modeled in the near-field region subjected to the spherical wave assumption. In this case, the existing far-field beam training schemes may not be valid for the near-field XL-MIMO channel. To cope with this problem, near-field beam training should be utilized to match the near-field XL-MIMO channel feature. For the second category, i.e., nearfield beam training, the array response vector of the near-field channel is not only related to the angle but also to distance. Thus, to capture the physical angle information as well as distance information of the channel paths, a polar-domain codebook [10] should be utilized instead of a DFT codebook. Accordingly, the size of the polar-domain codebook is the product of the antenna number at BS and the number of sampled distance grids. Since only one angle and one distance can be measured in each time slot, the exhaustive search method for near-field beam training has a very high overhead [11]. To address this problem, we have proposed a fast timedelay based near-field beam training for XL-MIMO with low overhead [12], where each antenna requires time-delay circuits to provide frequency-dependent phase shift. In specific, due to the near-field beam split effect in a wideband situation, near-field beams can be flexibly controlled by extra time-delay hardware circuits and then focus on different angles and distances at different frequencies in one time slot. However, the time-delay based beamforming structure will lead to not only high hardware costs but also very high energy consumption, especially for XL-MIMO systems with a large number of antennas.
B. Contributions
Thus, to design a general and low-overhead beam training scheme, we propose a nearfield two dimension (2D) hierarchical beam training scheme by designing the multi-resolution codebooks referring to the hierarchical beam training in the far-field scenario. Our contributions are summarized as follows.
1) We first formulate the problem of near-field codeword design. Specifically, compared with the far-field case, the ideal beam pattern of near-field codeword should not only cover a certain angle range but also a certain distance range. By considering ideal fully digital architecture, we provide the design problem of the near-field theoretical codeword. Then, based on the theoretical codeword, we formulate the problem of a practical codeword with assumptions of the hybrid digital-analog structure and quantized phase shifts in practice.
2) In order to design the near-field theoretical codeword, inspired by the Gerchberg-Saxton (GS) algorithm in phase retrieval problems for digital holography imaging, we propose a GS-based theoretical codeword design algorithm for a fully digital architecture. Different from the original GS algorithm, we modify the transformation methods from Fourier transform to polar-domain transform to match the near-field assumption. Additionally, the power constraint instead of amplitude measurements are considered in each iteration to control the power of the codeword.
3) Since fully digital architecture with high energy assumption is not available in a practical XL-MIMO system, we then design the practical codeword considering the hybrid digital-analog architecture. Based on the theoretical codeword, an alternating optimization algorithm is proposed to acquire the practical codeword, where the digital beamforming vector and the analog beamforming matrix are optimized iteratively. Specifically, in each iteration, the digital beamforming vector is obtained by a closed-form solution. Meanwhile, phases of the entries in the analog beamforming matrix are solved individually by a highefficient iterative search method. 4) Next, we generate multi-resolution codebooks based on the practical codewords obtained by the alternating optimization algorithm. With the aid of multi-resolution codebooks with different angle coverages and distance coverages, we propose a near-field two dimension (2D) hierarchical beam training scheme. Specifically, codewords are searched in multiresolution codebooks layer by layer, where angle and distance ranges are reduced gradually.
Moreover, we provide the analysis of the proposed beam training overhead, which is proportional to the sum of the logarithm of the antenna number and the sampled distance grid number. Simulation results show that the proposed beam training scheme can reach sub-optimal achievable rate performance with low overhead.
C. Organization and Notations
Organization: The rest of the paper is organized as follows. In Section II, we first introduce the signal model, the near-field channel model, and the formulation of the near-field codebook design problem. In Section III, we provide the design of the theoretical codeword by the proposed Gerchberg-Saxton algorithm considering fully digital architecture. In Section IV, we propose an alternating optimization scheme to design the practical codeword with hybrid digital-analog architecture. Then, the proposed near-field 2D hierarchical beam training scheme is described in Section V. Simulation results and conclusions are provided in Section VI and Section VII, respectively.
Notations: Lower-case and upper-case boldface letters a and A denote a vector and a matrix, respectively; a H and A H denote the conjugate transpose of vector a and matrix A, respectively; ∥a∥ 2 denotes the l 2 norm of vector a; ∥a∥ F denotes the Frobenius norm of vector a. 0 N ×M denotes N × M -dimensional null matrix. Finally, CN (µ, Σ) denotes the probability density function of complex multivariate Gaussian distribution with mean µ and variance Σ. U(−a, a) denotes the probability density function of uniform distribution on (−a, a).
II. SYSTEM MODEL
In this section, we will first introduce the signal model of the XL-MIMO system. Then, the existing near-field channel model will be briefly reviewed. Finally, we formulate the problem of codeword design in the near-field scenario.
A. Signal Model
We consider the scenario where the BS employs a N -element ELAA to communicate with a single-antenna user. Let h H ∈ C 1×N denote the channel from the BS to the user. Since the XL-MIMO channel h H is generally dominated by a few main paths, we only need to search the physical location of the main paths by beam training instead of acquiring the explicit channel information [13], [14]. Therefore, the main path is concerned in this paper, and the corresponding beam training method will be investigated to search for the optimal beamforming vector to align with the main path. Take downlink transmission as example, the received signal y can be represented by
y = h H vs + n,(1)
where v ∈ C N ×1 represents the beamforming vector at the BS, which is essentially a codeword chosen from the predefined codebook, s represents the symbol transmitted by the BS, and n ∼ CN (0 N , σ 2 I N ) represents the received noise with σ 2 representing the noise power. The beam training is to measure the power of y to find the best codeword from the codebook.
Next, we will briefly review the existing near-field XL-MIMO channel model for existing near-field beam training schemes.
B. Near-Field XL-MIMO Channel Model
When the distance between the BS and the UE is smaller than the Rayleigh distance [15], the near-field XL-MIMO channel should be modeled with the spherical wave assumption, which can be expressed by
h = √ N αb (θ, r) .(2)
where α is the complex path gain. b (θ, r) represents the near-field array response vector, which can be represented by [10] b(θ, r)
= 1 √ N [e −j 2π λ (r (1) −r) , · · · , e −j 2π λ (r (N ) −r) ] H ,(3)
where r represents the distance from the UE to the center of the antenna array, r (n) = r 2 + δ 2 n d 2 − 2rδ n dθ represents the distance from the UE to the nth BS antenna, and δ n = 2n−N −1 2 with n = 1, 2, · · · , N .
Before data transmission, beam training should be applied to estimate the physical angles and distances of near-field channel paths. The near-field response vector b(θ, r) implies that the optimal beam training codeword should focus on the spatial angle θ and BS-UE distance r. The existing near-field beam training scheme is conducting an exhaustive search in the polar-domain codebook [10], which can be represented as
A = [b(θ 1 , r 1 1 ), · · · , b(θ 1 , r S 1 1 ), · · · , b(θ N , r S N N )],(4)
where each column of polar-domain codebook A is a codeword aligned with the grid (θ n , r sn n ), with s n = 1, 2, · · · , S n , S n denotes the number of sampled distance grids at θ n . Therefore, the number of total sampled grids of the whole propagation environment is S = N n=1 S n . Apparently, in XL-MIMO systems, the codebook should not only sample angle but also distance, which leads to a large-size codebook and unfordable beam training overhead. Thus, to address this problem, we design the hierarchical near-field codebook with multi-resolution codebooks, and then propose the corresponding near-field 2D hierarchical beam training. To design the multi-resolution nearfield codebooks, we will first formulate the design problem of a near-field codeword with different angle coverage and distance coverage.
C. Formulation of Codebook Design Problem
Suppose the angle coverage and distance coverage of
codeword v are B v,θ ≜ [θ, θ + B θ ] and B v,r ≜ [r, r + B r ],
where B θ and B r are the angle sampled step and distance sampled step. The ideal beam pattern vector of the codeword v is denote as
g v = g v (θ 1 , r 1 1 ), · · · , g v (θ N , r 1 N ), · · · , g v (θ N , r S N N ) ,(5)
where g v (θ, r) = |g v (θ, r)| e jfv(θ,r) is the theoretical beamforming gain. The amplitude information |g v (θ, r)| of the ideal beam pattern can be further represented by
|g v (θ, r)| = √ C v , θ ∈ B v,θ , r ∈ B v,r 0, θ / ∈ B v,θ , r / ∈ B v,r .(6)
For the ideal beam pattern in (5), the amplitude information |g v (θ, r)| of ideal beam pattern vector in target angle coverage and distance coverage are fixed and flattened while other beamforming gains are zero. Meanwhile, the phase information f v (θ, r) of the ideal beam pattern vector can be designed flexibly. Compared to a far-field codeword, the near-field codeword should cover not only a certain angle range but also a certain distance range.
To evaluate the effectiveness of the codeword v, we reference G (v, θ, r) as the beamforming gain of v in the angle θ and the distance r. The G (v, θ, r) can be represented as
G(v, θ, r) = √ N b(θ, r) H v.(7)
Thus, according to the definition of polar-domain codebook A in (4), the beam pattern obtained by beamforming with codeword v can be presented as A H v.
The aim of designing a codeword is to make the beam pattern A H v obtained by beamforming with the codeword v as close as possible to the ideal beam pattern g v . Thus, the objective of the theoretical codeword v design can be express as
min v,f (θ,r) A H v − g v 2 2 . (P1)
In (P1), the ideal theoretical codeword v can only be realized by the fully digital architecture, where each antenna requires one dedicated radio frequency (RF) chain to realize fully digital signal processing. However, fully digital architecture in the XL-MIMO system results in unaffordable energy consumption. In fact, a hybrid digital-analog structure is usually preferred in XL-MIMO systems to improve energy efficiency [16]. In this structure, we need to design practical codewords considering the hardware constraints in terms of phase shifter resolution and the number of radio frequency (RF) chains N RF [17].
Specifically, based on the ideal theoretical codeword v, the design of the practical codeword v p ≜ F RF f BB can be represented as
min F RF ,f BB ∥v − F RF f BB ∥ 2 s.t. ∥F RF f BB ∥ 2 = 1, [F RF ] n,i = e jδ n,i , δ n,i ∈ Φ b n = 1, 2, . . . , N, i = 1, 2, . . . , N RF ,(P2)
where the F RF ∈ C N ×N RF and f BB ∈ C N RF ×1 are the analog beamforming matrix and the digital beamforming vector.
Φ b = π −1 + 1 2 b , π −1 + 3 2 b , . . . π 1 − 1 2 b
is the set of quantized phase shifters with b bits.
All the codewords in the codebook can be designed based on (P1) and (P2). Next, we introduce the design method of the theoretical codeword v in Section III and practical codeword v p Section IV.
III. PROPOSED GERCHBERG-SAXTON ALGORITHM BASED NEAR-FIELD THEORETICAL
CODEWORD DESIGN
In this section, we will first briefly review the Gerchberg-Saxton algorithm applied in the phase retrial problem in the hologram optical system, and the relationship between the phase retrieval problem and the codeword design problem is analyzed. Next, we propose a GS-based theoretical codeword design scheme. Finally, the convergence property of the GS algorithm in near-field codeword design is provided.
A. Preview of the Phase Retrieval Problem and Gerchberg-Saxton algorithm 1) Phase retrieval problem in digital holography imaging: In recent years, with the development of modern optics and computer science, digital holography imaging technology has changed the traditional imaging object-image relationship and structure by combining the frontend optical system design with the back-end signal processing. The back-end signal processing algorithm of the original data collected by the camera can break through the traditional imaging bottleneck.
In specific, in optical systems, the amplitude information is easy to measure, while the direct recording of the phase information is not allowed. The reason is that the electromagnetic field oscillates at a very high frequency that rare electronic measurement devices can follow [18].
Thus, in order to realize the imaging of the original object, one of the most important problems in digital holography imaging technology is conducting phase retrieval [19]. Fortunately, with the help of the measured amplitude information, some signal processing algorithms offer alternative methods for recovering the phase information of optical images without requiring sophisticated devices.
Reviewing the theoretical codeword design problem in (P1), it is obvious that the problem (P1) is similar to the phase retrieval in digital holography imaging, where the phase information (f v (θ, r) of the ideal beam pattern vector) should be obtained by measured amplitude information (|g v (θ, r)| of ideal beam pattern vector).
2) Gerchberg-Saxton algorithm: One of the most popular methods to solve the phase retrieval problem is Gerchberg-Saxton (GS)-based algorithm [20], [21] as shown in Fig. 1 (a), where two amplitude measurements are iteratively imposed in the object plane and diffraction pattern plane [22], [23]. It is worth noting that the diffraction pattern plane is also known as the Fourier Some modified versions of the GS algorithm have been proposed afterward [24] to match various imaging problems. Instead of utilizing the GS algorithm in the imaging problem, we improved the GS algorithm in the near-field codeword design problem. In specific, we replace one of updating processes with measured amplitude information by applying normalization to match the power constraint of the codeword.
B. Design of the Theoretical Codeword v
In order to solve the (P1), we draw the experience from the Gerchberg-Saxton (GS) algorithm, which is widely applied in phase retrieval problems in digital hologram imaging of optical systems. In the phase retrieval problem, the phase information needed to be obtained with the fixed amplitude information, which is the same as the phase information f v (θ, r) design of the ideal beam pattern in the problem (P1). Specifically, the proposed GS-based near-field codeword design procedure is shown in Algorithm 1.
For notation simplicity, in the description of the GS algorithm, we usev (s) , g (s) , g ′ (s) , and v ′ (s) to denote the designed codeword vector, the beam pattern vector realized by the designed codeword, the revised beam pattern vector with ideal beam pattern amplitude, and the codeword vector obtained by revised beam pattern vector in the s-th iteration of GS algorithm.
Before the GS algorithm starts, we should first obtain the initial beam pattern vector g (0) with randomly generated phase f (0) (θ, r) and amplitude information g v (θ, r) of ideal beam pattern vector g v . In this way, the g (0) can be represented as
g (0) = g v (θ 1 , r 1 1 ) f (0) (θ 1 , r 1 1 ),· · ·,|g v (θ N , r 1 N )|f (0) (θ N , r 1 N ), · · · , |g v (θ N , r S N N )|f (0) (θ N , r S N N ) .(8)
Algorithm 1: GS-based theoretical codeword design Inputs:
|g v |, C v , S max , A, B v,θ , B v,r .
Initialization: randomly generate f (0) (θ, r) and obtain the g (0) by (8).
1.v ′ (0) = AA H −1 Ag (0) 2. Obtainv (1) by normalizingv ′ (0)
3. for s = 1, 2, · · · , S max do 4.
calculate g (s) based onv (s) by (9) 5. calculate g ′ (s) based on g (s) and g v by (10) 6.
calculatev ′ (s) based on g ′ (s) by (11) 7.
if s < S max 8. calculatev (s+1) based onv ′ (s) by (12) 9.
end if 10. end for
11. v =v ′ (Smax) /||v ′ (Smax) || 2 Output: Theoretical codeword v.
In s-th iteration, with provided designedv (s) ,
g (s) = A Hv (s) .(9)
Then, in order to maintain the amplitude information of the ideal beam pattern vector g v to approach the ideal beam pattern, we assign the amplitude information |g v (θ, r)| of ideal beam pattern g v to g ′ (s) , and the phase information fv (s) (θ, r) of current beam pattern g (s) to g ′ (s) . In this case, the g ′ (s) can be presented as
g ′ (s) = g v (θ 1 , r 1 1 ) fv (s) (θ 1 , r 1 1 ), · · · , |g v (θ N , r 1 N )|fv (s) (θ N , r 1 N ), · · · , |g v (θ N , r S N N )|fv (s) (θ N , r S N N ) .(10)
Base on the (P1), given g ′ (s) , thev ′ (s) can be obtained by least square algorithm aŝ
v ′ (s) = AA H −1 Ag ′ (s) = A † g ′ (s) ,(11)
where the pseudo inverse of A H is denoted as A † . Finally, we normalize thev ′ (s) aŝ
v (s+1) =v ′ (s) /||v ′ (s) || 2 .(12)
After the iteration number reaches S max , we utilizev ′ (Smax) to obtain the designed theoretical codeword v.
C. Convergence Property of GS Algorithm in Near-Field Codeword Design
As mentioned before, the original GS algorithm assumes that the object and the diffraction pattern planes are connected through a Fourier Transform (FT). The convergence of the original GS algorithm with FT assumption is proved based on Parseval's theorem of FT [25], where the energy of wavefronts in the object and the diffraction pattern planes before and after FT and inverse FT are the same. However, the codeword vector plane and beam pattern vector plane in the proposed GS algorithm are connected with the polar-domain transformation, which does not satisfy Parseval's theorem. Thus, the convergence property of the proposed GS algorithm based on polar-domain transformation in near-field codeword design should be analyzed.
In this paper, the convergence of the proposed GS algorithm is supervised by the squared error in each iteration. Specifically, the squared error of the beam pattern plane in s-th iteration can be presented as
E (s) = ∥g (s) (θ, r) − g ′ (s) (θ, r)∥ 2 2 dθdr. = ∥A Hv (s) (u, w) − A Hv′ (s) (u, w)∥ 2 2 dudw(13)
It is worth noting that the codewords in the polar-domain codebook A H have been rearranged,
where the codewords aligned with the largest distance S n of each θ n are brought to the front columns of A H . Thus, the A H can be rewritten as
A H = [A 1 , A 2 ] H ,(14)
where
A 1 = [b(θ 1 , r S 1 1 ), b(θ 2 , r S 2 2 ), · · · , b(θ N , r S N N )], A 2 = [b(θ 1 , r 1 1 ),· · · ,b(θ 1 , r S 1 −1 1 ),· · · ,b(θ N , r 1 N ),· · · ,b(θ N , r S N −1 N )].
Since the S n in each column b(θ n , r Sn 1 ) of A 1 is larger than Rayleigh distance, b(θ n , r Sn 1 ) approximates to the far-field codeword aligned with the physical direction θ n . In this case, the A 1 is equal to a far-field w) is an FT process, which satisfies Parseval's theorem as
DFT codebook. Thus, A H 1 v (s) (u, w)−v ′ (s) (u,∥ v (s) (u, w)−v ′ (s) (u, w) ∥ 2 2 dudw = ∥A H 1 v (s) (u, w)−v ′ (s) (u, w) ∥ 2 2 dudw.(15)
Therefore, E (s) can be further expressed as
E (s) = ∥A H 1 v (s) (u, w)−v ′ (s) (u, w) ∥ 2 2 + ∥A H 2 v (s) (u, w)−v ′ (s) (u, w) ∥ 2 2 dudw. ≥ ∥ v (s) (u, w)−v ′ (s) (u, w) ∥ 2 2(16)
The squared error of the codeword vector plane of s + 1-th iteration for the GS algorithm can be expressed as
E 0 (s) = ∥v (s+1) (u, w) −v ′ (s) (u, w)∥ 2 2 dudw.(17)
Then, we provide Lemma 1 to show the change of squared error between adjacent iteration in codeword vector plane.
Lemma 1:
In the codeword vector plane of GS algorithm, the error betweenv (s) (u, w) and
v ′ (s) (u, w) not less than than the error betweenv
(s+1) (u, w) andv ′ (s) (u, w), i.e., ∥v (s) (u, w)−v ′ (s) (u, w)∥ 2 2 > ∥v (s+1) (u, w)−v ′ (s) (u, w)∥ 2 2 . proof: See Appendix A.
From the (16), (17), and Lemma 1, we can derive that
E 0 (s) ≤ ∥v (s) (u, w) −v ′ (s) (u, w)∥ 2 2 dudw ≤ E (s)(18)
On the other hand, E 0 (s) can be further expressed as
E 0 (s) = ∥v (s+1) (u, w) −v ′ (s) (u, w)∥ 2 2 dudw = ∥A † g (s+1) (θ, r) − A † g ′ (s) (θ, r)∥ 2 2 dθdr.(19)
Utilizing the uniqueness of pseudo inverses, we can easily know that A † = (A H ) −1 , 0 (S−N )×N .
In this case, since A H 1 g (s+1) (θ, r)−g ′ (s) (θ, r) is a inverse FT process, which also satisfies Parseval's theorem. Thus,
E 0 (s) = ∥g (s+1) (θ, r) − g ′ (s) (θ, r)∥ 2 2 dθdr.(20)
Similar to Lemma 1, we can obtain that
∥g (s+1) (θ, r)−g ′ (s) (θ, r)∥ 2 2 ≥ ∥g (s+1) (θ, r)−g ′ (s+1) (θ, r)∥ 2 2(21)
Thus, given (20) and (21) E 0 (s) ≥ ∥g (s+1) (θ, r) − g ′ (s+1) (θ, r)∥ 2 2 dθdr = E (s+1) .
Combining the equation (18) and (22), we can observe that
E (s+1) ≤ E 0 (s) ≤ E (s) ,(23)
which means that the squared error in each iteration decreases. Thus, the convergence property of the proposed GS algorithm is proven.
IV. PROPOSED ALTERNATING OPTIMIZATION BASED NEAR-FIELD PRACTICAL CODEWORD
DESIGN
It is well known that each antenna requires one dedicated radio-frequency (RF) chain to realize the fully digital architecture. In this way, an XL-MIMO system with a very large number of antennas leads to an equally large number of RF chains, which will result in unaffordable hardware costs and energy consumption. To solve this problem, hybrid digital-analog architecture is preferred in practice, where the fully digital beamforming matrix is decomposed into a highdimensional analog beamforming matrix and a low-dimensional digital beamforming vector.
Moreover, quantized phase shifts instead of continuous quantized phase shifts are accessible for realizing analog beamforming matrix. Thus, in this section, alternating optimization is proposed for practical codeword design considering the hybrid digital-analog architecture and quantized phase shifts.
Based on the theoretical codeword v obtained by Algorithm 1, we solve the practical codeword v p design problem (P2) by alternating optimizing the digital beamforming vector f BB and the analog beamforming matrix F RF considering the hardware constraints. Algorithm 2 provides the specific procedure to design the practical codeword.
For the given analog beamforming matrix F RF , the optimization problem of the digital beamforming vector f BB can be expressed as
min f BB ∥v − F RF f BB ∥ 2 , (P2.1)
which can be solved by least square aŝ
f BB = F H RF F RF −1 F H RF v(24)
Algorithm 2: Practical codeword design
Inputs: v, T max , P max , Φ b , N , N RF .
Initialization: randomly generate F 0 RF . 1. for t = 1, 2, · · · , T max do // Design the digital beamforming vector.
2. calculate the f t BB by (24) // Design the analog beamforming matrix.
3. for p = 1, 2, · · · , P max do
4.
for n = 1, 2, · · · , N do
5.
for i = 1, 2, · · · , N RF do
6.
Search δ n,i to satisfy (25
Output: f BB = f Tmax BB , F RF = F Tmax RF , v p = F Tmax RF f Tmax BB
Then, for the given analog beamforming vector f BB , the optimization problem of F RF can be expressed as min
F RF ∥v − F RF f BB ∥ 2 s.t. ∥F RF f BB ∥ 2 = 1, [F RF ] n,i = e jδ n,i , δ n,i ∈ Φ b , n = 1, 2, . . . , N, i = 1, 2, . . . , N RF , (P2.2)
The optimization of F RF problem (P2.2) can be converted to the minimization absolute value of each entry of the vector v − F RF f BB . Hence, the problem (P2.2) can be transformed into N sub-problems, which can be optimized one by one. The n-th sub-problem is rewritten as
min θ 1 ,θ 2 ,...,θ N RF [v] n − N RF i=1 [f BB ] i e jδ n,i s.t. δ n,i ∈ Φ b , i = 1, 2, . . . , N RF .(25)
To obtain the solution to (25), the exhaustive search is a obvious choice, where all the combination of δ n,1 , · · · , δ n,N RF are test to minimize the objective. However, the number of combination is 2 bN RF , which has prohibitively high computational complexity. For example, if b = 4, N RF = 32, the 2 bN RF ≈ 7.9 × 10 28 ! Thus, we need to investigate near-optimal search method to reduce complexity.
In this case, we propose a high efficient individual search method, where each δ n,i is determined separately in each iteration. The specific procedures are summarized in Algorithm 2. We firstly initialize the δ 0 n,1 , · · · , δ 0 n,N RF by choosing the entry from the Φ b and generate F 0 RF . In p-th iteration, we find best δ n,1 , · · · , δ n,N RF one by one. In step 6, for δ n,i , we search through the Φ b to find the optimal choice to satisfy the (25). This iterative process performs stop until the number of iterations reaches predetermined figure or δ p−1 n,i = δ p n,i . Then the n-th row of the designedF RF can be expressed as and B r , the corresponding codeword has a lower resolution, and the size of the corresponding codebook becomes smaller. As mentioned before, we can generate near-field multi-resolution codebooks with different angle coverage and distance coverage based on the Algorithm 1 and
Algorithm 2.
Then, these multi-resolution codebooks are applied to conduct near-field 2D hierarchical beam training. Compared with far-field scenario, the near-field 2D hierarchical beam training need to reduce the search range of angle and distance at the same time as shown in Fig. 2.
The specific near-field beam training procedure is summarized in Algorithm 3. First, as shown in Step2, for l-th codebook generation, we need to divide the angle coverage B l v k ,θ and distance coverage B l v k ,r based on angle samples step B l θ and distance samples step B l r for each codeword v k . Then, in Steps 3-4, the codewords design scheme based on Algorithm 1 and Algorithm 2 is applied to obtain the l-th codebook W l . Then, Steps 7-16 are operated to search the optimal codeword in multi-resolution codebooks layer by layer.
B. Comparison of the Beam Training Overhead
Beam training overhead refers to the number of time slots used for beam training. Generally, the beam training overhead is determined by the spatial resolutions of an antenna array on Algorithm 3: Near-field 2D hierarchical beam training Inputs: L, B 1 θ , B 2 θ , · · · , B L θ , B 1 r , B 2 r , · · · , B L r , y opt = 0, s opt = 0 // Generate L sub-codebooks 1. for l = 1, 2, · · · , L do 2. generate the collection of B l v l,k ,θ and B l v l,k ,r based on B l θ and B l r 3. generate |g v (θ, r)| for based on (6) 4. obtain the practical codewords in l-th sub-codebook W l based on Algorithm 1 and Algorithm 2.
end for
6. W = W 1 // Conduct beam training 7. for l = 1, 2, · · · , L do 8. for v l,k in W do 9. y l k = h H v l,k s + n 10.
if y l k > y opt then 11. k opt = k 12. end if 13. end for Far-field hierarchical scheme [26] L l U (l) 40
14. choose v l+1,k in W l+1 satisfied B l+1 v l+1,k ,θ ∈ B l v l,
Far-field exhaustive search scheme [27] U 512
Near-field exhaustive search scheme [10] US 8192
Time-delay based near-field scheme [12] S 16
Proposed near-field 2D hierarchical scheme After we conduct beamforming with the designed practical codeword, we can obtain Fig. 3 (b), which presents the beamforming gains of different locations in space with the designed practical codeword. From Fig. 3 (b) we can see that the target location has the largest beamforming gain and other locations have much lower beamforming gains. Moreover, for the codeword in the layer 2 codebook, the designed practical codeword can also approach the ideal beam pattern Fig. 3 (c) and (d). Since the codeword in the layer 1 codebook should cover a larger range than that of layer 2 codebook, we can observe that the beamforming gain of non-target position in Fig. 3 (b) is also larger than that in Fig. 3 (d). [26], far-field exhaustive search beam training scheme [27], the near-field exhaustive search beam training scheme [10], and time-delay based near-field beam training scheme [12]. We set the number of angle and distance grids as U = 512 and Fig. 4 (a), where the bandwidth is 100 MHz, we can observe that the proposed near-field 2D hierarchical beam training can achieve the best performance of all schemes with relatively lower overhead. For example, the proposed scheme outperforms the far-field angle-domain codebook with only half of the beam training overhead. The reason is that the existing far-field codebook can only capture the angle information of the channel path. Moreover, the time-delay based scheme has worse performance than the proposed scheme in this narrow-band condition. The principal reason is that the ability of timedelay circuits to control the beam split will decrease by reducing the bandwidth. Meanwhile, Fig. 4 (b) illustrates the wideband situation, where the bandwidth is 500 MHz. It can be observed that the time-delay based beam training scheme has better performance than the proposed scheme.
However, the proposed scheme has much lower hardware cost and is bandwidth-independent.
Thus, we believe that the proposed scheme provides a tradeoff between the performance and overhead in near-field XL-MIMO beam training in a more general and cost-saving way. Fig. 4. From Fig. 5 (a), i.e., narrow band condition, it is obvious that the proposed beam training scheme outperforms all existing far-field and near-field schemes. In specific, around 36.6% improvement in achievable rate is accomplished by the proposed method compared to the time-delay based near-field beam training in SNR = 2 dB. In addition, we can observe that Achievable Rate (bit/s/Hz) Far-field hierarchical beam training [26] Far-field exhaustive search beam training [27] Near-field exhaustive search beam training [10] Time delay based near-field beam training [12] Proposed near-field 2D hierarchical beam training Achievable Rate (bit/s/Hz) Far-field hierarchical beam training [26] Far-field exhaustive search beam training [27] Near-field exhaustive search beam training [10] Time delay based near-field beam training [12] Proposed near-field 2D hierarchical beam training the proposed method can also achieve better performance as long as SNR is smaller than 4 dB in the wideband situation. The reason why the near-field beam training scheme is vulnerable to noise is that the time-delay based near-field beam training scheme has to utilize beams with different frequencies to search different locations. the time-delay based near-field beam training scheme can not accumulate the power from all frequencies to combat noise as the near-field exhaustive beam training approach. Achievable Rate (bit/s/Hz) Far-field hierarchical beam training [26] Far-field exhaustive search beam training [27] Near-field exhaustive search beam training [10] Time delay based near-field beam training [12] Proposed near-field 2D hierarchical beam training Achievable Rate (bit/s/Hz) Far-field hierarchical beam training [26] Far-field exhaustive search beam training [27] Near-field exhaustive search beam training [10] Time delay based near-field beam training [12] Proposed near-field 2D hierarchical beam training Achievable Rate (bit/s/Hz) Far-field hierarchical beam training [26] Far-field exhaustive search beam training [27] Near-field exhaustive search beam training [10] Time delay based near-field beam training [12] Proposed near-field 2D hierarchical beam training Achievable Rate (bit/s/Hz) Far-field hierarchical beam training [26] Far-field exhaustive search beam training [27] Near-field exhaustive search beam training [10] Time delay based near-field beam training [12] Proposed near-field 2D hierarchical beam training
VII. CONCLUSIONS
In this paper, we proposed a low-overhead near-field 2D hierarchical beam training by designing the near-field multi-resolution codebooks. Specifically, we first formulate the problem of designing near-field codeword and generating multi-resolution codebooks. It is worth pointing out that the proposed Gerchberg-Saxton (GS) based near-field codeword design algorithm can be utilized in designing codewords to realize arbitrary beam patterns. Then, a low-overhead
( a )
aIllustration of GS algorithm in iterative phase retrieval problem. (b) Illustration of GS algorithm in codeword design Fig. 1. Comparisons of the original and improved GS algorithm.
plane since the complex-valued wavefronts in the object and the diffraction pattern planes are usually connected through a Fourier transform with each other.Specifically, the GS algorithm initializes in the object plane, where the initial complex-valued wavefronts are created by combining the measured amplitude information with the random phase information. The iteration process of the GS algorithm consists of four steps: i) The forward diffraction propagation of the wavefronts in the object plane provides complex-valued wavefronts in the diffraction pattern plane; ii) Update the complex-valued wavefronts in the diffraction pattern plane: the amplitude information is substituted with the measured amplitude information U ′ ; iii) The backward diffraction propagation provides the complex-valued wavefronts in the object plane; iv) Update the complex-valued wavefronts in the diffraction plane: The amplitude information in the object plane is substituted with the measured amplitude information. The result of the GS algorithm is the recovered complex-valued wavefronts in the diffraction pattern plane.
Fig. 2 .
2jδ n,1 , e jδ n,2 , . . . , e jδ n,N RF(26) After T max iteration, we can obtain the final practical codeword asv p = F Tmax RF f Tmax BB(27)V. PROPOSED NEAR-FIELD 2D HIERARCHICAL BEAM TRAININGIn this section, we first introduce the proposed near-field 2D hierarchical beam training scheme,where the angle and distance ranges are reduced gradually layer by layer in multi-resolution codebooks. Then, the analysis of the proposed beam training overhead is provided.A. Near-Field 2D Hierarchical Beam Training SchemeIn order to obtain the tradeoff between the near-field beam training overhead and the performance, one of the methods is to apply a hierarchical near-field codebook, which consists of multi-resolution codebooks. The sizes of codebooks are determined by the angle sample step and distance sample step of the codebook, i.e., B θ and B r in(6). Specifically, as the increase of B θ Near-field exhaustive search beam Comparison between the far-field exhaustive search, near-field exhaustive search and the near-field 2D hierarchical beam training.
Fig. 3 .
3k opt ,θ and B l+1 v l+1,k ,r ∈ B l v l,k opt ,r15. the chosen codewords v l+1,k compose the W16.end forOutput: The feedback optimal codeword index k opt from the user. the angle and distance, i.e., the number of sampled angle grids U and the number of sampled distance grids S. It is worth pointing out that U is usually set as the same as the number of antennas on the array. The training overhead of the exhaustive near-field beam training scheme is U S. Meanwhile, the training overhead of the time-delay based beam training is only related to the number of sampled distance grids S. For the proposed 2D hierarchical beam training method, the beam training overhead can be represented as O (log (U ) + log (S)). It is obvious that, the training overhead of the proposed 2D hierarchical beam training is much less than that of the exhaustive near-field beam training. Since the number of sampled angle grids U is usually Comparison of the beam patterns of different layers of the hierarchical codebook.large than the number of sampled distances S[12], the training overhead of the proposed 2D hierarchical beam training is larger than that of the time-delay based beam training. However, the performance of the time-delay based beam training heavily depends on the extra hardware overhead and wideband condition, which will be further verified by simulation results in Section VI.VI. SIMULATION RESULTSFor simulations, we assume that the number of BS antennas and RF chains are N = 512 and N RF = 100. The wavelength is set as λ = 0.005 meters, corresponding to the 60 GHz frequency. The quantified bits number of phase shifters is set as b = 5. The path gain α, angle θ and distance r are generated as following: α l ∼ CN (0, 1), θ l ∼ U (−1, 1), and r l ∼ U (20, 100) meters. The SNR is defined as 1/σ 2 .
Fig. 3
3shows the comparison of the ideal beam pattern and the normalized practical beam pattern obtained by conducting beamforming with the designed codeword. In these heat maps, the brighter the color, the greater the beamforming gain at this position. It is worth noting that, we utilize the rectangular coordinate system to present the beamforming gains of the locations in twodimension space to show the beam pattern more clearly, where the coordinates of the X-axis and Y-axis satisfy x = r cos(θ), and y = r cos(θ).Fig. 3 (a) presents an ideal beam pattern of the layer 1 codebook, where the beam should focus on the target location, i.e., x = [55, 75], y = [−5, 15].
S
= 16, respectively. The overhead of the far-field exhaustive search is set as the same as the number of sampled angle grids, i.e., 512. The overhead of the near-field exhaustive search beam training scheme is set as 512 × 16 = 8192. The overhead of time-delay based near-field beam training relates to the number of sampled distance grids, which is set as 16. For the farfield hierarchical beam training scheme, U (l) is the number of sampled angles in the l-th layer, where U (1) = 4, U (2) = 4, U (2) = 32. Thus, the overhead of far-field hierarchical beam training is L l U (l) = 4 + 4 + 32 = 40. For the proposed near-field 2D hierarchical beam training algorithm, we use a three-layer hierarchical codebook. The size of the layer 1 codebook can be calculated as 64 × 4 = 256, where the numbers of sampled angle and distance grids are set as 64 and 4. For the layer 2 and layer 3 codebooks, we only need to search 8 and 4 codewords. Thus the overhead of the proposed near-field 2D hierarchical beam training algorithm is 268, which is almost half of 512 and only 3.3 % of 8192.
Fig. 4
4presents the performance of achievable rate comparisons against the beam training overhead under different bandwidths. The training overhead increases from 0 to 1000. In the beam training process, we utilize the optimal beamforming vector with the largest achievable rate searched in the current time slots to serve the user. From
Fig. 5
5presents the performance of achievable rate comparisons against the SNR under different bandwidths, where SNR is from 0 dB to 5 dB. The simulation parameters are the same as those in
Fig. 4 .
4Achievable sum-rate performance comparison with respect to the beam training overhead under different bandwidths. (a) 100 MHz; (b) 500 MHz.
Fig. 5 .
5Achievable sum-rate performance comparison with respect to the SNR under different bandwidths. (a) 100 MHz; (b) 500 MHz.
Fig. 6
6presents the performance of achievable rate comparisons against the distance under different bandwidths, where the distance is from 25 m to 75 m at SNR = 5 dB. FromFig. 6(a), about 18.5% performance improvement compared to the time-delay based near-field beam training at distance = 55 m. Additionally, we can observe that the proposed method can also reach a 95.8% achievable rate of the time-delay based near-field beam training at distance = 55 m in the wideband situation.
Fig. 6 .
6Achievable sum-rate performance comparison with respect to the distance overhead under different bandwidths. (a) 100 MHz; (b) 500 MHz.
TABLE I
ICOMPARISONS OF BEAM TRAINING OVERHEADMethod
Overhead
Value
Table .
.I presents the comparison of beam training overhead for different methods. We com-
pare the proposed near-field 2D hierarchical beam training algorithm with the existing far-field
hierarchical beam training scheme
near-field 2D hierarchical beam training scheme is proposed to realize the tradeoff between the training overhead and performance. Significantly, the proposed scheme can achieve sub-optimal performance without restriction to the hardware cost and wideband condition.(16)can be further presented ascan be presented aswhere τ is the angle between thev (s+1) (u, w) andv ′ (s) (u, w). As shown in Step 8 of the Algorithm 1, where (12) presents the normalization of thev ′ (s) (u, w), thus, ∥v (s) (u, w)∥ 2 = ∥v (s+1) (u, w)∥ 2 = 1, and the phase information ofv (s+1) (u, w) andv ′ (s) (u, w) are the same, i.e., the angle between thev (s+1) (u, w) andv ′ (s) (u, w) τ is 0. In this case, (28)-(29) is written asSince ∥v ′ (s) (u, w)∥ 2 (1 − cos ϕ) is always greater than zero, we can obtain that ∥v (s) (u, w)−v ′ (s) (u, w)∥ 2 2 > ∥v (s+1) (u, w)−v ′ (s) (u, w)∥ 2 2 .
Toward 6G networks: Use cases and technologies. M Giordani, M Polese, M Mezzavilla, S Rangan, M Zorzi, IEEE Commun.Mag. 583M. Giordani, M. Polese, M. Mezzavilla, S. Rangan, and M. Zorzi, "Toward 6G networks: Use cases and technologies," IEEE Commun.Mag., vol. 58, no. 3, pp. 55-61, Mar. 2020.
A vision on 6G-enabled NIB: Requirements, technologies, deployments, and prospects. P P Ray, N Kumar, M Guizani, IEEE Wireless Commun. 284P. P. Ray, N. Kumar, and M. Guizani, "A vision on 6G-enabled NIB: Requirements, technologies, deployments, and prospects," IEEE Wireless Commun., vol. 28, no. 4, pp. 120-127, May 2021.
Non-stationarities in extra-large-scale massive MIMO. E D Carvalho, A Ali, A Amiri, M Angjelichinoski, R W Heath, IEEE Wireless Commun. 274E. D. Carvalho, A. Ali, A. Amiri, M. Angjelichinoski, and R. W. Heath, "Non-stationarities in extra-large-scale massive MIMO," IEEE Wireless Commun., vol. 27, no. 4, pp. 74-80, Aug. 2020.
Near-field MIMO communications for 6G: Fundamentals, challenges, potentials, and future directions. M Cui, Z Wu, Y Lu, X Wei, L Dai, IEEE Commun. Mag. M. Cui, Z. Wu, Y. Lu, X. Wei, and L. Dai, "Near-field MIMO communications for 6G: Fundamentals, challenges, potentials, and future directions," IEEE Commun. Mag., Jan. 2023.
Compressive sensing-based adaptive active user detection and channel estimation: Massive access meets massive MIMO. M Ke, Z Gao, Y Wu, X Gao, R Schober, IEEE Trans. Signal Process. 68M. Ke, Z. Gao, Y. Wu, X. Gao, and R. Schober, "Compressive sensing-based adaptive active user detection and channel estimation: Massive access meets massive MIMO," IEEE Trans. Signal Process., vol. 68, pp. 764-779, Jan. 2020.
Efficient beam training and sparse channel estimation for millimeter wave communications under mobility. S H Lim, S Kim, B Shim, J W Choi, IEEE Transa. Commun. 6810S. H. Lim, S. Kim, B. Shim, and J. W. Choi, "Efficient beam training and sparse channel estimation for millimeter wave communications under mobility," IEEE Transa. Commun., vol. 68, no. 10, pp. 6583-6596, Jul. 2020.
Channel estimation and hybrid precoding for millimeter wave cellular systems. A Alkhateeb, O El Ayach, G Leus, R W Heath, IEEE J. Sel. Top. Signal Process. 85A. Alkhateeb, O. El Ayach, G. Leus, and R. W. Heath, "Channel estimation and hybrid precoding for millimeter wave cellular systems," IEEE J. Sel. Top. Signal Process., vol. 8, no. 5, pp. 831-846, Oct. 2014.
Multi-resolution codebook based beamforming sequence design in millimeterwave systems. S Noh, M D Zoltowski, D J Love, IEEE Global Communications Conference (GLOBECOM'15). S. Noh, M. D. Zoltowski, and D. J. Love, "Multi-resolution codebook based beamforming sequence design in millimeter- wave systems," in IEEE Global Communications Conference (GLOBECOM'15), 2015, pp. 1-6.
Fourier plane-wave series expansion for holographic mimo communications. A Pizzo, L Sanguinetti, T L Marzetta, IEEE Trans. Wireless Commun. 219A. Pizzo, L. Sanguinetti, and T. L. Marzetta, "Fourier plane-wave series expansion for holographic mimo communications," IEEE Trans. Wireless Commun., vol. 21, no. 9, pp. 237-246, Sep. 2022.
Channel estimation for extremely large-scale MIMO: Far-field or near-field?. M Cui, L Dai, 70M. Cui and L. Dai, "Channel estimation for extremely large-scale MIMO: Far-field or near-field?" vol. 70, no. 4, pp. 2663-2677, Apr. 2022.
Codebook design and beam training for extremely large-scale RIS: Far-field or near-field?. X Wei, L Dai, Y Zhao, G Yu, X Duan, China Commun. 196X. Wei, L. Dai, Y. Zhao, G. Yu, and X. Duan, "Codebook design and beam training for extremely large-scale RIS: Far-field or near-field?" China Commun., vol. 19, no. 6, pp. 193-204, Jun. 2022.
Near-field rainbow: Wideband beam training for XL-MIMO. M Cui, L Dai, Z Wang, S Zhou, N Ge, IEEE Trans. Wireless Commun. M. Cui, L. Dai, Z. Wang, S. Zhou, and N. Ge, "Near-field rainbow: Wideband beam training for XL-MIMO," IEEE Trans. Wireless Commun., 2023.
Two-step codeword design for millimeter wave massive MIMO systems with quantized phase shifters. K Chen, C Qi, G Y Li, IEEE Trans. Signal Process. 68K. Chen, C. Qi, and G. Y. Li, "Two-step codeword design for millimeter wave massive MIMO systems with quantized phase shifters," IEEE Trans. Signal Process., vol. 68, pp. 170-180, Dec. 2020.
Low-complexity beam training for 5G millimeter-wave massive MIMO systems. W Wu, D Liu, X Hou, M Liu, IEEE Trans. Veh. Technol. 691W. Wu, D. Liu, X. Hou, and M. Liu, "Low-complexity beam training for 5G millimeter-wave massive MIMO systems," IEEE Trans. Veh. Technol., vol. 69, no. 1, pp. 361-376, Jul. 2020.
Fraunhofer and fresnel distances: Unified derivation for aperture antennas. K T Selvan, R Janaswamy, IEEE Ant. Propag. Mag. 594K. T. Selvan and R. Janaswamy, "Fraunhofer and fresnel distances: Unified derivation for aperture antennas," IEEE Ant. Propag. Mag., vol. 59, no. 4, pp. 12-15, Aug. 2017.
Iterative channel estimation using LSE and sparse message passing for mmwave MIMO systems. C Huang, L Liu, C Yuen, S Sun, IEEE Trans. Signal Process. 671C. Huang, L. Liu, C. Yuen, and S. Sun, "Iterative channel estimation using LSE and sparse message passing for mmwave MIMO systems," IEEE Trans. Signal Process., vol. 67, no. 1, pp. 245-259, Nov. 2019.
Codebook design for millimeter-wave channel estimation with hybrid precoding structure. Z Xiao, P Xia, X.-G Xia, IEEE Trans. Wireless Commun. 161Z. Xiao, P. Xia, and X.-G. Xia, "Codebook design for millimeter-wave channel estimation with hybrid precoding structure," IEEE Trans. Wireless Commun., vol. 16, no. 1, pp. 141-153, Oct. 2017.
Phase retrieval with application to optical imaging: A contemporary overview. Y Shechtman, Y C Eldar, O Cohen, H N Chapman, J Miao, M Segev, IEEE Signal Process. Mag. 323Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev, "Phase retrieval with application to optical imaging: A contemporary overview," IEEE Signal Process. Mag., vol. 32, no. 3, pp. 87-109, Apr. 2015.
Terahertz digital holographic imaging. M S Heimbeck, H O Everitt, Advances in Opt. Photonics. 121M. S. Heimbeck and H. O. Everitt, "Terahertz digital holographic imaging," Advances in Opt. Photonics, vol. 12, no. 1, pp. 1-59, Mar. 2020.
A practical algorithm for the determination of plane from image and diffraction pictures. R W Gerchberg, W O Saxton, Optik. 352R. W. Gerchberg and W. O. Saxton, "A practical algorithm for the determination of plane from image and diffraction pictures," Optik, vol. 35, no. 2, pp. 237-246, Sep. 1972.
Intersection approach to array pattern synthesis. O Bucci, G Franceschetti, G Mazzarella, G Panariello, IEEE Photonics Journal. 1376O. Bucci, G. Franceschetti, G. Mazzarella, and G. Panariello, "Intersection approach to array pattern synthesis," IEEE Photonics Journal, vol. 137, no. 6, pp. 349-357, Dec. 1990.
Plug-and-play pixel super-resolution phase retrieval for digital holography. X Chang, L Bian, Y Gao, L Cao, J Suo, J Zhang, Opt. Lett. 47X. Chang, L. Bian, Y. Gao, L. Cao, J. Suo, and J. Zhang, "Plug-and-play pixel super-resolution phase retrieval for digital holography," Opt. Lett., vol. 47, pp. 2658-2661, May 2022.
3D gerchberg-saxton optical correlation. W Chen, IEEE Photonics Journal. 102W. Chen, "3D gerchberg-saxton optical correlation," IEEE Photonics Journal, vol. 10, no. 2, pp. 1-9, Apr. 2018.
Extending the methodology of X-ray crystallography to allow imaging of micrometre-sized non-crystalline specimen. J Miao, P Charalambous, J Kirz, D Sayre, Nature. 4006742342J. Miao, P. Charalambous, J. Kirz, and D. Sayre, "Extending the methodology of X-ray crystallography to allow imaging of micrometre-sized non-crystalline specimen," Nature, vol. 400, no. 6742, p. 342?344, May 1999.
The Fourier Transform and Its Applications. R N Bracewell, McGraw-HillNew YorkR. N. Bracewell, The Fourier Transform and Its Applications. New York: McGraw-Hill, 1986.
Multi-resolution codebook and adaptive beamforming sequence design for millimeter wave beam alignment. S Noh, M D Zoltowski, D J Love, IEEE Trans. Wireless Commun. 169S. Noh, M. D. Zoltowski, and D. J. Love, "Multi-resolution codebook and adaptive beamforming sequence design for millimeter wave beam alignment," IEEE Trans. Wireless Commun., vol. 16, no. 9, pp. 5689-5701, Sep. 2017.
Channel estimation via orthogonal matching pursuit for hybrid MIMO systems in millimeter wave communications. J Lee, G Gil, Y H Lee, IEEE Trans. Wireless Commun. 646J. Lee, G. Gil, and Y. H. Lee, "Channel estimation via orthogonal matching pursuit for hybrid MIMO systems in millimeter wave communications," IEEE Trans. Wireless Commun., vol. 64, no. 6, pp. 2370-2386, Jun. 2016.
| []
|
[
"HOW TO SELECT PREDICTIVE MODELS FOR CAUSAL INFERENCE? How to select predictive models for causal inference?",
"HOW TO SELECT PREDICTIVE MODELS FOR CAUSAL INFERENCE? How to select predictive models for causal inference?"
]
| [
"Matthieu Doutreligne \nInria, Soda\nSaclayFrance\n\nMission Data\nHaute Autorité de Santé\nSaint-DenisFrance\n\nIntroduction\n\n",
"Gaël Varoquaux \nInria, Soda\nSaclayFrance\n\nIntroduction\n\n",
"A Preprint "
]
| [
"Inria, Soda\nSaclayFrance",
"Mission Data\nHaute Autorité de Santé\nSaint-DenisFrance",
"Introduction\n",
"Inria, Soda\nSaclayFrance",
"Introduction\n"
]
| []
| As predictive models -eg from machine learning-give likely outcomes, they may be used to reason on the effect of an intervention, a causal-inference task. The increasing complexity of health data has opened the door to a plethora of models, but also the Pandora box of model selection: which of these models yield the most valid causal estimates? Here we highlight that classic machine-learning model selection does not select the best outcome models for causal inference. Indeed, causal model selection should control both outcome errors for each individual, treated or not treated, whereas only one outcome is observed. Theoretically, simple risks used in machine learning do not control causal effects when treated and non-treated population differ too much. More elaborate risks build proxies of the causal error using "nuisance" re-weighting to compute it on the observed data. But does computing these nuisance adds noise to model selection? Drawing from an extensive empirical study, we outline a good causal model-selection procedure: using the so-called R-risk; using flexible estimators to compute the nuisance models on the train set; and splitting out 10% of the data to compute risks. | null | [
"https://export.arxiv.org/pdf/2302.00370v2.pdf"
]
| 258,715,346 | 2302.00370 | 2305016782c2d6eaa07e38d0614eafbd44c643d3 |
HOW TO SELECT PREDICTIVE MODELS FOR CAUSAL INFERENCE? How to select predictive models for causal inference?
May 17, 2023 16 May 2023
Matthieu Doutreligne
Inria, Soda
SaclayFrance
Mission Data
Haute Autorité de Santé
Saint-DenisFrance
Introduction
Gaël Varoquaux
Inria, Soda
SaclayFrance
Introduction
A Preprint
HOW TO SELECT PREDICTIVE MODELS FOR CAUSAL INFERENCE? How to select predictive models for causal inference?
May 17, 2023 16 May 2023Model SelectionTreatment EffectG-formulaObservational StudyMachine Learning
As predictive models -eg from machine learning-give likely outcomes, they may be used to reason on the effect of an intervention, a causal-inference task. The increasing complexity of health data has opened the door to a plethora of models, but also the Pandora box of model selection: which of these models yield the most valid causal estimates? Here we highlight that classic machine-learning model selection does not select the best outcome models for causal inference. Indeed, causal model selection should control both outcome errors for each individual, treated or not treated, whereas only one outcome is observed. Theoretically, simple risks used in machine learning do not control causal effects when treated and non-treated population differ too much. More elaborate risks build proxies of the causal error using "nuisance" re-weighting to compute it on the observed data. But does computing these nuisance adds noise to model selection? Drawing from an extensive empirical study, we outline a good causal model-selection procedure: using the so-called R-risk; using flexible estimators to compute the nuisance models on the train set; and splitting out 10% of the data to compute risks.
Introduction
Extending prediction to prescription requires causal model selection
Increasingly rich data drives new predictive models. In health, new risks or prognostic models leverage routinelycollected data, sometimes with machine learning [50]: predicting morbidity-related outcomes from administrative data [86], heart failure from claims [19], sepsis from clinical records [32], suicide attempts from patient records and questionnaires [78]... Data may be difficult to control and model, but claims of accurate prediction can be established on left-out data [3,60,83]. Given a model predicting of an outcome of interest, it is tempting to use it to guide decisions: will an individual benefit or not from an intervention such as surgery [22]? This is a causal-inference task and principled approaches can build on contrasting the prediction of the outcome with and without the treatment [79,11].
Outcome modeling is an integral part of the causal modeling toolkit, under the names of G-computation, G-formula [65], Q-model [79], or conditional mean regression [85]. On observational data, causal inference of treatment effects is brittle to un-accounted for confounding, but it can assess effectiveness and safety in real-word practice [10,29] or offlabel drug usage [62] for potential repurposing [34,21]. Epidemiology has historically focused on methods that model treatment assignment [8,25], based on the propensity score [68] but recent empirical results [85,20] show benefits of outcome modeling to estimate average treatment effect (ATE). A major benefit of using outcome modeling for causal inference is that these methods naturally go beyond average effects, estimating individualized or conditional average treatment effects (CATE), important for precision medicine. For this purpose, such methods are also invaluable on randomized trials [81,45,31].
Recent developments have seen the multiplication of predictive modeling methods. Leaving aside the, overwhelming, machine-learning literature, even methods specifically designed for causal inference are numerous: Bayesian Additive Regression Trees [30], Targeted Maximum Likelihood Estimation [44,74], causal boosting [61], causal multivariate adaptive regression splines (MARS) [61], random forests [84,5], Meta-learners [41], R-learners [54], Doubly robust estimation [14]... The wide variety of methods leaves the applied researcher with the difficult choice of selecting between different estimators based on the data at hand. Indeed, estimates may vary markedly when using different models. For instance, Figure 1 shows large variations obtained across four different outcome estimators on 2016 semi-synthetic datasets [20]. Flexible models such as random forests are doing well in most settings except for setups where treated and untreated populations differ markedly in which case a simple linear model (ridge) is to be preferred. However a different choice of hyper-parameters (max depth= 2) for random forests yield the poorest performances. A simple rule of thumb such as preferring more flexible models does not work in general; model selection is necessary.
Standard practices to select models in predictive settings rely on cross-validation on the error on the outcome [60,83]. However, as we will see, these model-selection practices may not pick the best models for causal inference, as they can be misled by inhomogeneities due to treatment allocation. Given complex, potentially noisy, data, which model is to be most trusted to yield valid causal estimates? Because there is no single learner that performs best on all data sets, there is a pressing need for clear guidelines to select outcome models for causal inference.
Objectives and structure of the paper In this paper, we study model selection procedures with a focus on practical settings: finite samples settings and without well-specification assumption. One question is whether model-selection Figure 1: Different outcome models lead to different estimation error on the Average Treatment Effects, with six different outcome models on 77 classic simulations where the true causal effect is known [20]. The models are random forests, ridge regression with and without interaction with the treatment, (hyperparameters detailed in Appendix A). The different configurations are plotted as a function of increasing difference between treated and untreated population -detailed in subsection 4.3.
There is no systematic best performer: choosing the best model among a family of candidate estimators is important. procedures, that rely on data split, can estimate reliably enough the complex risks, theoretically motivated for causal inference. Indeed, these risks force of departure from standard model-selection procedures as they come with more quantities to estimate, which may bring additional variance, leading to worse model selection.
We first give a simple illustration of the problem of causal model selection and quickly review the important prior art. Then, in Section 2, we set causal model selection in the potential outcome framework and detail the causal risks and model-selection procedure. Section 3 gives our theoretical result. In section 4, we run a thorough empirical study, with many different settings covered. Finally, we comment our findings in Section 5. Results outline how to best select outcome models for causal inference with an adapted cross-validation that estimate the so-called R-risk.
The R-risk modulates observed prediction error to compensate for systematic differences between treated and nontreated individuals. It relies on the two nuisance models, themselves estimated from data and thus imperfect; yet these imperfections do not undermine the benefit of the R-risk.
Illustration: the best predictor may not estimate best causal effects
Using a predictor to reason on causal effects relies on contrasting the prediction of the outcome for a given individual with and without the treatment -as detailed in subsection 2.1. Given various predictors of the outcome, we are interested in selecting those that estimate best the treatment effect. Standard predictive modeling or machine-learning practice selects the best predictor, ie the one that minimizes the expected error. However, the best predictor may not be the best model to reason about causal effects of an intervention, as we illustrate below. Figure 2 gives a toy example: the outcome Y ∈ [0, 1], the probability of death, a binary treatment A ∈ {0, 1} and a covariate X ∈ R which summarizes the patient health status (eg. the Charlson co-morbidity index [13]). We simulate a situation for which the treatment is beneficial (decreases mortality) for patients with high Charlson scores (bad health status). On the contrary, the treatment has little effect for patients in good condition (small Charlson scores). Figure 2a shows a random forest predictor with a counter-intuitive behavior: it predicts well on average the outcome (as measured by a regression R 2 score) but perform poorly to estimate causal quantities: the average treatment effect τ (as visible via the error |τ −τ |) or the individual treatment effect (the error E[(τ (x) −τ (x)) 2 ]). On the contrary, Figure 2b shows a linear model with smaller R 2 score but better causal inference. Figure 2: Illustration: a) a random-forest estimator with high performance for standard prediction (high R 2 ) but that yields poor causal estimates (large error between true effect τ and estimatedτ ), b) a linear estimator with smaller prediction performance leading to better causal estimation.
Selecting the estimator with the smallest error to the individual treatment effect E[(τ (x) − τ (x)) 2 ] -the τ -risk, def. 1 -would lead to the best causal estimates; however computing this error is not feasible: computing it requires access to unknown quantities: τ (x).
While the random forest fits the data better than the linear model, it gives worse causal inference because its error is very inhomogeneous between the treated and untreated. The R 2 score does not capture this inhomogeneity.
a) Random forest, good average prediction but bad causal inference
0 1 Y = P[M ortality] Estimates (%) τ = -15.82 τ = -4.16
Metrics:
|τ − τ | = 11.66 % E[(τ (x) −τ (x)) 2 ] = 0.03 R 2 = 0.96 Untreated outcome Y 0 (x) Treated outcome Y 1 (x) Outcome predictionμ a (x) Untreated outcome Y 0 (x) Treated outcome Y 1 (x)
Outcome predictionμ a (x) 0 10 20 Confounding X = Charlson score
Populations covariate distributions b) Linear model, worse average prediction but better causal inference
0 1 Y = P[M ortality] Estimates (%) τ = -15.82 τ = -10.62 Metrics: |τ − τ | = 5.20 % E[(τ (x) −τ (x)) 2 ] = 0.01 R 2 = 0.86 Untreated outcome Y 0 (x) Treated outcome Y 1 (x) Outcome predictionμ a (x) Untreated outcome Y 0 (x) Treated outcome Y 1 (x)
Outcome predictionμ a (x) 0 10 20 Confounding X = Charlson score
Populations covariate distributions
Intuitively, the problem is that causal estimation requires controlling an error on both treated and non-treated outcome for the same individual: the observed outcome, and the non-observed counterfactual one. The linear model is misspecified -the outcome functions are not linear-, leading to poor R 2 ; but it interpolates better to regions where there are few untreated individuals -high Charlson score-and thus gives better causal estimates. Conversely, the random forest puts weaker assumptions on the data, thus has higher R 2 score but is biased by the treated population in the region with poor overlap region, leading to bad causal estimates.
This toy example illustrates that the classic minimum Mean Square Error criterion is not suited to choosing a model among a family of candidate estimators for causal inference.
1.3 Prior work: model selection for outcome modeling (g-computation)
A natural way to select a predictive model for causal inference would be an error measure between a causal quantity such as the conditional average treatment effects (CATE) and models' estimate. But such error is not a "feasible" risk: it cannot be computed solely from observed data and requires oracle knowledge.
Simulation studies of causal model selection Using eight simulations setups from [61], where the oracle CATE is known, Schuler et al. [73] [20], but did not include the R-risk and looked only at the agreement of the best selected model with the true CATE risk -τ -risk(f ) def. 1-, not on the full ranking of methods compared to the true CATE. We complete these prior empirical work by studying a wider variety of data generative processes and varying the influence of overlap, an important parameter of the data generation process which makes a given causal metric appropriate [17]. We also study how to best adapt cross-validation procedures to causal metrics which themselves come with models to estimate.
Theoretical studies of causal model selection Several theoretical works have proposed causal model selection procedures that are consistent: select the best model in a family given asymptotically large data. These work typically rely on introducing a CATE estimator in the testing procedure. For instance matching [67], an IPW estimate [27], a doubly robust estimator [71], or debiasing the error with influence functions [1]. However, for theoretical guarantees to hold, the test-set correction needs to converge to the oracle: it needs to be flexible enough -or well-posed-and asymptotic data. From a practical perspective, knowing that such requirements are met implies having a good CATE estimate, which amounts to having solved the original problem of causal model selection. We study how causal model-selection procedures behave outside of these settings.
Statistical guarantees on causal estimation procedures Much work in causal inference has focused on building procedures that guarantee asymptotically consistent estimators. Targeted Machine Learning Estimation (TMLE) [44,74] and Double Machine Learning [14] both provide estimators for Average Treatment Effect combining flexible treatment and outcome models. Here also, theories require asymptotic regimes and models to be well-specified.
By contrast, Johansson et al. [38] studies causal estimation without assuming that estimators are well specified. They derive an upper bound on the oracle error to the CATE (τ -risk) that involves the error on the outcome and the similarity of the distributions between the features of treated and control patients. However, they focus on using this upper bound for estimation, and do not give insights on model selection. In addition, for hyperparameter selection, they rely on a plugin estimate of the τ -risk built with counterfactual nearest neighbors, which has been shown ineffective [73]. Popular quantities of interest (estimands) are: at the population level, the Average Treatment Effect
(ATE) τ def = E Y (1),Y (0)∼D * [Y (1) − Y (0)];
to model heterogeneity, the Conditional Average Treatment Effect
(CATE) τ (x) def = E Y (1),Y (0)∼D [Y (1) − Y (0)|X = x].
Causal assumptions Some assumptions are necessary to assure identifiability of the causal estimands in observational settings [70]. We assume the usual strong ignorability assumptions, composed of 1) unconfoundedness {Y (0), Y (1)} ⊥ ⊥ A|X, 2) strong overlap ie. every patient has a strictly positive probability to receive each treatment, 3) consistency, and 4) generalization (detailed in Appendix B). In this work, we investigate the role of the overlap [17], which is testable with data.
Estimating treatment effects with outcome models Should we know the two expected outcomes for a given X, we could compute the difference between them, which gives the causal effect of the treatment. These two expected outcomes can be computed from the observed data: the consistency 3 and unconfoundedness 1 assumptions imply the equality of two different expectations:
E Y (a)∼D [Y (a)|X = x] = E Y ∼D [Y |X = x, A = a](1)
On the left, the expectation is taken on the counterfactual unobserved distribution. On the right, the expectation is taken on the factual observed distribution conditionally on the treatment. This equality is referred as the g-formula identification [64]. For the rest of the paper, the expectations will always be taken on the factual observed distribution D, and we will omit to explicitly specify the distribution. This identification leads to outcome based estimators (ie. g-computation estimators [79]), targeting the ATE τ with outcome modeling:
τ = E Y ∼D [Y (1) − Y (0)|X = x] = E Y ∼D [Y |A = 1] − E Y ∼D [Y |A = 0](2)
This equation has two central quantities: the conditional expectancy function of the outcome associated to specific covariates and treatment or not is, often called the response function:
(Response function) µ a (x) def = E Y ∼D [Y |X = x, A = a].(3)
Given a sample of data and the oracle response functions µ 0 , µ 1 , the finite sum version of Equation 2 leads to an estimator of the ATE written:τ
= 1 n n i=1 µ 1 (x i ) − µ 0 (x i )(4)
This estimator is an oracle finite sum estimator by opposition to the population expression of τ ,
E[µ 1 (x i ) − µ 0 (x i )],
which involves an expectation taken on the full distribution D, which is observable but requires infinite data. For each estimator taking an expectation over D, we use the symbolˆ to note its finite sum version.
Similarly to the ATE, for the CATE, at the individual level:
τ (x) = µ 1 (x) − µ 0 (x)(5)
Robinson decomposition Another decomposition of the outcome model plays in important role, the Rdecomposition [66]: introducing two quantities, the conditional mean outcome and the probability to be treated (known as propensity score [68]):
(Conditional mean outcome) m(x) def = E Y ∼D [Y |X = x].(6)(Propensity score) e(x) def = P[A = 1|X = x],(7)
the outcome can be written
(R-decomposition) y(a) = m(x) + a − e(x) τ (x) + ε(x; a) with E[ε(X; A)|X, A] = 0(8)
m and e are often called nuisances [14]; they are unknown in general.
Model-selection risks, oracle and feasible
Causal model selection We formalize model selection for causal estimation. Thanks to the g-formula identification (Equation 1), a given outcome model f : X × A → Y -learned from data or built from domain knowledge-induces feasible estimates of the ATE and CATE (eqs 4 and 5),τ f andτ f (x). Let F = {f : X × A → Y} be a family of such estimators. Our goal is to select the best candidate in this family for the observed dataset O using a risk of interest :
f * = argmin f ∈F (f, O)(9)
We now detail possible risks , risks useful for causal model selection, and how to compute them.
The τ -risk: an oracle error risk As we would like to target the CATE, the following evaluation risk is natural:
Definition 1 (τ -risk(f )) also called PEHE [72,30]:
τ -risk(f ) = E X∼p(X) [(τ (X) −τ f (X)) 2 ]
its finite-sum version over the observed data:
τ -risk(f ) = x∈O τ (x) −τ f (x) 2
However these risks are not feasible because the oracles τ (x) are not accessible, with the observed data (Y, X, A) ∼ D.
Feasible error risks Feasible risks are based on the prediction error of the outcome model and observable quantities.
Definition 2 (Factual µ-risk) [75] This is the usual Mean Squared Error on the target y. It is what is typically meant by "generalization error" in supervised learning and estimated with cross-validation:
µ-risk(f ) = E (Y,X,A)∼D (Y − f (X; A)) 2
The following risks use the nuisances e -propensity score, def 7-and m -conditional mean outcome, def 6. We give the definitions as semi-oracles, function of the true unknown nuisances, but later instantiate them with estimated nuisances, noted ě,m . Semi-oracles risks are superscripted with the symbol.
Definition 3 (µ-risk IP W ) [42] Let the inverse propensity weighting function w(x, a) = a e(x) + 1−a 1−e(x) , we define the semi-oracle Inverse Propensity Weighting risk,
µ-risk IP W (f ) = E (Y,X,A)∼D A e(X) + 1 − A 1 − e(X) (Y − f (X; A)) 2 Definition 4 (τ -risk IP W ) [84]
The CATE τ (x) can be estimated propensity score, with a regression against inverse propensity weighted outcomes [4,27,84]. From this objective, we can derive the τ -risk IP W .
τ -risk IP W (f ) = E (Y,X,A)∼D Y A − e(X) e(X)(1 − e(X)) − τ f (X) 2 = E (Y,X,A)∼D Y A e(X) − 1 − A 1 − e(X) − τ f (X) 2
Definition 5 (U -risk ) [41,54] Based on the Robinson decomposition -eq. 8 [66], the U-learner uses the A − e(X) term in the denominator. The derived risk is:
U -risk (f ) = E (Y,X,A)∼D Y − m (X) A − e (X) − τ f (X) 2
Note that extreme propensity weights in the denominator term might inflate errors in the numerator due to imperfect estimation of the mean outcome m the numerator errors, leading to a highly biased metric.
Definition 6 (R-risk ) [54,73] The R-risk also uses two nuisance m and e:
R-risk (f ) = E (Y,X,A)∼D (Y − m (X)) − (A − e (X)) τ f (X) 2
It is also motivated by the Robinson decomposition -eq. 8 [66]. It performs well in various simulations where using lasso, boosting or kernel ridge regressions for both the nuisances (ě,m) and the targetτ (x) [54].
These risks are summarized in Table 1.
mse(τ (X), τ f (X)) = τ -risk E X∼p(X) [(τ (X) −τ f (X)) 2 ] Eq. 1 [30] mse(Y, f (X)) = µ-risk E (Y,X,A)∼D (Y − f (X; A)) 2 Def. 2 [73] µ-risk * IP W E (Y,X,A)∼D A e(X) + 1−A 1−e(X) (Y − f (X; A)) 2 Def. 3 [42] τ -risk IP W E (Y,X,A)∼D Y A e(X) − 1−A 1−e(X) −τ f (X) 2 Def. 4 [84] U -risk * E (Y,X,A)∼D Y −m(X) A−e(X) −τ f (X) 2 Def. 5 [54] R-risk * 1 E (Y,X,A)∼D (Y − m (X)) − (A − e (X))τ f (X) 2
Def. 6 [54] 1 Called τ -risk R in Schuler et al. [73].
Estimation and model selection procedure
Causal model selection (as in Equation 9) may involve estimating various quantities from the observed data: the outcome model f , its induced risk as introduce in the previous section, and possibly nuisances required by the risk. Given a dataset with N samples, we split out a train and a test sets (T , S). We fit each candidate estimator f ∈ F on T . We also fit the nuisance models (ě,m) on the train set T , setting hyperparameters by a nested cross-validation before fitting the nuisance estimators with these parameters on the full train set. Causal quantities are then computed by applying the fitted candidates estimators f ∈ F on the test set S. Finally, we compute the model-selection metrics for each candidate model on the test set. This procedure is described in Algorithm 1 and Figure 3.
As extreme inverse propensity weights induce high variance, clipping can be useful for numerical stability [82,36].
Theory: Links between feasible and oracle risks
We now relate two feasible risks, µ-risk IP W and the R-risk to the oracle τ -risk. Both results make explicit the role of overlap for the performances of causal risks.
These bounds depend on a specific form of residual that we now define: for each potential outcome, a ∈ {0; 1}, the variance conditionally on x is [75]:
σ 2 y (x; a) def = y (y − µ a (x)) 2 p(y | x = x; A = a) dy
Integrating over the population, we get the Bayes squared error: σ 2 B (a) = X σ 2 y (x; a)p(x)dx and its propensity weighted version:σ 2 B (a) = X σ 2 y (x; a) p(x; a) dx. In case of a purely deterministic link between the covariates, the treatment, and the outcome, these residual terms are null.
τ -risk(f ) ≤ 2 µ-risk IP W (w, f ) − 2 σ 2 B (1) + σ 2 B (0)
This result has been derived in previous work [38]. It links µ-risk IP W to the squared residuals of each population thanks to a reweighted mean-variance decomposition. For completeness, we provide the proof in .
The upper-bound comes from the triangular inequality applied to the residuals of both populations. Interestingly, the two quantities are equal when the absolute residuals on treated and untreated populations are equal on the whole co-
variate space, ie for all x ∈ X , |µ 1 (x)−f (x, 1)| = |µ 0 (x)−f (x, 0)|.
The main source of difference between the oracle τ -risk and the reweighted mean squared error, µ-risk IP W , comes from heterogeneous residuals between populations. These quantities are difficult to characterize as they are linked both to the estimator and to the data distribution. This bound indicates that minimizing the µ-risk IP W helps to minimize the τ -risk, which leads to interesting optimization procedures [38]. However, there is no guarantee that this bound is tight, which makes it less useful for model selection.
Assuming strict overlap (probability of all individuals being treated or not bounded away from 0 and 1 by η, appendix B), the above bound simplifies into a looser one involving the usual mean squared error:
τ -risk(f ) ≤ 2 η µ-risk(f ) − 2 σ 2 B (1) + σ 2 B (0)
. For weak overlap (propensity scores not bounded far from 0 or 1), this bound is very loose (as shown in Figure 2) and is not appropriate to discriminate between models with close performances.
Reformulation of the R-risk as reweighted τ -risk
We now derive a novel rewriting of the R-risk, making explicit its link with the oracle τ -risk.
Proposition 2 (R-risk as reweighted τ -risk) Given an outcome model f , its R-risk appears as weighted version of its τ -risk (Proof in Appendix C.2):
R-risk * (f ) = x e(x) 1 − e(x) τ (x) − τ f (x) 2 p(x)dx +σ 2 B (1) +σ 2 B (0)(10)
The R-risk targets the oracle at the cost of an overlap re-weighting and the addition of the reweighted Bayes residuals, which are independent of f . In good overlap regions the weights e(x) 1 − e(x) are close to 1 4 , hence the R-risk is close to the desired gold-standard τ -risk. On the contrary, for units with extreme overlap violation, these weights goes down to zero with the propensity score.
Interesting special cases
Randomization special case If the treatment is randomized as in RCTs,
p(A = 1 | X = x) = p(A = 1) = p A , thus µ-risk IP W takes a simpler form: µ-risk IP W = E (Y,X,A)∼D A p A + 1 − A 1 − p A (Y − f (X; A)) 2
However, even if we have randomization, we still can have large differences between τ -risk and µ-risk IP W coming from heterogeneous errors between populations as noted in Section 3.1 and shown experimentally in simulations [73].
Concerning the R-risk, replacing e(x) by its randomized value p A in Proposition 2 yields the oracle τ -risk up to multiplicative and additive constants:
R-risk = p A (1 − p A ) τ -risk + (1 − p A ) σ 2 B (0) + p A σ 2 B (1)(11)
Therefore, optimizing estimators for CATE with R-risk * in the randomized setting is optimal if we target the τ -risk. This explains the strong performances of R-risk in randomized setups [73] and is a strong argument in favor of this risk for heterogeneity estimation in RCTs.
Simulation: D = 2, = 0.7, seed=8 One-dimensional cuts of the response surfaces Oracle Bayes predictor Consider the case where we have access to the oracle Bayes predictor for the outcome ie. f (x, a) = µ(x, a), then all risks are equivalent up to the residual variance:
τ -risk(µ) = E X∼p(X) [(τ (X) − τ µ (X)) 2 ] = 0 (12) µ-risk(µ) = E (Y,X,A)∼p(Y ;X;A) [ Y − µ A (X) 2 ] = X ,A ε(x, a) 2 p(a | x) p(x) dx da ≤ σ 2 B (0) + σ 2 B (1) (13) µ-risk IP W (µ) = σ 2 B (0) + σ 2 B (1) follows from Lemma 1 (14) R-risk(µ) =σ 2 B (0) +σ 2 B (1) ≤ σ 2 B (0) + σ 2 B (1) follows directly from Proposition 2(15)
Thus, differences between causal risks only matter in finite sample regimes. Universally consistent learners converge to the Bayes risk in asymptotic regimes, making all model selection risks equivalent. However, in practice choices must be made in non-asymptotic regimes.
Empirical Study
We evaluate the following causal metrics, oracle and feasible versions, presented in Table 1:
µ-risk * IP W , R-risk * , U -risk * , τ -risk IP W * , µ-risk, µ-risk IP W , R-risk, U -risk, τ -risk IP W .
We benchmark the metrics in a variety of settings: many different simulated data generation processes and three semi-simulated datasets 1 .
Caussim: Extensive simulation settings
Data Generation Process We use simulated data, on which the ground-truth causal effect is known. Going further than prior empirical studies of causal model selection [73,1], we use multiple generative processes, to reach more general conclusions (as discussed in Appendix 20).
We generate the response functions using random bases. Basis extension methods are common in biostatistics, eg functional regression with splines [33,58]. By allowing the function to vary at specific knots, they give flexible -nonlinear-models of the studied mechanisms. Taking inspiration from splines, we use random approximation of Radial Basis Function (RBF) kernels [63] to generate the response surfaces. RBF use the same process as polynomial splines but replace polynomial by Gaussian kernels. Unlike polynomials, Gaussian kernels have decreasing influences in the input space. This avoids unrealistic divergences of the population response surfaces at the ends of the feature space.
The number of basis functions -ie. knots-, controls the complexity of the ground-truth response surfaces and treatment. We first use this process to draw the non-treated response surface µ 0 and the causal-effect τ . We then draw the observations from a mixture two Gaussians, for the treated and non treated. We vary the separation between the two Gaussians to control the amount of overlap between treated and control populations, as it an important parameter for causal inference (related to η which appears in section 3.1). Finally, we generate the observed outcomes adding Gaussian noise. We generated such datasets 1000 times, with uniformly random overlap parameters θ ∈ [0, 2.5]. Appendix E.1 gives more details on the data generation.
Family of candidate estimators We test model selection on a family of candidate estimators that approximate imperfectly the data-generating process. To build such an estimator in two steps, we first use a RBF expansion similar as the one used for the data-generation generation process. Concretely, we choose two random knots and apply a transformation of the raw data features with the same Gaussian kernel used for the data-generation mechanism. This step is referred as the featurization. Then, we fit a linear regression on these transformed features. We consider two ways of combining these steps for outcome mode; using common nomenclature [41], we refer to these regression structures as different meta-learners which differ on how they model, jointly or not, the treated and the non treated:
• SLearner: A single learner for both population, taking the treatment as a supplementary covariate. • SftLearner: A single set of basis functions is sampled at random for both populations, leading to a given feature space used to model both the treat and the non treated, then two separate different regressors are fitted on this representation. • TLearner: Two completely different learners for each population, hence separate featurization and separate regressors.
We do not include more elaborated meta-learners such as R-learner [54] or X-learner [41]. Our goal is not to have the best possible learner but to have a variety of sub-optimal learners in order to compare the different causal metrics. For the same reason, we did not include more powerful outcome models such as random forests or boosting trees.
For the regression step, we fit a Ridge regression on the transformed features with 6 different choices of the regularization parameter λ ∈ [10 −3 , 10 −2 , 10 −1 , 1, 10 1 , 10 2 ], coupled with a TLearner or a SftLearner. We sample 10 different random basis for the learning procedure and the featurization yielding a family F of 120 candidate estimators.
Semi-simulated datasets
Datasets We also use semi-simulated datasets, where a known synthetic causal effect is added to real -non syntheticcovariate. We study datasets used in previous work to evaluate causal inference:
ACIC 2016 [20]: The dataset is based on the Collaborative Perinatal Project [55], a RCT conducted on a cohort of pregnant women to identify causes of infants' developmental disorders. The initial intervention was a child's birth weight (A = 1 if weight < 2.5kg), and outcome was the child's IQ after a given follow-up period. The study contained N = 4 802 data points with D = 55 features (5 binary, 27 count data, and 23 continuous). They simulated 77 different setups with varying parameters for treatment and response generation models, treatment assignment probabilities, overlap, and interactions between treatment and covariates 2 . We used 10 different seeds for every setup, totalizing 770 dataset instances. ACIC 2018 [77]: The raw covariates data comes from the Linked Births and Infant Deaths Database (LBIDD) [48] with D = 177 covariates. Treatment and outcome models have been simulated with complex models to reflect different scenarii. The data do not provide the true propensity scores, so we evaluate only feasible metrics, which do not require this nuisance parameter. We used all 432 datasets 3 of size N = 5 000. Twins [47]: It is an augmentation of the real data on twin births and mortality rates in the USA from 1989-1991 [2].
There are N = 11 984 samples (pairs of twins), and D = 50 covariates 4 , The outcome is the mortality and the treatment is the weight of the heavier twin at birth. This is a "true" counterfactual dataset -as remarked in [16]-in the sense that we have both potential outcomes with each twin. They simulate the treatment with a sigmoid model based on GESTAT10 (number of gestation weeks before birth) and x the 45 other covariates:
t i | x i , z i ∼ Bern σ w o x + w h (z/10 − 0.1) with w o ∼ N (0, 0.1 · I), w h ∼ N (5, 0.1)(16)
We built upon this equation, adding a non-constant slope in the treatment sigmoid, allowing us to control the amount of overlap between treated and control populations. We sampled uniformly 1 000 different overlap parameters between 0 and 2.5, totalizing 1 000 dataset instances. Unlike the previous datasets, only the overlap varies for these instances. The response surfaces are fixed by the original twin outcomes. Nuisance estimators Drawing inspiration from the TMLE literature that uses combination of flexible machine learning methods [74], we use as models for the nuisancesě (respectivelym) a form of meta-learner: a stacked estimator of ridge and boosting classifiers (respectively regressions). We select hyper-parameters with randomized search on a validation set V and keep them fix for model selection (detailed of the hyper parameters in Appendix E.2). As extreme inverse propensity weights induce high variance, we use clipping [82,36] to bound min(ě, 1 −ě) away from 0 with a fixed η = 10 −10 , ensuring strict overlap for numerical stability.
Measuring overlap between treated and non treated
Good overlap, or "positivity" between treated and control population is crucial for causal inference as it is required by the positivity assumption 2 for causal identification. It is typically assessed by qualitative methods using population histograms (as in Figure 2) or side-by-side box plots, or quantitative approaches such as Standardized Mean Difference [7,8]. While these methods are useful to decide if positivity holds, they do not summarize a dataset's overlap in a single measure. Rather, we compute the divergence between the population covariate distributions P(X|A = 0) and P(X|A = 1) to characterize the behavior of causal risk [17,38]. We introduce the Normalized Total Variation (NTV), a divergence based on the sole propensity score. Details are given in Appendix D.
Empirical results: factors driving good model selection across datasets
The R-risk is the best metric Each metric ranks differently the candidate models. Figure 5 shows the agreement between the ideal ranking of methods given the oracle τ -risk and the different causal metrics under evaluation. We measure this agreement with a relative 6 Kendall tau κ (eq. 20) [39]. Given the importance of overlap in how well metrics approximate the oracle τ -risk (subsection C.1), we separate strong and weak overlap -defined as first and last tertile of the Normalized Total Variation, eq. 18. 6 To remove the variance across datasets (some datasets lead to easier model selection than others), we report values for one metric relative to the mean of all metrics for a given dataset instance: Relative κ( , τ −risk) = κ( , τ −risk) − mean κ( , τ −risk) Figure 14 and with τ −risk gains in Figure 15. Table 4 displays the median and IQR for the relative Kendall's results.
Among all metrics the classical mean squared error (ie. factual µ-risk) is worse and reweighting it with propensity score (µ-risk IP W ) does not bring much improvements. The R-risk, which includes a model of mean outcome and propensity scores, leads to the best performances. Interestingly, the U -risk, which uses the same nuisances, has good performances for strong overlap but deteriorates in weak overlap, probably due variance inflation when dividing by extreme propensity scores.
Beyond the rankings, the differences in terms of absolute ability to select the best model are large: The model selected by the R-risk achieves a τ -risk only 1% higher than that of the best possible candidate for strong overlap on Caussim, but selecting with the µ-risk or µ-risk IP W -as per machine-learning practice-leads to 10% excess risk and using τ -risk IP W -as in some causal-inference methods [4,27]-leads to 100% excess risk ( Figure 15). Across other datasets, the R-risk consistently decreases the risk compared to the µ-risk: 0.1% compared to 1% on ACIC2016, 1% compared to 20% on ACIC 2018, and 0.05% compared to 1% on Twins.
Model selection is harder in settings of low population overlap Model selection for causal inference becomes more and more difficult with increasingly different treated and control populations ( Figure 6). The absolute Kendall's coefficient correlation with τ -risk drops from values around 0.9 (excellent agreement with oracle selection) to 0.6 on both Caussim and ACIC 2018 (Appendix E.3, Figure 14). Figure 14.
Nuisances can be estimated on the same data as outcome models: Using the train set T both to fit the candidate estimators and the nuisance estimates is a form of double dipping which can lead errors in the nuisances to be correlated with those of the outcome models [54]. In theory, these correlations can bias model selection and, strictly speaking, there are theoretical arguments for splitting out a third separate data set, a "nuisance set", to fit the nuisance models. The drawback is that it depletes the data available for model estimation and selection. However, Figure 7 shows that there are no substantial differences between a procedure with a separate nuisance set and the simpler shared nuisance/candidate set procedure (results for all metrics in Figure 16).
Stacked models are good overall estimators of nuisances: Oracle versions of every risk recover the best estimator more often. However, in the configurations that we investigated, stacked nuisance estimators (boosting and linear) lead to feasible metrics with performance close to the oracle ones. This suggests that the corresponding estimators recover the true nuisances well enough. One may wonder if it may be useful to use simpler models for the nuisances, in particular in settings with less data or where the true models are linear. Figure 8 compares causal model selection when estimating nuisances with stacked estimators or with a linear model. It comprises the Twins data, where the true propensity model is linear, and a downsampled version of this data, to study a situation favorable to linear models. In these settings, stacked and linear estimations of the nuisances perform equivalently. Overall, Figure 19 suggests that to estimate nuisances it suffices to use adaptive models as built by stacking linear models and gradient-boosted trees. Figure 18 in the appendices details other causal metrics.

Use 90% of the data to estimate outcome models, 10% to select them: For the best causal modeling, the analyst often faces a compromise: given a finite data sample, should she allocate most of the data to estimating the outcome model, thus maximizing the chances of achieving a high-quality outcome model but leaving little data for model selection, or should she choose a bigger test set for model selection and effect estimation? For causal model selection, there is no established practice (as reviewed in Appendix F).
We investigate this trade-off by varying the ratio between train and test data sizes. For this, we first split out 30% of the data as a holdout set V on which we use the oracle response functions to derive silver-standard estimates of the causal quantities of interest. We then use the standard estimation procedure on the remaining 70% of the data, splitting it into a train set T and a test set S of varying sizes. We finally measure the error between this estimate and the silver-standard one.
We consider two different analytic goals: estimating an average treatment effect, a single number used for policy making, and a CATE, a full model of the treatment effect as a function of the covariates X. Given that the latter is a much more complex object than the former, the optimal train/test ratio might vary. To measure errors, we use for the ATE the relative absolute ATE bias between the ATE computed with the selected outcome model on the test set and the true ATE as evaluated on the holdout set V. For the CATE, we compare the τ-risk of the best selected model applied on the holdout set V. We explore this trade-off for the ACIC 2016 dataset and the R-risk. Figure 9 shows that a train/test ratio of 0.9/0.1 (K=10) appears as a good trade-off for both CATE and ATE estimation, though there is little difference with a split of 0.8/0.2 (K=5).
Discussion and conclusion
Predictive models are increasingly used to reason about causal effects. Our results highlight that they should be selected, validated, and tuned using different procedures and error measures than those classically used to assess prediction (which estimate the so-called µ-risk). Rather, selecting the best outcome model according to the R-risk (eq. 6) leads to more valid causal estimates. Estimating this risk requires a markedly more complex procedure than the standard cross-validation used in machine learning: it involves fitting the nuisance models necessary for model evaluation, though our empirical results show that these can be learned on the same set of data as the outcome model being evaluated. A poor estimation of the nuisance models may compromise the benefits of the more complex R-risk (as shown in Figure 8). However, controlling and selecting these latter models is easier, because they are associated with errors on observed distributions. Our empirical results show that, when selecting these models in a flexible family of models, the R-risk dominates simpler risks for model selection. The results also show that going from an oracle R-risk, where the nuisances are known, to a feasible R-risk, where the nuisances are estimated, only very slightly decreases the model-selection performance of the R-risk. This may be explained by theoretical results suggesting that estimation errors on both nuisances partly compensate in the R-risk [18,51,40,54,14].
For strong overlap, the conventional procedure (µ-risk) appears theoretically motivated (subsection 3.1); however, empirical results show that even in this regime the R-risk brings a sizeable benefit, in agreement with Schuler et al. [73].
Our results point to K=5 or 10 for the number of folds used in cross-validation for CATE and ATE estimation, in line with previous empirical evidence for the ATE [14].
Extension to binary outcomes: While we focused on continuous outcomes, in medicine the target outcome is often a categorical variable such as mortality status or a diagnosis. In this case, it may be interesting to focus on other estimands than the Average Treatment Effect E[Y(1)] − E[Y(0)]; for instance, the relative risk P(Y(1)=1) / P(Y(0)=1) or the odds ratio (P(Y(1)=1) / [1 − P(Y(1)=1)]) / (P(Y(0)=1) / [1 − P(Y(0)=1)]) are often used [6]. While the odds ratio is natural for case-control studies [69], a good choice of measure can reduce heterogeneity [15]. Using the log of these values as an estimand is suitable for additive models (for reasoning or noise assumptions). In the log domain, the relative risk and the odds ratio are written as differences, like the ATE: log P(Y(1)=1) − log P(Y(0)=1), or log(P(Y(1)=1) / [1 − P(Y(1)=1)]) − log(P(Y(0)=1) / [1 − P(Y(0)=1)]). Hence, the framework studied here (subsection 2.1) directly applies. It is particularly easy for the log odds ratio, as it is the output of a logistic regression or of any model with a cross-entropy loss.
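As a minimal illustration of this last point, a sketch computing the conditional log odds ratio from a hypothetical fitted S-learner classifier clf, assumed trained on [X, a] with a cross-entropy loss:

    import numpy as np

    def log_odds_cate(clf, X):
        # logit(p1) - logit(p0) estimates the conditional log odds ratio
        logit = lambda p: np.log(p / (1 - p))
        p1 = clf.predict_proba(np.c_[X, np.ones(len(X))])[:, 1]
        p0 = clf.predict_proba(np.c_[X, np.zeros(len(X))])[:, 1]
        return logit(p1) - logit(p0)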
Going further: The R-risk needs good estimations of the nuisance models. The propensity score e calls for control of the estimation of the individual posterior probability. We have used the Brier score to select these models, as it is minimized by the true individual probability. Regarding model selection for the propensity score, an easy mistake is to use the expected calibration errors popular in machine learning [59,87,53,49], as these select not for the individual posterior probability but for an aggregate error rate [57]. An open question is whether a metric better than the Brier score can be designed that controls for e(1 − e), the quantity used in the R-risk, rather than for e. The quality of model selection varies substantially from one data-generating mechanism to another. The overlap appears as an important parameter: when the treated and untreated populations differ markedly, causal model selection is very hard. However, the remaining variance in the empirical results suggests that other parameters of the data-generation processes come into play. Intuitively, the complexity of the response surfaces and the treatment heterogeneity interact with overlap violations: when extrapolation to weak-overlap regions is hard, causal model selection is hard.
Nevertheless, from a practical perspective, our study establishes that the R-risk is the best option to select predictive models for causal inference, without requiring assumptions on the data-generating mechanism, the amount of data at hand, or the specific estimators used to build predictive models.

A Variability of ATE estimation on ACIC 2016

Figure 1 shows ATE estimations for six different models used in g-computation estimators on the 76 configurations of the ACIC 2016 dataset. Outcome models are fitted on half of the data and inference is done on the other half, i.e. train/test with a split ratio of 0.5. For each configuration and each model, this train/test split was repeated ten times, yielding non-parametric variance estimates [12]. Outcome models are implemented with scikit-learn [56], with the hyper-parameter grid detailed in Table 2.
B Causal assumptions
We assume the following four assumptions, referred to as strong ignorability and necessary to ensure identifiability of the causal estimands with observational data [70]:
Assumption 1 (Unconfoundedness): {Y(0), Y(1)} ⊥⊥ A | X

This condition, also called ignorability, is equivalent to the conditional independence on e(X) [68]: {Y(0), Y(1)} ⊥⊥ A | e(X).
Assumption 2 (Overlap, also known as Positivity): η < e(x) < 1 − η, ∀x ∈ X, for some η > 0

The treatment is not perfectly predictable: every patient has a chance to be treated and a chance not to be treated. For a given set of covariates, we need examples of both to recover the ATE.
As noted by [17], the choice of covariates X can be viewed as a trade-off between these two central assumptions. A bigger covariate set generally reinforces the ignorability assumption. On the contrary, overlap can be weakened by a large X because of the potential inclusion of instruments: variables only linked to the treatment, which could lead to arbitrarily small propensity scores.
Assumption 3 (Consistency): The observed outcome is the potential outcome of the assigned treatment: Y = A Y(1) + (1 − A) Y(0)
Here, we assume that the intervention A has been well defined. This assumption focuses on the design of the experiment. It clearly states the link between the observed outcome and the potential outcomes through the intervention [28].
Assumption 4 (Generalization)
The training data on which we build the estimator and the test data on which we make the estimation are drawn from the same distribution D*, also known as the "no covariate shift" assumption [37].
C Proofs: Links between feasible and oracle risks

C.1 Upper bound of the τ-risk with the µ-risk_IPW

For the bound with the µ-risk_IPW, we decompose the CATE risk into factual risks on each population:
Definition 7 (Population Factual µ-risk) [75]
$$\mu\text{-risk}_a(f) = \int_{\mathcal{Y} \times \mathcal{X}} \big(y - f(x; A=a)\big)^2 \, p(y, x \mid A=a) \, dy\,dx$$
Applying Bayes' rule, we can decompose the µ-risk on each intervention:
$$\mu\text{-risk}(f) = p_A \, \mu\text{-risk}_1(f) + (1 - p_A) \, \mu\text{-risk}_0(f), \quad \text{with } p_A = P(A=1)$$
These definitions allow us to state an intermediary result on each population:
Lemma 1 (Mean-variance decomposition): We need a reweighted version of the classical mean-variance decomposition. For an outcome model f : X × A → Y, let the inverse propensity weighting function be w(x; a) = a e(x)⁻¹ + (1 − a)(1 − e(x))⁻¹. Then
$$\int_{\mathcal{X}} \big(\mu_1(x) - f(x; 1)\big)^2 \, p(x)\,dx = p_A \, \mu\text{-risk}_{IPW,1}(w, f) - \sigma^2_{Bayes}(1)$$
and
$$\int_{\mathcal{X}} \big(\mu_0(x) - f(x; 0)\big)^2 \, p(x)\,dx = (1 - p_A) \, \mu\text{-risk}_{IPW,0}(w, f) - \sigma^2_{Bayes}(0)$$

Proof 1:
$$p_A \, \mu\text{-risk}_{IPW,1}(w, f) = \int_{\mathcal{X} \times \mathcal{Y}} \frac{1}{e(x)} \big(y - f(x; 1)\big)^2 \, p(y \mid x; A=1) \, p(x; A=1) \, dy\,dx$$
$$= \int_{\mathcal{X} \times \mathcal{Y}} \big(y - f(x; 1)\big)^2 \, p(y \mid x; A=1) \, p(x) \, dy\,dx$$
$$= \int_{\mathcal{X} \times \mathcal{Y}} \Big[ \big(y - \mu_1(x)\big)^2 + \big(\mu_1(x) - f(x; 1)\big)^2 + 2\big(y - \mu_1(x)\big)\big(\mu_1(x) - f(x; 1)\big) \Big] \, p(y \mid x; A=1) \, p(x) \, dy\,dx$$
$$= \int_{\mathcal{X}} \sigma^2_y(x, 1)\, p(x)\,dx + \int_{\mathcal{X}} \big(\mu_1(x) - f(x; 1)\big)^2 \, p(x)\,dx + 0$$
Proposition 1 (Upper bound with µ-risk_IPW): Let f be a given outcome model, and let the weighting function w be the Inverse Propensity Weight w(x; a) = a/e(x) + (1 − a)/(1 − e(x)). Then, under overlap (Assumption 2),
$$\tau\text{-risk}(f) \leq 2\, \mu\text{-risk}_{IPW}(w, f) - 2\, \big(\sigma^2_{Bayes}(1) + \sigma^2_{Bayes}(0)\big)$$

Proof 2:
$$\tau\text{-risk}(f) = \int_{\mathcal{X}} \big( \mu_1(x) - \mu_0(x) - (f(x; 1) - f(x; 0)) \big)^2 \, p(x)\,dx$$
By the inequality (u + v)² ≤ 2(u² + v²):
$$\tau\text{-risk}(f) \leq 2 \int_{\mathcal{X}} \Big[ \big(\mu_1(x) - f(x; 1)\big)^2 + \big(\mu_0(x) - f(x; 0)\big)^2 \Big] \, p(x)\,dx$$
Applying Lemma 1,
$$\tau\text{-risk}(f) \leq 2 \big[ p_A \, \mu\text{-risk}_{IPW,1}(w, f) + (1 - p_A) \, \mu\text{-risk}_{IPW,0}(w, f) \big] - 2 \big(\sigma^2_{Bayes}(0) + \sigma^2_{Bayes}(1)\big) = 2\, \mu\text{-risk}_{IPW}(w, f) - 2 \big(\sigma^2_{Bayes}(0) + \sigma^2_{Bayes}(1)\big)$$
C.2 Reformulation of the R-risk as a reweighted τ-risk

Proposition 2 (R-risk as reweighted τ-risk)

Proof 3: We consider the R-decomposition [66]:
$$y(a) = m(x) + \big(a - e(x)\big)\, \tau(x) + \varepsilon(x; a) \quad (17)$$
where E[ε(X; A) | X, A] = 0. We can plug it into the R-risk formula:
$$\text{R-risk}(f) = \int_{\mathcal{Y} \times \mathcal{X} \times \mathcal{A}} \big[ (y - m(x)) - (a - e(x))\, \tau_f(x) \big]^2 \, p(y, x, a) \, dy\,dx\,da$$
$$= \int_{\mathcal{Y} \times \mathcal{X} \times \mathcal{A}} \big[ (a - e(x))\, \tau(x) + \varepsilon(x; a) - (a - e(x))\, \tau_f(x) \big]^2 \, p(y, x, a) \, dy\,dx\,da$$
$$= \int_{\mathcal{X} \times \mathcal{A}} (a - e(x))^2 \, \big( \tau(x) - \tau_f(x) \big)^2 \, p(x, a) \, dx\,da \;+\; 2 \int_{\mathcal{X} \times \mathcal{A}} (a - e(x)) \big( \tau(x) - \tau_f(x) \big) \left[ \int_{\mathcal{Y}} \varepsilon(x; a) \, p(y \mid x; a) \, dy \right] p(x, a) \, dx\,da \;+\; \int_{\mathcal{X} \times \mathcal{A}} \left[ \int_{\mathcal{Y}} \varepsilon^2(x; a) \, p(y \mid x; a) \, dy \right] p(x, a) \, dx\,da$$
The first term can be decomposed on the control and treated populations to make e(x) appear:
$$\int_{\mathcal{X}} \big( \tau(x) - \tau_f(x) \big)^2 \Big[ e(x)^2 \, p(x; A=0) + (1 - e(x))^2 \, p(x; A=1) \Big] dx$$
$$= \int_{\mathcal{X}} \big( \tau(x) - \tau_f(x) \big)^2 \Big[ e(x)^2 (1 - e(x))\, p(x) + (1 - e(x))^2 e(x)\, p(x) \Big] dx$$
$$= \int_{\mathcal{X}} \big( \tau(x) - \tau_f(x) \big)^2 (1 - e(x))\, e(x) \big[ 1 - e(x) + e(x) \big] p(x)\,dx$$
$$= \int_{\mathcal{X}} \big( \tau(x) - \tau_f(x) \big)^2 (1 - e(x))\, e(x)\, p(x)\,dx.$$
The second term is null since E[ε(X; A) | X, A] = 0.
The third term corresponds to the modulated residuals: σ̃²_B(0) + σ̃²_B(1).
D Measuring overlap
Motivation of the Normalized Total Variation: Computing overlap when working only on samples of the observed distribution, outside of simulation, requires a sophisticated estimator of the discrepancy between distributions, as two data points never have the exact same set of features. Maximum Mean Discrepancy [24] is typically used in the context of causal inference [75,38]. However, it needs a kernel, typically Gaussian, to extrapolate across neighboring observations. We prefer avoiding the need to specify such a kernel, as it must be adapted to the data, which is tricky with categorical or non-Gaussian features, a common situation for medical data.
For simulated and some semi-simulated data, we have access to the probability of treatment for each data point, which samples both densities at the same data points. Thus, we can directly use distribution discrepancy measures, and we rely on the Normalized Total Variation (NTV) distance to measure the overlap between the treated and control propensities. This is the empirical measure of the total variation distance [80] between the distributions, TV(P(X|A=1), P(X|A=0)). As both distributions are sampled on the same points, we can rewrite it as a sole function of the propensity score, a low-dimensional score more tractable than the full distribution P(X|A):
$$NTV(e, 1 - e) = \frac{1}{2N} \sum_{i=1}^{N} \left| \frac{e(x_i)}{p_A} - \frac{1 - e(x_i)}{1 - p_A} \right| \quad (18)$$
Formally, we can rewrite the NTV as the Total Variation distance between the two population distributions. For a population O = (Y(A), X, A) ∼ D:
$$NTV(O) = \frac{1}{2N} \sum_{i=1}^{N} \left| \frac{e(x_i)}{p_A} - \frac{1 - e(x_i)}{1 - p_A} \right| = \frac{1}{2N} \sum_{i=1}^{N} \left| \frac{P(A=1 \mid X=x_i)}{p_A} - \frac{P(A=0 \mid X=x_i)}{1 - p_A} \right|$$
Thus the NTV approximates the following quantity in expectation over the data distribution D:
$$NTV(D) = \frac{1}{2} \int_{\mathcal{X}} \left| \frac{p(A=1 \mid X=x)}{p_A} - \frac{p(A=0 \mid X=x)}{1 - p_A} \right| p(x)\,dx = \frac{1}{2} \int_{\mathcal{X}} \left| \frac{p(A=1, X=x)}{p_A} - \frac{p(A=0, X=x)}{1 - p_A} \right| dx = \frac{1}{2} \int_{\mathcal{X}} \big| p(X=x \mid A=1) - p(X=x \mid A=0) \big| \, dx$$
For countable sets, this expression corresponds to the Total Variation distance between the treated and control populations' covariate distributions: TV(p₀(x), p₁(x)).
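In practice, eq. 18 is a one-liner once propensity scores are available; a minimal numpy sketch (names are illustrative):

    import numpy as np

    def normalized_total_variation(e, p_a):
        # e: propensity scores e(x_i); p_a: treatment prevalence P(A=1)
        e = np.asarray(e)
        return 0.5 * np.mean(np.abs(e / p_a - (1 - e) / (1 - p_a)))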
Measuring overlap without the oracle propensity scores: For ACIC 2018, or for non-simulated data, the true propensity scores are not known. To measure overlap, we then rely on flexible estimations of the Normalized Total Variation, using gradient boosting trees to approximate the propensity score. Empirical arguments for this plug-in approach are given in Figure 10.
Empirical arguments: We show empirically that the NTV is an appropriate measure of overlap by:
• comparing the NTV distance with the MMD for Caussim, which is Gaussian-distributed, in Figure 12;
• verifying that setups with penalized overlap from ACIC 2016 have a higher total variation distance than unpenalized setups in Figure 11;
• verifying that the Inverse Propensity Weights extrema (the inverse of the η overlap constant appearing in the overlap Assumption 2) positively correlate with the NTV for Caussim, ACIC 2016 and Twins in Figure 13. Even if the same value of the maximum IPW could lead to different values of the NTV, we expect both measures to be correlated: the more extreme the propensity weights, the higher the NTV.
Estimating NTV in practice: Finally, we verify that approximating the NTV distance with learned plug-in estimates of e(x) is reasonable. We used either a logistic regression or a gradient boosting classifier to learn the propensity models for the three datasets where we have access to the ground-truth propensity scores: Caussim, Twins and ACIC 2016. We sampled respectively 1000, 1000 and 770 instances of these datasets with different seeds and overlap settings. We first ran a hyper-parameter search with cross-validation on the train set, then selected the best estimator. We refit this estimator on the train set, with or without calibration by cross-validation, and finally estimated the normalized TV with the obtained model. This training procedure reflects the one described in Algorithm 1, where nuisance models are fitted only on the train set. The results in Figure 10, comparing the bias to the true Normalized Total Variation of each dataset instance as the true NTV grows, indicate that calibration of the propensity model is crucial to recover a good approximation of the NTV.
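A minimal scikit-learn sketch of this plug-in procedure, reusing the NTV helper above; X and a stand for hypothetical covariate and treatment arrays, and the estimator and calibration choices are only illustrative:

    from sklearn.calibration import CalibratedClassifierCV
    from sklearn.ensemble import HistGradientBoostingClassifier

    # Fit a flexible propensity model, calibrated by cross-validation
    propensity = CalibratedClassifierCV(HistGradientBoostingClassifier(), cv=5)
    propensity.fit(X, a)
    e_hat = propensity.predict_proba(X)[:, 1]
    ntv_hat = normalized_total_variation(e_hat, a.mean())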
Figure 11: NTV recovers well the overlap settings described in the ACIC paper [20].

The estimators of the response function are learned with a linear model on another random basis (which can be seen as a stochastic approximation of the full data kernel [63]). We carefully control the amount of overlap between treated and control populations, a crucial assumption for causal inference.
• The raw features for both populations are drawn from a mixture of Gaussians: P(X) = p A P(X|A = 1) + (1 − p A )P(X|A = 0) where P(x|A = a) is a rotated Gaussian:
$$P(x \mid A=a) = W \cdot \mathcal{N}\left( \begin{pmatrix} (1 - 2a)\,\theta \\ 0 \end{pmatrix} ;\; \begin{pmatrix} \sigma_0 & 0 \\ 0 & \sigma_1 \end{pmatrix} \right) \quad (19)$$
with θ a parameter controlling overlap (bigger yields poorer overlap), W a random rotation matrix, and σ₀² = 2, σ₁² = 5. This generation process allows us to analytically compute the oracle propensity scores e(x), to simply control overlap with the parameter θ (the distance between the two Gaussian main axes), and to visualize the response surfaces.
• A basis expansion of the raw features increases the problem dimension. Using a Radial Basis Function (RBF) Nystroem transformation, we expand the raw features into a transformed space. The basis expansion randomly samples a small number of representers in the raw data. We generate the basis following the original data distribution, [b₁ .. b_D] ∼ P(x), with D=2 in our simulations. Then, we compute an approximation of the full kernel of the data generation process RBF(x, ·), with x ∼ P(x), with these representers:
$$z(x) = \big[ RBF_\gamma(x, b_d) \big]_{d=1..D} \cdot Z^T \in \mathbb{R}^D$$
with RBF_γ the Gaussian kernel K(x, y) = exp(−γ ||x − y||²) and Z the normalization constant of the kernel basis, computed as the root inverse of the basis kernel matrix [K(b_i, b_j)]_{i,j}.
• Functions µ₀ and τ are distinct linear functions of the transformed features: µ₀(x) = [z(x); 1] · β_µ^T and τ(x) = [z(x); 1] · β_τ^T.
• Adding a Gaussian noise ε ∼ N(0, σ(x; a)), we construct the potential outcomes: y(a) = µ₀(x) + a τ(x) + ε(x; a).
We generated 1000 instances of this dataset with uniformly random overlap parameters θ ∈ [0, 2.5].
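For illustration, a compact sketch of a Caussim-like draw; it substitutes scikit-learn's Nystroem approximation for the exact basis construction above and is not the authors' implementation (available in the caussim repository):

    import numpy as np
    from scipy.stats import ortho_group
    from sklearn.kernel_approximation import Nystroem

    rng = np.random.default_rng(0)
    n, theta = 5000, 1.0                    # theta controls overlap (eq. 19)
    a = rng.binomial(1, 0.5, size=n)        # treatment, p_A = 0.5
    W = ortho_group.rvs(2, random_state=0)  # random rotation matrix
    means = np.c_[(1 - 2 * a) * theta, np.zeros(n)]
    x = (means + rng.normal(scale=[np.sqrt(2), np.sqrt(5)], size=(n, 2))) @ W.T

    z = Nystroem(kernel="rbf", n_components=2, random_state=0).fit_transform(x)
    zb = np.c_[z, np.ones(n)]               # transformed features plus intercept
    beta_mu, beta_tau = rng.normal(size=3), rng.normal(size=3)
    mu0, tau = zb @ beta_mu, zb @ beta_tau
    y = mu0 + a * tau + rng.normal(size=n)  # observed (factual) outcome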
E.2 Model selection procedures
Nuisance estimation: The nuisances are estimated with a stacked regressor inspired by the Super Learner framework [43]. The hyper-parameters are optimized with a random search over the search grid detailed in Table 3. All implementations come from scikit-learn [56].
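A sketch of such a stacked nuisance estimator with the Table 3 grid (the classifier analogue stacks HistGradientBoostingClassifier and LogisticRegression); the search budget is illustrative:

    from sklearn.ensemble import HistGradientBoostingRegressor, StackingRegressor
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import RandomizedSearchCV

    # Stacked outcome nuisance m, as in Table 3
    m_check = StackingRegressor([
        ("gbt", HistGradientBoostingRegressor()),
        ("ridge", Ridge()),
    ])
    params = {
        "ridge__alpha": [1e-4, 1e-3, 1e-2, 0.1, 1, 10, 100],
        "gbt__learning_rate": [0.01, 0.1, 1],
        "gbt__max_leaf_nodes": [10, 20, 30, 50],
    }
    search = RandomizedSearchCV(m_check, params, n_iter=10)
    # search.fit(X_train, y_train)  # fit on the training (or nuisance) set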
E.3 Additional Results
Definition of the Kendall's tau, κ: The Kendall's tau is a widely used statistic to measure the rank correlation between two sets of observations. It measures the number of concordant pairs minus the number of discordant pairs, normalized by the total number of pairs. It takes values in the [−1, 1] range.

The R-risk* is more efficient than all other metrics, and the gains are substantial for every dataset. Table 4 displays the values of relative κ(ℓ, τ-risk) compared to the mean Kendall's τ over all metrics, as shown in the boxplots of Figure 5.

Stacked models for the nuisances are more efficient: Figure 18 shows, for each metric, the benefit of using a stacked model of linear and boosting estimators for the nuisances compared to a linear model. The evaluation measure is Kendall's tau relative to the oracle R-risk, to have a stable reference between experiments. We therefore do not include the ACIC 2018 dataset in this analysis, since the oracle R-risk is not available due to the lack of true propensity scores.

Selecting different seeds and parameters is crucial to draw conclusions: One strength of our study is the large number of different simulated and semi-simulated datasets. We are convinced that the usual practice of using only a small number of generation processes does not allow drawing statistically significant conclusions. Figure 20 illustrates the dependence of the results on the generation process for Caussim simulations: we highlighted the trajectories induced by three different seeds for data generation and three different treatment ratios instead of 1000 different seeds. The result curves are relatively stable from one setup to another for the R-risk, but vary strongly for the µ-risk and µ-risk_IPW.
Settings
The Neyman-Rubin Potential Outcomes framework [52,35] enables statistical reasoning on causal treatment effects: given an outcome Y ∈ ℝ (e.g. mortality risk or hospitalization length), a function of a binary treatment A ∈ A = {0, 1} (e.g. a medical act, a drug administration), and baseline covariates X ∈ X ⊂ ℝ^d, we observe the factual distribution O = (Y(A), X, A) ∼ D = P(y, x, a). However, we want to model the existence of potential observations (unobserved, i.e. counterfactual) that correspond to a different treatment. Thus we want quantities on the counterfactual distribution O* = (Y(1), Y(0), X, A) ∼ D* = P(y(1), y(0), x, a).
Algorithm 1: Evaluation of selection procedures for one simulation. Given a train and a test set (T, S) ∼ D, a family of candidate estimators {f ∈ F}, and a set of causal metrics ℓ ∈ L:
1. Prefit: learn estimators for the unknown nuisance quantities (ě, m̌) on the training set T.
2. Fit: for all f ∈ F, learn f̂(·, a) on T.
3. Model selection: for all x ∈ S, predict f̂(x, 1) and f̂(x, 0), and evaluate each candidate estimator with each causal metric. For each causal metric ℓ ∈ L and each candidate estimator f ∈ F, store the metric value ℓ(f̂, S), possibly a function of ě and m̌.
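A minimal Python sketch of this procedure, assuming S-learner candidates that take the treatment as an extra input column; metric stands for any feasible causal metric, e.g. the R-risk sketched earlier, closed over nuisance predictions:

    import numpy as np

    def select_model(candidates, metric, T, S):
        (X_t, a_t, y_t), (X_s, a_s, y_s) = T, S
        scores = {}
        for name, f in candidates.items():
            f.fit(np.c_[X_t, a_t], y_t)  # step 2: fit on the train set T
            tau_f = (f.predict(np.c_[X_s, np.ones(len(X_s))])
                     - f.predict(np.c_[X_s, np.zeros(len(X_s))]))
            scores[name] = metric(y_s, a_s, X_s, tau_f)  # step 3: evaluate on S
        return min(scores, key=scores.get), scores

    # e.g. metric = lambda y, a, X, tau: r_risk(
    #     y, a, m_check.predict(X), e_check.predict_proba(X)[:, 1], tau)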
Figure 3: Estimation procedure for causal model selection.
3.1 Upper bound of the τ-risk with the µ-risk_IPW

Proposition 1 (Upper bound with µ-risk_IPW) [38]: Given an outcome model f, let the weighting function w(x; a) = a/e(x) + (1 − a)/(1 − e(x)) be the Inverse Propensity Weight. Then, under overlap (Assumption 2), we have:
$$\tau\text{-risk}(f) \leq 2\, \mu\text{-risk}_{IPW}(w, f) - 2\, \big(\sigma^2_{Bayes}(1) + \sigma^2_{Bayes}(0)\big)$$
Figure 4: Example of the simulation setup in the input space with two knots, i.e. basis functions. The left panel gives views of the observations in feature space, while the right panel displays the two response surfaces on a 1D cut along the black lines drawn on the left panel.
Family of candidate estimators: For these three datasets, the family of candidate estimators consists of gradient boosting trees for both the response surfaces and the treatment, used in an S-learner, with learning rate in {0.01, 0.1, 1} and maximum number of leaf nodes in {25, 27, 30, 32, 35, 40}, resulting in a family of size 18.
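A minimal sketch of this candidate family (3 learning rates x 6 leaf-node caps = 18 models; names are illustrative):

    from itertools import product
    from sklearn.ensemble import HistGradientBoostingRegressor

    candidates = {
        f"gbt_lr={lr}_leaves={ml}": HistGradientBoostingRegressor(
            learning_rate=lr, max_leaf_nodes=ml)
        for lr, ml in product([0.01, 0.1, 1], [25, 27, 30, 32, 35, 40])
    }
    assert len(candidates) == 18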
Figure 5: The R-risk is the best metric: relative Kendall's τ agreement with the τ-risk, measured as the difference between each metric's Kendall's τ and the mean Kendall's τ over all metrics: κ(ℓ, τ-risk) − mean κ(ℓ', τ-risk). Strong and weak overlap correspond to the first and last tertiles of the overlap distribution measured with the Normalized Total Variation, eq. 18. Appendix E.3 presents the same results measured with absolute Kendall's τ in Figure 14.
Figure 6: Model selection is harder in settings of low population overlap: Kendall's τ agreement with the τ-risk. Strong, medium and weak overlap are the tertiles of the overlap distribution measured with NTV, eq. 18. Appendix E.3 presents results for all metrics in Figure 17, and in absolute Kendall's τ and continuous overlap values in Figure 14.
Figure 7: Nuisances can be estimated on the same data as outcome models: results for the R-risk are similar between the shared nuisance/candidate set and the separated nuisance set procedures. Figure 16 details results for all metrics.
Figure 8: Stacked models are good overall estimators of the nuisances. Results are shown only for the R-risk; details for every metric are provided in Figure 18. For Twins, where the true propensity model is linear, stacked and linear estimations of the nuisances perform equivalently, even for a downsampled version (N=4794).
Figure 9: a) For CATE, a train/test ratio of 0.9/0.1 appears as a good trade-off. b) For ATE, there is a small signal also pointing to 0.9/0.1 (K=10). Experiments on 10 replications of all 78 instances of the ACIC 2016 data.
E.1 Details on the data generation process

We use Gaussian-distributed covariates and a random basis expansion based on Radial Basis Function kernels. A random basis of RBF kernels enables modeling non-linear and complex relationships between covariates, in a similar way to the well-known spline expansion.

Figure 10: a) Without calibration, estimation of the NTV is not trivial even for boosting models. b) Calibrated classifiers are able to recover the true Normalized Total Variation for all datasets where it is available.
Figure 12: Good correlation between overlap measured as Normalized Total Variation and Maximum Mean Discrepancy (200 sampled Caussim datasets).
Figure 13: The maximal value of the Inverse Propensity Weights increases exponentially with the overlap as measured by the Normalized Total Variation.
$$\kappa = \frac{(\text{number of concordant pairs}) - (\text{number of discordant pairs})}{\text{number of pairs}} \quad (20)$$
Figure 14 - Results measured in absolute Kendall's τ.
Figure 15 - Results measured as distance to the oracle τ-risk: to see the practical gain in terms of τ-risk, we plot the results as the normalized distance between the estimator selected by the oracle τ-risk and the estimator selected by each causal metric.
Figure 16 - Stacked models for the nuisances are more efficient: for each metric, the benefit of using a stacked model of linear and boosting estimators for the nuisances compared to a linear model.
Figure 17 - Low population overlap hinders model selection for all metrics.
Figure 19 - Flexible models are performant in recovering nuisances even in linear setups.
Figure 14: Agreement with the τ-risk ranking of methods as a function of overlap violation. The lines represent medians, estimated with a lowess. The transparent bands denote the 5% and 95% confidence intervals.
Figure 15: Metric performances by normalized τ-risk distance to the best method selected with the τ-risk. All nuisances are learned with the same estimator stacking gradient boosting and ridge regression. Dotted and plain lines correspond to 60% lowess quantile estimates; this choice of quantile makes the oracle metric lines easier to see, as outliers with a value of 0 distort the curves.
Figure 16: Results are similar between the shared nuisance/candidate set and the separated nuisance set procedures. The experiment has not been run on the full set of metrics for Caussim due to computation costs.
Figure 17: Low population overlap hinders causal model selection for all metrics: Kendall's τ agreement with the τ-risk. Strong, medium and weak overlap correspond to the tertiles of the overlap distribution measured with the Normalized Total Variation, eq. 18.
Figure 18: Learning the nuisances with stacked models (linear and gradient boosting) is important for successful model selection with the R-risk. For the Twins dataset, there is no improvement of stacked models over linear models because of the linearity of the propensity model.
Figure 19: Flexible models are performant at recovering nuisances in the downsampled Twins dataset. The propensity score is linear in this setup, making it particularly challenging for flexible models compared to linear methods.
Figure 20: Kendall correlation coefficients for each causal metric. Each (color, shape) pair indicates a different (treatment ratio, seed) of the generation process.
Schuler et al. [73] compare four causal risks, concluding that for CATE estimation the best model-selection risk is the so-called R-risk [54] (def. 6, below). Their empirical results are clear for randomized treatment allocation but less convincing for observational settings, where both the simple Mean Squared Error (MSE, the µ-risk(f) of def. 2) and the reweighted MSE (µ-risk_IPW, def. 3) appear to perform better than the R-risk on half of the simulations. Another work [1] studied empirically both the MSE and reweighted MSE risks on the semi-synthetic ACIC 2016 datasets.
Table 1: Review of causal risks.
Table 2: Hyper-parameter grid used for the ACIC 2016 ATE variability

Outcome model                                   Hyper-parameter grid
Random Forests                                  Max depth: [2, 10]
Ridge regression without treatment interaction  Ridge regularization: [0.1]
Ridge regression with treatment interaction     Ridge regularization: [0.1]
Table 3: Hyper-parameter grid used for nuisance models

Model         Estimator                                                                Hyper-parameter grid
Outcome, m    StackedRegressor (HistGradientBoostingRegressor, ridge)                  ridge regularization: [0.0001, 0.001, 0.01, 0.1, 1, 10, 100]; learning rate: [0.01, 0.1, 1]; max leaf nodes: [10, 20, 30, 50]
Treatment, e  StackedClassifier (HistGradientBoostingClassifier, LogisticRegression)   LogisticRegression C: [0.0001, 0.001, 0.01, 0.1, 1, 10, 100]; learning rate: [0.01, 0.1, 1]; max leaf nodes: [10, 20, 30, 50]
Table 4: Values of relative κ(ℓ, τ-risk) compared to the mean Kendall's τ over all metrics, as shown in the boxplots of Figure 5. Oracle (*) metrics are not reported for ACIC 2018, where the true propensity scores are unknown.

Metric        Dataset                Strong Overlap        Weak Overlap
                                     Median    IQR         Median    IQR
µ-risk        Twins (N=11 984)       -0.32     0.12        -0.19     0.12
              ACIC 2016 (N=4 802)    -0.03     0.13         0.11     0.19
              Caussim (N=5 000)      -0.40     0.55        -0.16     0.31
              ACIC 2018 (N=5 000)     0.00     0.30         0.01     0.40
µ-risk_IPW    Twins (N=11 984)       -0.31     0.13        -0.17     0.12
              ACIC 2016 (N=4 802)    -0.02     0.13         0.11     0.19
              Caussim (N=5 000)      -0.34     0.50         0.09     0.31
              ACIC 2018 (N=5 000)     0.00     0.30        -0.01     0.43
µ-risk*_IPW   Twins (N=11 984)       -0.32     0.13        -0.17     0.13
              ACIC 2016 (N=4 802)    -0.02     0.13         0.11     0.21
              Caussim (N=5 000)      -0.33     0.54         0.26     0.27
τ-risk_IPW    Twins (N=11 984)        0.13     0.12         0.27     0.12
              ACIC 2016 (N=4 802)    -0.07     0.18         0.05     0.31
              Caussim (N=5 000)      -0.19     0.43        -0.14     0.18
              ACIC 2018 (N=5 000)    -0.16     0.40        -0.11     0.66
τ-risk*_IPW   Twins (N=11 984)        0.12     0.14         0.20     0.16
              ACIC 2016 (N=4 802)    -0.03     0.16        -0.09     0.43
              Caussim (N=5 000)      -0.15     0.46        -0.17     0.19
U-risk        Twins (N=11 984)        0.13     0.12         0.02     0.25
              ACIC 2016 (N=4 802)     0.04     0.11         0.11     0.26
              Caussim (N=5 000)       0.04     0.43        -0.04     0.17
              ACIC 2018 (N=5 000)     0.12     0.26        -0.02     0.50
U-risk*       Twins (N=11 984)        0.25     0.08        -0.41     0.45
              ACIC 2016 (N=4 802)     0.08     0.13        -0.59     0.57
              Caussim (N=5 000)       0.46     0.12         0.02     0.44
R-risk        Twins (N=11 984)        0.15     0.10         0.25     0.18
              ACIC 2016 (N=4 802)     0.07     0.12         0.22     0.15
              Caussim (N=5 000)       0.34     0.26         0.13     0.21
              ACIC 2018 (N=5 000)     0.13     0.27         0.21     0.47
R-risk*       Twins (N=11 984)        0.25     0.10         0.32     0.15
              ACIC 2016 (N=4 802)     0.12     0.12         0.25     0.15
              Caussim (N=5 000)       0.47     0.11         0.16     0.14
Scripts for the simulations and the selection procedure are available at https://github.com/strayMat/caussim.
2. Original R code available at https://github.com/vdorie/aciccomp/tree/master/2016 to generate the 77 simulation settings.
3. Using the scaling part of the data, from github.com/IBM-HRL-MLHLS/IBM-Causal-Inference-Benchmarking-Framework
4. We obtained the dataset from https://github.com/AMLab-Amsterdam/CEVAE/tree/master/datasets/TWINS
5. Scikit-learn regressor, HistGradientBoostingRegressor, and classifier, HistGradientBoostingClassifier.
We use the scikit-learn implementation [56].
Acknowledgments
We acknowledge fruitful discussions with Bénédicte Colnet.
Financial disclosure
None reported.
Conflict of interest
The authors declare no potential conflict of interest.
Alaa, Ahmed and Schaar, Mihaela van der. "Validating Causal Inference Models via Influence Functions". In: International Conference on Machine Learning (2019), pp. 191-201.
Almond, Douglas, Chay, Kenneth Y., and Lee, David S. "The Costs of Low Birth Weight". In: The Quarterly Journal of Economics 120.3 (2005), pp. 1031-1083.
Altman, Douglas G, Vergouwe, Yvonne, Royston, Patrick, and Moons, Karel GM. "Prognosis and prognostic research: validating a prognostic model". In: BMJ 338 (2009).
Athey, Susan and Imbens, Guido. "Recursive partitioning for heterogeneous causal effects". In: Proceedings of the National Academy of Sciences 113.27 (2016), pp. 7353-7360.
Athey, Susan, Tibshirani, Julie, and Wager, Stefan. "Generalized random forests". In: Annals of Statistics 47.2 (2019), pp. 1148-1178.
Austin, Peter C and Stuart, Elizabeth A. "Estimating the effect of treatment on binary outcomes using full matching on the propensity score". In: Statistical Methods in Medical Research 26.6 (2017), pp. 2505-2525.
Austin, Peter C. "An Introduction to Propensity Score Methods for Reducing the Effects of Confounding in Observational Studies". In: Multivariate Behavioral Research 46.3 (2011), pp. 399-424.
Austin, Peter C. and Stuart, Elizabeth A. "Moving towards best practice when using inverse probability of treatment weighting (IPTW) using the propensity score to estimate causal treatment effects in observational studies". In: Statistics in Medicine 34.28 (2015), pp. 3661-3679.
Battocchi, Keith, Dillon, Eleanor, Hei, Maggie, Lewis, Greg, Oka, Paul, Oprescu, Miruna, and Syrgkanis, Vasilis. EconML: A Python Package for ML-Based Heterogeneous Treatment Effects Estimation. https://github.com/microsoft/EconML. Version 0.14.0. 2019.
Black, Nick. "Why we need observational studies to evaluate the effectiveness of health care". In: BMJ 312.7040 (1996), pp. 1215-1218.
Blakely, Tony, Lynch, John, Simons, Koen, Bentley, Rebecca, and Rose, Sherri. "Reflection on modern methods: when worlds collide-prediction, machine learning and causal inference". In: International Journal of Epidemiology 49.6 (2020), pp. 2058-2064.
Bouthillier, Xavier, Delaunay, Pierre, Bronzi, Mirko, Trofimov, Assya, Nichyporuk, Brennan, Szeto, Justin, Mohammadi Sepahvand, Nazanin, Raff, Edward, Madan, Kanika, Voleti, Vikram, Ebrahimi Kahou, Samira, Michalski, Vincent, Arbel, Tal, Pal, Chris, Varoquaux, Gael, and Vincent, Pascal. "Accounting for Variance in Machine Learning Benchmarks". In: Proceedings of Machine Learning and Systems 3 (2021), pp. 747-769.
Charlson, Mary E., Pompei, Peter, Ales, Kathy L., and MacKenzie, C. Ronald. "A new method of classifying prognostic comorbidity in longitudinal studies: Development and validation". In: Journal of Chronic Diseases 40.5 (1987), pp. 373-383.
Chernozhukov, Victor, Chetverikov, Denis, Demirer, Mert, Duflo, Esther, Hansen, Christian, Newey, Whitney, and Robins, James. "Double/Debiased Machine Learning for Treatment and Structural Parameters". In: The Econometrics Journal (2018).
Colnet, Bénédicte, Josse, Julie, Varoquaux, Gaël, and Scornet, Erwan. "Risk ratio, odds ratio, risk difference... Which causal measure is easier to generalize?" In: arXiv preprint arXiv:2303.16008 (2023).
Curth, Alicia, Svensson, David, and Weatherall, James. "Really Doing Great at Estimating CATE? A Critical Look at ML Benchmarking Practices in Treatment Effect Estimation". In: NeurIPS (2021).
D'Amour, Alexander, Ding, Peng, Feller, Avi, Lei, Lihua, and Sekhon, Jasjeet. "Overlap in observational studies with high-dimensional covariates". In: Journal of Econometrics 221.2 (2021), pp. 644-654.
Daniel, Rhian M. "Double Robustness". In: Wiley StatsRef: Statistics Reference Online (2018), pp. 1-14.
Desai, Rishi J, Wang, Shirley V, Vaduganathan, Muthiah, Evers, Thomas, and Schneeweiss, Sebastian. "Comparison of machine learning methods with traditional models for use of administrative claims with electronic medical records to predict heart failure outcomes". In: JAMA Network Open 3.1 (2020), e1918962.
Dorie, Vincent, Hill, Jennifer, Shalit, Uri, Scott, Marc, and Cervone, Dan. "Automated versus do-it-yourself methods for causal inference: Lessons learned from a data analysis competition". In: Statistical Science 34.1 (2019), pp. 43-68.
Dudley, Joel T, Deshpande, Tarangini, and Butte, Atul J. "Exploiting drug-disease relationships for computational drug repositioning". In: Briefings in Bioinformatics 12.4 (2011), pp. 303-311.
Fontana, Mark Alan, Lyman, Stephen, Sarker, Gourab K, Padgett, Douglas E, and MacLean, Catherine H. "Can machine learning algorithms predict which patients will achieve minimally clinically important differences from total joint arthroplasty?" In: Clinical Orthopaedics and Related Research 477.6 (2019), p. 1267.
Gao, Zijun, Hastie, Trevor, and Tibshirani, Robert. "Assessment of heterogeneous treatment effect estimation accuracy via matching". In: Statistics in Medicine 40.17 (2021).
Gretton, Arthur, Borgwardt, Karsten M, Rasch, Malte J, Schölkopf, Bernhard, and Smola, Alexander. "A kernel two-sample test". In: The Journal of Machine Learning Research 13.1 (2012), pp. 723-773.
Grose, Elysia, Wilson, Samuel, Barkun, Jeffrey, Bertens, Kimberly, Martel, Guillaume, Balaa, Fady, and Khalil, Jad Abou. "Use of Propensity Score Methodology in Contemporary High-Impact Surgical Literature". In: Journal of the American College of Surgeons 230.1 (2020), pp. 101-112.e2.
Gruber, Susan and van der Laan, Mark J. "tmle: An R Package for Targeted Maximum Likelihood Estimation". In: Journal of Statistical Software 51.13 (2012), pp. 1-35.
Gutierrez, Pierre and Gerardy, Jean-Yves. "Causal Inference and Uplift Modeling: A review of the literature". In: Proceedings of The 3rd International Conference on Predictive Applications and APIs, Proceedings of Machine Learning Research 67 (2016).
Hernán, MA and Robins, JM. Causal Inference: What If. 2020.
Hernán, Miguel A. "Methods of Public Health Research - Strengthening Causal Inference from Observational Data". In: New England Journal of Medicine 385.15 (2021), pp. 1345-1348.
Hill, Jennifer L. "Bayesian Nonparametric Modeling for Causal Inference". In: Journal of Computational and Graphical Statistics 20.1 (2011), pp. 217-240.
Hoogland, Jeroen, IntHout, Joanna, Belias, Michail, Rovers, Maroeska M, Riley, Richard D, Harrell Jr, Frank E, Moons, Karel GM, Debray, Thomas PA, and Reitsma, Johannes B. "A tutorial on individualized treatment effect prediction from randomized trials with a binary endpoint". In: Statistics in Medicine 40.26 (2021), pp. 5961-5981.
Horng, Steven, Sontag, David A, Halpern, Yoni, Jernite, Yacine, Shapiro, Nathan I, and Nathanson, Larry A. "Creating an automated trigger for sepsis clinical decision support at emergency department triage using machine learning". In: PLoS One 12.4 (2017), e0174708.
Howe, Chanelle J., Cole, Stephen R., Westreich, Daniel J., Greenland, Sander, Napravnik, Sonia, and Eron, Joseph J. "Splines for trend analysis and continuous confounder control". In: Epidemiology 22.6 (2011), pp. 874-875.
Hurle, Mark R, Yang, Lun, Xie, Qing, Rajpal, Deepak K, Sanseau, Philippe, and Agarwal, Pankaj. "Computational drug repositioning: from data to therapeutics". In: Clinical Pharmacology & Therapeutics 93.4 (2013), pp. 335-341.
Imbens, Guido W. and Rubin, Donald B. Causal Inference in Statistics, Social, and Biomedical Sciences. Cambridge University Press, 2015.
Ionides, Edward L. "Truncated Importance Sampling". In: Journal of Computational and Graphical Statistics 17.2 (2008), pp. 295-311.
Jesson, Andrew, Mindermann, Sören, Shalit, Uri, and Gal, Yarin. "Identifying Causal-Effect Inference Failure with Uncertainty-Aware Models". In: Advances in Neural Information Processing Systems 33 (2020), pp. 11637-11649.
Johansson, Fredrik D., Shalit, Uri, Kallus, Nathan, and Sontag, David. "Generalization Bounds and Representation Learning for Estimation of Potential Outcomes and Causal Effects". In: arXiv:2001.07426 (2021).
Kendall, M. G. "A new measure of rank correlation". In: Biometrika 30.1-2 (1938), pp. 81-93.
Kennedy, Edward H. "Optimal doubly robust estimation of heterogeneous causal effects". In: arXiv preprint arXiv:2004.14497 (2020).
Künzel, Sören R., Sekhon, Jasjeet S., Bickel, Peter J., and Yu, Bin. "Metalearners for estimating heterogeneous treatment effects using machine learning". In: Proceedings of the National Academy of Sciences 116.10 (2019), pp. 4156-4165.
Laan, Mark J. van der and Robins, James M. Unified Methods for Censored Longitudinal Data and Causality. Springer Science & Business Media, 2003.
Laan, Mark J. van der, Polley, Eric C., and Hubbard, Alan E. "Super Learner". In: Statistical Applications in Genetics and Molecular Biology 6 (2007).
Laan, Mark J. van der and Rose, Sherri. Targeted Learning. Springer Series in Statistics, 2011.
Lamont, Andrea, Lyons, Michael D, Jaki, Thomas, Stuart, Elizabeth, Feaster, Daniel J, Tharmaratnam, Kukatharmini, Oberski, Daniel, Ishwaran, Hemant, Wilson, Dawn K, and Van Horn, M Lee. "Identification of predicted individual treatment effects in randomized clinical trials". In: Statistical Methods in Medical Research 27.1 (2018), pp. 142-157.
Loiseau, Nicolas, Trichelair, Paul, He, Maxime, Andreux, Mathieu, Zaslavskiy, Mikhail, Wainrib, Gilles, and Blum, Michael G. B. "External control arm analysis: an evaluation of propensity score approaches, G-computation, and doubly debiased machine learning". In: BMC Medical Research Methodology 22 (2022).
Louizos, Christos, Shalit, Uri, Mooij, Joris, Sontag, David, Zemel, Richard, and Welling, Max. "Causal Effect Inference with Deep Latent-Variable Models". In: Advances in Neural Information Processing Systems (2017).
MacDorman, M. F. and Atkinson, J. O. "Infant mortality statistics from the linked birth/infant death data set-1995 period data". In: Monthly Vital Statistics Report 46.6 Suppl 2 (1998), pp. 1-22.
Minderer, Matthias, Djolonga, Josip, Romijnders, Rob, Hubis, Frances, Zhai, Xiaohua, Houlsby, Neil, Tran, Dustin, and Lucic, Mario. "Revisiting the Calibration of Modern Neural Networks". In: Advances in Neural Information Processing Systems 34 (2021), pp. 15682-15694.
Mooney, Stephen J and Pejaver, Vikas. "Big data in public health: terminology, machine learning, and privacy". In: Annual Review of Public Health 39 (2018), p. 95.
Naimi, Ashley I, Mishler, Alan E, and Kennedy, Edward H. "Challenges in Obtaining Valid Causal Effect Estimates with Machine Learning Algorithms". In: American Journal of Epidemiology (2021).
Naimi, Ashley I and Whitcomb, Brian W. "Defining and Identifying Average Treatment Effects". In: American Journal of Epidemiology (2023).
Niculescu-Mizil, Alexandru and Caruana, Rich. "Predicting good probabilities with supervised learning". In: Proceedings of the 22nd International Conference on Machine Learning (2005), pp. 625-632.
Nie, Xinkun and Wager, Stefan. "Quasi-Oracle Estimation of Heterogeneous Treatment Effects". In: Biometrika 108.2, pp. 299-319.
Niswander, Kenneth R. and the United States National Institute of Neurological Diseases and Stroke. The Women and Their Pregnancies: The Collaborative Perinatal Study of the National Institute of Neurological Diseases and Stroke. National Institute of Health, 1972.
Pedregosa, Fabian, Varoquaux, Gaël, Gramfort, Alexandre, Michel, Vincent, Thirion, Bertrand, Grisel, Olivier, Blondel, Mathieu, Prettenhofer, Peter, Weiss, Ron, Dubourg, Vincent, Vanderplas, Jake, Passos, Alexandre, Cournapeau, David, Brucher, Matthieu, Perrot, Matthieu, and Duchesnay, Édouard. "Scikit-learn: Machine Learning in Python". In: Journal of Machine Learning Research 12.85 (2011), pp. 2825-2830.
Perez-Lebel, Alexandre, Morvan, Marine Le, and Varoquaux, Gaël. "Beyond calibration: estimating the grouping loss of modern neural networks". In: arXiv preprint arXiv:2210.16315 (2022).
Perperoglou, Aris, Sauerbrei, Willi, Abrahamowicz, Michal, and Schmid, Matthias. "A review of spline function procedures in R". In: BMC Medical Research Methodology 19.1 (2019), p. 46.
Platt, John C. "Probabilistic Outputs for Support Vector Machines and Comparisons to Regularized Likelihood Methods". In: Advances in Large Margin Classifiers (1999), pp. 61-74.
Poldrack, Russell A, Huckins, Grace, and Varoquaux, Gael. "Establishment of best practices for evidence for prediction: a review". In: JAMA Psychiatry 77.5 (2020), pp. 534-540.
Powers, Scott, Qian, Junyang, Jung, Kenneth, Schuler, Alejandro, Shah, Nigam H., Hastie, Trevor, and Tibshirani, Robert. "Some methods for heterogeneous treatment effect estimation in high dimensions". In: Statistics in Medicine 37.11 (2018), pp. 1767-1787.
Radley, David C, Finkelstein, Stan N, and Stafford, Randall S. "Off-label prescribing among office-based physicians". In: Archives of Internal Medicine 166.9 (2006), pp. 1021-1026.
Rahimi, Ali and Recht, Benjamin. "Random Features for Large-Scale Kernel Machines". In: Advances in Neural Information Processing Systems 20 (2008).
Robins, James. "A new approach to causal inference in mortality studies with a sustained exposure period - application to control of the healthy worker survivor effect". In: Mathematical Modelling 7.9 (1986), pp. 1393-1512.
Robins, James M. and Greenland, Sander. "The role of model selection in causal inference from nonexperimental data". In: American Journal of Epidemiology 123.3 (1986), pp. 392-402.
Robinson, P. M. "Root-N-Consistent Semiparametric Regression". In: Econometrica 56.4 (1988), pp. 931-954.
Rolling, Craig A. and Yang, Yuhong. "Model selection for estimating treatment effects". In: Journal of the Royal Statistical Society: Series B (Statistical Methodology) 76.4 (2014), pp. 749-769.
Rosenbaum, Paul R and Rubin, Donald B. "The central role of the propensity score in observational studies for causal effects". In: Biometrika 70 (1983), pp. 41-55.
Rothman, KJ, Greenland, S, and Lash, TL. "Case-control studies, chapter 8". In: Modern Epidemiology (2008), pp. 111-127.
Rubin, Donald B. "Causal Inference Using Potential Outcomes". In: Journal of the American Statistical Association 100.469 (2005), pp. 322-331.
Counterfactual Cross-Validation: Stable Model Selection Procedure for Causal Inference Models. Yuta Saito, Shota Yasui, PMLR. 2020International Conference on Machine Learning. Saito, Yuta and Yasui, Shota. "Counterfactual Cross-Validation: Stable Model Selection Procedure for Causal Inference Models". In: International Conference on Machine Learning. PMLR. 2020, pp. 8398-8407.
Reliable Decision Support using Counterfactual Models. Peter Schulam, Suchi Saria, Advances in neural information processing systems. 30Schulam, Peter and Saria, Suchi. "Reliable Decision Support using Counterfactual Models". In: Advances in neural information processing systems 30 (Mar. 30, 2017).
A comparison of methods for model selection when estimating individual treatment effects. Alejandro Schuler, Baiocchi, Michael, Robert Tibshirani, Nigam Shah, arXiv:1804.05146cs, statSchuler, Alejandro, Baiocchi, Michael, Tibshirani, Robert, and Shah, Nigam. "A comparison of methods for model selection when estimating individual treatment effects". In: arXiv:1804.05146 [cs, stat] (June 13, 2018).
Targeted Maximum Likelihood Estimation for Causal Inference in Observational Studies. Megan S Schuler, Sherri Rose, American Journal of Epidemiology. 185Schuler, Megan S. and Rose, Sherri. "Targeted Maximum Likelihood Estimation for Causal Inference in Obser- vational Studies". In: American Journal of Epidemiology 185.1 (Jan. 1, 2017), pp. 65-73.
Estimating individual treatment effect: generalization bounds and algorithms. Uri Shalit, Johansson, D Fredrik, David Sontag, International Conference on Machine Learning. Shalit, Uri, Johansson, Fredrik D, and Sontag, David. "Estimating individual treatment effect: generalization bounds and algorithms". In: International Conference on Machine Learning. PMLR. 2017, pp. 3076-3085.
An Evaluation Toolkit to Guide Model Selection and Cohort Definition in Causal Inference. Yishai Shimoni, Karavani, Ehud, Ravid, Sivan, Bak, Peter, Tan Ng, Hung, Sharon Alford, Hensley, Denise Meade, Yaara Goldschmidt, arXiv:1906.00442arXiv preprintShimoni, Yishai, Karavani, Ehud, Ravid, Sivan, Bak, Peter, Ng, Tan Hung, Alford, Sharon Hensley, Meade, Denise, and Goldschmidt, Yaara. "An Evaluation Toolkit to Guide Model Selection and Cohort Definition in Causal Inference". In: arXiv preprint arXiv:1906.00442 (2019).
Benchmarking Framework for Performance-Evaluation of Causal Inference Analysis. Yishai Shimoni, Yanover, Chen, Ehud Karavani, Yaara Goldschmnidt, arXiv:1802.05046cs, statShimoni, Yishai, Yanover, Chen, Karavani, Ehud, and Goldschmnidt, Yaara. "Benchmarking Framework for Performance-Evaluation of Causal Inference Analysis". In: arXiv:1802.05046 [cs, stat] (Mar. 20, 2018).
Predicting suicide attempts and suicide deaths following outpatient visits using electronic health records. Gregory E Simon, Johnson, Eric, Jean M Lawrence, Rebecca C Rossom, Ahmedani, Brian, Frances L Lynch, Arne Beck, Waitzfelder, Beth, Ziebell, Rebecca, Penfold, B Robert, American Journal of Psychiatry. 175Simon, Gregory E, Johnson, Eric, Lawrence, Jean M, Rossom, Rebecca C, Ahmedani, Brian, Lynch, Frances L, Beck, Arne, Waitzfelder, Beth, Ziebell, Rebecca, Penfold, Robert B, et al. "Predicting suicide attempts and suicide deaths following outpatient visits using electronic health records". In: American Journal of Psychiatry 175.10 (2018), pp. 951-960.
Implementation of G-computation on a simulated data set: demonstration of a causal inference technique. Jonathan M Snowden, Sherri Rose, Kathleen M Mortimer, American Journal of Epidemiology. 173Snowden, Jonathan M., Rose, Sherri, and Mortimer, Kathleen M. "Implementation of G-computation on a simulated data set: demonstration of a causal inference technique". eng. In: American Journal of Epidemiology 173.7 (Apr. 2011), pp. 731-738.
On integral probability metrics, \phi-divergences and binary classification. Bharath K Sriperumbudur, Fukumizu, Kenji, Gretton, Arthur, Bernhard Schölkopf, Gert R G Lanckriet, arXiv:0901.2698cs, mathSriperumbudur, Bharath K., Fukumizu, Kenji, Gretton, Arthur, Schölkopf, Bernhard, and Lanckriet, Gert R. G. "On integral probability metrics, \phi-divergences and binary classification". In: arXiv:0901.2698 [cs, math] (Oct. 12, 2009).
Random forests of interaction trees for estimating individualized treatment effects in randomized trials. Xiaogang Su, Annette T Peña, Lei Liu, Richard A Levine, Statistics in medicine. 37Su, Xiaogang, Peña, Annette T, Liu, Lei, and Levine, Richard A. "Random forests of interaction trees for esti- mating individualized treatment effects in randomized trials". In: Statistics in medicine 37.17 (2018), pp. 2547- 2560.
Counterfactual risk minimization: Learning from logged bandit feedback. Adith Swaminathan, Thorsten Joachims, PMLR. 2015International Conference on Machine Learning. Swaminathan, Adith and Joachims, Thorsten. "Counterfactual risk minimization: Learning from logged bandit feedback". In: International Conference on Machine Learning. PMLR. 2015, pp. 814-823.
Evaluating machine learning models and their diagnostic value. Gaël Varoquaux, Olivier Colliot, Varoquaux, Gaël and Colliot, Olivier. Evaluating machine learning models and their diagnostic value. 2022.
Estimation and Inference of Heterogeneous Treatment Effects using Random Forests. Stefan Wager, Susan Athey, Journal of the American Statistical Association. 113Wager, Stefan and Athey, Susan. "Estimation and Inference of Heterogeneous Treatment Effects using Random Forests". In: Journal of the American Statistical Association 113.523 (July 3, 2018), pp. 1228-1242.
Comparing methods for estimation of heterogeneous treatment effects using observational data from health care databases. T Wendling, K Jung, A Callahan, A Schuler, N H Shah, B Gallego, Statistics in Medicine. 37Wendling, T., Jung, K., Callahan, A., Schuler, A., Shah, N. H., and Gallego, B. "Comparing methods for esti- mation of heterogeneous treatment effects using observational data from health care databases". In: Statistics in Medicine 37.23 (2018), pp. 3309-3324.
A systematic review identifies valid comorbidity indices derived from administrative health data. Marko Yurkovich, Avina-Zubieta, Antonio, Jamie Thomas, Mike Gorenchtein, Diane Lacaille, Journal of clinical epidemiology. 68Yurkovich, Marko, Avina-Zubieta, J Antonio, Thomas, Jamie, Gorenchtein, Mike, and Lacaille, Diane. "A systematic review identifies valid comorbidity indices derived from administrative health data". In: Journal of clinical epidemiology 68.1 (2015), pp. 3-14.
Obtaining calibrated probability estimates from decision trees and naive Bayesian classifiers". en. Bianca Zadrozny, Charles Elkan, 8Zadrozny, Bianca and Elkan, Charles. "Obtaining calibrated probability estimates from decision trees and naive Bayesian classifiers". en. In: (2001), p. 8.
Splitting the data is common when using machine learning for causal inference, but practices vary widely in terms of the fraction of data to allocate to train models, outcomes and nuisances, and to evaluate them.

Before even model selection, data splitting is often required for estimation of the treatment effect, ATE or CATE, for instance to compute the nuisances required to optimize the outcome model (as in the R-risk, definition 6). The most frequent choice is to use 80% of the data to fit the models and 20% to evaluate them. For instance, for CATE estimation, the R-learner has been introduced using K-folds with K = 5 and K = 10: 80% of the data (4 folds) to train the nuisances and the remaining fold to minimize the corresponding R-loss [54]. Yet, it has been implemented with K = 5 in causallib and with K = 3 in econML [9]. Likewise, for ATE estimation, Chernozhukov et al. [14] introduce doubly-robust machine learning, recommending K = 5 based on an empirical comparison with K = 2. However, subsequent works use doubly robust ML with varying choices of K: Loiseau et al. [46] use K = 3, Gao et al. [23] use K = 2. In the econML implementation, K is set to 3 [9]. Naimi et al. [51] evaluate various machine-learning approaches -including R-learners- using K = 5 and 10, drawing inspiration from the TMLE literature, which sets K = 5 in the TMLE package [26].

Causal model selection has been much less discussed. The only study that we are aware of, Schuler et al. [73], uses a different data split: a 2-fold train/test procedure, training the nuisances on the first half of the data and using the second half to estimate the R-risk and select the best treatment effect model.
Next Stop "NoOps": Enabling Cross-System Diagnostics Through Graph-based Composition of Logs and Metrics

Michał Zasadziński
Marc Solé
Alvaro Brandon (Universitat Politecnica de Madrid, Madrid, Spain; Universitat Politecnica de Catalunya, Barcelona, Spain)
Victor Muntés-Mulero
David Carrera
CA Technologies, Barcelona, Spain

Keywords: Graphs, similarity, diagnostics, root cause classification, logs, NoOps
Performing diagnostics in IT systems is an increasingly complicated task, and it is not doable in satisfactory time by even the most skillful operators. Systems and their architecture change very rapidly in response to business and user demand. Many organizations see value in the maintenance and management model of NoOps that stands for No Operations. One of the implementations of this model is a system that is maintained automatically without any human intervention. The path to NoOps involves not only precise and fast diagnostics but also reusing as much knowledge as possible after the system is reconfigured or changed. The biggest challenge is to leverage knowledge on one IT system and reuse this knowledge for diagnostics of another, different system. We propose a framework of weighted graphs which can transfer knowledge, and perform high-quality diagnostics of IT systems. We encode all possible data in a graph representation of a system state and automatically calculate weights of these graphs. Then, thanks to the evaluation of similarity between graphs, we transfer knowledge about failures from one system to another and use it for diagnostics. We successfully evaluate the proposed approach on Spark, Hadoop, Kafka and Cassandra systems.
I. INTRODUCTION
Today's IT systems are large, dynamic, complex, and heterogeneous. Current and future systems frequently change their architecture and resources according to business and user demand. Diagnosing them efficiently within a satisfactory time (less than minutes) is already beyond the reach of even the most experienced operators. Because of that, the majority of trends and efforts around the development of troubleshooting and diagnostics of IT systems is driven by the NoOps business model [1]. NoOps stands for No Operations; one way to realize it is software automation, which leads to a scenario of a fully automated and self-manageable IT infrastructure. The shift from conventional operations to the NoOps model is achieved by the full automation of maintenance activities, including failure diagnostics. In this maintenance model, problems occurring in an IT system are solved immediately without any human intervention.
However, to operate successfully in such a business model, future diagnostic systems should perform precise, automated, and fast root cause analysis. Also, these solutions should be able to diagnose problems even in scenarios where there is little or no data about failures and their causes. In many cases, recollecting the data necessary for diagnostics is expensive or even impossible. Using similar data coming from another system with a different structure is a solution, but it is a considerable challenge. Solutions based on transfer learning can transfer and reuse as much knowledge about the behavior of a system as possible to keep pace with changing architectures, infrastructures, and a rapidly growing number of knowledge domains.
So far, there has been enormous work on automated diagnostics of IT systems using data mining or Artificial Intelligence (AI) [2], [3]. Most of this work uses either metrics or logs for diagnostics. When both are used, the use of logs is limited to counting specific key terms or entries with a specific severity level. Another common limitation of current systems is that detailed system information, e.g., connectivity and hardware specifications, is not included in diagnostics. There is still room for improvement in knowledge integration and knowledge transfer before we reach the era of NoOps. As we show in this publication, integrating log entries, metrics, and other system data improves the accuracy of diagnostics for IT systems.
In this paper, we propose a cross-system root cause classification framework based on similarity evaluation of weighted graphs with multi-attribute nodes. The framework uses logs, metrics, configuration and connectivity information to represent the state of a system as a graph. Then, the framework evaluates the similarity between an abnormal state and a collection of previously diagnosed states. By finding the most similar graph in the solution space, we can classify the anomaly and provide a root cause. Moreover, we use automatically calculated weights to highlight the system metrics that better describe a failure. Finally, we use the framework for a cross-system failure classification. By acquiring a collection of diagnosed anomalies for one system architecture, we can establish the root cause for anomalies that occur in a completely different architecture (cross-system diagnostics).
Rapidly changing system architecture is a consequence of new requirements and system scaling, and we leverage the proposed framework in this scenario. Using knowledge transfer, we can diagnose a new system architecture right after it starts and proactively avoid failures. The proposed system not only allows for precise diagnostics but also helps in proactive avoidance of failures: it can output the nearest possible future failure as a result of graph similarity evaluation. Such an approach saves time and effort and results in a performance and reliability advantage over competitors.
We evaluate the proposed framework in environments running representative and distinct Big Data applications such as Spark [4] and Hadoop [5]. We inject failures into these environments and evaluate the quality of failure classification, reaching more than 70% in both f1-score and accuracy. Then, we perform experiments using different architectures with containers running Cassandra [6] and Kafka [7] systems. We evaluate our cross-system nearest root cause classification when the symptoms of failures are known only for one of these systems, achieving an average f1-score of 77% with the same level of accuracy.
The remainder of this paper is divided into seven sections. Work related to graph-based root cause analysis systems, cross-system knowledge transfer, and the use of logs for diagnostics is discussed in Section II. In Section III we describe the background for the graph similarity calculation. Then we present the framework for the creation and similarity evaluation of automatically weighted graphs representing a system's state, containing metrics, logs, system connectivity, and infrastructure information. Our contributions are:
• A solution on how to include logs in a graph representation of a system state. (Subsection IV-A)
• A method for automatic adjustment of weights of nodes and node attributes, according to the distribution of a metric. (Subsection IV-B)
• Evaluation of the proposed solution on real datasets for root cause classification, on a cluster running Hadoop and Spark jobs. We show that including logs and the automatic importance assignment increases the classification accuracy with respect to other methods. (Section V)
• Evaluation of root cause classification in cross-system transfer learning: we search for a failure using knowledge captured from one system (Kafka) and utilize it in another system (Cassandra), and vice versa. We show that the graph approach can transfer knowledge between Cassandra and Kafka. (Section VI)
Both evaluation sections contain results from four use cases running on different infrastructures: an on-premise cluster and containers in a cloud. This strategy allows us to demonstrate the reproducibility and broad usability of the proposed framework. We conclude the paper with a discussion and plans for future research in Section VII.
II. RELATED WORK
A. Graph-based systems for root cause classification

Monitoring and logging systems are responsible for providing full observability of a system state, which is one of the most important inputs for a root cause classification system. Current research in this field is focused on dealing not only with the huge size and complexity of information encoded in logs but also with fault tolerance and the use of partial information [8]. Usually, operators use these two sources of information separately in a troubleshooting process, and diagnostic tools do not combine descriptive data with metrics well. One of the best ways to do so is utilizing a graph representation of a system state. Constructing a proper graph representation allows for anomaly detection and diagnostics [9]. Graph-based approaches are widely used for root cause classification, detection, and prediction of abnormal events and failures [10], [11], [12], [13], [14].
B. Cross-system failures knowledge transfer through similarity evaluation
Diagnostic systems can gather knowledge in one domain and reuse this knowledge for diagnosing similar systems with symptoms in similar knowledge domains. Generally, this type of knowledge reuse is called transfer learning [15], or heterogeneous transfer learning when the knowledge comes from different systems [16]. In this paper, we focus on a scenario of transductive transfer learning, where the data is labeled in the source domain, but not in the target domain. In this area, one path to deploying transfer learning in diagnostic systems is to apply similarity measures between a diagnosed state and the abnormal state to be diagnosed. When we represent system states as graphs, we can compare the transferred state by evaluating similarities between graphs [17], [18]. The work of Papadimitriou et al. [19] is an important contribution in the field of diagnostics via graph similarity. This work evaluates graph similarities to find anomalies in the web. The authors consider different approaches for similarity evaluation which are limited to the topologies of the compared graphs. In comparison, in our approach to cross-system diagnostics, we encode more information: we use attributes of different types in both edges and nodes, together with the information contained in system and application logs, providing a much more detailed input for the graph similarity function. Work on the similarity between different texts and logs is presented in [20], [21] and is widely used for diagnostics of IT systems. The research of Putra et al. [22] includes graph-based text similarity evaluation. Other important work on utilizing similarity between logs for diagnostics can be found in [23], [24], [25].
C. Mining logs for root cause classification and diagnostics
The logs of an IT system are a valuable source of information used for data-driven diagnostics and prognostics of a system state. A usual method of working with logs is exploring the statistics and the occurrence of a set of key terms using log parsers, indexers, and miners. The authors of a survey on data-driven techniques in computing system management [26] claim that to realize the goal of self-management, systems need to automatically monitor, characterize, and understand their behaviors and dynamics; mine events to uncover useful patterns; and acquire valuable knowledge from historical log/event data. Fundamental knowledge of diverse approaches to error log processing can be found in [27], [28]. Some simple mining methods include correlating the occurrence of log key terms [29] and modeling a multithreaded system's behavior through graphs or sequences representing system calls. For instance, the authors of [27] deal with the problem of failure prediction through clustering similar log sequences. They propose an algorithm to assign the source of failures to logs, using Levenshtein's edit distance.
Recently, a considerable part of the work on automated diagnostics has been performed with the help of AI. DeepLog [30] is one of the most significant contributions. The authors propose a system for anomaly detection and diagnosis based on deep learning. The performance and accuracy of the solution are high; however, using it requires defining metadata, so the solution has limited usability regarding full automation. The authors of [31] propose an approach to mine time-weighted graphs from logs with many running threads. The solution, evaluated in a cloud environment, performs with a high f1-score of about 80%. The authors of [32] use causal inference to diagnose network failures. They mine causal graphs from logs, considering the devices connected in a graph. One of the conventional approaches to log preprocessing and comparison is transforming log entries into vectors using the Word2Vec algorithm [33], [34]. A recent attempt to leverage Word2Vec for root cause classification is described in [35]. The authors propose a method for processing logs with a Word2Vec model and then using a Bayesian classifier.
The analyzed state of the art shows a dependency between accuracy and the underlying complexity of the solutions. Much state-of-the-art research is focused on accurate analysis and mining of logs based on metadata for a specific log structure. There are not many solutions which diagnose a system just by consuming logs without specific preprocessing techniques. With the solution that we propose in this paper, we would like to fill this gap. The solution is as general as possible, and it can work with many IT system types with little human effort to deploy the framework, using logs, metrics, and other system information.
III. BACKGROUND: GRAPH SIMILARITIES
In this section, we provide background knowledge on the problem of similarity calculation between graphs. We define the problem, the graph representation, and how to calculate similarities between different node attribute types.
Similarity. According to [18], we define the problem of finding a similarity between two graphs as follows.
Definition 1: Given two graphs G1(V1, E1) and G2(V2, E2), find an algorithm to calculate the similarity s of the graphs, which returns a number between 0 and 1. Two graphs have similarity s = 1 only when they are identical, while a similarity value of 0 intuitively says that they are completely different.
A. Approximate Graph Similarity Calculation
Graph representation of a system state. Graphs allow representing an IT system state, including all types of data that can describe that state. A graph is defined as a tuple {E, V, W, A, S} corresponding to the sets of edges, vertices, weights, attributes, and similarity functions. Each system component is a node that has multiple attributes and represents a different level of abstraction, e.g., hardware, server, an application, application module, application thread, container, or a microservice. Edges represent the connectivity between system components. Attributes of a node contain different information encoding the system state, e.g., metric values, log entries, component type, software details. Also, to represent the different importance of each of the attributes, we introduce weights at each level of the graph structure. We use them with each element of a graph: edges, nodes, and node attributes. A weight indicates how significant the influence of the similarity between particular elements is on the final similarity result. Primarily, an expert can define weights through the root cause analysis framework. When an anomaly is detected inside the system state graph, the expert can pinpoint the metrics and components that are more important inside that anomaly. These will later be used as inputs by the root cause classification system. Such an intuitive mechanism creates permanent opportunities for the framework to gather expert knowledge. In Section IV, we introduce the automatic weight calculation mechanism to deal with the limitations of manual weight assignment.
Graph similarity calculation. We calculate the maximum similarity s(G1, G2) between two graphs G1 = (E1, V1, W1, A1, S1) and G2 = (E2, V2, W2, A2, S2). It is a well-defined optimization problem, which consists of matching each node of one graph with the most similar node of the other graph. We use hill climbing [36] to solve the optimization problem. The similarity between two nodes is calculated using their attributes, which can be both logs and metrics. In order to do that, we specify the importance of each attribute, a weight, and a similarity function. We use the weights to calculate the weighted average similarity between attributes. In Subsection III-B we propose different similarity functions depending on the attribute types: custom functions to compare different elements of a graph.
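To make the matching procedure concrete, the following is a minimal Python sketch of this optimization, assuming graphs are given as dictionaries mapping node ids to attribute dictionaries. The attribute weights, similarity functions, and the simple reassignment-based hill climbing with random restarts are illustrative choices, not the exact implementation used in the paper.

import random

def node_similarity(n1, n2, attr_weights, sim_fns):
    # Weighted average similarity over the attributes both nodes share.
    shared = [a for a in sim_fns if a in n1 and a in n2]
    if not shared:
        return 0.0
    total = sum(attr_weights.get(a, 1.0) for a in shared)
    return sum(attr_weights.get(a, 1.0) * sim_fns[a](n1[a], n2[a])
               for a in shared) / total

def graph_similarity(g1, g2, attr_weights, sim_fns, restarts=5, seed=0):
    # g1, g2: dicts of node_id -> {attribute: value}. Hill climbing searches
    # for the node matching that maximizes the average node similarity;
    # unmatched nodes of the larger graph contribute zero.
    rng = random.Random(seed)
    small, large = sorted((list(g1.values()), list(g2.values())), key=len)
    if not large:
        return 1.0  # two empty graphs are identical
    best = 0.0
    for _ in range(restarts):
        match = rng.sample(range(len(large)), len(small))
        improved = True
        while improved:  # local search: move a node to a better free partner
            improved = False
            for i in range(len(small)):
                for j in range(len(large)):
                    if j in match:
                        continue
                    if (node_similarity(small[i], large[j], attr_weights, sim_fns) >
                            node_similarity(small[i], large[match[i]], attr_weights, sim_fns)):
                        match[i] = j
                        improved = True
        score = sum(node_similarity(small[i], large[match[i]], attr_weights, sim_fns)
                    for i in range(len(small))) / len(large)
        best = max(best, score)
    return best

For example, with sim_fns = {"cpu": lambda a, b: 1 - abs(a - b) / 100} and two one-node graphs, graph_similarity returns the attribute similarity of the best node pairing; identical graphs yield 1, in line with Definition 1.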
B. Similarity between different attribute types
We define similarity functions for numerical, vector, categorical, and ontological attributes in Table I. By using different similarity functions, we manage the calculation of similarities between the different attribute types coming from the two compared graphs.

Similarity between numerical attributes. This function is used for metrics that take numerical values, such as CPU usage, bytes written to disk, or memory used, to name a few. More specifically, for numerical attributes a1 and a2, we use the formula s(a1, a2) = 1 − |a1 − a2| / (max − min). Two points that are close on the scale will have a higher similarity value. They achieve the maximum similarity of 1 only if they are equal.
Similarity between vectors. Vectors can represent a measurable state of a system module, but can also represent text inside a log file, as we will explain in Subsection IV-A. The similarity between two vectors is usually defined by their cosine; other metrics based on different distance formulas can also be used.
Similarity between types. Graph nodes contain attributes which specify a type. A taxonomy is a tree that represents a hierarchy of concepts in a given domain. In Figure 1, we present an example taxonomy. Each node in a graph can contain attributes that define its type inside this taxonomy. The functions used for similarity calculation between different types are introduced in Table I.

Fig. 1. An example taxonomy defining the equipment types used in the evaluation. For instance, using the ontological similarity formula from Table I: similarity(Master, Slave) = 0.66, similarity(Master, Switch) = 0.4, similarity(Server, Switch) = 0.5.
Similarity between categories. Categorical attributes take values that are names or labels, e.g., the image of a Docker container (e.g., Haproxy, WordPress), a disk label, or a hardware model. For categorical values, the similarity is 1 when the values are equal, and 0 otherwise.
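A compact Python sketch of these four attribute similarity functions follows. Table I itself is not reproduced here, so the taxonomy similarity below uses a Wu-Palmer-style formula (twice the depth of the lowest common ancestor over the sum of the node depths), which reproduces the example values given in the Figure 1 caption; the taxonomy shape is likewise an assumption derived from that caption.

import math

def sim_numerical(a1, a2, lo, hi):
    # s(a1, a2) = 1 - |a1 - a2| / (max - min); equal values give similarity 1.
    return 1.0 - abs(a1 - a2) / (hi - lo) if hi > lo else 1.0

def sim_vector(v1, v2):
    # Cosine similarity, used e.g. for vectorized log windows.
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(y * y for y in v2))
    return dot / (n1 * n2) if n1 > 0 and n2 > 0 else 0.0

def sim_categorical(c1, c2):
    # 1 when the labels are equal, 0 otherwise.
    return 1.0 if c1 == c2 else 0.0

def sim_taxonomy(t1, t2, parent):
    # Wu-Palmer-style similarity on a taxonomy tree given as child -> parent.
    def path(t):  # path from the root down to the node
        p = [t]
        while t in parent:
            t = parent[t]
            p.append(t)
        return p[::-1]
    p1, p2 = path(t1), path(t2)
    common = 0
    for a, b in zip(p1, p2):
        if a != b:
            break
        common += 1
    return 2.0 * common / (len(p1) + len(p2))

# Assumed taxonomy from Figure 1: Equipment -> Server -> {Master, Slave}, Equipment -> Switch.
parent = {"Server": "Equipment", "Switch": "Equipment", "Master": "Server", "Slave": "Server"}
print(sim_taxonomy("Master", "Slave", parent))   # 0.66...
print(sim_taxonomy("Master", "Switch", parent))  # 0.4
print(sim_taxonomy("Server", "Switch", parent))  # 0.5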
IV. WEIGHTED GRAPHS REPRESENTING SYSTEM STATE FOR CROSS-DOMAIN DIAGNOSTICS
Motivated by the challenge of shifting operations to NoOps, we present the following contribution. First of all, we propose a diagnostic framework based on automatic similarity calculation for graphs representing a system state. The framework automatically adjusts graph weights according to the distribution of the historical values of metrics. Also, the weight module allows for adjusting the importance of a metric according to an operator's feedback. Weights are used to indicate the important elements of a system which hold significant information for diagnostics. For instance, in case of a network failure, attributes with network-related metrics will be more important than non-related ones, e.g., CPU or temperature. The framework reacts to a trigger based on anomaly detection mechanisms, e.g., an error message or an exceeded metric threshold. It outputs the similarity score between the current state of the system and previously acquired anomalous states. Such information can be used for early detection of failures and their prevention. In Figure 2, we present an automatically weighted graph representing a system state. Blue nodes represent system elements, in this case hosts and a switch. Each node contains many attributes, which can hold static information, e.g., node type, and runtime data, e.g., metric values and metric distributions.
Fig. 2. An example graph with multi-attribute nodes representing a system state, including connectivity between devices, their types, metrics, and logs. Each node contains many attributes of different types: categorical, numerical, vector, distribution, classification.

In Figure 3, we present the proposed framework for root cause classification. The framework manages the creation of weighted graphs and the calculation of similarity between them. One graph comes from a repository with anomalous graphs that have been previously labeled with their root cause, and the other one represents an anomalous state of a diagnosed system. Note that we assume the existence of an anomaly detection system that can extract anomalous system graphs. Usually, graphs are labeled automatically by an anomaly detector; also, when necessary, an expert can label them. The graph creator builds graphs that represent the system state, using sources of data coming from different monitoring systems or other information about the system architecture. The content of graphs and their topology depend on the modeling approach. For instance, each node can represent a server, an application, or its module. The graph similarity module is used to find, in a solution space, the nearest graph to the anomalous system state graph. By finding this closest labeled graph, we know the cause of a failure. In case the proposed framework is used for failure prevention, we get a graph representing the most probable failure which is likely to occur.

Fig. 3. Scheme presenting the architecture of the root cause classification framework working with an external anomaly detection system.
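The nearest-graph lookup at the core of this framework can be sketched in a few lines of Python, reusing the graph_similarity function from the earlier sketch; the repository format is an assumption for illustration.

def classify_root_cause(anomalous_graph, repository, attr_weights, sim_fns):
    # repository: list of (labeled_graph, root_cause) pairs from past diagnoses.
    # Returns the root cause of the most similar labeled graph and its score.
    best_label, best_score = None, -1.0
    for graph, label in repository:
        score = graph_similarity(anomalous_graph, graph, attr_weights, sim_fns)
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score

Used for prevention, the same lookup returns the failure whose recorded pre-failure state is nearest to the current system state.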
A. Including log data
In this subsection, we propose a log representation structure that can be embedded into our graph. In the proposed graph representation of a system state, the attributes capture information from different sources, including logs. In contrast to many state-of-the-art solutions, we consume logs without any metadata or dependency on their structure. Thanks to this approach, our solution is agile and needs minimal deployment effort. We only extract the timestamp and severity level, and the rest is treated as a log entry that includes the application module name, message, thread name, and other fields. Moreover, users (framework operators) can disassemble logs by modules and put them inside new nodes or attributes representing these modules in a system state graph. For instance, an operator deploying the proposed framework may decide that the graph representation of a system should be a detailed one. Then, a node representing a host is connected with its child nodes, representing modules such as threads, classes, or application modules. Logs of this host are split among these nodes.
We propose to use logs vectorized with Word2Vec models in a system's state representation, for the reasons explained below. The whole log processing is a simple algorithm and includes removal of special characters, sequences, and stop words, tokenization, and vectorization. The scheme illustrating the whole process is presented in Figure 4.
Filtering. After eliminating special character sequences, e.g., hex strings, the vocabulary in logs is limited. Typically, human-created log templates do not contain synonyms, just strict and simple phrases. After this stage, log entries contain less noise and represent a state of the generalized system rather than a particular case. Also, removing special characters helps to avoid model over-fitting. This step does not only improve the model quality but also transforms a log into a universal form, which is mandatory in cross-system diagnostics.
Tokenization. The tokenization step disassembles sentences into bags of words.
Vectorization. Thanks to Word2Vec, we transform logs into vectors. The vectorization stage enables representing log entries in relatively small models, as we show later in the evaluation in Section V. Firstly, it is necessary to create a model mapping the vocabulary into an n-dimensional space. The performance of a model depends on its configuration parameters and the size of the vocabulary used for training. A considerable advantage of using a Word2Vec embedding model is that it performs well even if it is trained using the vocabulary of one domain and used for another. Also, similarity calculations should be as fast as possible to enable diagnostics of failures in a dynamic environment. Hence, it is not feasible to use natural language processing (NLP) techniques such as key term extraction with rank algorithms for each log sentence, as we demonstrate in Section V, where we test different approaches. The proposed log processing algorithm does not need much configuration work. We only need to adjust a time window size; the window starts with a log entry of a specific severity type. In our case, we propose to use severities with a level higher than the warning one.
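As a rough illustration, the following Python sketch (using the gensim library) runs the three stages on a few made-up log lines. The regular expression, stop-word list, the vector size of 3 (the best size found for Hadoop in Section V), and the aggregation of a window by averaging token vectors are illustrative assumptions, not the paper's exact pipeline.

import re
import numpy as np
from gensim.models import Word2Vec

STOPWORDS = {"the", "a", "an", "of", "to", "in", "for", "on", "at", "is", "from"}

def preprocess(entry):
    # Filtering: drop hex strings, numbers, and special characters, then
    # tokenize into a lowercase bag of words without stop words.
    entry = re.sub(r"0x[0-9a-fA-F]+|\b\d+\b|[^A-Za-z\s]", " ", entry)
    return [t for t in entry.lower().split() if t not in STOPWORDS]

log_lines = [  # made-up examples in a Hadoop-like format
    "2018-06-01 12:00:01 WARN DataNode: Slow BlockReceiver write packet to mirror",
    "2018-06-01 12:00:02 ERROR DataNode: IOException in offerService",
    "2018-06-01 12:00:03 INFO NameNode: Roll Edit Log from 10.0.0.5",
]
sentences = [preprocess(line) for line in log_lines]

# Vectorization: train a small embedding model over the log vocabulary.
model = Word2Vec(sentences, vector_size=3, window=5, min_count=1, epochs=50)

def window_vector(entries):
    # Represent a log window by the mean vector of its tokens.
    tokens = [t for e in entries for t in preprocess(e) if t in model.wv]
    return np.mean([model.wv[t] for t in tokens], axis=0) if tokens else np.zeros(3)

print(window_vector(log_lines))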
After a failure occurs, we can find messages in the logs containing information about that failure, while some others are just messages belonging to the usual operation of the system components. As discussed in [38], using smaller time windows captures the more detailed meaning of a word (in our case, whether it mentions a failure), while large ones capture the context (the general context of the application used). We propose to use two log windows: (1) a context window and (2) an event meaning window. The context window represents the general state of a part of the system; mainly, it enables capturing the application's normal activities. The event meaning window captures log entries in a shorter time after a particular event. Logs in such a window represent specific information about the event. Both windows start when an error or warning message is written into the log. The reason we take this approach is that operators usually do not know when the system starts failing, but they know the precise time of every error or warning written to the logs. We explain the concept of window lengths in Figure 5.
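A minimal sketch of the dual-window extraction, assuming time-sorted (timestamp, severity, text) tuples; the window lengths are placeholders, since suitable lengths are determined empirically in Section V.

from datetime import timedelta

def log_windows(entries, context_len=timedelta(seconds=300),
                event_len=timedelta(seconds=30)):
    # entries: time-sorted list of (timestamp, severity, text) tuples.
    # Every WARN/ERROR entry opens two windows over the subsequent log:
    # a short event meaning window and a longer context window.
    windows = []
    for ts, severity, _ in entries:
        if severity in ("WARN", "ERROR"):
            event = [txt for t, _, txt in entries if ts <= t < ts + event_len]
            context = [txt for t, _, txt in entries if ts <= t < ts + context_len]
            windows.append({"start": ts, "event": event, "context": context})
    return windows

Each window's text can then be vectorized, e.g., with the window_vector helper above, and stored as a vector attribute of the corresponding node.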
B. Using metrics distribution for automatic weighting of node attributes and measuring similarity
The distribution of values for a metric can be used to know how uncommon an observed value is in the system. In this subsection, we explain how we use the cumulative distribution function to our advantage: firstly, by calculating weights automatically inside the graph representation, and secondly, by comparing two numerical attributes taking into account the distribution of their historical values.
1) Automatic weighting of node attributes: There are two ways of defining weights in graphs which represent the importance of the different elements of the system status.
Firstly, thanks to the weight assignment mechanism, operators adjust the importance of a particular metric in the graph representation based on their expert knowledge of a failure. For instance, operators might put a higher weight on the CPU load than on the disk IO for a problem related to a system overload. Thanks to this approach, we do not require operators to know specific characteristics or deviations of metrics. We use the part of their expertise which concerns the importance of metrics in a troubleshooting process. However, for many systems this type of assignment can be impractical; e.g., in a complex system, weights quickly get outdated.
The second possibility for weight assignment in graphs is an automatic weight calculation from available metric data.
In this subsection, we focus on the latter. We propose a mechanism that automatically assigns the importance of an attribute, given its distribution.
According to the troubleshooting activities of IT operators, the more abnormal an attribute value is, the better it describes a particular failure. In this case, we define weights which are proportional to the deviation of the value from the usual value in the attribute's distribution. For instance, using the normal distribution X ∼ N(µ, σ²), where µ stands for the metric mean value and σ stands for the standard deviation, we have the following definition.
Definition 2: The weight of a numerical attribute value a, proportional to its deviation, is defined as w(a) = |a − µ| / σ.

2) Measuring similarity from metric distributions: The similarity function based on metric distributions enables utilizing data containing the historical values of an attribute. The function definition contains the cumulative distribution function (CDF) and its parameters. We define the similarity function between two numerical attributes by the formula similarity = 1 − distance, where distance is the difference between the CDF values of the attributes. For the normal distribution used in the proposed framework, we have the following definition.

Definition 3: Given numerical attributes a1, a2 from two graphs and the distribution of these attributes X ∼ N(µ, σ²), where Φ stands for the CDF of this distribution, their similarity is given by the formula similarity = 1 − |Φµ,σ²(a1) − Φµ,σ²(a2)|.

The above two simple mechanisms allow us to automatically include the importance of attributes in the graph representation of a system state and in the similarity calculation.
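In Python, both definitions reduce to a few lines with scipy.stats.norm; the fitted mean and standard deviation here are made-up numbers standing in for a metric's historical statistics.

from scipy.stats import norm

def auto_weight(value, mu, sigma):
    # Definition 2: weight proportional to the value's deviation from the mean.
    return abs(value - mu) / sigma

def sim_distribution(a1, a2, mu, sigma):
    # Definition 3: one minus the distance between the two values' CDF positions.
    cdf = norm(loc=mu, scale=sigma).cdf
    return 1.0 - abs(cdf(a1) - cdf(a2))

# Example: switch power draw with a historical mean of 55 W and std of 4 W.
print(auto_weight(70.0, 55.0, 4.0))             # far from the mean -> high weight
print(sim_distribution(70.0, 69.0, 55.0, 4.0))  # both deep in the tail -> ~1.0

Note how the CDF-based similarity treats two values deep in the tail as nearly identical even when their absolute difference equals that of two values near the mean, which the min-max formula of Table I would not.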
C. Enabling cross-system diagnostics
Finally, we use the proposed framework to transfer knowledge about failures from one system, which we call the source system, to another, called the target system. In Figure 6, we present the cross-system knowledge transfer problem. A source system and a target system can have different topologies as well as different node contents. We use the proposed graph representation of system states as a medium to transfer knowledge about failures. Then, thanks to the framework, we can compare two states of different systems and calculate the maximum similarity of these states. In the final step, we find the nearest graph, which best describes a target system state using knowledge coming from a source system.
In detail, using our framework, knowledge transfer is possible because of: 1) Calculation of the maximum similarity between two graphs with different structures using different similarity functions (Subsection III-B). The framework finds the maximum similarity by matching proper subgraphs. Also, defining a taxonomy allows for the calculation of the similarity between two nodes that are different but represent the same concept in a domain. For instance, a slave server of Spark and a data node of Hadoop are close to each other inside the taxonomy, because they are both slaves in a master-slave architecture. 2) Inclusion of logs in the graph representation, as they describe in natural language the events that happen in the system, independently of its architecture or resource usage (Subsection IV-A). The two log windows (context and event) contain universal descriptive information, no matter what the differences are between the topologies and components of the two system graphs. 3) Inclusion of the information contained in the distribution of the metrics for a given architecture, through the automatic weight assignment and the similarity function based on the distance between distributions (Subsection IV-B). The metric values registered for the source and target system can be very different depending on their resource usage patterns. Calculating weights and measuring the similarity using their distributions allows for a better comparison between two different systems.

V. EVALUATION: ROOT CAUSE CLASSIFICATION

In this section, we show through a series of experiments the quality of our proposed root cause classification framework. We evaluate different features of the framework and compare them to representative and popular state-of-the-art techniques. We use the f1-score metric, which includes both recall and precision. In this section, we evaluate the quality of the framework in a scenario where the source and the target system are the same. For this task, we use two use cases: a Spark cluster and a Hadoop cluster. We evaluate cross-system diagnostics in Section VI, using Kafka and Cassandra systems.
A. Experimental methodology
Experimental environment. In the first set of experiments, we use the following experimental system to create the dataset. The system comprises:
• 5x AMD servers: 32GB RAM, AMD Opteron(tm) Processor 6168 (12 cores, 1.9 GHz), equipped with an IPMI card and running Ubuntu OS
• Switch D-link DGS-1210-48
• 2x Power Analyzer ZES Zimmer LMG450. The device is a 4-channel power analyzer mounted in a rack and connected between each power supply and the servers and the switch.
The monitoring system acquires 22 metrics representing the system state, such as CPU total load: idle, iowait, softirq, system, user; disk: bytes read, bytes written, IO read, IO write; memory: buffer cache, free, map, used; network: received bytes, received packets, sent bytes, sent packets; and processes: load10, load15, load5, number of running processes. The power meters acquire the energy consumption of the servers and the switch. The probing period is set to 5 seconds. The monitoring system works on the InfluxDB stack (https://www.influxdata.com/) and we use the ElasticSearch stack (https://www.elastic.co/) for log storage.
Workloads. During the experiments, we generate Hadoop and Spark workloads using HiBench [39]. We use workloads such as sort, word count, k-means clustering, and a Bayesian classifier. Each workload takes from 20 min to 2 h. Random workloads run continuously.
B. Failure and anomaly injection
We inject different failures into the experimental environment. Each of the described failures is injected 20 times. We choose a set of failure types which are representative and well-aligned with use cases in real environments. Also, different failures should manifest exclusive symptoms in different metrics and logs. Another criterion for choosing the failure types is that they should cover possible scenarios of missing data, which are often caused by connectivity problems.
The following list presents the injected anomalous workloads and failures.
• High CPU load. A background process running a CPU pattern of 100% load on 90% of the server cores. This failure simulates a scenario of a node slow-down caused by, e.g., an unfinished job or an unwanted process. CPU performance degradation can also simulate a failure of one of many workers in a Big Data cluster.
• High disk load. Random write and read operations on a 10GB file, generated with the FIO utility (https://github.com/axboe/fio/). This failure simulates a scenario of a failed disk in a disk array. Thanks to this failure type, we can observe many HDFS errors.
• High network transfer. 20 threads are uploading and downloading 5GB files. It simulates significant network slowdowns, which can occur as a result of a network infrastructure failure.
• Host shutdown. Immediate node shutdown through the IPMI card. It simulates a node crash, a sudden and unexpected failure of the whole machine.
• Network failure. Physical disconnection.

The symptoms of failures have an understandable impact on system metric values. As we mentioned before, we include the power metrics of the servers and the switch. Regarding the switch power, we can observe different peaks and power values depending not only on the network transfer but also on connections and disconnections. In Figure 7, we present the switch power distribution depending on the injected failure, and the reference distribution for the system running a random workload without any failure injected. We can observe that different failures are characterized by different power consumption values. These distributions increase the quality of failure classification in similarity evaluation. For instance, high disk load manifests in a low switch power consumption, while high network use manifests in a significantly higher median value. To evaluate the quality of the root cause classification, we use the f1-score metric, which is defined as follows.
Definition 4: f1-score is the harmonic mean of precision and recall: f1 = 2 · precision · recall / (precision + recall). For instance, with precision 0.8 and recall 0.6, f1 ≈ 0.69.

Fig. 9. Plot presenting root cause classification quality depending on the mechanism used. Average f1-score is calculated from all of the injected failures. The proposed framework performs better than state-of-the-art solutions (Word2Vec).

C. Evaluation: Use of logs for root cause classification

Firstly, we evaluate different methodologies and their configurations for the use of logs in the classification task. In the evaluation, we present the results of solving the following problems.
• Model training vocabulary. We fit Word2Vec models using different vocabularies. This can be a vocabulary specific to a particular domain or a general dictionary, e.g., an English one. For instance, we can train such a model with logs from a Spark cluster and use this model to vectorize Hadoop logs.
• Model size. We evaluate different numbers of dimensions of a vocabulary space (vector size).
• Key terms extraction. We compare the performance of using the whole available log entries with that of using only key terms describing the system state.
• Log window length. The window size is a trade-off between generalization of logs and capturing precise event information. Taking too much text can fuzzify the meaning of the event; conversely, taking too little text can mangle the analyzed system state. We evaluate different window lengths for both event and context windows.
Firstly, we test how different sources of vocabulary and model sizes impact the quality of the classification task. We create Word2Vec models with the process described in Subsection IV-A, evaluating different vector sizes and vocabularies used for model training. In Figure 8, we see the average f1-scores of the failure classification for the two use cases: the Spark and the Hadoop cluster. We present only the best results achieved during the evaluation of different log window sizes. Also, we present summarized results of the vector size evaluation: for vector sizes between 3 and 80, f1-scores do not change much. In Figure 8, the inner groups stand for the source of the vocabulary used for model training. For both Hadoop and Spark, the classification performs best when the same vocabulary is used for model training and vectorization. For both use cases, the models perform well with small vector sizes: 3 for Hadoop and 2 for Spark.
In the next step, we test different approaches to extracting information from logs and representing it in graphs. For the first approach, we use Word2Vec, as described above. In the second approach, we use the SGRank [40] algorithm to extract the key terms which best describe a system state. This algorithm combines statistical methods, e.g., TF-IDF, with graph-based approaches to key term extraction. In Figure 9, we confirm that using the whole text is the best method to represent the log meaning [33].
D. Evaluation: Root cause classification via similarity of weighted graphs
In this subsection, we present the results of the evaluation of root cause classification. We test four different configurations of the proposed framework and compare them with state-of-the-art methods. We show how augmenting the dataset used for the classification task improves its performance. In Figure 9, we present the results of the evaluation: average f1-score and accuracy. The average f1-score is calculated over all of the injected failures. In evaluations where it is emphasized that we use automatic attribute importance assignment, we utilize both the similarity function based on distributions and the automatic weight calculation. In the others, we use equal weights in a graph.
We can see that the proposed framework, which contains the context and event log windows and automatic attribute importance calculation, performs better than state-of-the-art methods. Considering the performance for the two use cases, graphs with automatic weights reveal the best performance. Regarding the Hadoop use case, accuracy reaches 0.72 and f1-score reaches 0.71. As for the Spark use case, the f1-score is a little lower at 0.61, with an accuracy of 0.71. Note that in the case of Hadoop, adding the automatic weight calculation lowers the f1-score. Most probably, this is because resource usage does not need to follow a normal distribution [41], which we use as an estimator in the evaluation.
We evaluate the proposed framework for different event and context window lengths. In Figure 10, we present the detailed results of this evaluation. The performance changes smoothly, and there are local maxima of the f1-score. These maxima show balance points between log generalization and the extraction of precise information about a particular event. The greater the log window length, the more fuzzified the information about an event held in the analyzed window.
In Figure 11, we present detailed evaluation results for each of the injected failures. We compare the use of raw logs (Word2Vec) with the proposed framework comprising automatically weighted graphs. The proposed framework performs significantly better than Word2Vec, especially for the classification of high CPU load and host shutdown. There is no observable difference in the performance of the proposed framework between the Spark and the Hadoop use case. The exception is high network transfer, which is classified well only for Hadoop by both Word2Vec and the proposed framework: high network transfer manifests in characteristic log entries for Hadoop, but for Spark only in network metrics. It is also important to emphasize that these results come from the similarity evaluation of graphs created automatically, without any weight adjustment by a human.

Fig. 10. Plot presenting the quality of failure classification via graphs with equal weights depending on the log window sizes. The average f1-score is calculated over all of the injected failures.

VI. EVALUATION AND EXPERIMENTS: CROSS-SYSTEM DIAGNOSTICS - TRANSFERRING KNOWLEDGE

A. Experimental environment

In this section, we evaluate our approach in a more cloud-oriented environment by running microservice architectures made up of containers. We use Grid'5000, a customizable testbed that provides access to different computing resources and infrastructures. We deploy a cluster of 7 virtual machines with 16 GB of RAM and four cores each. We install DC/OS 7 on these machines, a container orchestration tool that allows us to deploy the microservice architectures. The setup is 1 master node, 1 public node, and 5 private nodes. Additional information about the DC/OS components can be found on their website 8. We use two additional representative Big Data architectures to perform root cause analysis. The first one is a Cassandra deployment with 5 Cassandra containers that are continuously queried by 10 containers with the Yahoo Cloud Serving Benchmark [42] installed. The second one is a Kafka architecture, in which we have 5 brokers, 10 producers that push messages to the Kafka cluster, and 10 consumers that read those messages. Additionally, the Kafka brokers need a Zookeeper [43] instance to coordinate them. A simplified version of the graph representations we use for these deployments is shown in Figure 12. Note that these two architectures are very similar, with a decentralized cluster of servers or brokers that interact with each other and clients that read or write data into this cluster. This scenario is a suitable one for our knowledge transfer approach, since failures that happen in one system will have a similar effect if they also occur in the other one.

Fig. 12. A simplified version of the graph representations we use for the microservice architectures. On the left, the Kafka architecture with a Zookeeper instance coordinating the brokers, and producers and consumers using the message queue. On the right, a Cassandra cluster with the YCSB clients. Notice how the VMs are connected to the containers they are hosting through edges that represent this relationship.
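A sketch of how such a deployment graph could be encoded, assuming the networkx library; the node names, attributes, and the "hosts" relation are placeholders mirroring Fig. 12, not the framework's actual schema.

```python
import networkx as nx

g = nx.Graph()
# VMs and the containers they host, connected by "hosts" edges (cf. Fig. 12).
g.add_node("vm-1", type="Server", cpu_load=0.35)
g.add_node("kafka-broker-1", type="Container", cpu_load=0.60)
g.add_node("zookeeper-1", type="Container", cpu_load=0.10)
g.add_edge("vm-1", "kafka-broker-1", relation="hosts")
g.add_edge("vm-1", "zookeeper-1", relation="hosts")
g.add_edge("kafka-broker-1", "zookeeper-1", relation="coordinates")
```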
B. Methodology
We injected the failures in both the hosts and the containers. For the hosts, we use the same high CPU, high disk, and high network transfer anomalies as in the Spark scenario to stress the machines. For the containers, we pause them through docker pause instead of using host shutdown and network failure, because a container cannot be physically disconnected from the network as a host can. The anomalies are injected six times each, into one random element of the architecture, for 120 seconds.
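For illustration, the container-level injection could be driven as follows; the container names are hypothetical, while the docker pause/unpause commands and the 120-second duration follow the setup described above.

```python
import random
import subprocess
import time

containers = ["kafka-broker-1", "cassandra-node-3"]   # hypothetical names
target = random.choice(containers)                     # one random element
subprocess.run(["docker", "pause", target], check=True)
time.sleep(120)                                        # anomaly duration
subprocess.run(["docker", "unpause", target], check=True)
```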
C. Evaluation: Cross-system diagnostics
We present the detailed results of the evaluation in Figure 13. The average f1-score is 0.77 when using Cassandra as the source system and Kafka as the target one. In the reversed configuration, the result is 0.76. Note that the scores of the cross-system diagnostics are better than in the first evaluation of the framework, due to the different number of types of injected failures. Both quality results are approximately equal, thanks to the symmetry of the similarity function. The small difference is caused by the task of finding the nearest graph (the one with the highest similarity score), which is not always symmetric. Considering that the two systems differ in their topology, behavior, and logs, the results show the high performance of the proposed framework.

Fig. 13. Plot presenting the results of cross-system diagnostics via finding the nearest graph representing an anomalous state of a system. Results of two cases are presented: 1) source system: Cassandra, target system: Kafka; 2) source system: Kafka, target system: Cassandra. Average f1-score and accuracy: 1) 0.76, 0.77; 2) 0.77, 0.77.
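The cross-system diagnosis step itself reduces to an argmax over graph similarities, as in the sketch below; graph_similarity stands in for the framework's weighted-graph similarity function and is not defined here.

```python
def diagnose(target_graph, labeled_source_graphs, graph_similarity):
    """Label the target graph with the cause of the most similar source graph."""
    best = max(labeled_source_graphs,
               key=lambda item: graph_similarity(target_graph, item["graph"]))
    return best["cause"]
```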
VII. DISCUSSION AND CONCLUSION
In this paper, we proposed a framework for finding the nearest failure cause via similarity evaluation of weighted graphs. The framework aims to diagnose one system when the knowledge about failures is acquired from another system with a different structure. An example would be a new system that has just started operating, fails, and is hard to diagnose. The proposed framework also aims to facilitate knowledge transfer between systems and operators. Firstly, we described the whole framework and its contributions. The most significant contributions are the automatic calculation of metric weights, the integration of logs with system topology and metrics into a graph representation of a system, and the leveraging of historical metric values for similarity calculations. Then, we evaluated the proposed framework with four different systems in total. We injected common anomalies and failures, such as hardware overload, node crash, and network disconnections. In the first evaluation section, we used Spark and Hadoop clusters. We confirmed the quality of root cause classification, which achieves an average f1-score of 0.71 for Hadoop and 0.61 for Spark. These results show that the framework outperforms state-of-the-art methods. In the second evaluation, we utilized a cloud environment of containers and evaluated cross-system diagnostics via knowledge transfer, that is, diagnosing a target system when knowledge about failure causes and anomalous states is known only from a source system. We ran two scenarios: Kafka acting as the source system and Cassandra as the target one, and vice versa. Cross-system diagnostics reaches an average f1-score of 0.77. The achieved results confirm that the proposed framework, and in particular its ability of knowledge transfer, allows approaching the state of self-manageable IT systems.
In the next stages of research, we plan to focus on:

• Evaluation of the framework in real large-scale environments. An integration of the proposed framework with a proactive failure prevention system might also be useful.

• Integration of the knowledge transfer framework with knowledge exploration solutions. Such a system could automatically mine knowledge about failures from parts of the system.

• Automatic taxonomy construction, which would make the knowledge transfer much more automated.

• Explainable knowledge transfer in cross-system diagnostics.

• Distinguishing random errors from those that are critical for future system performance and reliability.

• A mechanism for the automatic propagation of weights to anomalous regions inside graphs.

• Research on predicting failures with the use of transfer learning.
Fig. 1. An example taxonomy defining the equipment type used in the evaluation. For instance, using the ontological similarity formula from Table I: similarity(Master, Slave) = 0.66, similarity(Master, Switch) = 0.4, similarity(Server, Switch) = 0.5.

Fig. 4. An example process of transforming log entries to vectors.

Fig. 5. Scheme presenting the context and event windows of logs. Both windows start on the first Error or Warning message.

Fig. 6. Scheme illustrating the idea of cross-system graph comparison.

Fig. 7. The impact of different failure types on the power consumption of a switch. In the random workload, no failures are injected.

Fig. 8. Plot presenting the quality of root cause classification depending on the number of dimensions used in the Word2Vec model and on the training vocabulary source. Log window length: 30 s.

Fig. 11. Plots presenting the quality of failure classification for Word2Vec and the proposed framework. (a) Word2Vec model with parameters reaching the maximum quality, chosen from Figure 8; log window length: 30 s. (b) Automatically weighted graphs; context window length: 30 s, event window length: 10 s, metrics window length: 120 s.
TABLE I. SIMILARITY FUNCTIONS USED IN THE PROPOSED FRAMEWORK

Type of attributes | Similarity function
Numerical | 1 − scaled_distance(a1, a2)
Vector | cos(a1, a2); inverse Euclidean distance; Minkowski p distance
Categorical | 1 if a1 == a2, else 0
Ontological | Modified Wu and Palmer [37] similarity metric: 2·d(C) / (d(C1) + d(C2))
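The ontological row of Table I can be made concrete with the example taxonomy of Fig. 1; the dictionary-based taxonomy encoding below is our own assumption, but the asserted values match the caption of Fig. 1.

```python
# Equipment -> Server -> {Master, Slave}; Equipment -> Switch (Fig. 1).
parent = {"Server": "Equipment", "Switch": "Equipment",
          "Master": "Server", "Slave": "Server"}

def depth(node):                      # depth of a concept; the root has depth 1
    d = 1
    while node in parent:
        node, d = parent[node], d + 1
    return d

def ancestors(node):                  # node itself plus all of its ancestors
    chain = [node]
    while node in parent:
        node = parent[node]
        chain.append(node)
    return chain

def wu_palmer(c1, c2):                # modified Wu and Palmer similarity [37]
    common = next(a for a in ancestors(c1) if a in set(ancestors(c2)))
    return 2 * depth(common) / (depth(c1) + depth(c2))

assert round(wu_palmer("Master", "Slave"), 2) == 0.67   # 2/3, i.e., 0.66 in Fig. 1
assert wu_palmer("Master", "Switch") == 0.4
assert wu_palmer("Server", "Switch") == 0.5
```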
1 http://cloudcomputing.sys-con.com/node/4054335/
2 https://www.ibm.com/blogs/bluemix/2016/06/moving-devops-noopsmicroservice-architecture-bluemix/
3 http://www.bmc.com/blogs/itops-devops-and-noops-oh-my/
7 https://dcos.io/
8 https://mesosphere.com/
ACKNOWLEDGMENT

This research is supported by the BigStorage project (ref. 642963), funded by the Marie Skłodowska-Curie ITN for Early Stage Researchers, and is part of a doctorate at UPC. Special thanks to the German Climate Supercomputing Center (DKRZ, Hamburg, Germany) and Grid'5000 (Inria, France) for the access to computing resources for the experiments.
REFERENCES

[1] T. Underwood, "The death of system administration," ;login: The Magazine of USENIX & SAGE, vol. 39, no. 2, pp. 6-8, 2014.
[2] D. Kwon, H. Kim, J. Kim, S. C. Suh, I. Kim, and K. J. Kim, "A survey of deep learning-based network anomaly detection," Cluster Computing, pp. 1-13, 2017.
[3] S. M. Erfani, S. Rajasegarar, S. Karunasekera, and C. Leckie, "High-dimensional and large-scale anomaly detection using a linear one-class SVM with deep learning," Pattern Recognition, vol. 58, pp. 121-134, 2016.
[4] M. Zaharia, M. Chowdhury, M. J. Franklin, S. Shenker, and I. Stoica, "Spark: Cluster computing with working sets," in Proc. 2nd USENIX Conf. on Hot Topics in Cloud Computing (HotCloud'10), 2010.
[5] V. K. Vavilapalli, A. C. Murthy, C. Douglas, S. Agarwal, M. Konar, R. Evans, T. Graves, J. Lowe, H. Shah, S. Seth, B. Saha, C. Curino, O. O'Malley, S. Radia, B. Reed, and E. Baldeschwieler, "Apache Hadoop YARN: Yet another resource negotiator," in Proc. 4th Annual Symposium on Cloud Computing (SoCC '13), 2013.
[6] A. Lakshman and P. Malik, "Cassandra: A decentralized structured storage system," SIGOPS Oper. Syst. Rev., vol. 44, no. 2, pp. 35-40, 2010.
[7] J. Kreps, N. Narkhede, and J. Rao, "Kafka: A distributed messaging system for log processing," 2011.
[8] J. Mace, R. Roelke, and R. Fonseca, "Pivot tracing: Dynamic causal monitoring for distributed systems," in Proc. 25th Symposium on Operating Systems Principles (SOSP '15), 2015, pp. 378-393.
[9] L. Akoglu, H. Tong, and D. Koutra, "Graph based anomaly detection and description: A survey," Data Mining and Knowledge Discovery, vol. 29, no. 3, pp. 626-688, 2015.
[10] A. Abu-Samah, M. Shahzad, E. Zamai, and A. Said, "Failure prediction methodology for improved proactive maintenance using Bayesian approach," IFAC-PapersOnLine, vol. 48, no. 21, pp. 844-851, 2015.
[11] C. Cascone, D. Sanvito, L. Pollini, A. Capone, and B. Sansò, "Fast failure detection and recovery in SDN with stateful data plane," International Journal of Network Management, vol. 27, no. 2, e1957, 2016.
[12] D. Hsu, "Anomaly detection on graph time series," CoRR, abs/1708.02975, 2017.
[13] C. C. Noble and D. J. Cook, "Graph-based anomaly detection," in Proc. 9th ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining, 2003, pp. 631-636.
[14] T. Schindler, "Anomaly detection in log data using graph databases and machine learning to defend advanced persistent threats," CoRR, abs/1802.00259, 2018.
[15] J. Lu, V. Behbood, P. Hao, H. Zuo, S. Xue, and G. Zhang, "Transfer learning using computational intelligence: A survey," Knowledge-Based Systems, vol. 80, pp. 14-23, 2015.
[16] S. J. Pan and Q. Yang, "A survey on transfer learning," IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 10, pp. 1345-1359, 2010.
[17] L. A. Zager and G. C. Verghese, "Graph similarity scoring and matching," Applied Mathematics Letters, vol. 21, no. 1, pp. 86-94, 2008.
[18] D. Koutra, A. Parikh, A. Ramdas, and J. Xiang, "Algorithms for graph similarity and subgraph matching," 2011.
[19] P. Papadimitriou, A. Dasdan, and H. Garcia-Molina, "Web graph similarity for anomaly detection," Journal of Internet Services and Applications, vol. 1, no. 1, pp. 19-30, 2010.
[20] W. H. Gomaa and A. A. Fahmy, "A survey of text similarity approaches," International Journal of Computer Applications, vol. 68, no. 13, 2013.
[21] A. Islam and D. Inkpen, "Semantic text similarity using corpus-based word similarity and string similarity," ACM Transactions on Knowledge Discovery from Data (TKDD), vol. 2, no. 2, p. 10, 2008.
[22] J. W. G. Putra and T. Tokunaga, "Evaluating text coherence based on semantic similarity graph," in Proc. TextGraphs-11: Workshop on Graph-based Methods for Natural Language Processing, 2017, pp. 76-85.
[23] P. Li, C. Wu, S. Zhang, X. Yu, and H. Zhong, "Mining users' preference similarities in e-commerce systems based on webpage navigation logs," International Journal of Computers, Communications & Control, vol. 12, no. 5, 2017.
[24] S. Flesca, S. Greco, A. Tagarelli, and E. Zumpano, "Mining user preferences, page content and usage to personalize website navigation," World Wide Web, vol. 8, no. 3, pp. 317-345, 2005.
[25] R. Mavlyutov, C. Curino, B. Asipov, and P. Cudre-Mauroux, "Dependency-driven analytics: A compass for uncharted data oceans," Tech. Rep., Microsoft, October 2016.
[26] T. Li, C. Zeng, Y. Jiang, W. Zhou, L. Tang, Z. Liu, and Y. Huang, "Data-driven techniques in computing system management," ACM Computing Surveys, vol. 50, no. 3, pp. 45:1-45:43, 2017.
[27] F. Salfner and S. Tschirpke, "Error log processing for accurate failure prediction," in Proc. 1st USENIX Conf. on Analysis of System Logs, 2008.
[28] Z. Zheng, Z. Lan, B. H. Park, and A. Geist, "System log pre-processing to improve failure prediction," in Proc. IEEE/IFIP Int. Conf. on Dependable Systems and Networks (DSN), 2009, pp. 572-577.
[29] B. C. Tak, S. Tao, L. Yang, C. Zhu, and Y. Ruan, "LOGAN: Problem diagnosis in the cloud using log-based reference models," in Proc. IEEE Int. Conf. on Cloud Engineering (IC2E), 2016, pp. 62-67.
[30] M. Du, F. Li, G. Zheng, and V. Srikumar, "DeepLog: Anomaly detection and diagnosis from system logs through deep learning," in Proc. 2017 ACM SIGSAC Conf. on Computer and Communications Security (CCS '17), 2017, pp. 1285-1298.
[31] T. Jia, L. Yang, P. Chen, Y. Li, F. Meng, and J. Xu, "LogSed: Anomaly diagnosis through mining time-weighted control flow graph in logs," in Proc. IEEE 10th Int. Conf. on Cloud Computing (CLOUD), 2017, pp. 447-455.
[32] S. Kobayashi, K. Fukuda, and H. Esaki, "Mining causes of network events in log data with causal inference," in Proc. IFIP/IEEE Symposium on Integrated Network and Service Management, 2017, pp. 45-53.
[33] T. Mikolov, I. Sutskever, K. Chen, G. Corrado, and J. Dean, "Distributed representations of words and phrases and their compositionality," in Proc. 26th Int. Conf. on Neural Information Processing Systems (NIPS'13), 2013, pp. 3111-3119.
[34] Y. Goldberg and O. Levy, "word2vec explained: Deriving Mikolov et al.'s negative-sampling word-embedding method," CoRR, abs/1402.3722, 2014.
[35] C. Bertero, M. Roy, C. Sauvanaud, and G. Tredan, "Experience report: Log mining using natural language processing and application to anomaly detection," in Proc. IEEE 28th Int. Symposium on Software Reliability Engineering (ISSRE), 2017, pp. 351-360.
[36] S. Sorlin and C. Solnon, "Reactive tabu search for measuring graph similarity," in Proc. 5th IAPR Int. Conf. on Graph-Based Representations in Pattern Recognition (GbRPR'05), 2005, pp. 172-182.
[37] Z. Wu and M. Palmer, "Verbs semantics and lexical selection," in Proc. 32nd Annual Meeting of the Association for Computational Linguistics, 1994.
[38] O. Levy and Y. Goldberg, "Dependency-based word embeddings," in Proc. 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 2014, pp. 302-308.
[39] S. Huang, J. Huang, J. Dai, T. Xie, and B. Huang, "The HiBench benchmark suite: Characterization of the MapReduce-based data analysis," in New Frontiers in Information and Software as Services, Springer, 2011, pp. 209-228.
[40] S. Danesh, T. Sumner, and J. H. Martin, "SGRank: Combining statistical and graphical methods to improve the state of the art in unsupervised keyphrase extraction," in Proc. 4th Joint Conf. on Lexical and Computational Semantics, 2015, pp. 117-126.
[41] K. Ren, Y. Kwon, M. Balazinska, and B. Howe, "Hadoop's adolescence: An analysis of Hadoop usage in scientific workloads," Proc. VLDB Endow., vol. 6, no. 10, pp. 853-864, 2013. doi: 10.14778/2536206.2536213.
[42] B. F. Cooper, A. Silberstein, E. Tam, R. Ramakrishnan, and R. Sears, "Benchmarking cloud serving systems with YCSB," in Proc. 1st ACM Symposium on Cloud Computing (SoCC '10), 2010, pp. 143-154.
[43] P. Hunt, M. Konar, F. P. Junqueira, and B. Reed, "ZooKeeper: Wait-free coordination for internet-scale systems," in Proc. 2010 USENIX Annual Technical Conference (USENIX ATC '10), 2010.
| [
"https://github.com/axboe/fio/"
]
|
[
"Electromagnetic enhancement generated by ˆ A p term of cavity quantum electrodynamics demonstrated by single coupled systems between plasmon and molecular exciton",
"Electromagnetic enhancement generated by ˆ A p term of cavity quantum electrodynamics demonstrated by single coupled systems between plasmon and molecular exciton"
]
| [
"Tamitake Itoh \nNano-Bioanalysis Research Group\nHealth Research Institute\nNational Institute of Advanced Industrial Science and Technology (AIST)\n761-0395TakamatsuKagawaJapan\n",
"Yuko S Yamamoto \nSchool of Materials Science\nInstitute of Science and Technology (JAIST)\n923-1292NomiIshikawaJapan Advanced, Japan\n"
]
| [
"Nano-Bioanalysis Research Group\nHealth Research Institute\nNational Institute of Advanced Industrial Science and Technology (AIST)\n761-0395TakamatsuKagawaJapan",
"School of Materials Science\nInstitute of Science and Technology (JAIST)\n923-1292NomiIshikawaJapan Advanced, Japan"
]
| []
| In non-relativistic quantum electrodynamics, an electromagnetic (EM) interaction between a photon and a molecular exciton can be expressed by an Â·p̂ term and an Â² term, where Â and p̂ are the operators of the vector potential of the EM field and the momentum of the exciton, respectively. We developed a method for investigating the contribution of the Â·p̂ and Â² terms to EM enhancement, which occurs in coupled systems composed of a plasmon polariton and a molecular exciton. The spectral shapes of the Â·p̂ and Â² terms, and of the EM enhancement, were experimentally obtained from absorption, Rayleigh scattering, and ultrafast surface-enhanced fluorescence (ultrafast SEF) of the systems, respectively. The relationships between them reveal that the absorption spectra correctly reproduce the EM enhancement, indicating that ultrafast SEF can be described as a two-step process using the Â·p̂ terms. Furthermore, we demonstrate that the origin of the spectral deviation between Rayleigh scattering and EM enhancement is subradiant plasmon resonance, whose spectra are visualized in the absorption but not in the Rayleigh scattering, with numerical calculation based on electromagnetism. | null | [
"https://export.arxiv.org/pdf/2304.02874v1.pdf"
]
| 257,985,408 | 2304.02874 | 0effac38169fd0a45e9232209db9038b382e9019 |
Electromagnetic enhancement generated by Â·p̂ term of cavity quantum electrodynamics demonstrated by single coupled systems between plasmon and molecular exciton
Tamitake Itoh
Nano-Bioanalysis Research Group
Health Research Institute
National Institute of Advanced Industrial Science and Technology (AIST)
761-0395, Takamatsu, Kagawa, Japan
Yuko S Yamamoto
School of Materials Science
Japan Advanced Institute of Science and Technology (JAIST)
923-1292, Nomi, Ishikawa, Japan
Electromagnetic enhancement generated by Â·p̂ term of cavity quantum electrodynamics demonstrated by single coupled systems between plasmon and molecular exciton
In non-relativistic quantum electrodynamics, an electromagnetic (EM) interaction between a photon and a molecular exciton can be expressed by an Â·p̂ term and an Â² term, where Â and p̂ are the operators of the vector potential of the EM field and the momentum of the exciton, respectively. We developed a method for investigating the contribution of the Â·p̂ and Â² terms to EM enhancement, which occurs in coupled systems composed of a plasmon polariton and a molecular exciton. The spectral shapes of the Â·p̂ and Â² terms, and of the EM enhancement, were experimentally obtained from absorption, Rayleigh scattering, and ultrafast surface-enhanced fluorescence (ultrafast SEF) of the systems, respectively. The relationships between them reveal that the absorption spectra correctly reproduce the EM enhancement, indicating that ultrafast SEF can be described as a two-step process using the Â·p̂ terms. Furthermore, we demonstrate that the origin of the spectral deviation between Rayleigh scattering and EM enhancement is subradiant plasmon resonance, whose spectra are visualized in the absorption but not in the Rayleigh scattering, with numerical calculation based on electromagnetism.
I. INTRODUCTION
The cross-sections of the electromagnetic (EM) interactions of coupled systems composed of a plasmon polariton and a molecular exciton are largely enhanced by a tightly confined EM field inside the plasmonic cavity [1]. In particular, a molecule located inside the cavity, such as a nanogap between metallic nanoparticle (NP) aggregates, exhibits an EM enhancement factor of up to 10¹⁰ in Raman scattering, realizing single molecule (SM) spectroscopy under the resonant Raman condition [2-5].

This phenomenon is called surface-enhanced resonant Raman scattering (SERRS), and such a nanogap is called a hotspot (HS). HSs have received considerable attention in the cavity quantum electrodynamics (cQED) field because HSs exhibit various exotic phenomena, such as nonlinear spectroscopy with CW laser excitation [6], ultra-fast surface-enhanced fluorescence (ultrafast SEF) [7,8], vibrational pumping [9], and the field gradient effect [10]. Furthermore, the EM coupling energy between a plasmon polariton and a molecular exciton at HSs exceeds several hundred meV, resulting in new physics and chemistry, e.g., strong to ultrastrong coupling [11], molecular optomechanics [12], and polariton chemistry [13,14].

In cQED, the EM interaction between a photon and a molecular exciton is expressed by the Â·p̂ and Â² terms, where Â and p̂ are the operators of the vector potential and momentum, respectively [15,16]. Under the dipole approximation, in which the light wavelength is sufficiently larger than the molecule size as in Figs. 1(a1) and 1(a2), the Â·p̂ and Â² terms generate absorption (or emission) and Rayleigh scattering, respectively, in the first order terms of a perturbation calculation. However, it is not obvious that a dipole approximation is applicable in describing EM interactions inside HSs, in which EM fields are confined within the order of nanometers as in Figs. 1(b1) and 1(b2). Indeed, a breakdown of the dipole approximation has been observed in vibrational spectroscopy using HSs [10,17-19]. Furthermore, strong coupling between molecular excitons and the vacuum EM field tightly confined in HSs has been reported, as illustrated in Fig. 1(b3) [10-14]. Therefore, a method for evaluating the contribution of the Â·p̂ and Â² terms to EM enhancement needs to be developed for various plasmonic systems having HSs [1,11].

In this study, the contributions of the Â·p̂ and Â² terms to EM enhancement at HSs were evaluated by ultrafast SEF using single silver NP dimer gaps as the HSs. The ultrafast SEF appears as a broad background in SERRS spectra when the SEF rate becomes faster than the vibrational decay rate of the excited electronic states [6,7].

Spectroscopic methods for obtaining absorption, Rayleigh scattering, and ultra-fast SEF from HSs were developed to examine the spectral shapes of the Â·p̂ term, the Â² term, and the EM enhancement, respectively. We found that absorption spectra show the EM enhancement more consistently than Rayleigh scattering spectra, revealing that ultrafast SEF can be described by a two-step process using two Â·p̂ terms. This result indicates that the dipole approximation is applicable in describing EM interactions inside HSs. We also showed, by electromagnetic calculation with changes to the morphology of the dimers, that the spectral inconsistency between Rayleigh scattering and EM enhancement is induced by subradiant plasmon resonance, whose spectra are visualized in the absorption but not in the Rayleigh scattering.
II. THEORETICAL MODEL
We explain the relationship between the Â·p̂ and Â² terms and the linear optical processes, i.e., fluorescence, Rayleigh scattering, and spontaneous Raman scattering [15]. The Hamiltonian describing a set of molecular electrons interacting with an EM field is

$$\hat{H} = \hat{H}_e + \hat{H}_{ph} + \hat{H}_{int}, \qquad (1)$$

where $\hat{H}_e$ and $\hat{H}_{ph}$ are the free Hamiltonians of the electrons and the EM field, respectively. More correctly, $\hat{H}_e$ expresses a Hamiltonian of the molecular excitons strongly coupled with the vacuum fluctuation of the EM field at a HS, as shown in Fig. 1(b3) [14]. $\hat{H}_{int}$, which indicates the EM interactions between electrons and the EM field, is described as

$$\hat{H}_{int} = -\sum_{i=1}^{N} \frac{e_i}{m_i}\,\hat{A}(\mathbf{r}_i)\cdot\hat{p}_i + \sum_{i=1}^{N} \frac{e_i^2}{2m_i}\,\hat{A}(\mathbf{r}_i)^2, \qquad (2)$$

where $e_i$, $m_i$, and $\mathbf{r}_i$ are the charge, effective mass, and position of the $i$th electron, respectively [15]. The derivation of Eq. (2) is described in the SI. $\hat{p}_i$ is the operator of the momentum of the $i$th electron. In the mode expansion of the vector potential $\hat{A}$ in Eq. (3), $\hbar$, $\omega_k$, $\hat{a}^{\dagger}_{k,\lambda}$, and $\hat{a}_{k,\lambda}$ are the Planck constant, an angular frequency of light, a creation operator, and an annihilation operator, respectively, and $\lambda$ (= 1 or 2) is the direction of polarization. $V$ in Eq. (3) is the mode volume of the EM field. Note that the origin of the EM enhancement of plasmonic systems is the small value of $V$ [1]. The first and second terms of Eq. (2) are called the Â·p̂ term, $\hat{H}_{int}(\hat{A}\cdot\hat{p})$, and the Â² term, $\hat{H}_{int}(\hat{A}^2)$, respectively.

The time-dependent Schrödinger equation describing the interactions between the molecular electrons and the EM field is written for $\Psi(\mathbf{r}, t)$, the wave function of the entire system; $\Psi_n^0$ and $\varepsilon_n^0$ are the wave function and energy of the system in the $n$th state, respectively, where the subscript 0 denotes that these quantities are associated with the unperturbed system. By assuming that the system is initially in the state $|i^0\rangle$, the transition amplitude $b_f(t)$ in perturbation theory is, to the first and second orders of the perturbation term,

$$b_f(t) \propto \langle f^0|\hat{H}_{int}|i^0\rangle + \sum_n \langle f^0|\hat{H}_{int}|n^0\rangle \langle n^0|\hat{H}_{int}|i^0\rangle, \qquad (7)$$

where the subscripts $f$ and $n$ denote the final and intermediate states, respectively.

We first discuss the transitions induced by $\langle f^0|\hat{H}_{int}|i^0\rangle$. Using the dipole approximation of the Â·p̂ term as in Eq. (S44), the first term corresponds to a one-photon absorption or emission, well known from Fermi's golden rule, and is described as

$$\langle f^0|\hat{H}_{int}(\hat{A}\cdot\hat{p})|i^0\rangle = -\sum_{i=1}^{N} \frac{e_i}{m_i}\,\langle f^0|\hat{A}(\mathbf{r}_i)\cdot\hat{p}_i|i^0\rangle, \qquad (8)$$

where $\psi_i$ and $\psi_f$ are the eigenfunctions of the initial and final states of the electron system, respectively, and $\varepsilon_i$ and $\varepsilon_f$ are their energies, respectively, as in Eqs. (S28) and (S29). Fluorescence is composed of a one-photon absorption and a one-photon emission; thus, it is the two-step one-photon process of Eq. (8). Regarding the Â² term as in Eq. (S50), the latter term of Eq. (2) is described as

$$\langle f^0|\hat{H}_{int}(\hat{A}^2)|i^0\rangle = \sum_{i=1}^{N} \frac{e_i^2}{2m_i}\,\langle f^0|\hat{A}(\mathbf{r}_i)^2|i^0\rangle. \qquad (9)$$

$\langle f^0|\hat{H}_{int}(\hat{A}^2)|i^0\rangle$ can be nonzero for $i = f$, meaning that the two-photon process of Eq. (9) corresponds to Rayleigh scattering.

We next discuss the transitions using the second-order perturbation term $\sum_n \langle f^0|\hat{H}_{int}|n^0\rangle \langle n^0|\hat{H}_{int}|i^0\rangle$ in Eq. (7). The matrix elements corresponding to Rayleigh and Raman scattering should include $\hat{H}_{int}(\hat{A}\cdot\hat{p})$ as a non-zero term. Thus, this term is rewritten as

$$\sum_n \langle f^0|\hat{H}_{int}(\hat{A}\cdot\hat{p})|n^0\rangle \langle n^0|\hat{H}_{int}(\hat{A}\cdot\hat{p})|i^0\rangle. \qquad (10)$$

The matrix element of Eq. (10) can contribute to Rayleigh scattering for $i = f$ and to Raman scattering for $i \neq f$. The Rayleigh scattering intensity for Eq. (10) may be much weaker than that for Eq. (9) because of the higher order of the perturbation. Thus, the matrix element in Eq. (10) mainly contributes to Raman scattering. In short, Rayleigh scattering is generated by $\langle f^0|\hat{H}_{int}(\hat{A}^2)|i^0\rangle$, fluorescence is a two-step process of $\langle f^0|\hat{H}_{int}(\hat{A}\cdot\hat{p})|i^0\rangle$, and Raman scattering is generated by $\sum_n \langle f^0|\hat{H}_{int}(\hat{A}\cdot\hat{p})|n^0\rangle \langle n^0|\hat{H}_{int}(\hat{A}\cdot\hat{p})|i^0\rangle$. Regarding the contribution of $\hat{H}_{int}(\hat{A}\cdot\hat{p})$ to fluorescence and Raman scattering, the spectral shapes of the EM enhancement of ultra-fast SEF and SERRS are expected to be correlated with the absorption spectra of the coupled systems. If the dipole approximation breaks down at HSs, both the Â·p̂ and Â² terms can contribute to Rayleigh scattering and to absorption (or emission or Raman scattering), and therefore such a spectral correlation cannot be observed.
III. MATERIALS AND METHODS
We describe the synthesis and spectroscopic investigation of the HSs of a coupled system composed of silver NP dimers including dye molecules. We have been studied that the coupling energy of the system reaches several tens to hundreds meV [20][21][22].
11
In measuring the scattering (extinction) spectra, the N.A. of the objective lens (LCPlanFL 100×, Olympus, Tokyo) was set to be 0.6 (1.3) to realize dark-(bright-) field illumination. Gold NPs (mean diameters of 60, 80, and 100 nm; EMGC40, Funakoshi, Japan) were used to convert the scattering (extinction) intensities into their cross-sections [27]. The relationship between the cross-sections of extinction ext(),
Rayleigh scattering sca(), and absorption abs() is
( ) ( ) ( ) ext sca abs =+[28],
where is the angular frequency of light. Thus, abs() is derived by subtracting sca() from ext(). Figures 2(a1)-1(a3) show the scheme for obtaining abs().
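As a sketch of this subtraction, assuming NumPy; the Lorentzian toy spectra below only stand in for measured single-dimer data on a common photon-energy axis.

```python
import numpy as np

energy = np.linspace(1.6, 2.8, 200)                    # photon energy (eV)
lorentz = lambda e0, g: 1.0 / ((energy - e0) ** 2 + g ** 2)
sigma_ext = 1.2 * lorentz(2.00, 0.08)                  # placeholder extinction
sigma_sca = 0.8 * lorentz(2.05, 0.08)                  # placeholder scattering
sigma_abs = sigma_ext - sigma_sca                      # sigma_ext = sigma_sca + sigma_abs
peak_abs = energy[np.argmax(sigma_abs)]                # position of the peak, i.e., ħω_abs
```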
IV. RESULTS AND DISCUSSION
Raman and fluorescence processes are composed of excitation and de-excitation transitions, consisting of a two-photon process as shown in Eq. (10) and a two-step one-photon process as shown in Eq. (8), respectively. Thus, the EM enhancement factor of SERRS and ultra-fast SEF is described as a product of an excitation enhancement factor FR(ωex) and a de-excitation factor FR(ω) due to plasmon resonance as

$$F_R(\omega_{ex},\mathbf{r})\,F_R(\omega,\mathbf{r}) = \frac{|E_{loc}(\omega_{ex},\mathbf{r})|^2}{|E_I(\omega_{ex})|^2}\cdot\frac{|E_{loc}(\omega,\mathbf{r})|^2}{|E_I(\omega)|^2}, \qquad (11)$$

where EI and Eloc indicate the amplitudes of the incident and enhanced local electric fields, respectively; ωex and ω denote the angular frequencies of the incident and Raman-scattered light, respectively; and r is the position of a molecule in a HS [29]. The mode volume of the EM field confined inside a HS, as in Eq. (3), determines the value of FR [1]. The relationship between Eq. (11) and the mode volume in Eq. (3) is explained as the effective Purcell's factor in Ref. 1. Equation (11) indicates the cross-section of ultra-fast SEF σuS(ω), as follows:

$$\sigma_{uS}(\omega_{ex},\omega) = F_R(\omega_{ex},\mathbf{r})\,F_R(\omega,\mathbf{r})\,\sigma_F(\omega), \qquad (12)$$

where σF(ω) is the cross-section of ultra-fast fluorescence without EM enhancement. The spectral shape of σF(ω) is obtained by averaging σuS(ω) spectra from a large number of NP aggregates. In other words, the spectral shape of FR(ω) in Eq. (12) is flattened by the averaging effect [30]. Thus, FR(ω) can be obtained by dividing σuS(ω) by σF(ω). Figures 2(b1)-2(b3) show the scheme of division for obtaining FR(ω).
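A minimal sketch of this division scheme (Figs. 2(b1)-2(b3)), assuming NumPy; the random arrays merely stand in for measured σuS(ω) spectra of many aggregates.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_uS_all = 1.0 + rng.random((50, 200))   # 50 aggregates x 200 energy bins
sigma_F = sigma_uS_all.mean(axis=0)          # averaging flattens F_R [30]
F_R = sigma_uS_all[0] / sigma_F              # enhancement spectrum of one dimer
F_R /= F_R.max()                             # normalized, as plotted in Fig. 2(b3)
```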
Figures 3(a1)-3(a4) exhibit spectra of σsca(ω), σabs(ω), and FR(ω) for four dimers showing single Lorentzian maxima, which are dipole-dipole (DD) coupled plasmon resonances [26] between approximately 1.9 and 2.2 eV. One may notice that the spectral shapes of FR(ω) look more consistent with those of σabs(ω) than with those of σsca(ω). Figure 3(a5) shows the relationship between the peak energies of σsca(ω), ħωsca, and those of σabs(ω), ħωabs, against those of FR(ω), ħωR. The positions of ħωabs overlap substantially with those of ħωR. This result indicates that the origin of FR(ω) is Ĥint(Â·p̂) in Eq. (8), which shows that the dipole approximation is applicable for the HSs of the present coupled system. Regarding the size of an R6G molecule of around 1 nm and that of a HS of several nm, the applicability of the dipole approximation is rather surprising. We consider that the molecules exist on the saddle point of the electromagnetic potential inside the HS. The positions of ħωsca are always redshifted from those of ħωR. These redshifts may be induced by the contribution to FR(ω) of a subradiant plasmon, e.g., a dipole-quadrupole (DQ) coupled plasmon, whose resonances appear in the higher energy region of ħωsca [31].

Figures 3(b1)-3(b4) show spectra of σsca(ω), σabs(ω), and FR(ω) for four dimers with more complicated spectral shapes than those in Figs. 3(a1)-3(a4). The σabs(ω) and FR(ω) spectra show several common spectral maxima which do not exist in σsca(ω), as indicated by the black arrows. These common maxima reveal that the spectral shapes of FR(ω) are determined by σabs(ω). Figure 3(b5) shows the relationship between ħωsca and ħωabs against ħωR. For ħωsca, we selected the lowest energy spectral maxima, which correspond to the superradiant plasmon resonance. The positions of ħωabs always agree with those of ħωR. From the correlation between ħωabs and ħωR, we conclude that the origin of FR(ω) is Ĥint(Â·p̂) in Eq. (8). The results of Fig. 3 indicate the contribution of Ĥint(Â·p̂) to FR(ω) at HSs.

These results should be reproducible by electromagnetism in the form of a spectral correlation between σabs(ω) and FR(ω). Thus, we examined this correlation using the FDTD method (EEM-FDM Ver. 5.1, EEM Co., Ltd., Japan). The complex refractive index of silver NPs was taken from Ref. 32. The detailed calculation conditions for reproducing the experimental conditions have been described elsewhere [26]. Figures 4(a) and 4(b) show typical spectra of σsca(ω) and of SERRS with ultra-fast SEF of the dimers, with their SEM images (JSM-6700F, JEOL). The symmetric dimers exhibit a good spectral correlation between σsca(ω) and SERRS with ultra-fast SEF, as in Fig. 4(a). In contrast, the asymmetric dimers do not exhibit such a correlation. Thus, the experimentally obtained spectral relationships between σsca(ω), σabs(ω), and FR(ω) were examined by changing the degree of asymmetry of the dimers.
We consider that a small number of R6G molecules, with a size of around 1.0 nm, were inserted into the gap and determined the gap distance [31]. The spectra of sca(), abs(), and FR() at the gap with a phase retardation of Eloc against EI were calculated by changing the ratio D1/D2. The excitation polarization direction was set to be parallel to the long axis of the dimer because intense FR() is generated along this direction at a HS. Figures 4(d1) and 4(d2) illustrate the charge distributions of DD and DQ coupled plasmons, respectively, by changing D1/D2. In our calculations, the non-local effect, which reduces FR() by Landau damping due to unscreened surface electrons [8], was not considered because Landau damping does not change the spectral shape of FR() but rather its intensity [8].
V. CONCLUSION
In this study, we investigated the origin of the EM enhancement inside HSs of coupled systems between a plasmon polariton and a molecular exciton. The EM interaction between a photon and an exciton is expressed by the Â·p̂ and Â² terms. Thus, we developed an experimental method for evaluating these terms and the EM enhancement using absorption, Rayleigh scattering, and ultrafast SEF of the single coupled systems, respectively. A spectral comparison revealed that the absorption spectra can correctly reproduce the EM enhancement. This result indicates that ultrafast SEF is a two-step process with the Â·p̂ terms, showing that the dipole approximation is applicable to the present HSs. The subradiant resonance, which induces the spectral deviation between the scattering and absorption, was observed in the absorption and EM enhancement spectra.

These results were well supported by calculations based on electromagnetism. This method will be applicable to the EM enhancement of various cQED systems and plasmonic spectroscopies [34-38].
ACKNOWLEDGMENTS
This work was supported by the JSPS KAKENHI Grant-in-Aid for Scientific Research (C), number 21K04935.
Figure captions
FIG. 1. (a1) and (b1) Schematic images of a molecule in free space and in a HS between two silver NPs, respectively, excited by incident light. (a2) and (b2) The intensity distribution of incident light around a molecule in free space and in a HS, respectively. (a3) and (b3) Energy level diagrams of a molecule in free space and in a HS, respectively. Here, |g⟩ and |e⟩ are the ground and excited states of the molecule assumed to be a two-level system, respectively; |0⟩ and |1⟩ are the zero-photon and one-photon states of the plasmon resonance; 2ħg indicates the vacuum Rabi splitting, where ħg corresponds to the coupling energy between the two-level system and the plasmon resonance.

FIG. 2. (a1)-(a3) Spectra of σext(ω), σsca(ω), and σabs(ω) of a single silver NP dimer, respectively. Insets of (a1) and (a2) are the extinction and scattering images observed under a bright- and dark-field microscope, respectively. The positions of the peaks in σsca(ω) at ħωsca and in σabs(ω) at ħωabs are indicated in the panels. (b1) and (b2) Spectra of SERRS with ultra-fast SEF of a single silver NP dimer and a large silver NP aggregate, respectively. Insets of (b1) and (b2) show the SERRS with ultra-fast SEF images of them. (b3) Spectrum of FR(ω), obtained as FR(ωem) = σS(ωex, ωem) / [σF(ωex, ωem) FR(ωex)]. The position of the peak of FR(ω) at ħωR is indicated in the panel.
FIG. 3. (a1)-(a4) Spectra of σsca(ω) (blue lines), σabs(ω) (green lines), and FR(ω) (red lines) of single dimers exhibiting single Lorentzian maxima, respectively. (a5) Relationships between ħωsca (blue open circles) and ħωabs (green open circles) against ħωR. (b1)-(b4) Spectra of σsca(ω) (blue lines), σabs(ω) (green lines), and FR(ω) (red lines) of single dimers showing more complicated spectral shapes than those in (a1)-(a4), respectively. (b5) Relationships between ħωsca (blue open circles) and ħωabs (green open circles) against ħωR.
FIG. 4. (a) Ultra-fast SEF with SERRS (red line) and σsca(ω) spectrum (blue line) of a symmetric dimer. Insets are SEM images of the dimer. (b) Ultra-fast SEF with SERRS (red line) and σsca(ω) spectrum (blue line) of an asymmetric dimer. Insets are SEM images of the dimer. Scale bars are 100 nm. (c) FDTD calculation setup for σsca(ω), σabs(ω), and FR(ω) of a single NP dimer composed of two NPs with diameters D1 and D2. The gap was set to 1 nm. The position of the HS is indicated by a red dot. The excitation polarization direction is parallel to the long axis of the dimer. (d1) and (d2) Schematic images of the charge distributions of DD and DQ coupled plasmons of the symmetric and asymmetric dimers, respectively.
FIG. 5. (a1)-(a3) D2 dependence of the σsca(ω) and σabs(ω) spectra of dimers with D1 of 30 nm and D2 of 30, 80, and 120 nm, respectively. (a4) D2 dependences of ħωsca (blue open circles) and ħωabs (green open circles) for dimers with D1 of 30 nm. (b1)-(b3) D2 dependence of the σsca(ω) and FR(ω) spectra of the same dimers.
References

[1] T. Itoh, Y. S. Yamamoto, and Y. Ozaki, "Plasmon-enhanced spectroscopy of absorption and spontaneous emissions explained using cavity quantum optics," Chem. Soc. Rev. 46, 3904 (2017).
[2] S. Nie and S. Emory, "Probing single molecules and single nanoparticles by surface-enhanced Raman scattering," Science 275, 1102 (1997).
[3] K. Kneipp, Y. Wang, H. Kneipp, L. Perelman, I. Itzkan, R. R. Dasari, and M. Feld, "Single molecule detection using surface-enhanced Raman scattering (SERS)," Phys. Rev. Lett. 78, 1667 (1997).
[4] H. Xu, E. J. Bjerneld, M. Käll, and L. Börjesson, "Spectroscopy of single hemoglobin molecules by surface enhanced Raman scattering," Phys. Rev. Lett. 83, 4357 (1999).
[5] A. M. Michaels, M. Nirmal, and L. E. Brus, "Surface enhanced Raman spectroscopy of individual rhodamine 6G molecules on large Ag nanocrystals," J. Am. Chem. Soc. 121, 9932 (1999).
[6] T. Itoh, H. Yoshikawa, K. Yoshida, V. Biju, and M. Ishikawa, "Evaluation of electromagnetic enhancement of surface enhanced hyper Raman scattering using plasmonic properties of binary active sites in single Ag nanoaggregates," J. Chem. Phys. 130, 214706 (2009).
[7] E. C. Le Ru, P. G. Etchegoin, J. Grand, N. Félidj, J. Aubard, and G. Lévi, "Mechanisms of spectral profile modification in surface-enhanced fluorescence," J. Phys. Chem. C 111, 16076-16079 (2007).
[8] T. Itoh, Y. S. Yamamoto, H. Tamaru, V. Biju, N. Murase, and Y. Ozaki, "Excitation laser energy dependence of surface-enhanced fluorescence showing plasmon-induced ultrafast electronic dynamics in dye molecules," Phys. Rev. B 87, 235408 (2013).
[9] K. Kneipp, Y. Wang, H. Kneipp, I. Itzkan, R. R. Dasari, and M. S. Feld, "Population pumping of excited vibrational states by spontaneous surface-enhanced Raman scattering," Phys. Rev. Lett. 76, 2444 (1996).
[10] M. Takase, H. Ajiki, Y. Mizumoto, K. Komeda, M. Nara, H. Nabika, S. Yasuda, H. Ishihara, and K. Murakoshi, Nature Photonics 7, 550-554 (2013).
[11] J. J. Baumberg, J. Aizpurua, M. H. Mikkelsen, and D. R. Smith, "Extreme nanophotonics from ultrathin metallic gaps," Nat. Mater. 18, 668 (2019).
[12] R. Esteban, J. J. Baumberg, and J. Aizpurua, "Molecular optomechanics approach to surface-enhanced Raman scattering," Acc. Chem. Res. 55, 1889-1899 (2022).
[13] R. F. Ribeiro, L. A. Martínez-Martínez, M. Du, J. Campos-Gonzalez-Angulo, and J. Yuen-Zhou, "Polariton chemistry: controlling molecular dynamics with optical cavities," Chem. Sci. 9, 6325 (2018).
[14] T. Itoh and Y. S. Yamamoto, "Between plasmonics and surface-enhanced resonant Raman spectroscopy: toward single-molecule strong coupling at a hotspot," Nanoscale 13, 1566 (2021).
[15] J. J. Sakurai, Advanced Quantum Mechanics (Addison-Wesley, Reading, MA, 1967).
[16] S. De Liberato, "Light-matter decoupling in the deep strong coupling regime: The breakdown of the Purcell effect," Phys. Rev. Lett. 112, 016401 (2014).
[17] J. K. Sass, H. Neff, M. Moskovits, and S. Holloway, "Electric field gradient effects on the spectroscopy of adsorbed molecules," J. Phys. Chem. 85, 621-623 (1981).
[18] E. J. Ayars, H. D. Hallen, and C. L. Jahncke, "Electric field gradient effects in Raman spectroscopy," Phys. Rev. Lett. 85, 4180 (2000).
[19] K. Kneipp, A. Jorio, H. Kneipp, S. D. M. Brown, K. Shafer, J. Motz, R. Saito, G. Dresselhaus, and M. S. Dresselhaus, "Polarization effects in surface-enhanced resonant Raman scattering of single-wall carbon nanotubes on colloidal silver clusters," Phys. Rev. B 63, 081401 (2001).
[20] T. Itoh, Y. S. Yamamoto, H. Tamaru, V. Biju, S. Wakida, and Y. Ozaki, "Single-molecular surface-enhanced resonance Raman scattering as a quantitative probe of local electromagnetic field: The case of strong coupling between plasmonic and excitonic resonance," Phys. Rev. B 89, 195436 (2014).
[21] T. Itoh and Y. S. Yamamoto, "Reproduction of surface-enhanced resonant Raman scattering and fluorescence spectra of a strong coupling system composed of a single silver nanoparticle dimer and a few dye molecules," J. Chem. Phys. 149, 244701 (2018).
Anti-crossing property of strong 21. T Itoh, Y S Yamamoto, T Okamoto, T. Itoh, Y. S. Yamamoto, and T. Okamoto, Anti-crossing property of strong 21
coupling system of silver nanoparticle dimers coated with thin dye molecular films analyzed by electromagnetism. J. Chem. Phys. 15254710coupling system of silver nanoparticle dimers coated with thin dye molecular films analyzed by electromagnetism, J. Chem. Phys. 152, 054710 (2020).
Adsorption and surface-enhanced Raman of dyes on silver and gold sols. P Lee, D Meisel, J. Phys. Chem. 863391P. Lee and D. Meisel, Adsorption and surface-enhanced Raman of dyes on silver and gold sols, J. Phys. Chem. 86, 3391 (1982).
Proof of single-molecule sensitivity in surface enhanced Raman scattering (SERS) by means of a two-analyte technique. E C Le Ru, M Meyer, P G Etchegoin, J. Phys. Chem. B. 1101944E. C. Le Ru, M. Meyer, and P. G. Etchegoin, Proof of single-molecule sensitivity in surface enhanced Raman scattering (SERS) by means of a two-analyte technique, J. Phys. Chem. B 110, 1944 (2006).
Single molecule surface-enhanced Raman spectroscopy without nanogaps. A B Zrimsek, A I Henry, R P Van Duyne, J. Phys. Chem. Lett. 43206A. B. Zrimsek, A. I. Henry, and R. P. Van Duyne, Single molecule surface-enhanced Raman spectroscopy without nanogaps, J. Phys. Chem. Lett. 4, 3206 (2013).
Quantitative evaluation of electromagnetic enhancement in surface-enhanced resonance Raman scattering from plasmonic properties and morphologies of individual Ag nanostructures. K Yoshida, T Itoh, H Tamaru, V Biju, M Ishikawa, Y Ozaki, Phys. Rev. B. 81115406K. Yoshida, T. Itoh, H. Tamaru, V. Biju, M. Ishikawa, and Y. Ozaki, Quantitative evaluation of electromagnetic enhancement in surface-enhanced resonance Raman scattering from plasmonic properties and morphologies of individual Ag nanostructures, Phys. Rev. B 81, 115406 (2010).
Absorption cross-section spectroscopy of a single strong-coupling system between plasmon and molecular exciton resonance using a single silver nanoparticle dimer generating surface-enhanced resonant Raman scattering. T Itoh, Y S Yamamoto, T Okamoto, Phys. Rev. B. 9922T. Itoh, Y. S. Yamamoto, and T. Okamoto, Absorption cross-section spectroscopy of a single strong-coupling system between plasmon and molecular exciton resonance using a single silver nanoparticle dimer generating surface-enhanced resonant Raman scattering, Phys. Rev. B, 99, 235409 (2019). 22
Absorption and Scattering of Light by Small Particles. C F Bohren, D R Huffman, WileyNew YorkC. F. Bohren and D. R. Huffman, Absorption and Scattering of Light by Small Particles (Wiley, New York, 1983).
Surface-enhanced Raman scattering and fluorescence near metal nanoparticles. P Johansson, H Xu, M Käll, Phys. Rev. B. 7235427P. Johansson, H. Xu, and M. Käll, Surface-enhanced Raman scattering and fluorescence near metal nanoparticles, Phys. Rev. B 72, 035427 (2005).
Quantitative evaluation of blinking in surface enhanced resonance Raman scattering and fluorescence by electromagnetic mechanism. T Itoh, M Iga, H Tamaru, K Yoshida, V Biju, M Ishikawa, J. Chem. Phys. 13624703T. Itoh, M. Iga, H. Tamaru, K. Yoshida, V. Biju, and M. Ishikawa, Quantitative evaluation of blinking in surface enhanced resonance Raman scattering and fluorescence by electromagnetic mechanism, J. Chem. Phys. 136, 024703 (2012).
Contribution of subradiant plasmon resonance to electromagnetic enhancement in resonant Raman with fluorescence examined by single silver nanoparticle dimers. T Itoh, Y S Yamamoto, J. Phys. Chem. C. 127T. Itoh and Y. S. Yamamoto, Contribution of subradiant plasmon resonance to electromagnetic enhancement in resonant Raman with fluorescence examined by single silver nanoparticle dimers, J. Phys. Chem. C 127, 5886-5897 (2023).
Optical constants of the noble metals. P B Johnson, R W Christy, Phys. Rev. B. 6P. B. Johnson and R. W. Christy, Optical constants of the noble metals, Phys. Rev. B 6, 4370-4379 (1972).
Quantum mechanical limit to plasmonic enhancement as observed by surface-enhanced Raman scattering. W Zhu, K B Crozier, Nat. Commun. 55228W. Zhu and K. B. Crozier, Quantum mechanical limit to plasmonic enhancement as observed by surface-enhanced Raman scattering. Nat. Commun. 5, 5228 (2014).
One-dimensional plasmonic hotspots located between silver nanowire dimers evaluated by surface-enhanced resonance Raman scattering. T Itoh, T Y S Yamamoto, Y Kitahama, J Balachandran, Phys. Rev. B. 95115441T. Itoh, T. Y. S. Yamamoto, Y. Kitahama, and J. Balachandran, One-dimensional plasmonic hotspots located between silver nanowire dimers evaluated by surface-enhanced resonance Raman scattering, Phys. Rev. B 95, 115441 (2017).
Propagation mechanism of surface 23. T Itoh, T Y S Yamamoto, J Balachandran, T. Itoh, T. Y. S. Yamamoto, and J. Balachandran, Propagation mechanism of surface 23
plasmons coupled with surface-enhanced resonant Raman scattering light through a one-dimensional hotspot along a silver nanowire dimer junction. Phys. Rev. B. 103245425plasmons coupled with surface-enhanced resonant Raman scattering light through a one-dimensional hotspot along a silver nanowire dimer junction, Phys. Rev. B 103, 245425 (2021).
Present and future of surface-enhanced Raman scattering. J Langer, ACS Nano. 14J. Langer et al., Present and future of surface-enhanced Raman scattering, ACS Nano 14, 28-117 (2020).
Electromagnetic theories of surface-enhanced Raman spectroscopy. S Y Ding, E M You, Z Q Tian, M Moskovits, Chem. Soc. Rev. 46S. Y. Ding, E. M. You, Z. Q. Tian, and M. Moskovits, Electromagnetic theories of surface-enhanced Raman spectroscopy, Chem. Soc. Rev. 46, 4042-4076 (2017).
Toward a new era of SERS and TERS at the nanometer scale: from fundamentals to innovative applications. T Itoh, M Prochazka, Z.-C Dong, W Ji, Y S Yamamoto, Y Zhang, Y Ozaki, Chem. Rev. 1231552T. Itoh, M. Prochazka, Z.-C. Dong, W. Ji, Y. S. Yamamoto, Y. Zhang, and Y. Ozaki, Toward a new era of SERS and TERS at the nanometer scale: from fundamentals to innovative applications, Chem. Rev. 123, 1552 (2023).
FR() spectra with D1 of 30 nm and D2 of 30, 80, and 120 nm, respectively. (b4) D2 dependences of ħsca (blue open circles), ħF (red open circles), = 90° (black solid line, DD resonance), and = 180° (black dashed line, DQ resonance) for dimers with D2 of 30 nm. (c1)-(c3) D2 dependence of the abs() and FR() spectra with D1 of 30 nm and D2 of 30. 802and 120 nm, respectively. (c4)dependence of the sca() and FR() spectra with D1 of 30 nm and D2 of 30, 80, and 120 nm, respectively. (b4) D2 dependences of ħsca (blue open circles), ħF (red open circles), = 90° (black solid line, DD resonance), and = 180° (black dashed line, DQ resonance) for dimers with D2 of 30 nm. (c1)-(c3) D2 dependence of the abs() and FR() spectra with D1 of 30 nm and D2 of 30, 80, and 120 nm, respectively. (c4) D2
ħF (red open circles), = 90° (black solid line, DD resonance), and = 180° (black dashed line, DQ resonance) for dimers with D2 of 30 nm. dependences of ħabs (blue open circles). dependences of ħabs (blue open circles), ħF (red open circles), = 90° (black solid line, DD resonance), and = 180° (black dashed line, DQ resonance) for dimers with D2 of 30 nm.
| []
|
[
"Parareal exponential θ-scheme for longtime simulation of stochastic Schrödinger equations with weak damping",
"Parareal exponential θ-scheme for longtime simulation of stochastic Schrödinger equations with weak damping"
]
| [
"Jialin Hong ",
"Xu Wang ",
"Liying Zhang "
]
| []
| []
| A parareal algorithm based on an exponential θ-scheme is proposed for the stochastic Schrödinger equation with weak damping and additive noise. It proceeds as a twolevel temporal parallelizable integrator with the exponential θ-scheme as the propagator on the coarse grid. The proposed algorithm in the linear case increases the convergence order from one to k for θ ∈ [0, 1] \ { 1 2 }. In particular, the convergence order increases to 2k when θ = 1 2 due to the symmetry of the algorithm. Furthermore, the algorithm is proved to be suitable for longtime simulation based on the analysis of the invariant distributions for the exponential θ-scheme. The convergence condition for longtime simulation is also established for the proposed algorithm in the nonlinear case, which indicates the superiority of implicit schemes. Numerical experiments are dedicated to illustrate the best choice of the iteration number k, as well as the convergence order of the algorithm for different choices of θ. | 10.1137/18m1176749 | [
"https://arxiv.org/pdf/1803.09188v1.pdf"
]
| 119,165,160 | 1803.09188 | 075af3cfed1e7df46c7798635453c7113f5200d1 |
Parareal exponential θ-scheme for longtime simulation of stochastic Schrödinger equations with weak damping
25 Mar 2018 November 9, 2018
Jialin Hong
Xu Wang
Liying Zhang
Parareal exponential θ-scheme for longtime simulation of stochastic Schrödinger equations with weak damping
25 Mar 2018 November 9, 2018arXiv:1803.09188v1 [math.NA]AMS subject classification: 60H3565M1265W05 Key Words: stochastic Schrödinger equationparareal algorithmexponential θ-schemeinvariant measure
A parareal algorithm based on an exponential θ-scheme is proposed for the stochastic Schrödinger equation with weak damping and additive noise. It proceeds as a twolevel temporal parallelizable integrator with the exponential θ-scheme as the propagator on the coarse grid. The proposed algorithm in the linear case increases the convergence order from one to k for θ ∈ [0, 1] \ { 1 2 }. In particular, the convergence order increases to 2k when θ = 1 2 due to the symmetry of the algorithm. Furthermore, the algorithm is proved to be suitable for longtime simulation based on the analysis of the invariant distributions for the exponential θ-scheme. The convergence condition for longtime simulation is also established for the proposed algorithm in the nonlinear case, which indicates the superiority of implicit schemes. Numerical experiments are dedicated to illustrate the best choice of the iteration number k, as well as the convergence order of the algorithm for different choices of θ.
Introduction
In the numerical approximation for both deterministic and stochastic evolution equations, several methods have been developed to improve the convergence order of classical schemes, such as (partitioned) Runge-Kutta methods, schemes via modified equations, predictorcorrector schemes and so on (see [4,14,17,18] and references therein). For high order numerical approximations of stochastic partial differential equations (SPDEs), the computing cost can be prohibitively large due to the high dimension in space, especially for longtime simulations. It motivates us to study algorithms allowing for parallel implementations to obtain a significant improvement of efficiency.
The parareal algorithm was pioneered in [15] as a time discretization of a deterministic partial differential evolution equation on finite time intervals, and was then modified in [16] to tackle non-differential evolution equations. This algorithm is described through a coarse propagator calculated on a coarse grid with step size δT and a fine propagator calculated in parallel on each coarse interval with step size δt = δT /J, where J ∈ N + denotes the number of available processors. It is pointed out in [15] and [16] that the error caused by the parareal architecture after a few iterations is comparable to the error caused by a global use of the fine propagator without iteration. More specifically, for a fixed iterated step k ∈ N + , the parareal algorithm could show order kp with respect to δT , if a scheme with local truncation error O(δT p+1 ) is chosen as the coarse propagator and the exact flow is chosen as the fine propagator. Over the past few years, the parareal algorithm has been further studied by [2,19] on its stability, by [12,13] on the potential of longtime simulation, and by [3,11] on the application to stochastic problems.
When exploring parareal algorithms for stochastic differential equations (SDEs) driven by standard Brownian motions, one of the main differences from the deterministic case is that the stochastic systems are less regular than the deterministic ones. Moreover, the convergence order of classical schemes such as explicit Euler scheme, implicit Euler scheme and midpoint scheme, when applied to SDEs, are in general half of those in deterministic case. The circumstance becomes even worse when SPDEs are taken into consideration since the temporal regularity of the solution may be worse. One may not get the optimal convergence rate of the parareal algorithm for the stochastic case following the procedure of the deterministic case. The author in [3] deals with this problem for SDEs adding assumptions on drift and diffusion coefficients as well as their derivatives, and considers the parareal algorithm when the explicit Euler scheme is chosen as the coarse propagator. The optimal rate k 2 (α ∧ 3 − 1) is deduced taking advantages of the independency between the increments of Brownian motions, where α variant for different drift and diffusion coefficients and α = 2 in general.
For the stochastic nonlinear Schrödinger equation considered in this paper, there are two main obstacles when establishing implementable parareal algorithms for longtime simulation. One is that the stiffness caused by the noise makes it unavailable to construct parareal algorithms based on existing stable schemes (see e.g. [5]). It may require higher regularity assumptions due to the iteration adopted in parareal algorithms, see Remark 4. These assumptions are usually not satisfied by SPDEs. The other one is that the C-valued nonlinear coefficient does not satisfy one-sided Lipschitz type conditions in general. It leads to strict restrictions on the scale of the coarse grid, especially for explicit numerical schemes, when one wishes to get uniform convergence rate.
In this paper, we propose an exponential θ-scheme based parareal algorithms with θ ∈ [0, 1]. It allow us to perform the iteration without high regularity assumptions on the numerical solution taking advantages of the semigroup generated by the linear operator of the considered model. For the linear case with θ ∈ [ 1 2 , 1], the exponential θ-scheme possesses a unique invariant Gaussian distribution, which converges to the invariant measure of the exact solution. This type of absolute stability ensures the uniform convergence of the proposed parareal algorithm with order k for θ > 1 2 and 2k for θ = 1 2 . If θ ∈ [0, 1 2 ) and the damping α > 0 is large enough, the uniform convergence still holds. Otherwise, the algorithm is only suitable for simulation over finite time interval, which coincide with the fact that the distribution of the exponential θ scheme diverges over longtime in this case, see Section 3.2. For the nonlinear case, we take the proposed algorithm with θ = 0 as a keystone to illustrate the convergence analysis for fully discrete schemes with the fine propagator being a numerical solver as well. This result is only available over bounded time interval. To get a time-uniform estimate, internal stage values are utilized in the analysis for the nonlinear case with general θ ∈ [0, 1]. The results give the convergence condition on θ, L F , α and δT , and indicate that the restriction on α and δT is weaker when θ gets larger.
The paper is organized as follows. Section 2 introduces some notations and assumptions used in the subsequent sections, and gives a brief recall about parareal algorithms. Section 3 is dedicated to analyze the stability of the parareal exponential θ-scheme by investigating the distribution of the exponential θ-scheme over longtime. The rate of convergence for both unbounded and bounded intervals is given for the linear case. Section 4 focus on the application of the proposed parareal algorithm for the nonlinear case as well as the fully discrete scheme based on the the parareal algorithm. Moreover, some modifications are made on the parareal algorithm to release the conditions under which the proposed scheme converges by iteration. This improvement is also illustrated through numerical experiments in Section 5.
Preliminaries
We consider the following initial-boundary problem of the stochastic nonlinear Schrödinger equation driven by additive noise:
du = (i∆u − αu + iF (u)) dt + Q 1 2 dW, u(t, 0) = u(t, 1) = 0, t ∈ (0, T ], u(0, x) = u 0 (x), x ∈ [0, 1],(1)
where α ≥ 0 is the damping coefficient and W (t) is an cylindrical Wiener process defined on the completed filtered probability space (Ω, B, P, {B} t≥0 ). The Karhunen-Loève expansion of W yields
W (t) = ∞ m=1 e m (x)β m (t), t ∈ [0, T ], x ∈ [0, 1],
where {β m (t)} m∈N is a family of mutually independent identically distributed C-valued Brownian motions.
Notations
Throughout this paper, we denote by H := L 2 (0, 1) the square integrable space, and denote by H 0 the space H with homogenous Dirichlet boundary condition for simplicity. Then {e m (x)} m∈N := { √ 2 sin(mπx)} m∈N is an eigenbasis of the Dirichlet Laplacian in H, and the associated eigenvalues of the linear operator Λ := −i∆ + α are expressed as {λ m } m∈N := {i(mπ) 2 + α} m∈N with 1 ≤ |λ m | → ∞ as m → ∞. Furthermore, we denote the inner product in H by
v 1 , v 2 := 1 0 v 1 (x)v 2 (x)dx, v 1 , v 2 ∈ H.
In the sequel, we will use the following spacė
H s := D(Λ s 2 ) = u u = ∞ m=1 u, e m e m ∈ H 0 , s.t., ∞ m=1 | u, e m | 2 |λ m | s < ∞ , equipped with the norm u 2Ḣ s = ∞ m=1 | u, e m | 2 |λ m | s ,
which is equivalent to the Sobolev norm · H s when s = 0, 1, 2. We use the notation · instead of · H for convenience. For the nonlinear function F and operator Q in (1), we give the following assumptions.
Assumption 1. There exists a positive constant L F such that
F (v) − F (w) ≤ L F v − w , ∀ v, w ∈ H.
In addition, F (0) = 0 and
ℑ v, F (v) = 0, ∀ v ∈ H.
Assumption 2. Assume that Q is a nonnegative symmetric operator on H with (−∆) Let S(t) := e −tΛ be the semigroup generated by operator Λ. The mild solution of (1) exists globally under Assumptions 1 and 2 with the following form
u(t) = S(t)u 0 + i t 0 S(t − s)F (u)ds + t 0 S(t − s)Q 1 2 dW (s).(2)
For any 0 ≤ r ≤ l, it holds
S(t) L(Ḣ l ,Ḣ r ) := sup v∈Ḣ s S(t)v Ḣr v Ḣl ≤ e −αt .
Framework of parallelization in time
In this section, we briefly recall the procedure of parareal algorithms, which are constructed through the interaction of a coarse and a fine propagators under different time scales. The parareal algorithm, or equivalently the time-parallel algorithm, consists of four parts in general: interval partition, initialization, time-parallel computation, and correction. The numerical solution is expected to converge fast by iteration to the solution of a global use of fine propagator F .
Interval partition
The considered interval [0, T ] is first divided into N parts with a uniform coarse step size δT = T n − T n−1 for any n = 1, · · · , N as follows.
δT
T 0 = 0 T n−1 T n T N = T
Each subinterval is further divided into J parts with a uniform fine step size δt = t n,j+1 − t n,j = δT J for any n = 0, · · · , N − 1 and j = 1, · · · , J − 1. It satisfies that t n−1,0 = T n−1 and t n−1,J =: t n,0 . δt t n−1,0 = T n−1 t n−1,j t n−1,j+1 t n−1,J = T n If the value at the coarse grid {T n } N n=0 is given, denoted by {u n } N n=0 , the numerical solutions at the fine grid {t n−1,j } J j=1 on each subinterval [T n−1 , T n ] can be calculated independently by choosing u n−1 as the initial value over the subinterval.
Initialization
We define a coarse propagator G
u n = G(T n , T n−1 , u n−1 )(3)
based on some specific scheme to gain a numerical solution {u n } N n=0 at coarse grid {T n } N n=0 . The coarse propagator G gives a rough approximation on the coarse grid {T n } N n=0 , which makes it possible to calculate the numerical solutions on each subinterval parallel to one another. In general, G is required to be easy to calculate and need not to be of high accuracy. On the other hand, the fine propagator F defined on each subinterval is assumed to be more accurate than G to ensure that the proposed parareal algorithm is accurate enough.
Time-parallel computation
We consider the subinterval [T n−1 , T n ] with initial value u n−1 at T n−1 , and apply a fine propagator F over this subinterval. More precisely, we denote byû n−1,1 := F (t n−1,1 , t n−1,0 , u n−1 ) the one step approximation obtained by F starting from u n−1,0 := u n−1 at time t n−1,0 := T n−1 , see Figure 1. Thus, the numerical solution at time t n,j can be expressed aŝ
u n−1,j = F (t n−1,j , t n−1,j−1 ,û n−1,j−1 ) = F (t n−1,j , t n−1,0 ,û n−1,0 ), ∀ j = 1, · · · , J. For j = J, we getû n−1,J = F (T n , T n−1 , u n−1 ) which is B Tn -adapted. t n−1,0 = T n−1 t n−1,J = t n,0 = T n t n−1,1 u n−1 =û n−1,0 u n = G(T n , T n−1 , u n−1 ) u n−1,1û n−1,J = F (T n , T n−1 , u n−1 ) need correction
Correction
Note that we get two numerical solutions u n andû n−1,J at time T n from above procedure, which are not equal to each other in general, see Figure 1. Some correction should be applied to get a family of numerical solution on the grid {T n } N n=0 such that it is more accurate than the one obtained by G. The correction iteration (see also [3,12,13]) is defined as (4) is obtained after the calculation of {u (k−1) n } 0≤n≤N , and is {B Tn } 0≤n≤N -adapted for any k ∈ N.
u (0) n =G(T n , T n−1 , u (0) n−1 ) u (k) n =G(T n , T n−1 , u (k) n−1 ) + F (T n , T n−1 , u (k−1) n−1 ) − G(T n , T n−1 , u (k−1) n−1 ), k ∈ N + (4) starting from u (k) 0 = u 0 for all k ∈ N. The solution {u (k) n } 0≤n≤N ⊂ H of
Parareal exponential θ-scheme for the linear case
This section is devoted to study parareal algorithms based on the exponential θ-scheme for the following linear equation
du = (i∆u − αu + iλu)dt + Q 1 2 dW(5)
with λ ∈ R. We show that the proposed parareal algorithms are valid for longtime simulation with a unique invariant Gaussian distribution under some restrictions on θ ∈ [0, 1]. Rewriting above equation through its components u m := u, e m , we obtain
du m = (−λ m + iλ)u m dt + ∞ i=1 Q 1 2 e i , e m dβ i , m = 1, · · · , M.
Its solution is given by an Ornstein-Uhlenbeck process
u m (t) = e (−λm+iλ)t u m (0) + ∞ i=1 t 0 e (−λm+iλ)(t−s) Q 1 2 e i , e m dβ i (s) with u m (0) = u 0 , e m .
Complex invariant Gaussian measure
Note that {u m (t)} t≥0 satisfies a complex Gaussian distribution N (m, C, R) defined by its mean m, covariance C and relation R:
m (u m (t)) :=E [u m (t)] = e (−λm+iλ)t m [u m (0)] , C (u m (t)) :=E |u m (t) − m (u m (t))| 2 = e −2αt C (u m (0)) + 1 − e −2αt α Q 1 2 e m 2 , R (u m (t)) :=E (u m (t) − m (u m (t))) 2 = e 2(−λm+iλ)t R (u m (0)) .
We use the notation µ m t := N (m(u m (t)), C(u m (t)), R(u m (t))) for simplicity.
Remark 1.
We consider a one-dimensional C-valued Gaussian random variable Z = a + ib with a and b being two R-valued Gaussian random variables. If its relation vanishes, i.e.,
R(Z) = E|a − Ea| 2 − E|b − Eb| 2 + 2i(E[ab] − EaEb) = 0,
it implies E|a − Ea| 2 = E|b − Eb| 2 and E[ab] = EaEb. Since a and b are both Gaussian, we obtain equivalently that a and b are independent with the same covariance.
Remark 2. The characteristic function of a one-dimensional complex Gaussian variable Z with distribution ν = N (m, C, R) reads (see e.g. [1])
ν(c) :=E[exp{iℜ(cZ)}] = C exp{iℜ(cz)}ν(dz) = exp iℜ(cm) − 1 4 (cCc + ℜ(cRc)) , c ∈ C.
It can be generalized for the infinite dimensional case utilizing inner product in H:
ν(w) := exp iℜ w, m − 1 4 ( Cw, w + ℜ Rw,w ) , w ∈ H.
Hence, we get that the unique invariant measure of (5) is a complex Gaussian distribution, which is stated in the following theorem. We refer to [9,10] and references therein for the existence of invariant measures for the nonlinear case, and refer to [4,6] and references therein for other types of SPDEs.
µ ∞ = N 0, 1 α Q, 0 .
Proof. Based on Remark 1, we define
u m ∞ = Q 1 2 e m √ 2α (ξ m + ir m )
with {ξ m , r m } m∈N being independent standard R-valued normal random variables, i.e., ξ m , r m ∼ N (0, 1). Apparently,
u m ∞ ∼ N 0, Q 1 2 e m 2 α , 0 =: µ m ∞ .
We claim that the following random variable has the distribution µ ∞ :
u ∞ := ∞ m=1 u m ∞ e m = ∞ m=1 Q 1 2 e m √ 2α (ξ m + ir m )e m .
Compared with u(t) = ∞ m=1 u m (t)e m , it then suffices to show that the distribution µ m t of u m (t) converges to µ m ∞ . As a result of Remark 2, the characteristic function of µ m t iŝ
µ m t (c) = exp iℜ(ce (−λm+iλ)t E [u m (0)]) − 1 4 ℜ e 2(−λm+iλ)t R (u m (0))c 2 − 1 4 e −2αt C (u m (0)) + 1 − e −2αt α Q 1 2 e m 2 |c| 2 andμ m t (c) → exp{− Q 1 2 em 2 4α |c| 2 } =μ m ∞ (c).
Parareal exponential θ-scheme
In this section, we construct a parareal algorithm based on the exponential θ-scheme as the coarse propagator. We show that proposed parareal algorithm converges to the solution generated by the fine propagator F as k → ∞.
We first define the exponential θ-scheme applied to (5):
u n = S(δT )u n−1 + i(1 − θ)λδT S(δT )u n−1 + iθλδT u n + S(δT )Q 1 2 δ n W,
or equivalently,
u n = (1 + i(1 − θ)λδT )S θ S(δT )u n−1 + S θ S(δT )Q 1 2 δ n W =: G θ (T n , T n−1 , u n−1 ) (6) with S θ := (1 − iθλδT ) −1 , θ ∈ [0, 1] and δ n W := W (T n ) − W (T n−1 )
. The initial value of the numerical solution is the same as the initial value of the exact solution, and apparently
{u n } N n=0 is {B Tn } N n=0 -adapted. The distribution of {u n } N n=0
can also be calculated in the same procedure as Theorem 3.1 by rewriting the Fourier components u m n := u n , e m of u n as
u m n =(1 + i(1 − θ)λδT )S θ e −λmδT u m n−1 + S θ e −λmδT ∞ i=1 Q 1 2 e i , e m δ n β i =η n e −λmδT n u m 0 + S θ e −λmδT n−1 j=0 η j e −λmδT j ∞ i=1 Q 1 2 e i , e m δ n−j β i with η := (1 + i(1 − θ)λδT )S θ = 1 + i(1 − θ)λδT 1 − iθλδT .
Then according to the independence of {δ n−j β i } 1≤j≤n−1,i≥1 and E|δ n−j β i | 2 = 2δT , we derive the distribution of u m n defined by its mean, covariance and relation:
m(u m n ) =η n e −λmδT n E[u m 0 ], C(u m n ) =|η| 2n e −2αδT n C(u m 0 ) + 1 + θ 2 λ 2 δT 2 e 2αδT −1 1 −η n 1 −η Q 1 2 e m 2 (2δT ), R(u m n ) =η 2n e −2λmδT n R(u m 0 ), whereη := 1 + (1 − θ) 2 λ 2 δT 2 (1 + θ 2 λ 2 δT 2 ) e 2αδT = |η| 2 e −2αδT
is called the stable function here. The distribution of u m n converges to µ m ∞ as n → ∞ and δT → 0 for any α > 0 if and only if |η| < 1, or equivalently, θ ∈ [ 1 2 , 1], see Figure 2. The surface in each subfigures in Figure 2 denotes the stable function for different θ = 0, 0.3, 0.5 and δT = 0.1, 0.005. This condition also leads to the time-independent error analysis of the parareal algorithm, see Theorem 3.2.
The parareal algorithm (4) with G θ being the coarse propagator is expressed as
u (k) n =(1 + i(1 − θ)λδT )S θ S(δT )u (k) n−1 − (1 + i(1 − θ)λδT )S θ S(δT )u (k−1) n−1 + F (T n , T n−1 , u (k−1) n−1 ) =ηS(δT )u (k) n−1 − ηS(δT )u (k−1) n−1 + F (T n , T n−1 , u (k−1) n−1 ).(7)
The following result gives the error caused by the parareal algorithms. When the coarse step size δT is not extremely small, the convergence shows order k with respect to δT in a strong sense. n } 0≤n≤N,k∈N be the solution of (7) with F being the exact propagator. Assume that λδT < 1. Then for a fixed iteration step k ∈ N, u with C = C(k, α, θ, λ) independent of time interval. Here,
1 2 − θ + := 1 2 − θ ∨ 0. Otherwise, sup 0≤n≤N u(T n ) − u (k) n L 2 (Ω;H) ≤ CδT k sup 0≤n≤N u(T n ) − u (0) n L 2 (Ω;H)
with C = C(T N , k) and T N = δT N for some fixed N ∈ N.
Proof. The parareal algorithm based on G θ with F denoting the exact propagator yields
u (k) n =ηS(δT )u (k) n−1 − ηS(δT )u (k−1) n−1 + S(δT )u (k−1) n−1 + iλ Tn T n−1 S(T n − s)u u (k−1) n−1 (s)ds + Tn T n−1 S(T n − s)Qǫ (k) n =ηS(δT )ǫ (k) n−1 − ηS(δT )ǫ (k−1) n−1 + S(δT )ǫ (k−1) n−1 + iλ Tn T n−1 S(T n − s) u u (k−1) n−1 (s) − u u(T n−1 ) (s) ds =ηS(δT )ǫ (k) n−1 + e iλδT − η S(δT )ǫ (k−1) n−1 ,
where in the last step we have used the following fact
u u (k−1) n−1 (s) − u u(T n−1 ) (s) =e (i∆−α+iλ)(s−T n−1 ) u (k−1) n−1 + s T n−1 e (i∆−α+iλ)(s−r) Q 1 2 dW (r) − e (i∆−α+iλ)(s−T n−1 ) u(T n−1 ) − s T n−1 e (i∆−α+iλ)(s−r) Q 1 2 dW (r) =S(s − T n−1 )e iλ(s−T n−1 ) ǫ (k−1) n−1 .
Hence, we get
ǫ (k) n L 2 (Ω;H) ≤|η|e −αδT ǫ (k) n−1 L 2 (Ω;H) + |e iλδT − η|e −αδT ǫ (k−1) n−1 L 2 (Ω;H) ≤ |η|e −αδT n ǫ (k) 0 L 2 (Ω;H) + |e iλδT − η|e −αδT n−1 j=0 |η|e −αδT n−1−j ǫ (k−1) j L 2 (Ω;H) =|e iλδT − η|e −αδT n−1 j=1 |η|e −αδT n−1−j ǫ (k−1) j L 2 (Ω;H)(8)
based on the fact ǫ ⊤ and the n-dimensional matrix (see also [13])
M(β) = 0 0 · · · 0 0 1 0 · · · 0 0 β 1 · · · 0 0 β 2 β · · · 0 0 . . . . . . . . . . . . . . . β n−2 β n−3 · · · 1 0 ,
we can rewrite (8) as
ε (k) ≤ |e iλδT − η|e −αδT M(|η|e −αδT )ε (k−1) ≤ |e iλδT − η| k e −αδT k M k (|η|e −αδT )ε (0) .
It is shown in [13] that
M k (β) ∞ ≤ min 1 − β n−1 1 − β k , n − 1 k if β < 1, β n−1−k n − 1 k if β ≥ 1, where n − 1 k = (n − 1)(n − 2) · · · (n − k) k! ≤ n k k! . If α > 1 2 − θ + |λ|, we get e 2αδT > 1 + 2α 2 δT 2 > 1 + (1 − 2θ) + λ 2 δT 2 > 1 + (1 − 2θ)λ 2 δT 2 1 + θ 2 λ 2 δT 2 = |η| 2 ,
which then yields |η|e −αδT < 1. It is apparent that this condition holds for all α > 0 if θ ∈ [ 1 2 , 1]. We conclude under this condition that
ε (k) ∞ ≤ |e iλδT − η|e −αδT 1 − |η|e −αδT k ε (0) ∞ .
The solution of (7) with F being the exact flow converges to the exact solution as k → ∞ if |e iλδT − η|e −αδT + |η|e −αδT < 1.
For some fixed k ∈ N, we get through Taylor expansion that
|e iλδT − η| k e −αδT k ≤ 1 2 (2θ − 1)λ 2 δT 2 + CδT 3 k e −αδT k ,
and in addition
M k (|η|e −αδT ) ∞ ≤ (1 − |η|e −αδT ) −k ≤ 1 − e ( 1 2 −θ) + |λ|−α δT −k ≤ (CδT −1 ) k , where above constant C = C α − 1 2 − θ + |λ| decreases as α − 1 2 − θ + |λ| becomes larger. Eventually, we conclude ε (k) ∞ ≤ (C(2θ − 1)δT + CδT 2 ) k ε (0) ∞ .
If θ ∈ [0, 1 2 ) and α ≤ 1 2 − θ |λ|, we revise above proof as
ε (k) ∞ ≤ |e iλδT − η|e −αδT k (|η|e −αδT ∨ 1) n−1−k n k k! ε (0) ∞ ≤ CδT 2 e −αδT k e √ 2(1−2θ)|λ|−α Tn n k k! ε (0) ∞ ≤ (CT n e −αδT ) k k! e √ 2(1−2θ)|λ|−α Tn δT k ε (0) ∞ ,
which converges as k → ∞ and shows order k only on finite time intervals.
Remark 3. Note that the Fourier components of the noise term
t 0 e −λm(t−s) ∞ i=1 Q 1 2 e i , e m dβ i (s), m ∈ N
are Gaussian processes and their increments can be simulated through random variables in the same distribution. Hence, scheme (6) can also be replaced by
u n = (1 + i(1 − θ)λδT )S θ S(δT )u n−1 + S θ Tn T n−1 S(T n − s)Q 1 2 dW (s),
and the accuracy of parareal algorithm (7) remains the same.
Remark 4. If instead, the implicit Euler scheme is considered as the coarse propagator G, the parareal algorithm (4) with F being the exact propagator turns to be To gain a convergence order, the estimations of S (δT ) −S δT L(Ḣ s ,H) and ǫ (0) n Ḣks will be needed. It then requires a extremely high regularity of both u(t) and u (0) n , and that parameter s in Assumption 2 is large enough, while it is not proper to give such regularity assumptions.
u (k) n =S δT u (k) n−1 −S δT u (k−1)
Application to the nonlinear case
For the nonlinear case (1), parareal exponential θ-scheme is also suitable for longtime simulation with some restriction on δT and α. We take the case θ = 0 as a keystone to show the convergence of the proposed parareal algorithm and its fully discrete scheme with F being a numerical propagator.
Moreover, to ensure that less restriction on δT is needed, some modification of the coarse propagator is required instead of using the exponential θ-scheme. We give the convergence condition for the modified exponential θ-scheme with general θ ∈ [0, 1].
Parareal exponential Euler scheme(θ = 0)
We define the coarse propagator based on the exponential Euler scheme
u n+1 = S(δT )u n + iS(δT )F (u n )δT + S(δT )Q 1 2 δ n+1 W =: G I (T n+1 , T n , u n )(9)
with δ n+1 W := W (T n+1 ) − W (T n ). The initial value of the numerical solution is the same as the initial value of the exact solution, and apparently {u n } N n=1 is {B Tn } N n=1 -adapted. The following result gives the error caused by the parareal algorithms. When the coarse step size δT is not extremely small, the convergence shows order k with respect to δT in a strong sense. Its proof is quite similar to that of Theorem 3.2 and is given in the Appendix. n } 0≤n≤N,k∈N be the solution of (4) with F being the exact propagator and G = G I being the propagator defined in (9). Then for α ≥ 0 and any 1 ≤ n ≤ N, u (k) n converges to u(T n ) as k → ∞. More precisely,
sup 0≤n≤N u(T n ) − u (k) n L 2 (Ω;H) ≤ e −αδT k (CT ) k k! e (L F −α)T ∨ 1 sup 0≤n≤N u(T n ) − u (0) n L 2 (Ω;H)
for any k ∈ N with some positive constant C depending only on L F and α.
If α > 0, there exists some δT * = δT * (α) ∈ (0, 1) satisfying δT −1 * ln δT −1 * = α such that the error above shows order k with respect to δT when δT ∈ [δT * , 1):
sup 0≤n≤N u(T n ) − u (k) n L 2 (Ω;H) ≤ (δT ) k (CT ) k k! e (L F −α)T ∨ 1 sup 0≤n≤N u(T n ) − u (0) n L 2 (Ω;H) .
To obtain an implementable numerical method, the fine propagator F need to be chosen as a proper numerical method instead of the exact propagator. In this case, it is called a fully discrete scheme, which does not mean the discretization in both space and time direction as it usually does. We refer to [5] for the discretization in space of stochastic cubic nonlinear Schrödinger equation, which is also available for the model considered in the present paper.
In particular, we choose F as a propagator obtained by applying the exponential integrator repeatedly on the fine grid with step size δt:
F I (t n,j , t n,j−1 , v) := S(δt)v + iS(δt)F (v)δt + S(δt)Q 1 2 δ n,j W, ∀ v ∈ H
with δ n,j W := W (t n,j ) − W (t n,j−1 ). Hence, we get the following fully discrete scheme:
u (0) n+1 = G I (T n+1 , T n , u (0) n ), u (0) 0 = u 0 , n = 0, · · · , N − 1, u (k−1) n,j = F I (t n,j , t n,j−1 ,û (k−1) n,j−1 ),û (k−1) n,0 = u (k−1) n , j = 1, · · · , J, k ∈ N\{0}, u (k) n+1 = G I (T n+1 , T n , u (k) n ) +û (k−1) n,J − G I (T n+1 , T n , u (k−1) n ), k ∈ N\{0},(10)
where the notation t n,j has been defined in Section 2. The approximate error of the fully discrete scheme (10) comes from two parts: the parareal technique based on a coarse propagator and the approximate error of the fine propagator. In fact, the second part is exactly the approximate error of a specific serial scheme without iteration and depends heavily on the regularity of the noise given in Assumption 2, which will not be dealt with here. The readers are referred to [5,7,8] and references therein for the study on accuracy of serial schemes. We now focus on the error caused by the former part and aim to show that the solution of (10) converges to the solution of the fine propagator F as k goes to infinity. To this end, we denote by v n,j = F I (t n,j , t n,j−1 , v n,j−1 ), n = 0, · · · , N, j = 1, · · · , J the solution of F on fine gird {t n,j } n∈{0,··· ,N },j∈{0,··· ,J} starting from v 0,0 = u 0 , where t n+1,0 = T n+1 = t n,J and v n+1,0 := v n,J . n } 0≤n≤N,k∈N be the solution of (10). Then for any k ∈ N, it holds
sup 0≤n≤N u (k) n − v n,0 L 2 (Ω;H) ≤ e −αδT k (CT ) k k! e (L F −α)T ∨ 1 sup 0≤n≤N u (0) n − v n,0 L 2 (Ω;H) .
In addition, if δT ∈ [δT * , 1) with δT * being defined as in Theorem 4.1, the error shows order k with respect to δT similar to that in Theorem 4.1.
The proof of this theorem follows the same procedure as that of Theorem 4.1 and is given in the Appendix for the readers' convenience.
Parareal exponential θ-scheme over longtime
We now consider the exponential θ-scheme in the nonlinear case
u n =S(δT )u n−1 + i(1 − θ)δT S(δT )F (u n−1 ) + iθδT F (u n ) + S(δT )Q 1 2 δ n W.
The existence and uniqueness of the numerical solution is obtained under Assumptions 1 and 2 through the same procedure as those in [5,7]. So we denote the unique solution of above scheme by u n =G θ (T n , T n−1 , u n−1 ).
The parareal algorithm based onG θ with F denoting the exact propagator can be expressed as
u (k) n =G θ (T n , T n−1 , u (k) n−1 ) + F (T n , T n−1 , u (k−1) n−1 ) −G θ (T n , T n−1 , u (k−1) n−1 ) = : a k + b k−1 − a k−1 ,(11)
where a k =S(δT )u (k)
n−1 + i(1 − θ)δT S(δT )F (u (k) n−1 ) + iθδT F (a k ) + S(δT )Q 1 2 δ n W, b k−1 =S(δT )u (k−1) n−1 + i Tn T n−1 S(T n − s)F u u (k−1) n−1 (s) ds + Tn T n−1 S(T n − s)Q 1 2 dW.
Based on the Taylor expansion of F (a k ) = F (a k−1 ) + F ′ (τ k )(a k − a k−1 ) with τ k being determined by a k and a k−1 , we derive
a k − a k−1 =S(δT ) u (k) n−1 − u (k−1) n−1 + i(1 − θ)δT S(δT ) F (u (k) n−1 ) − F (u (k−1) n−1 ) + iθδT (F (a k ) − F (a k−1 )) =S(δT ) u (k) n−1 − u (k−1) n−1 + i(1 − θ)δT S(δT ) F (u (k) n−1 ) − F (u (k−1) n−1 ) + iθδT F ′ (τ k )(a k − a k−1 )
Hence, scheme (11) can be expressed as Moreover, the accuracy of the convergence is faster than [f (θ)] k , which decreases as θ being larger.
u (k) n =S θ,k S(δT )u (k) n−1 + (1 − S θ,k ) S(δT )u (k−1) n−1 + i(1 − θ)δT S θ,k S(δT ) F (u (k) n−1 ) − F (u (k−1) n−1 ) + i Tn T n−1 S(T n − s)F u u (k−1) n−1 (s) ds + Tn T n−1 S(T n − s)Q 1 2 dW, where S θ,k := (1 − iθδT F ′ (τ k )) −1 .
Proof. Based on the notation ǫ (k) Then the Gronwall inequality yields
n := u (k) n − u(T n ) again, we derive ǫ (k) n =S θ,k S(δT )ǫ (k) n−1 + (1 − S θ,k ) S(δT )ǫ (k−1) n−1 + i(1 − θ)δT S θ,k S(δT ) F (u (k) n−1 ) − F (u (k−1) n−1 ) + i Tn T n−1 S(T n − s) F u u (k−1) n−1 (s) − F u u(T n−1 ) (s) ds.G(s) L 2 (Ω;H) ≤ 1 + L F (s − T n−1 )e L F (s−T n−1 ) e −α(s−T n−1 ) ǫ (k−1) n−1 L 2 (Ω;H) .
Above estimations finally lead to
ǫ (k) n L 2 (Ω;H) ≤ (1 + (1 − θ)L F δT ) e −αδT ǫ1 + L F (s − T n−1 )e L F (s−T n−1 ) ds =L F δT e L F δT + L F δT + 1 − e L F δT ≤ L F δT e L F δT .
Based on the arguments in Theorem 3.2, the error converge to zero as k → ∞ if
f (θ) = γ 1 + γ 2 = 1 + (2 − θ)L F δT + L F δT e L F δT e −αδT < 1.
The convergence rate turns to be
ε (k) ∞ ≤ γ 2 1 − γ 1 k ε (0) ∞ = f (θ) − γ 1 1 − γ 1 k ε (0) ∞ < [f (θ)] k ε (0) ∞ with ε (k) := ǫ (k) 1 L 2 (Ω;H) , · · · , ǫ (k) n L 2 (Ω;H) ⊤ .
In addition, the fact f ′ (θ) = −L F δT e −αδT < 0 indicates that the parareal exponential θ-scheme converges faster when θ is larger, see Figure 3.
Numerical experiments
This section is devoted to investigate the relationship between the convergence error and several parameters, i.e., α, λ and θ, based on which we can find a proper number k as the terminate iteration number for different cases.
We consider the linear equation (5) with initial value u 0 = 0. Throughout the numerical experiments, we use the average of 1000 sample paths as an approximation of the expectation, and choose dimension M = 10 for the spectral Galerkin approximation in spatial direction. We get from Theorem 3.2 that the time-uniform convergence holds for all λ ∈ R and α > 0 if θ ∈ [ 1 2 , 1], which is illustrated in Figure 4 for θ = 0.5, 1 and time interval T = 1, 20. Figure 4 shows the evolution of the mean square error (sup 1≤n≤N E u in Theorem 3.2. Figure 5 also shows evolution of the mean square error with respect to k for θ = 0 and T = 1, 20, 100. It can be find that if the condition α > 1 2 − θ |λ| is not satisfied, e.g., λ = 5, α = 1, the proposed algorithm diverges as time going larger.
In particular, based on numerical experiments above, we now fix k = 3 to verify the convergence order of the proposed scheme for different θ ∈ [0, 1]. Figure 6 considers the convergence order of the proposed parareal algorithm for different λ and α with fine step size δt = 2 −8 . The order turns to be k for θ = 0, 0.4, 0.55, 0.9, but increases to 2k when θ = 1 2 , which coincides with the result in Theorem 3.2.
u (k) n+1 =S(δT )u (k) n + iS(δT )F (u (k) n )δT − iS(δT )F (u (k−1) n )δT + i T n+1 Tn S(T n+1 − s)F (u u (k−1) n (s))ds + T n+1 Tn S(T n+1 − s)Q 1 2 dW (s),(12)
compared with the exact solution u(T n+1 ) = F (T n+1 , T n , u(T n )).
Denoting the error ǫ (k)
n := u(T n ) − u (k) n , we get ǫ (k) n+1 =S(δT )ǫ (k) n − iS(δT )F (u (k) n )δT + iS(δT )F (u (k−1) n )δT + i T n+1 Tn S(T n+1 − s)F (u u(Tn) (s))ds − i T n+1 Tn S(T n+1 − s)F (u u (k−1) n (s))ds =S(δT )ǫ (k) n + iS(δT ) F (u(T n )) − F (u (k) n ) δT − iS(δT ) F (u(T n )) − F (u (k−1) n ) δT + iII L 2 (Ω;H) ≤ L F δT e −αδT ǫ (k) n L 2 (Ω;H)(13)
and
III L 2 (Ω;H) ≤ L F δT e −αδT ǫ (k−1) n L 2 (Ω;H) .(15)
It then suffices to estimate term IV . In fact, denoting G(s) := u u(Tn) (s) − u u (k−1) n (s) and according to the mild solution (2), we obtain for any s ∈ [T n , T n+1 ] that As a result,
IV L 2 (Ω;H) ≤L F T n+1 Tn e −α(T n+1 −s) G(s) L 2 (Ω;H) ds ≤ 1 + L F δT e L F δT L F δT e −αδT ǫ (k−1) n L 2 (Ω;H) .(16)
Based on estimations (13)- (16) and the fact that ǫ (k) 0 = 0 for all k ∈ N, we derive for n = 1, · · · , N − 1 that ⊤ and the N-dimensional matrix (see also [13])
ǫ (k) n+1 L 2 (Ω;H) ≤(1 + L F δT )e −αδT ǫ (k)M(β) = 0 0 · · · 0 0 1 0 · · · 0 0 β 1 · · · 0 0 β 2 β · · · 0 0 . . . . . . . . . . . . . . . β N −2 β N −3 · · · 1 0 ,
we can rewrite (17) as ε (k) ≤ CδT e −αδT M(β)ε (k−1) ≤ (CδT e −αδT ) k M k (β)ε (0) .
It is shown in [13] that
M k (β) ∞ ≤ (N − 1)(N − 2) · · · (N − k) k! (β ∨ 1) N −k−1 ≤ N k k! β N ∨ 1 ,
which leads to the first result in the theorem:
ε (k) ∞ ≤ CδT e −αδT k N k k! β N ∨ 1 ε (0) ∞ ≤ e −αδT k (CT ) k k! e (L F −α)T ∨ 1 ε (0) ∞ .
Note that the function f (δT ) := e −αδT − δT is continuous and takes value in (e −α − 1, 1] for δT ∈ [0, 1). Hence, there exists some δT * = δT * (α) ∈ (0, 1) such that f (δT ) ≤ 0 for any δT ∈ [δT * , 1). In fact, δT * satisfies that δT −1 * ln δT −1 * = α, which decreases when α increases.
Proof of Theorem 4.2
Note that
In the following, we still denote the above error by ǫ (18) and (19). For the first three terms, we derive Ĩ L 2 (Ω;H) ≤e −αδT ǫ To get the estimation of termĨV , we defineG j :=û (k−1)
n−1,j − v n−1,j for any j = 0, · · · , J, then (18) and (19)
Figure 1 :
1Numerical solutions obtained by F and G on [T n−1 , T n ]
Assume that Assumption 2 holds with s = 0. The solution u in (5) possesses a unique invariant measure
Figure 2 :
2Convergence area (grey) vs. α and λ.
Theorem 3. 2 .
2Let Assumptions 1 and 2 hold with s = 0, and {u (k)
an approximation of u(T n ) with order k. More precisely, (Ω;H) ≤ C (2θ − 1)δT k + δT 2k sup n∈N u(T n ) − u (0) n L 2 (Ω;H)
n
− u(T n ), we obtain
any k ∈ N. Denoting the error vector ε (k)
δT = (1 + αδT − iλδT − iδT ∆) −1 andS(δT ) = e (i∆−α+iλ)δT .In this case, the error between u
Theorem 4 . 1 .
41Let Assumptions 1 and 2 hold with s = 0, and {u (k)
Theorem 4 . 2 .
42Let Assumptions 1 and 2 hold with s = 0 and {u (k)
Theorem 4. 3 .
3Let Assumptions 1 and 2 hold with s = 0, and {u(k) n } 0≤n≤N,k∈N be the solution of (11). Then the proposed algorithm (11) converges to the exact solution as k → ∞ over unbounded time domain if f (θ) := 1 + (2 − θ)L F δT + L F δT e L F δT e −αδT < 1.
≤e
(1 + (1 − θ)L F δT ) S θ,k L(H) e −αδT ǫ (k) n−1 L 2 (Ω;H)+ 1 − S θ,k L(H) + (1 − θ)L F δT S θ,k L(H) ) − u u(T n−1 ) (s). For operator 1 − S θ,k , we deduce 1 − S θ,k L(H) = S θ,k L(H) iθδT F ′ (τ k ) L(H) ≤ θL F δT due to the fact S θ,k L(H) < 1.Moreover, according to the mild solution (2), we get for any s ∈ [T n−1 , T n ] that G(s) L 2 (Ω;H) = u u −α(s−r) G(r) L 2 (Ω;H) dr.
+
L F δT 1 + e L F δT e −αδT ǫ
Figure 3 :
3Convergence area (grey) of f (θ) vs. α and θ.
Figure 4 :
4Mean square error (sup 1≤n≤N E u
Figure 5 :
5iteration number k. For T = 1, the iteration number can be chosen as k = 4 for θ = 1 2 and k = 7 when θ = 1, which coincides with the result that the convergence order is 2k instead of k when θ = 1 2 . For larger time T = 20, since the constant C in Theorem 3.2 is negatively correlated with α for θ ∈ [ 1 2 , 1], the proposed algorithm also converges but with different iteration number k.When θ ∈ [0, 1 2 ), the convergence result holds uniformly if α > 1 2 − θ |λ| as stated Mean square error (sup 1≤n≤N E u . iteration number k.
Figure 6 :
6Mean square order with respect to δT = 2 −i , i = 2, · · · ,
n+1 − s) F (u u(Tn) (s)) − F (u u (k−1) n (s)) ds = : I + II − III + IV. Thus, the mean square error reads ǫ (k) n+1 L 2 (Ω;H) ≤ I L 2 (Ω;H) + II L 2 (Ω;H) + III L 2 (Ω;H) + IV L 2 (Ω;H) , where I L 2 (Ω;H) ≤ e −αδT ǫ (k) n L 2 (Ω;H) ,
Ge
(s) L 2 (Ω;H) = u u(Tn) (s) − u u (k−1) n (s) L 2 (Ω;H) −α(s−r) G(r) L 2 (Ω;H) dr. Then the Gronwall inequality yields G(s) L 2 (Ω;H) ≤ 1 + L F (s − T n )e L F (s−Tn) e −α(s−Tn) ǫ (k−1) n L 2 (Ω;H) ≤ 1 + L F δT e L F δT e −α(s−Tn) ǫ (
L F δT e L F δT L F δT e −αδT ǫ (k−1) n L 2 (Ω;H) ≤ 2 + L F δT e L F δT L F δT e notation β := (1 + L F δT )e −αδT > 0. Denoting the error vector ε (k)
n
− v n,0 for convenience, which has the same symbol as in the proof of Theorem 4.1 but with different meaning. Then we can decompose the error into several partsǫ (k) n = G I (T n , T n−1 , u (k) n−1 ) − v n,0 − G I (T n , T lδt)F (v n−1,J−l )δt = :Ĩ +ĨI −Ĩ II +ĨV according to
2 (Ω;H) ≤ (1 + L F δT )e −αδT ǫ (k) n−1 2 L 2 (Ω;H) + CδT e −αδT ǫ (k−1) n−1 2 L 2 (Ω;H) , which leads to the final results based on the procedure in the proof of Theorem 4.1.
AppendixProof of Theorem 4.1Since F is the exact propagator, it has the following expressionTn S(T n+1 − s)F (u u (k−1) n (s))ds
Linear and graphical models. H H Andersen, M Højbjerre, D Sørensen, P S Eriksen, Lecture Notes in Statistics. 101Springer-VerlagFor the multivariate complex normal distributionH. H. Andersen, M. Højbjerre, D. Sørensen, and P. S. Eriksen. Linear and graphical models, volume 101 of Lecture Notes in Statistics. Springer-Verlag, New York, 1995. For the multivariate complex normal distribution.
On the convergence and the stability of the parareal algorithm to solve partial differential equations. G Bal, Domain decomposition methods in science and engineering. BerlinSpringer40G. Bal. On the convergence and the stability of the parareal algorithm to solve partial differential equations. In Domain decomposition methods in science and engineering, volume 40 of Lect. Notes Comput. Sci. Eng., pages 425-432. Springer, Berlin, 2005.
Parallelization in time of (stochastic) ordinary differential equations. G Bal, PreprintG. Bal. Parallelization in time of (stochastic) ordinary differential equations. Preprint, 2006.
High order integrator for sampling the invariant distribution of a class of parabolic stochastic PDEs with additive space-time noise. C-E Bréhier, G Vilmart, SIAM J. Sci. Comput. 384C-E. Bréhier and G. Vilmart. High order integrator for sampling the invariant distri- bution of a class of parabolic stochastic PDEs with additive space-time noise. SIAM J. Sci. Comput., 38(4):A2283-A2306, 2016.
Approximation of invariant measure for damped stochastic nonlinear Schrödinger equation via an ergodic numerical scheme. C Chen, J Hong, X Wang, Potential Anal. 462C. Chen, J. Hong, and X. Wang. Approximation of invariant measure for damped stochastic nonlinear Schrödinger equation via an ergodic numerical scheme. Potential Anal., 46(2):323-367, 2017.
Ergodicity for infinite-dimensional systems. G Da Prato, J Zabczyk, London Mathematical Society Lecture Note Series. 229Cambridge University PressG. Da Prato and J. Zabczyk. Ergodicity for infinite-dimensional systems, volume 229 of London Mathematical Society Lecture Note Series. Cambridge University Press, Cam- bridge, 1996.
A semi-discrete scheme for the stochastic nonlinear Schrödinger equation. A De Bouard, A Debussche, Numer. Math. 964A. De Bouard and A. Debussche. A semi-discrete scheme for the stochastic nonlinear Schrödinger equation. Numer. Math., 96(4):733-770, 2004.
Weak and strong order of convergence of a semidiscrete scheme for the stochastic nonlinear Schrödinger equation. A De Bouard, A Debussche, Appl. Math. Optim. 543A. De Bouard and A. Debussche. Weak and strong order of convergence of a semidis- crete scheme for the stochastic nonlinear Schrödinger equation. Appl. Math. Optim., 54(3):369-399, 2006.
Ergodicity for a weakly damped stochastic non-linear Schrödinger equation. A Debussche, C Odasso, J. Evol. Equ. 53A. Debussche and C. Odasso. Ergodicity for a weakly damped stochastic non-linear Schrödinger equation. J. Evol. Equ., 5(3):317-356, 2005.
Existence of invariant measures for the stochastic damped Schrödinger equation. I Ekren, I Kukavica, M Ziane, Stoch. Partial Differ. Equ. Anal. Comput. 53I. Ekren, I. Kukavica, and M. Ziane. Existence of invariant measures for the stochastic damped Schrödinger equation. Stoch. Partial Differ. Equ. Anal. Comput., 5(3):343-367, 2017.
Parallel in time simulation of multiscale stochastic chemical kinetics. S Engblom, Multiscale Model. Simul. 81S. Engblom. Parallel in time simulation of multiscale stochastic chemical kinetics. Mul- tiscale Model. Simul., 8(1):46-68, 2009.
Analysis for parareal algorithms applied to Hamiltonian differential equations. M J Gander, E Hairer, J. Comput. Appl. Math. 259part AM. J. Gander and E. Hairer. Analysis for parareal algorithms applied to Hamiltonian differential equations. J. Comput. Appl. Math., 259(part A):2-13, 2014.
Analysis of the parareal time-parallel time-integration method. M J Gander, S Vandewalle, SIAM J. Sci. Comput. 292M. J. Gander and S. Vandewalle. Analysis of the parareal time-parallel time-integration method. SIAM J. Sci. Comput., 29(2):556-578, 2007.
High Order Conformal Symplectic and Ergodic Schemes for the Stochastic Langevin Equation via Generating Functions. J Hong, L Sun, X Wang, SIAM J. Numer. Anal. 556J. Hong, L. Sun, and X. Wang. High Order Conformal Symplectic and Ergodic Schemes for the Stochastic Langevin Equation via Generating Functions. SIAM J. Numer. Anal., 55(6):3006-3029, 2017.
Résolution d'EDP par un schéma en temps "pararéel. J-L Lions, Y Maday, G Turinici, C. R. Acad. Sci. Paris Sér. I Math. 3327J-L. Lions, Y. Maday, and G. Turinici. Résolution d'EDP par un schéma en temps "pararéel". C. R. Acad. Sci. Paris Sér. I Math., 332(7):661-668, 2001.
A parareal in time procedure for the control of partial differential equations. Y Maday, G Turinici, C. R. Math. Acad. Sci. 3354Y. Maday and G. Turinici. A parareal in time procedure for the control of partial differential equations. C. R. Math. Acad. Sci. Paris, 335(4):387-392, 2002.
Parallel methods for the numerical integration of ordinary differential equations. W L Miranker, W Liniger, Math. Comp. 21W. L. Miranker and W. Liniger. Parallel methods for the numerical integration of ordinary differential equations. Math. Comp., 21:303-320, 1967.
Second order Runge-Kutta methods for Itô stochastic differential equations. Andreas Rößler, SIAM J. Numer. Anal. 473Andreas Rößler. Second order Runge-Kutta methods for Itô stochastic differential equa- tions. SIAM J. Numer. Anal., 47(3):1713-1738, 2009.
Rø nquist. Stability of the parareal algorithm. G A Staff, E M , Domain decomposition methods in science and engineering. BerlinSpringer40G. A. Staff and E. M. Rø nquist. Stability of the parareal algorithm. In Domain decomposition methods in science and engineering, volume 40 of Lect. Notes Comput. Sci. Eng., pages 449-456. Springer, Berlin, 2005.
| []
|
[
"Globally Optimal Solution to Inverse Kinematics of 7DOF Serial Manipulator Pavel Trutman CIIRC CTU in Prague",
"Globally Optimal Solution to Inverse Kinematics of 7DOF Serial Manipulator Pavel Trutman CIIRC CTU in Prague"
]
| [
"Mohab Safey \nSorbonne Université\nCNRS Didier Henrion LAAS-CNRS\nFEE CTU\nLIP6Prague\n",
"El Din \nSorbonne Université\nCNRS Didier Henrion LAAS-CNRS\nFEE CTU\nLIP6Prague\n",
"Tomas Pajdla \nCIIRC CTU\nPrague\n"
]
| [
"Sorbonne Université\nCNRS Didier Henrion LAAS-CNRS\nFEE CTU\nLIP6Prague",
"Sorbonne Université\nCNRS Didier Henrion LAAS-CNRS\nFEE CTU\nLIP6Prague",
"CIIRC CTU\nPrague"
]
| []
| The Inverse Kinematics (IK) problem is to find robot control parameters to bring it into the desired position under the kinematics and collision constraints. We present a global solution to the optimal IK problem for a general serial 7DOF manipulator with revolute joints and a quadratic polynomial objective function. We show that the kinematic constraints due to rotations can all be generated by seconddegree polynomials. This is important since it significantly simplifies further step where we find the optimal solution by Lasserre relaxations of non-convex polynomial systems. We demonstrate that the second relaxation is sufficient to solve the 7DOF IK problem. Our approach is certifiably globally optimal. We demonstrate the method on the 7DOF KUKA LBR IIWA manipulator and show that we are able to compute the optimal IK or certify in-feasibility in 99 % tested poses. | 10.1109/lra.2022.3163444 | [
"https://arxiv.org/pdf/2007.12550v1.pdf"
]
| 220,769,166 | 2007.12550 | 2542e882a8fb5dcf086ee64e4827df0a2b959ce8 |
Globally Optimal Solution to Inverse Kinematics of 7DOF Serial Manipulator Pavel Trutman CIIRC CTU in Prague
July 27, 2020
Mohab Safey
Sorbonne Université
CNRS Didier Henrion LAAS-CNRS
FEE CTU
LIP6Prague
El Din
Sorbonne Université
CNRS Didier Henrion LAAS-CNRS
FEE CTU
LIP6Prague
Tomas Pajdla
CIIRC CTU
Prague
Globally Optimal Solution to Inverse Kinematics of 7DOF Serial Manipulator Pavel Trutman CIIRC CTU in Prague
July 27, 2020
The Inverse Kinematics (IK) problem is to find robot control parameters to bring it into the desired position under the kinematics and collision constraints. We present a global solution to the optimal IK problem for a general serial 7DOF manipulator with revolute joints and a quadratic polynomial objective function. We show that the kinematic constraints due to rotations can all be generated by seconddegree polynomials. This is important since it significantly simplifies further step where we find the optimal solution by Lasserre relaxations of non-convex polynomial systems. We demonstrate that the second relaxation is sufficient to solve the 7DOF IK problem. Our approach is certifiably globally optimal. We demonstrate the method on the 7DOF KUKA LBR IIWA manipulator and show that we are able to compute the optimal IK or certify in-feasibility in 99 % tested poses.
Introduction
The Inverse Kinematics (IK) problem is one of the most important problems in robotics [26]. The problem is to find robot control parameters to bring it into the desired position under the kinematics and collision constraints [13].
The IK problem has been extensively studied in robotics and control [23,24]. The classical formulation [23] of the problem for 6 degrees of freedom (6DOF) serial manipulators leads to solving systems of polynomial equations [4,27]. This is in general a hard ("EXPSPACE complete" [21]) algebraic computational problem, but practical solving methods have been developed for 6DOF manipulators [23,20,7].
An important generalization of the IK problem aims at finding the optimal control parameters for an under-constrained mechanism, i.e. when the number of controlled joints in a manipulator is larger than six. Then, an algebraic computation problem turns into an optimization problem over an algebraic variety [4] of possible IK solutions. It is particularly convenient to choose a polynomial objective function to arrive at a semi-algebraic optimization problem [17].
Semi-algebraic optimization problems are in general non-convex but can be solved with certified global optimality [18] using the Lasserre hierarchy of convex optimization problems [17]. Computationally, however, semi-algebraic optimization problems are in general extremely hard and were often considered too expensive to be used in practice. In this paper, we show that with "algebraic pre-processing", semi-algebraic optimization methods become practical for solving the IK problem of general 7DOF serial manipulators with a polynomial objective function.
Contribution
Our main contributions are:

(1) We prove that the variety of IK solutions of all generic 7DOF revolute serial manipulators can be generated by second-degree polynomials only (Theorem 1). This considerably reduces the complexity of the semi-algebraic optimization and makes it computationally feasible.

(2) We provide a method for computing a globally optimal solution to the IK problem for a general 7DOF serial manipulator with a polynomial objective function.

(3) We demonstrate that our approach works on a practical 7DOF KUKA LBR IIWA manipulator and allows us to solve 99% of configurations, while the straightforward semi-algebraic optimization fails in approximately 34% of cases.

(4) We employ techniques from algebraic geometry [4] and polynomial optimization [18] to solve the 7DOF IK problem exactly (within the numerical accuracy of computation). Our approach is also able to certify the infeasibility of solving when it happens.
Previous work
The first breakthrough in solving IK problems was the global solution to IK for a general 6DOF serial manipulator, which was given in [25,20]. It leads to solving a polynomial system with 16 solutions. Another important result was the solution to the forward kinematics problem of the Stewart-Gough parallel manipulator platform [19] leading to a polynomial system with 40 solutions. See recent work [5] for the review of local and other approximate techniques for solving IK problems. We next review only the most relevant work.
The most relevant previous work
The closest previous works are related to solving IK for mechanisms, which are under-constrained when considering positions of the final actuator only. The standard approach is to employ additional dynamics, time optimality, and collision constraints.
In [6], a technique for planning dynamic whole-body motion of a humanoid robot has been developed. It solves IK as a part of motion planning by local optimization methods taking into account kinematics, dynamics, and collision model. The planning method requires good initialization to converge, and depending on the quality of the initialization may take from minutes to hours of running time. Our approach provides a globally optimal solution for 7DOF kinematics subchains of more complex mechanisms and could be used to initialize kinematic part of motion planning.
Work [15] presented an IK solution for 7DOF manipulators with zero link offsets, e.g. KUKA LBR IIWA manipulators. The solution uses the special kinematics of this class of manipulators to decompose the general IK problem into two simpler IK problems that can be solved in closed form. The one-dimensional variety of self-motions becomes a circle, and hence the paper proposes to parameterize it by the angle of a point on the circle. Our approach generalizes this solution to a general 7DOF manipulator and shows that it is feasible to solve the IK problem for completely general 7DOF manipulators and to optimize over their self-motion varieties.
Paper [5] presents a global (approximate) solution to IK for 7DOF manipulators. It formulates the IK problem as a mixed-integer convex optimization program. The key idea of the paper is to approximate the non-convex space of rotations by piecewise linear functions on several intervals that partition the original space. This turns the original non-convex problem into an approximate convex problem once the right interval is chosen. Selecting the values of auxiliary binary variables to pick the actual interval of approximation leads to the integer part of the optimization. This is the first practical globally optimal approach, but it is only approximate and as such delivers solutions with errors in units of centimeters and units of degrees. It also fails to detect about 5 % of infeasible poses. Our approach solves the original problem with sub-10^-4 mm and sub-10^-2 degree errors, and we can solve or decide feasibility in all but 1 % of the tested cases. Computation times of [5] and our approach are roughly similar, in units of seconds.
Problem formulation
Here we formulate the IK problem for the 7DOF serial manipulators as a semi-algebraic optimization problem with a polynomial objective function.
The task is to find joint coordinates of the manipulator such that the end-effector reaches the desired pose in space. The IK problem is called under-constrained for manipulators which have more DOF than they require to execute the given task. In our case, to reach a desired pose in space, manipulators require six DOF, and therefore the IK problem for a 7DOF manipulator is under-constrained. The consequence is that the IK problem has an infinite number of solutions for reachable generic end-effector poses of such manipulators. This results in the self-motion property of these manipulators. Self-motion is a motion of a manipulator which is not observed in the task space, i.e. the end-effector pose of the manipulator is static while the links of the manipulator are moving. Therefore, moving the manipulator along a path consisting of joint configurations of different solutions of the IK problem for the same pose in space will result in a self-motion of the manipulator.
The self-motion property provides the manipulator more adaptability since it allows, e.g. to avoid more obstacles in the paths and to avoid singularities, which leads to a more versatile mechanism.
On the other hand, increasing the degrees of freedom dramatically increases the difficulty of the IK problem computation. The IK problem no longer has a finite number of solutions. It can be formulated as a constrained optimization problem that chooses the optimal solution from the set of all feasible solutions.
Scope of the proposed method
In this work, we present a general method for solving the IK problem for 7DOF serial manipulators. We aim at a method that solves the IK problem, and that selects the globally optimal solution w.r.t. the given objective function from the infinite number of all feasible solutions. It is naturally more time consuming to find the global solution than to find any solution, and therefore we do not expect our method to be an on-line method. For on-line methods, such as used in the control units of the manipulators, the local methods are more suitable as they are fast and sufficiently accurate.
We see the application of our presented method in the developing process and the exploration of the capabilities of the manipulators. The off-line method suits these tasks well as we are not typically limited by time. Such a method can be used with an advantage when designing new 7DOF serial manipulators and optimizing their parameters, such as manipulability in regions of interest of the Cartesian space. We see this as a reasonable approach as 7DOF serial manipulators are currently the most common redundant manipulators in the industry.
With regard to the presented scope, we next show how the IK problem for 7DOF serial manipulators can be modeled as a polynomial optimization problem (POP).
Description by forward kinematics
We describe manipulators by the Denavit-Hartenberg (D-H) convention [10] and construct D-H transformation matrices $M_i(\theta_i) \in \mathbb{R}^{4 \times 4}$ from link $i$ to link $i-1$. The D-H matrices are parametrized by the joint angles $\theta_i$. The product of the D-H matrices for $i$ from 1 to 7 gives us the transformation matrix $M$, which represents the transformation from the end-effector coordinate system to the base coordinate system

$$\prod_{i=1}^{7} M_i(\theta_i) = M. \qquad (1)$$

The matrix $M$ consists of the position vector $t \in \mathbb{R}^3$ and the rotation matrix $R \in SO(3)$, which together represent the end-effector pose w.r.t. the base coordinate system. When the joint angles $\theta_i$ are known, evaluating Eqn. (1) gives the end-effector pose in the base coordinate system. Due to kinematics constraints, manipulators come with joint limits, i.e. with restrictions on the joint angles $\theta_i$. Typically, maximal $\theta_i^{High}$ and minimal $\theta_i^{Low}$ values of the joint angles are given as

$$\theta_i^{Low} \leq \theta_i \leq \theta_i^{High}, \quad i = 1, \ldots, 7. \qquad (2)$$
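To make the evaluation of Eqn. (1) concrete, here is a minimal Python sketch of the forward-kinematics map under the standard D-H convention; the parameter tuples (d, a, alpha) must be supplied for the concrete manipulator and are not given here:

import numpy as np

def dh_matrix(theta, d, a, alpha):
    # D-H transformation matrix M_i from link i to link i-1
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(thetas, dh_params):
    # product of the seven D-H matrices, Eqn. (1); M[:3, :3] = R, M[:3, 3] = t
    M = np.eye(4)
    for theta, (d, a, alpha) in zip(thetas, dh_params):
        M = M @ dh_matrix(theta, d, a, alpha)
    return M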
Inverse kinematics problem
The forward kinematics problem is very easy to solve for serial manipulators.
On the other hand, the IK problem is much more difficult since it leads to solving systems of polynomial equations. To solve the IK problem, we set up the desired pose of the end-effector in the form of the matrix $M$ and then solve the matrix Eqn. (1) for the joint coordinates $\theta_i$. For redundant manipulators, there is an infinite number of solutions, and therefore we introduce an objective function to select the solution on which the objective function is minimal. In our case, we prefer solutions that minimize the weighted sum of distances of the joint angles $\theta = [\theta_1, \ldots, \theta_7]^\top$ from their preferred values $\hat\theta = [\hat\theta_1, \ldots, \hat\theta_7]^\top$

$$\min_{\theta \in [-\pi;\pi)^7} \sum_{i=1}^{7} w_i \left( (\theta_i - \hat\theta_i) \bmod \pi \right), \qquad (3)$$
where $w_i \geq 0$ and $\sum_{i=1}^{7} w_i = 1$. This objective function is widely used in the literature, e.g. [22]. In practice, the preferred values $\hat\theta$ can be set to the previous configuration of the manipulator; then the total movement of the actuators to reach the desired pose is minimized.
Next, we add the joint limits to obtain the following optimization problem

$$\begin{aligned} \min_{\theta \in [-\pi;\pi)^7} \quad & \sum_{i=1}^{7} w_i \left( (\theta_i - \hat\theta_i) \bmod \pi \right) \\ \text{s.t.} \quad & \prod_{i=1}^{7} M_i(\theta_i) = M \\ & \theta_i^{Low} \leq \theta_i \leq \theta_i^{High} \quad (i = 1, \ldots, 7) \end{aligned} \qquad (4)$$
To be able to use techniques of polynomial optimization, we need to remove the trigonometric functions contained in Eqn. (1). We do that by introducing new variables $c = [c_1, \ldots, c_7]^\top$ and $s = [s_1, \ldots, s_7]^\top$, which represent the cosines and sines of the joint angles $\theta = [\theta_1, \ldots, \theta_7]^\top$, respectively. Then, we can rewrite Problem (4) in the new variables. In order to preserve the structure, we need to add the trigonometric identities

$$q_i(c, s) = c_i^2 + s_i^2 - 1 = 0, \quad i = 1, \ldots, 7. \qquad (5)$$
Matrix Eqn. (1) contains 12 trigonometric equations and can be directly rewritten as 12 polynomial equations of degrees up to seven in the newly introduced variables. However, we use the following manipulation of the matrix multiplication, which relies on the fact that the inverse of a rotation matrix is its transpose, i.e. it is a linear function of the original matrices,

$$\prod_{i=3}^{5} M_i(\theta_i) - M_2^{-1}(\theta_2) M_1^{-1}(\theta_1) \, M \, M_7^{-1}(\theta_7) M_6^{-1}(\theta_6) = 0. \qquad (6)$$
It reduces the maximal degree of the polynomials in the unknowns $c$ and $s$ to four. We denote these polynomials in Eqn. (6) as

$$p_j(c, s) = 0, \quad j = 1, \ldots, 12. \qquad (7)$$
The next step is to change objective (3) into a polynomial in the new variables $c, s$. We notice that the objective function (3) is minimal on the same solutions as the following objective function

$$\min_{c \in [-1,1]^7,\, s \in [-1,1]^7} \sum_{i=1}^{7} w_i \left[ (c_i - \cos\hat\theta_i)^2 + (s_i - \sin\hat\theta_i)^2 \right] \qquad (8)$$
$$= \min_{c \in [-1,1]^7,\, s \in [-1,1]^7} \sum_{i=1}^{7} 2 w_i (1 - c_i \cos\hat\theta_i - s_i \sin\hat\theta_i), \qquad (9)$$

where the equality follows by expanding the squares and using $c_i^2 + s_i^2 = 1$ and $\cos^2\hat\theta_i + \sin^2\hat\theta_i = 1$.
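As a small illustrative sketch (not from the paper), the surrogate objective (9) and the subsequent atan2 recovery can be written in a few lines of Python, assuming c, s, theta_hat and w are NumPy arrays of length 7:

import numpy as np

def objective_9(c, s, theta_hat, w):
    # Eqn. (9): equivalent to (8) on the feasible set, where c_i^2 + s_i^2 = 1
    return np.sum(2.0 * w * (1.0 - c * np.cos(theta_hat) - s * np.sin(theta_hat)))

def recover_angles(c, s):
    # atan2 takes the signs of both arguments into account
    return np.arctan2(s, c)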
After rewriting the joint limit inequalities into polynomial form, we obtain the following polynomial optimization problem

$$\begin{aligned} \min_{c \in [-1,1]^7,\, s \in [-1,1]^7} \quad & \sum_{i=1}^{7} 2 w_i (1 - c_i \cos\hat\theta_i - s_i \sin\hat\theta_i) \\ \text{s.t.} \quad & p_j(c, s) = 0 \quad (j = 1, \ldots, 12) \\ & q_i(c, s) = 0 \quad (i = 1, \ldots, 7) \\ & -(c_i + 1) \tan\tfrac{\theta_i^{Low}}{2} + s_i \geq 0 \quad (i = 1, \ldots, 7) \\ & (c_i + 1) \tan\tfrac{\theta_i^{High}}{2} - s_i \geq 0 \quad (i = 1, \ldots, 7) \end{aligned} \qquad (10)$$
We show how this polynomial optimization problem can be solved in the following sections.
Since the presented framework is general, any objective function can be chosen as long as it can be expressed as a low degree polynomial in sines and cosines of the joint angles. Different objective functions will be chosen for different tasks, but we demonstrate the presented approach with the objective function (9).
After solving Problem (10), we recover $\theta$ from $c$ and $s$ by the function atan2, which takes the signs of both arguments into account.
Polynomial optimization
Next, we describe the polynomial optimization methods we use to solve Problem (10).
Polynomial optimization problems (POPs) are generally non-convex, but they can be solved with global optimality certificates with the help of convex optimization, as surveyed in [18]. The idea consists of building a hierarchy of convex optimization problems of increasing size whose values converge to the value of the POP. The convergence proof is based on results of real algebraic geometry, namely the representation of positive polynomials, or Positivstellensatz (PSatz for short). One of the most popular PSatz is due to Putinar, and it expresses a polynomial positive on a compact basic semi-algebraic set as a weighted sum of squares (SOS). Finding this SOS representation amounts to solving a semidefinite programming (SDP) problem, a particular convex optimization problem that can be solved efficiently with numerical interior point algorithms. By increasing the degree of the SOS representation, we increase the size of the SDP problem, thereby constructing a hierarchy of SDP problems. Dual to this polynomial positivity problem is the problem of characterizing moments of measures supported on a compact basic semi-algebraic set. This also admits an SDP formulation, called moment relaxations, yielding a dual hierarchy indexed by the so-called relaxation order. The primal-dual hierarchy is called the moment-SOS hierarchy, or also the Lasserre hierarchy, since it was first proposed in [17] in the context of POP with convergence and duality proofs. When the relaxation order increases, the Lasserre hierarchy generates a monotone sequence of superoptimal bounds on the global optimum of a given POP, and results on moment problems can be used to certify exactness of a given bound at a finite relaxation order. In this case, it is not necessary to go further in the hierarchy: the non-convex POP is solved at the price of solving a convex SDP problem of a given size. A Matlab package GloptiPoly [11] has been designed to construct the SDP problems in the hierarchy and solve them with a general-purpose SDP solver.
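The paper performs this step with the Matlab package GloptiPoly; purely as an illustration of the moment side of the hierarchy, the following Python sketch (using cvxpy, which is an assumption of this example and not used in the paper) solves the order-2 moment relaxation of the toy univariate POP min_x x^4 - 3x^2 + 1, for which the relaxation is already exact:

import cvxpy as cp

# Moment matrix M2 = [[y0, y1, y2], [y1, y2, y3], [y2, y3, y4]] with y_k = E[x^k]
X = cp.Variable((3, 3), PSD=True)
constraints = [X[0, 0] == 1,        # y0 = 1: moments of a probability measure
               X[1, 1] == X[0, 2]]  # Hankel structure: both entries represent y2
prob = cp.Problem(cp.Minimize(X[2, 2] - 3 * X[1, 1] + 1), constraints)
prob.solve()
print(prob.value)  # -1.25, the global minimum, attained at x = +-sqrt(3/2)

The dual certificate here is the SOS identity x^4 - 3x^2 + 1 + 1.25 = (x^2 - 1.5)^2, which is why the bound is exact at this order.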
As observed in many applications, the main limitation of the Lasserre hierarchy (in its original form) is its poor scalability as a function of the number of variables and the degree of the POP. This is balanced by the practical observation that, very often, global optimality is certified at the second or third-order relaxation. As our experiments reveal, for the degree 4 POP studied in our paper, the third order relaxation is out of reach of state-of-the-art SDP solvers. It hence becomes critical to investigate reformulation techniques to reduce the degree as much as possible. This is the topic of the next section.
Symbolic reduction of the POP
Here we provide the description of the algebraic geometry technique we use to reduce the degree of our POP problem to obtain a practical solving method. See [4] for algebraic-geometric notation and concepts.
The POP we have at hand is constrained with polynomial equations

$$f_1 = \cdots = f_s = 0 \qquad (11)$$

of degree 4 in $\mathbb{Q}[x_1, \ldots, x_n]$.
Observe that one can replace these polynomial equations in the formulation of the POP by any other set of polynomial equations

$$g_1 = \cdots = g_t = 0 \qquad (12)$$

as long as both systems of equations have the same solution set. Natural candidates for the $g_i$'s are elements of the ideal generated by $(f_1, \ldots, f_s)$, i.e. the set of algebraic combinations

$$I = \left\{ \sum_i q_i f_i \;\middle|\; q_i \in \mathbb{Q}[x_1, \ldots, x_n] \right\}.$$
It is clear that if all the $f_i$'s vanish simultaneously at a point, any polynomial $g$ in this set will vanish at this point. The difficulty is how to understand the structure of this set and find a nice finite representation of it that would allow many algebraic operations (such as deciding whether a given polynomial lies in this set). Solutions have been brought by symbolic computation, aka computer algebra, through the development of algorithms computing Gröbner bases, which were introduced by Buchberger [4]. These are finite sets, depending on a monomial ordering [4], which generate $I$ just as the input equations do, but from which the whole structure of $I$ can be read.
Modern algorithms for computing Gröbner bases (the F4 and F5 algorithms), which improved the state of the art by several orders of magnitude, were introduced next by J.-C. Faugère [9,8]. These latter algorithms bring a linear algebra approach to Gröbner basis computations. In particular, noticing that the intersection of $I$ with the subset of polynomials in $\mathbb{Q}[x_1, \ldots, x_n]$ of degree $\leq d$ is a vector space of finite dimension is the key to reducing Gröbner basis computations to exact linear algebra operations. Hence, Gröbner bases provide bases of such vector spaces when one uses monomial orderings which filter monomials w.r.t. degree first. Finally, going back to our problem, a Gröbner basis computation allows us to discover whether $I$ contains degree 2 polynomials (and is generated by such quadrics).
While this is never the case when starting with generic degree 4 equations, observe that there are many relations between the coefficients of the degree 4 equations of our POP. Hence, we are not facing a generic situation, and we will see further that a Gröbner basis computation indeed provides a set of quadrics that can replace our initial set of constraints. Note also that since Gröbner basis algorithms rely on exact linear algebra, such a property holds for any instance of our POP if it holds for a randomly chosen one (the trace of the computation will always be the same, giving rise to polynomials of degree $\leq 2$).
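The Maple code in Listing 1 below performs this check for the actual kinematics polynomials; as a hedged illustration of the same mechanics on a toy ideal (not the manipulator equations), the degree filtering and ideal-equality test can be reproduced with sympy:

import sympy as sp

x, y = sp.symbols('x y')
q1 = x**2 + y**2 - 1                  # hidden quadric
q2 = x - y                            # hidden linear form
eqs = [q1 + x**3 * q2, q2]            # a degree-4 presentation of the same ideal

# reduced Groebner basis w.r.t. a degree-compatible order (cf. tdeg in Listing 1)
G = sp.groebner(eqs, x, y, order='grevlex')
low = [g for g in G.exprs if sp.Poly(g, x, y).total_degree() <= 2]
G2 = sp.groebner(low, x, y, order='grevlex')
print(G.exprs == G2.exprs)            # True: the low-degree polynomials generate the same ideal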
Solving the IK problem
In order to solve the IK problem, we need to solve the optimization problem (10). First, we apply the implementation GloptiPoly [12] of the method described in Section 4 directly to Problem (10).
Direct application of polynomial solver
Since the original Problem (10) contains polynomials of degrees up to 4, we start with the first relaxation, which is of order two. That means we substitute each monomial of degree up to four in the original 14 variables by a new variable, and therefore the resulting SDP program has 3060 variables.
Solving the first relaxation typically does not yield the solution for this parametrization of the problem, and therefore it is required to go higher in the relaxation hierarchy. Unfortunately, relaxation order three for a polynomial problem in 14 variables leads to an SDP problem in 38 760 variables. Such a huge problem is still often solvable on contemporary computers, but it often takes hours to finish.
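The quoted SDP sizes can be checked directly: an order-d relaxation involves one variable per monomial of degree at most 2d in n variables, i.e. C(n + 2d, 2d) of them. A two-line Python check:

from math import comb

print(comb(14 + 4, 4))  # 3060 variables for n = 14 at relaxation order two
print(comb(14 + 6, 6))  # 38760 variables for n = 14 at relaxation order three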
Symbolic reduction
In view of the previous paragraph, we aim at simplifying the original polynomial problem to be able to obtain solutions even for the relaxation of order two, which takes seconds to solve.
Here is our main result that allows us to do it. We claim that the polynomials $p_j$ and $q_i$ of degrees up to four in Problem (10) can be reduced to polynomials of degree two.

Theorem 1. The ideal generated by the kinematics constraints (7) for a generic serial manipulator with seven revolute joints and for a generic pose $M$, with the addition of the trigonometric identities (5), can be generated by a set of degree two polynomials.
Proof. The proof is computational. We generate generic instances of serial manipulators and generic poses. Then a Gröbner basis $G$ [3] of the polynomials $p_j$ and $q_i$ is computed for each instance of the manipulator and pose. We select the subset $S$ of degree two polynomials from the basis $G$, and by computing a new Gröbner basis $G'$ from $S$, we verify that $S$ generates the same ideal as the original set of polynomials. See the Maple code in Listing 1. The polynomials $p_j$ and $q_i$ are put into the variable eq, and the last command of the code evaluates to True if the bases $G$ and $G'$ are equal, and therefore generate the same ideal.
Listing 1: Maple code for the proof of Theorem 1.

# compute the reduced Groebner basis from pj and qi polynomials (in variables of eq)
G := Basis(eq, tdeg(op(indets(eq)))):
# select degree two polynomials from the basis and compute a new reduced Groebner basis
idxDegTwo := SearchAll(2, map(degree, G)):
eqPrime := G[[idxDegTwo]]:
GPrime := Basis(eqPrime, tdeg(op(indets(eq)))):
# compare the two bases
evalb(G = GPrime);
Solving the reduced polynomial optimization problem
We exploit Theorem 1 in our approach to solve the IK problem. First, we compute a Gröbner basis of the kinematics constraints (7) and (5), from which we select only the polynomials of degree two. Then, we construct Problem (10), but with the polynomial constraints given by the degree two polynomials only. We solve the problem by hierarchies of semidefinite programs. Reducing the degree of the polynomials from four to two allows us to start with the SDP relaxation of order one. The size of this SDP problem, in terms of the number of variables, is now 120. Practical experiments have shown that the first relaxation is not tight enough to yield the solution. On the other hand, the second relaxation gives a solution for almost all poses, see Tab. 1.
Experiments
We demonstrate our method on the IK problem for the KUKA LBR IIWA arm with seven revolute joints. The structure of the manipulator is designed in a special way such that the IK problem is simple to compute. One of the advantages is that for a fixed end-effector pose, the joint angle $\theta_4$ is constant within the self-motion. This allows for a geometrical derivation of a closed-form solution to the IK problem, such as [15], where the authors introduce a new angle parameter $\delta$ that fixes the remaining DOF of the IK problem. Another approach is to solve the problem by local non-linear optimization techniques [2], but such methods do not provide global optima, and the found solution is highly dependent on the initial guess.
Solving the IK problem globally is more computationally challenging. To be able to tackle the problem in a matter of seconds, relaxations of the problem were developed in the past. Dai et al. [5] proposed a mixed-integer convex relaxation of the non-convex rotational constraints. Their method finds all classes of solutions, which are in correspondence with different sets of active binary variables, but it is unable to select the global optimum w.r.t. an objective function.
Polynomial optimization problem for KUKA LBR IIWA
We directly parameterize Problem (10) by the D-H parameters of the KUKA LBR IIWA manipulator. We set the weights equally to $w_i = \frac{1}{7}$, and we set the preferred values $\hat\theta_i$ to zero, which is in the middle of the allowed joint interval. This leads to a POP in 14 variables with polynomials $p_j$ of degrees up to four.
Direct application of polynomial solver
First, we solve Problem (10) directly by the polynomial optimization toolbox GloptiPoly [12] with relaxation order two, using MOSEK [1] as the semidefinite problem solver.
Our dataset consists of 10 000 randomly chosen poses within and outside of the working space of the manipulator, as shown in Fig. 2. For poses marked by red color, GloptiPoly failed to compute the solution or to report infeasibility. That is mainly due to the small relaxation order of the semidefinite relaxation of the POP. There are 32.4 % of such poses, which makes this approach quite impractical. Computation for the next, degree three relaxation is still often feasible on contemporary computers but takes hours to finish.
POP with symbolic reduction
Since the performance of GloptiPoly highly depends on the number of variables of the POP and on the relaxation degree, which grows with the degrees of the polynomials contained in the POP, we first symbolically reduce the polynomials $p_j$ and $q_i$ and then solve the resulting POP by GloptiPoly.
Firstly, we take advantage of the simplified structure of the KUKA LBR IIWA manipulator, i.e. that the joint angle $\theta_4$ is constant within the self-motion, and therefore it plays no role in the objective function (3). That allows us to eliminate the variables $c_4$ and $s_4$ from the equations. Secondly, we reduce the polynomials $p_j$ and $q_i$ symbolically with the use of Theorem 1.
In this way, we have reduced the number of variables from 14 to 12, and we have reduced the degrees of the polynomials to two, which significantly speeds up the SDP solver. Practical experiments showed that GloptiPoly is now able to compute IK for more poses with the same relaxation order two than by the naïve approach used before, see Fig. 3. Now only 1.2 % of poses failed to be solved on the same dataset as in Section 7.2.
To verify the numerical stability of the solver, we solved the forward kinematics problem based on the joint angles found by the IK solver. Then, we computed the translation error and the rotation error of this pose w.r.t. the desired pose. The histograms of the translation and rotation errors can be seen in Fig. 4.
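A sketch of this accuracy check in Python (an assumption of this example, not the authors' code), where both poses are 4x4 homogeneous transformation matrices as produced by the forward kinematics sketch above:

import numpy as np

def pose_errors(M_desired, M_found):
    t_err = np.linalg.norm(M_desired[:3, 3] - M_found[:3, 3])
    R_rel = M_desired[:3, :3].T @ M_found[:3, :3]
    # rotation angle of the relative rotation, clipped for numerical safety
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return t_err, np.degrees(np.arccos(cos_angle))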
For practical applications, the execution time of this method is important. In Fig. 5, we show histograms of the execution time of the on-line phase of GloptiPoly as well as of the symbolic reduction of the initial polynomials to degree two polynomials. We observe that our execution times are comparable to the computation times in [5] when using off-the-shelf POP and Gröbner basis computation tools. We next plan to develop optimized solvers leading to a considerable speedup, as was done for solving polynomial systems in computer vision [16].
Conclusions
We presented a practical method for globally solving the 7DOF IK problem with a polynomial objective function. Our solution is accurate and can solve or decide infeasibility in 99 % of 10 000 cases tested on the KUKA LBR IIWA manipulator. The code is open-sourced at https://github.com/PavelTrutman/Global-7DOF-IKT.
For future work, we consider two interesting directions. First, it is desirable to return a certificate of infeasibility when the POP constraints are incompatible, e.g., by computing an SOS representation of the polynomial $-1$ on the quadratic module corresponding to the feasible set [14]. Secondly, it is interesting to exploit the structure of our POP to prove the exactness of the observed second SDP relaxation in the moment-SOS hierarchy.
Figure 1: (left) 7DOF serial manipulator (KUKA LBR IIWA), (middle) its kinematic model [15], and (right) we can optimally solve its inverse kinematics (green) or find it infeasible (blue) in 99 % of 10 000 tested poses.
Figure 2: Generated poses of the manipulator. Green dots are poses marked as feasible by direct solving with GloptiPoly, blue as infeasible, and for the red ones the computation failed (32.4 %).
Figure 3: Generated poses of the manipulator. Green dots are poses marked as feasible by GloptiPoly after symbolic simplification, blue as infeasible, and for the red ones the computation failed (1.2 %).
Figure 4: Histograms of the translation and rotation errors of the poses computed from forward kinematics on the found solutions w.r.t. the desired poses. There are no zero translation errors and no zero rotation errors.
Figure 5: Histograms of execution time. Left: execution time of the on-line phase of GloptiPoly. Right: execution time of the symbolic reduction and elimination in Maple.
Table 1: Overview of execution times and accuracy of the presented methods.

              Execution time [s]            Median error
              Reduction step  GloptiPoly    Tran. [mm]     Rot. [deg]     % of failed poses
Deg. 4 pol.   -               21.3          3.92 · 10^-4   6.11 · 10^-5   32.4 %
Deg. 2 pol.   2.7             5.6           7.27 · 10^-5   5.59 · 10^-3   1.2 %
[1] MOSEK ApS. The MOSEK optimization toolbox for MATLAB manual. Version 8.0, 2016.
[2] Samuel R. Buss. Introduction to inverse kinematics with Jacobian transpose, pseudoinverse and damped least squares methods. IEEE Journal of Robotics and Automation, 17(1-19):16, 2004.
[3] David Cox, John Little, and Donal O'Shea. Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra. Springer Science & Business Media, 2013.
[4] David A. Cox, John Little, and Donald O'Shea. Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra. Springer, 2015.
[5] Hongkai Dai, Gregory Izatt, and Russ Tedrake. Global inverse kinematics via mixed-integer convex optimization. The International Journal of Robotics Research, 2017.
[6] Hongkai Dai, Andrés Valenzuela, and Russ Tedrake. Whole-body motion planning with centroidal dynamics and full kinematics. In 2014 IEEE-RAS International Conference on Humanoid Robots, pages 295-302. IEEE, 2014.
[7] Rosen Diankov. Automated Construction of Robotic Manipulation Programs. PhD thesis, Pittsburgh, PA, USA, 2010. AAI3448143.
[8] Jean-Charles Faugère. A new efficient algorithm for computing Gröbner bases without reduction to zero (F5). In Proceedings of the 2002 International Symposium on Symbolic and Algebraic Computation, ISSAC '02, pages 75-83, New York, NY, USA, 2002. ACM.
[9] Jean-Charles Faugère. A new efficient algorithm for computing Gröbner bases (F4). Journal of Pure and Applied Algebra, 139(1):61-88, 1999.
[10] Richard S. Hartenberg and Jacques Denavit. A kinematic notation for lower pair mechanisms based on matrices. Journal of Applied Mechanics, 77(2):215-221, 1955.
[11] Didier Henrion and Jean-Bernard Lasserre. GloptiPoly: Global optimization over polynomials with Matlab and SeDuMi. ACM Transactions on Mathematical Software (TOMS), 29(2):165-194, 2003.
[12] Didier Henrion, Jean-Bernard Lasserre, and Johan Löfberg. GloptiPoly 3: moments, optimization and semidefinite programming. Optimization Methods & Software, 24(4-5):761-779, 2009.
[13] Reza Jazar. Theory of Applied Robotics: Kinematics, Dynamics, and Control. Springer, 2007.
[14] Igor Klep and Markus Schweighofer. An exact duality theory for semidefinite programming based on sums of squares. Mathematics of Operations Research, 38(3):569-590, 2013.
[15] I. Kuhlemann, A. Schweikard, P. Jauer, and F. Ernst. Robust inverse kinematics by configuration control for redundant manipulators with seven DoF. In 2016 2nd International Conference on Control, Automation and Robotics (ICCAR), pages 49-55. IEEE, 2016.
[16] Viktor Larsson, Magnus Oskarsson, Kalle Åström, Alge Wallis, Zuzana Kukelova, and Tomás Pajdla. Beyond Gröbner bases: Basis selection for minimal solvers. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 3945-3954, 2018.
[17] Jean Bernard Lasserre. Global optimization with polynomials and the problem of moments. SIAM Journal on Optimization, 11(3):796-817, 2001.
[18] Jean Bernard Lasserre. An Introduction to Polynomial and Semi-Algebraic Optimization, volume 52. Cambridge University Press, 2015.
[19] Daniel Lazard. Generalized Stewart platform: How to compute with rigid motions. 1993.
[20] Dinesh Manocha and John F. Canny. Efficient inverse kinematics for general 6R manipulators. IEEE Trans. Robotics and Automation, 10(5):648-657, 1994.
[21] Ernst W. Mayr and Albert R. Meyer. The complexity of the word problems for commutative semigroups and polynomial ideals. Advances in Mathematics, 46(3):305-329, 1982.
[22] Ugo Pattacini, Francesco Nori, Lorenzo Natale, Giorgio Metta, and Giulio Sandini. An experimental evaluation of a novel minimum-jerk Cartesian controller for humanoid robots. In 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1668-1674. IEEE, 2010.
[23] Manasa Raghavan and Bernard Roth. Inverse kinematics of the general 6R manipulator and related linkages. 1993.
[24] Manasa Raghavan and Bernard Roth. Solving polynomial systems for the kinematic analysis and synthesis of mechanisms and robot manipulators. 1995.
[25] Madhusudan Raghavan and Bernard Roth. Kinematic analysis of the 6R manipulator of general geometry. In The Fifth International Symposium on Robotics Research, pages 263-269, Cambridge, MA, USA, 1990. MIT Press.
[26] J. E. Shigley and J. J. Uicker. Theory of Machines and Mechanisms. McGraw-Hill series in mechanical engineering. McGraw-Hill, 1980.
[27] Charles W. Wampler, Alexander P. Morgan, and Andrew J. Sommese. Numerical continuation methods for solving polynomial systems arising in kinematics. 1990.
| []
|
[
"Integrating Prior Knowledge Into Prognostic Biomarker Dis- covery based on Network Structure",
"Integrating Prior Knowledge Into Prognostic Biomarker Dis- covery based on Network Structure"
]
| [
"Yupeng Cun [email protected] \nAachen International Center for Information Technology (B-IT)\nUniversity of Bonn\nDahlmannstr. 253113Bonn, BonnGermany\n",
"Holger Fröhlich holgerfrö[email protected] \nAachen International Center for Information Technology (B-IT)\nUniversity of Bonn\nDahlmannstr. 253113Bonn, BonnGermany\n"
]
| [
"Aachen International Center for Information Technology (B-IT)\nUniversity of Bonn\nDahlmannstr. 253113Bonn, BonnGermany",
"Aachen International Center for Information Technology (B-IT)\nUniversity of Bonn\nDahlmannstr. 253113Bonn, BonnGermany"
]
| []
| Background: Predictive, stable and interpretable gene signatures are generally seen as an important step towards a better personalized medicine. During the last decade various methods have been proposed for that purpose. However, one important obstacle for making gene signatures a standard tool in clinics is the typically low reproducibility of these signatures combined with the difficulty to achieve a clear biological interpretation. For that purpose, in the last years there has been a growing interest in approaches that try to integrate information from molecular interaction networks. Results: We propose a novel algorithm, called FrSVM, which integrates protein-protein interaction network information into gene selection for prognostic biomarker discovery. Our method is a simple filter based approach, which focuses on central genes with large differences in their expression. Compared to several other competing methods our algorithm reveals a significantly better prediction performance and higher signature stability. Moreover, obtained gene lists are highly enriched with known disease genes and drug targets. We extended our approach further by integrating information on candidate disease genes and targets of disease-associated Transcription Factors (TFs). | null | [
"https://arxiv.org/pdf/1212.3214v2.pdf"
]
| 7,338,406 | 1212.3214 | 64bda00749a1b3ce84d14e8f9d2dca42a63a86b1 |
Integrating Prior Knowledge Into Prognostic Biomarker Discovery based on Network Structure

Yupeng Cun [email protected]
Bonn-Aachen International Center for Information Technology (B-IT)
University of Bonn
Dahlmannstr. 2, 53113 Bonn, Germany

Holger Fröhlich [email protected]
Bonn-Aachen International Center for Information Technology (B-IT)
University of Bonn
Dahlmannstr. 2, 53113 Bonn, Germany
Integrating Prior Knowledge Into Prognostic Biomarker Discovery based on Network Structure

Background: Predictive, stable and interpretable gene signatures are generally seen as an important step towards a better personalized medicine. During the last decade various methods have been proposed for that purpose. However, one important obstacle for making gene signatures a standard tool in clinics is the typically low reproducibility of these signatures combined with the difficulty to achieve a clear biological interpretation. For that purpose, in the last years there has been a growing interest in approaches that try to integrate information from molecular interaction networks. Results: We propose a novel algorithm, called FrSVM, which integrates protein-protein interaction network information into gene selection for prognostic biomarker discovery. Our method is a simple filter based approach, which focuses on central genes with large differences in their expression. Compared to several other competing methods our algorithm reveals a significantly better prediction performance and higher signature stability. Moreover, obtained gene lists are highly enriched with known disease genes and drug targets. We extended our approach further by integrating information on candidate disease genes and targets of disease-associated Transcription Factors (TFs).
adverse effects, in order to avoid ineffective treatment and to reduce drug side-effects and associated costs.
Prognostic or diagnostic biomarker signatures (mostly from gene expression data, but more recently also from other data types, such as miRNA, methylation patterns or copy number alterations) have been derived in numerous publications for various disease entities. One of the best known ones is a 70-gene signature for breast cancer prognosis (mammaprint) by [1], which has gained FDA approval.
A frequently taken approach to obtain a diagnostic or prognostic gene signature is to divide patients into distinct groups and then to construct a classifier that discriminates these patient groups in the training set and is able to predict unseen patients well. Well known algorithms for this purpose are PAM [2], SVM-RFE [3], Random Forests [4] or statistical tests, like SAM [5], in combination with conventional machine learning methods (e.g. Support Vector Machines, k-NN, LDA, logistic regression, ...).
However, retrieved gene signatures are often not reproducible in the sense that inclusion or exclusion of a few patients can lead to quite different sets of selected genes. Moreover, these sets are often difficult to interpret in a biological way [6]. For that reason, more recently a number of approaches have been proposed, which try to integrate knowledge on canonical pathways, GO annotation or protein-protein interactions into gene selection algorithms [7][8][9][10][11][12][13][14][15]. A review on these and other methods can be found in Cun and Fröhlich [16].
The general hope is not only to make biomarker signatures more stable, but also more interpretable in a biological sense. This is seen as a key to making gene signatures a standard tool in clinical diagnosis [17].
In this paper we propose a simple and effective filter based gene selection mechanism, which employs the GeneRank algorithm [18] to rank genes according to their centrality in a protein-protein interaction (PPI) network and their (differential) gene expression. It has been shown previously that deregulated central genes have a strong association with the disease pathology in cancer [19]. Our method uses the span rule [20] as a bound on the leave-one-out error of Support Vector Machines (SVMs) to filter the top ranked genes and construct a classifier. It is thus conceptually and computationally much simpler than our previously proposed RRFE algorithm [15], which used a reweighting strategy of the SVM decision hyperplane. We here demonstrate that our novel method, called FrSVM, not only significantly outperforms RRFE, PAM, network based SVMs [14], pathway activity classification [11] and average pathway expression [10], but that it also yields extremely reproducible gene signatures.
In a second step we investigate to what extent our approach can be improved further by incorporating potential disease genes or targets of transcription factors which were previously found to be enriched in known disease genes. It turns out that the combination with candidate disease genes can further improve the association to biological knowledge.
Methods
Datasets
We retrieved two breast cancer [21,22] and one prostate cancer [23] dataset from the NCBI GEO data repository [24]. Moreover, TCGA [25] was used to obtain an additional dataset for ovarian cancer (normalized level 3 data). All data were measured on Affymetrix HGU133 microarrays (22,283 probesets). Normalization was carried out via FARMS (breast cancer datasets [26]) and quantile normalization (prostate cancer dataset [27]), respectively. As clinical end points we considered metastasis free (breast cancer) and relapse free (ovarian cancer) survival time after initial clinical treatment. For ovarian cancer only tumors with stages IIA - IV and grades G2 and G3 were considered, which after resection revealed at most 10mm residual cancer tissue and responded completely to initial chemotherapy.
Survival time information was dichotomized into two classes according to whether or not patients suffered from a reported relapse / metastasis event within 5 (breast) and 1 year (ovarian), respectively. Patients with a survival time shorter than 5 / 1 year(s) without any reported event were not considered and removed from our datasets. For prostate cancer we employed the class information provided by [23]. A summary of our datasets can be found in Table 1.
Protein-Protein Interaction (PPI) Network
A protein interaction network was compiled from a merger of all non-metabolic KEGG pathways [28] (only gene-gene interactions were considered) together with the Pathway Commons database [29], which was downloaded in tab-delimited format (May 2010). The purpose was to obtain a network of known protein interactions that is as comprehensive as possible. For the Pathway Commons database the SIF interactions INTERACTS WITH and STATE CHANGE were taken into account 1 and any self loops were removed. For retrieval and merger of KEGG pathways, we employed the R-package KEGGgraph [30]. In the resulting network graph (13,840 nodes with 397,454 edges) we had directed as well as undirected edges. For example, a directed edge A → B could indicate that protein A modifies protein B (e.g. via phosphorylation). An undirected edge A − B implies a not further specified type of direct interaction between A and B. Nodes in this network were identified via Entrez gene IDs.
The R package hgu133a.db [31] was employed to map probe sets on the microarray to nodes in the PPI network. This resulted in a protein-protein interaction network matrix of dimension 8876 × 8876. Because several probe sets can map to the same protein in the PPI network, expression values for probesets that mapped to the same gene in the network were averaged. Probesets which could not be mapped to the PPI network were ignored for all network based approaches except for RRFE, which, according to Johannes et al [15], assigns a minimal gene rank to them.
Gene Selection with PPI Information (FrSVM)
The GeneRank algorithm described in Morrison et al [18] is an adaptation of Google's PageRank algorithm. It combines gene expression and protein-protein interaction information to obtain a ranking of genes by solving the linear equation system

$$\left( I - d W D^{-1} \right) r = (1 - d)\, e \qquad (1)$$

where $W$ denotes the adjacency matrix of the PPI network, $D$ is a diagonal matrix consisting of the node degrees and $d$ a damping factor weighting (differential) gene expression $e$ against network information. As suggested in Morrison et al [18] we set $d = 0.85$ here. The general idea of the algorithm is to give preference to proteins which, on one hand, are central in the network (similar to web pages with many links) and, on the other hand, have a high difference in their expression.
As a score for differential gene expression (vector $e$) we employed the absolute value of the t-statistic here. That means we conducted a t-test for each probeset and then used the absolute t-value to assign weights to nodes in the PPI network. This in turn allowed us to apply GeneRank to calculate a rank for each probeset. We then filtered the top ranked 10, 11, ..., 30% of all probesets mapping to our PPI network and each time trained a Support Vector Machine (SVM). We used the span rule [20] to estimate an upper bound on the leave-one-out error in a computationally efficient way. This was done only on the training data and allowed us to select the best cutoff value for our filter. At the same time we could use the span rule also to tune the soft margin parameter C of the SVM in the range $10^{-3}, 10^{-2}, \ldots, 10^{3}$. Our approach is called FrSVM in the following.
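A minimal sketch of the ranking and filtering steps in Python, assuming a sparse symmetric adjacency matrix W of the PPI network and a vector e of absolute t-statistics; note that plain cross-validation stands in for the span rule of the paper here, which is a simplification of the actual method:

import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import spsolve
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def gene_rank(W, e, d=0.85):
    # solve (I - d W D^-1) r = (1 - d) e, Eqn. (1)
    deg = np.asarray(W.sum(axis=1)).ravel()
    deg[deg == 0] = 1.0                        # guard against isolated nodes
    A = identity(W.shape[0]) - d * (W @ diags(1.0 / deg))
    return spsolve(A.tocsc(), (1.0 - d) * e)

def frsvm_select(X, y, ranks, pct, C=1.0):
    # keep the top pct (e.g. 0.10 ... 0.30) of probesets by GeneRank score
    top = np.argsort(ranks)[::-1][: int(pct * len(ranks))]
    score = cross_val_score(SVC(kernel='linear', C=C), X[:, top], y, cv=5).mean()
    return top, score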
Using Candidate Disease Genes
For many diseases several associated genes are known. Based on this information it is possible to prioritize candidate genes via their similarity to known disease genes: Schlicker et al [32] proposed a mechanism to compute similarities of gene products to candidate genes based on their Gene Ontology (GO) information.
The Endeavour software [33] employs a different algorithm to rank candidate genes based on their proximity in annotation space by combining information sources like GO, KEGG, text and others.
We here tested a combination of the proposed FrSVM algorithm with both disease gene prioritization approaches (Endeavour and GO similarity): We selected the top ranked p% of genes according to FrSVM as well as according to Endeavour and GO similarity. The union of both sets was then used for SVM training.
For Endeavour we considered GO, KEGG, text and sequence motifs as information sources. Information on disease related genes was obtained from the DO-light ontology [34]. GO functional similarity was computed via the method proposed in [35] using the web tool FunSimMat [36], which uses the NCBI OMIM database for disease gene annotation. The combination of FrSVM with Endeavour is called FrSVM EN, and the combination with functional GO similarities is called FrSVM FunSim accordingly.
In addition to FrSVM EN and FrSVM FunSim we also considered using the top ranked candidate disease genes only (without any further network information). The corresponding approaches are principally equivalent to FrSVM from the methodological point of view (just another ranking is used) and are called EN and FunSim, respectively.
Using Targets of Enriched Transcription Factors
A major factor influencing gene expression are transcription factors (TFs). We performed a hypergeometric test to look for enriched TF targets among disease associated genes (FDR cutoff 5%). Only probesets mapping to targets of enriched TFs were then taken into account for the subsequent FrSVM training. We refer to this method as FrSVM TF. Again, information on the disease relation of genes was obtained from the DO-light ontology. A TF-target gene network was compiled by computing TF binding affinities to promoter sequences of all human genes according to the TRAP model [37] via the authors' R implementation. Upstream sequences of genes were retrieved from the ENSEMBL database via biomaRt [38]. We assumed that promoter sequences were located in the range 0 - 2 kbp upstream of the transcription start site of a gene. As trustworthy TF targets we considered those for which a Holm corrected affinity p-value smaller than 0.01 was reported. In total we found 6334, 8196 and 5866 probesets (having enriched binding sites of 33, 35 and 24 TFs) for breast, prostate and ovarian cancer.
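The enrichment test can be sketched as follows (the variable names are illustrative, not from the paper); the FDR correction mentioned above would be applied across all tested TFs on top of these p-values:

from scipy.stats import hypergeom

def tf_enrichment_pvalue(n_universe, n_disease, n_targets, n_overlap):
    # P(X >= n_overlap) for X ~ Hypergeom(M=n_universe, n=n_disease, N=n_targets)
    return hypergeom.sf(n_overlap - 1, n_universe, n_disease, n_targets)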
Classification Performance, Signature Stability and Biological Interpretability
In order to assess the prediction performance we performed a 10 times repeated 10-fold cross-validation on each dataset. That means the whole data was randomly split into 10 folds, and each fold was sequentially left out once for testing, while the rest of the data was used for training and optimizing the classifier (including gene selection, hyper-parameter tuning, standardization of expression values for each gene to mean 0 and standard deviation 1, etc.). The whole process was repeated 10 times. It should be noted that standardization of gene expression data was also done on each training set separately, and the corresponding scaling parameters were then applied to the test data. The area under the receiver operating characteristic curve (AUC) was used here to measure the prediction accuracy; the AUC was calculated with the R package ROCR [39].
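A sketch of this evaluation protocol in Python with scikit-learn (the paper itself uses R and ROCR); the per-fold standardization is fitted on the training part only, as described above:

import numpy as np
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def repeated_cv_auc(X, y):
    rskf = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
    aucs = []
    for train, test in rskf.split(X, y):
        scaler = StandardScaler().fit(X[train])   # scaling fitted on training fold only
        clf = SVC(kernel='linear').fit(scaler.transform(X[train]), y[train])
        scores = clf.decision_function(scaler.transform(X[test]))
        aucs.append(roc_auc_score(y[test], scores))
    return np.mean(aucs)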
To assess the stability of the feature selection methods, we computed the selection frequency of each gene within the 10 times repeated 10-fold cross-validation procedure. In an ideal case probesets would be selected consistently, i.e. each chosen probeset would be selected 100 times. The more the probeset selection profile (which is essentially a histogram) resembles this ideal case, the better. In order to capture this behavior numerically we defined a so-called stability index (SI) as

$$SI = \sum_{i \in \{10, 20, \ldots, 100\}} i \cdot f(i)$$

where $f(i)$ denotes the fraction of probesets that have been selected more than $i - 10$ and at most $i$ times. Please note that $\sum_i f(i) = 1$. SI represents a weighted histogram count of selection frequencies. Obviously, the larger SI, the more stable the algorithm is. In the optimal case SI = 100.
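The stability index can be computed directly from the per-probeset selection counts, e.g.:

import numpy as np

def stability_index(selection_counts):
    counts = np.asarray(selection_counts, dtype=float)
    counts = counts[counts > 0]                   # only probesets selected at least once
    si = 0.0
    for i in range(10, 101, 10):
        f_i = np.mean((counts > i - 10) & (counts <= i))
        si += i * f_i
    return si                                      # equals 100 in the ideal case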
We also looked to what extent signatures obtained by training the classifier on the whole dataset could be related to existing biological knowledge. For this purpose we looked for enriched disease related genes and known targets of therapeutic compounds via a hypergeometric test. For disease related genes we made use of the tool "FunDO" [34]. Multiple testing correction was done via Bonferroni's method. The list of therapeutic compounds and their known targets was retrieved via the software MetaCore TM (GeneGo Inc.) and is available in the supplements.
Results and Discussion
FrSVM improves Classification Performance and Signature Stability
We compared the prediction performance of our proposed FrSVM method to PAM [2], average gene expression of KEGG pathways (aveExpPath, [10]), pathway activity classification (PAC, [11]), network-based SVM (networkSVM, [14]) and reweighted recursive feature elimination (RRFE, [15]). For aveExpPath we first conducted a global test [40] to select pathways being significantly associated with the phenotype (FDR cutoff 1%) and then computed the mean expression of genes in these pathways.
Initially we only used PPI information for our FrSVM approach and found a clear improvement of AUC values for FrSVM compared to all other tested methods (Figure 1). This visual impression was confirmed via a two-way ANOVA analysis (using method, dataset as well as their interaction term as factors) with Tukey's post-hoc test, which revealed a significantly increased AUC for FrSVM with p < 1e-6 in all cases.
We further inspected the frequencies by which individual probesets were selected by each of the tested methods (Figure 2) as well as the stability indices (Figure 2b). This analysis showed that FrSVM selected probesets in a very stable manner (only comparable to networkSVM). The fraction of consistently selected probesets ranged from ~40% (ovarian cancer) to ~70% (Schmidt et al. breast cancer dataset). Interestingly, these consistently selected genes typically showed a highly significant differential expression, which was assessed via SAM [5] here. For example, 60% of all consistently selected probesets in the Schmidt et al.
dataset had a q-value < 5%. This illustrates the behavior of FrSVM to focus on genes with large differences in their expression between the two compared groups, which are central in the PPI network.
Clear Association to Biological Knowledge
We trained each of our test methods on the complete datasets to retrieve final signatures, which we subsequently tested for enrichment of disease related genes and known drug targets (Figure 3 and Figure 4). This analysis showed that FrSVM derived signatures can be clearly associated to biological knowledge. The degree of enrichment was only comparable with aveExpPath and RRFE, which have previously been found to yield clearly interpretable signatures [41].
PPI Network Integration Helps Most
We went on to test how much the performance of FrSVM would be affected by integrating candidate disease genes or by restricting the selectable probesets to targets of enriched TFs. Generally, incorporation of network knowledge appeared to yield a better prediction performance than using candidate disease genes only (Figure 5, p < 0.01, two-way ANOVA with Tukey's post-hoc test). No significant benefit of additionally integrating candidate disease genes or targets of enriched TFs into FrSVM could be observed in terms of AUC values or signature stabilities (Figure 5b). However, FrSVM EN showed a clearer association to disease genes than FrSVM (Figure 6). This is not surprising, because the method explicitly integrates the top ranked candidate disease genes.
Conclusion
We proposed a simple and effective filter based algorithm to integrate PPI network information into prognostic or diagnostic biomarker discovery based on a modification of Google's PageRank algorithm. The method favors genes, which on one hand show a large difference in their expression (high absolute t-score) and on the other hand are central in the network. It has been shown previously that such genes are often associated to the disease phenotype [19]. Our approach significantly outperformed several other classification algorithms in terms of prediction performance and signature stability on four datasets. Moreover, it yielded signatures showing a very clear relation to existing biological knowledge. Additional integration of potential disease genes could further enhance this association, but nonetheless did not improve prediction performance or signature stability. PPI network integration appeared to be more effective than integration of candidate disease genes. Using only targets of TFs, which were previously found to be enriched in known disease genes, did not reveal any significant improvement. However, from a computational point of view this approach might still be interesting, because the set of candidate probesets is significantly restricted before any time consuming machine learning algorithm is applied.
In conclusion, our method offers a computationally cheap and effective mechanism to include prior knowledge into gene selection for biomarker discovery. Tables 1 -Overview about employed datasets
FiguresFigure 1 -
1Prediction performance of FrSVM in comparison to other methods in terms of area under ROC curve (AUC).
Figure 2 -
2Fraction of probesets that were selected 1 -10, 11 -20, ..., 99 -100 times within the 10 times repeated 10-fold CV procedure.
Figure 6 -
6Effect of integrating prior information in addition to protein interac-tions into FrSVM: prediction performance.
Figure 7 -Figure 8 -
78Effect of integrating prior information in addition to protein interac-tions into FrSVM: stability index Enrichment of signatures with disease related genes after integration of prior information additional to protein interactions.
http://www.pathwaycommons.org/pc/sif interaction rules.do
AcknowledgementThis work was partially supported by the state of NRW via the B-IT research school. We would like to thank Khalid Abnaof for providing the data of TFs genes.
Gene expression profiling predicts clinical outcome of breast cancer. L J Van 't Veer, H Dai, M J Van De Vijver, Y D He, Aam Hart, M Mao, H L Peterse, K Van Der Kooy, M J Marton, A T Witteveen, G J Schreiber, R M Kerkhoven, C Roberts, P S Linsley, R Bernards, S H Friend, [http:/dx.doi.org/10.1038/415530a]Nature. 4156871van 't Veer LJ, Dai H, van de Vijver MJ, He YD, Hart AAM, Mao M, Peterse HL, van der Kooy K, Marton MJ, Witteveen AT, Schreiber GJ, Kerkhoven RM, Roberts C, Linsley PS, Bernards R, Friend SH: Gene expression profiling predicts clinical outcome of breast cancer. Nature 2002, 415(6871):530-536, [http://dx.doi.org/ 10.1038/415530a].
Diagnosis of multiple cancer types by shrunken centroids of gene expression. R Tibshirani, T Hastie, B Narasimhan, G Chu, [http:/dx.doi.org/10.1073/pnas.082099299]Proc Natl Acad Sci. 9910Tibshirani R, Hastie T, Narasimhan B, Chu G: Diagnosis of multiple cancer types by shrunken centroids of gene expression. Proc Natl Acad Sci U S A 2002, 99(10):6567-6572, [http://dx.doi.org/10.1073/pnas. 082099299].
Gene Selection for Cancer Classification using Support Vector Machines. I Guyon, J Weston, S Barnhill, V Vapnik, [http:/dx.doi.org/10.1023/A:1012487302797]Mach. Learn. 46Guyon I, Weston J, Barnhill S, Vapnik V: Gene Selection for Cancer Classification using Support Vector Machines. Mach. Learn. 2002, 46:389-422, [http://dx.doi.org/10.1023/A:1012487302797].
L Breiman, Random Forests. 45Breiman L: Random Forests. Machine Learning 2001, 45:5 -32.
Significance analysis of microarrays applied to the ionizing radiation response. V G Tusher, R Tibshirani, G Chu, [http:/dx.doi.org/10.1073/pnas.091062498]Proc Natl Acad Sci. 989Tusher VG, Tibshirani R, Chu G: Significance analysis of microarrays applied to the ionizing radiation response. Proc Natl Acad Sci U S A 2001, 98(9):5116-5121, [http://dx.doi.org/10.1073/pnas.091062498].
Statistical aspects of gene signatures and molecular targets. M Gönen, Gastrointest Cancer Res. 32SupplGönen M: Statistical aspects of gene signatures and molecular targets. Gastrointest Cancer Res 2009, 3(2 Suppl):S19-S21.
Towards precise classification of cancers based on robust gene functional expression profiles. Z Guo, T Zhang, X Li, Q Wang, J Xu, H Yu, J Zhu, H Wang, C Wang, E J Topol, Q Wang, S Rao, [http:/dx.doi.org/10.1186/1471-2105-6-58]BMC Bioinformatics. 6Guo Z, Zhang T, Li X, Wang Q, Xu J, Yu H, Zhu J, Wang H, Wang C, Topol EJ, Wang Q, Rao S: Towards pre- cise classification of cancers based on robust gene functional expression profiles. BMC Bioinformatics 2005, 6:58, [http://dx.doi.org/10.1186/1471-2105-6-58].
Network-based classification of breast cancer metastasis. H Y Chuang, E Lee, Y T Liu, D Lee, T Ideker, [http:/dx.doi.org/10.1038/msb4100180]Mol Syst Biol. 3140Chuang HY, Lee E, Liu YT, Lee D, Ideker T: Network-based classification of breast cancer metastasis. Mol Syst Biol 2007, 3:140, [http://dx.doi.org/10.1038/msb4100180].
Classification of microarray data using gene networks. F Rapaport, A Zinovyev, M Dutreix, E Barillot, J P Vert, [http:/dx.doi.org/10.1186/1471-2105-8-35]BMC Bioinformatics. 835Rapaport F, Zinovyev A, Dutreix M, Barillot E, Vert JP: Classification of microarray data using gene networks. BMC Bioinformatics 2007, 8:35, [http://dx.doi.org/10.1186/1471-2105-8-35].
Pathway analysis of gene signatures predicting metastasis of node-negative primary breast cancer. J X Yu, A M Sieuwerts, Y Zhang, Jwm Martens, M Smid, Jgm Klijn, Y Wang, J A Foekens, [http:/dx.doi.org/10.1186/1471-2407-7-182]BMC Cancer. 7182Yu JX, Sieuwerts AM, Zhang Y, Martens JWM, Smid M, Klijn JGM, Wang Y, Foekens JA: Pathway analysis of gene signatures predicting metastasis of node-negative primary breast cancer. BMC Cancer 2007, 7:182, [http://dx.doi.org/10.1186/1471-2407-7-182].
Inferring pathway activity toward precise disease classification. E Lee, H Y Chuang, J W Kim, T Ideker, D Lee, [http:/dx.doi.org/10.1371/journal.pcbi.1000217]PLoS Comput Biol. 4111000217Lee E, Chuang HY, Kim JW, Ideker T, Lee D: Inferring pathway activity toward precise disease classi- fication. PLoS Comput Biol 2008, 4(11):e1000217, [http://dx.doi.org/10.1371/journal.pcbi.1000217].
Dynamic modularity in protein interaction networks predicts breast cancer outcome. I W Taylor, R Linding, D Warde-Farley, Y Liu, C Pesquita, D Faria, S Bull, T Pawson, Q Morris, J L Wrana, [http:/dx.doi.org/10.1038/nbt.1522]Nat Biotechnol. 272Taylor IW, Linding R, Warde-Farley D, Liu Y, Pesquita C, Faria D, Bull S, Pawson T, Morris Q, Wrana JL: Dynamic modularity in protein interaction networks predicts breast cancer outcome. Nat Biotechnol 2009, 27(2):199-204, [http://dx.doi.org/10.1038/nbt.1522].
Incorporating pathway information into boosting estimation of highdimensional risk prediction models. H Binder, M Schumacher, [http:/dx.doi.org/10.1186/1471-2105-10-18]BMC Bioinformatics. 10Binder H, Schumacher M: Incorporating pathway information into boosting estimation of high- dimensional risk prediction models. BMC Bioinformatics 2009, 10:18, [http://dx.doi.org/10.1186/ 1471-2105-10-18].
Network-based support vector machine for classification of microarray samples. Y Zhu, X Shen, W Pan, [http:/dx.doi.org/10.1186/1471-2105-10-S1-S21]BMC Bioinformatics. 10Suppl 1:S21Zhu Y, Shen X, Pan W: Network-based support vector machine for classification of microarray sam- ples. BMC Bioinformatics 2009, 10 Suppl 1:S21, [http://dx.doi.org/10.1186/1471-2105-10-S1-S21].
Integration of pathway knowledge into a reweighted recursive feature elimination approach for risk stratification of cancer patients. M Johannes, J C Brase, H Fröhlich, S Gade, M Gehrmann, M Fälth, H Sültmann, T Beissbarth, [http:/dx.doi.org/10.1093/bioinformatics/btq345]Bioinformatics. 201017Johannes M, Brase JC, Fröhlich H, Gade S, Gehrmann M, Fälth M, Sültmann H, Beissbarth T: Integration of pathway knowledge into a reweighted recursive feature elimination approach for risk stratifica- tion of cancer patients. Bioinformatics 2010, 26(17):2136-2144, [http://dx.doi.org/10.1093/bioinformatics/ btq345].
Review: Biomarker Gene Signature Discovery Integrating Network Knowledge. Y Cun, H Fröhlich, Biology. 2012Cun Y, Fröhlich H: Review: Biomarker Gene Signature Discovery Integrating Network Knowledge. Biology 2012, 1:5 -17.
Integration of gene signatures using biological knowledge. M E Blazadonakis, M E Zervakis, D Kafetzopoulos, [http:/dx.doi.org/10.1016/j.artmed.2011.06.003]Artif Intell Med. 53Blazadonakis ME, Zervakis ME, Kafetzopoulos D: Integration of gene signatures using biological knowl- edge. Artif Intell Med 2011, 53:57-71, [http://dx.doi.org/10.1016/j.artmed.2011.06.003].
GeneRank: using search engine technology for the analysis of microarray experiments. J L Morrison, R Breitling, D J Higham, D R Gilbert, [http:/dx.doi.org/10.1186/1471-2105-6-233]BMC Bioinformatics. 6233Morrison JL, Breitling R, Higham DJ, Gilbert DR: GeneRank: using search engine technology for the analysis of microarray experiments. BMC Bioinformatics 2005, 6:233, [http://dx.doi.org/10.1186/ 1471-2105-6-233].
Integration of breast cancer gene signatures based on graph centrality. J Wang, G Chen, M Li, Y Pan, BMC Systems Biology. 510Suppl 3Wang J, Chen G, Li M, Pan Y: Integration of breast cancer gene signatures based on graph centrality. BMC Systems Biology , 2011, 5(Suppl 3):S10.
Choosing Multiple Parameters for Support Vector Machines. O Chapelle, V Vapnik, O Bousquet, S Mukherjee, [http:/dx.doi.org/10.1023/A:1012450327387]10.1023/A:1012450327387Machine Learning. 46Chapelle O, Vapnik V, Bousquet O, Mukherjee S: Choosing Multiple Parameters for Support Vector Machines. Machine Learning 2002, 46:131-159, [http://dx.doi.org/10.1023/A:1012450327387]. [10.1023/A:1012450327387].
Genetic reclassification of histologic grade delineates new clinical subtypes of breast cancer. A V Ivshina, J George, O Senko, B Mow, T C Putti, J Smeds, T Lindahl, Y Pawitan, P Hall, H Nordgren, Jel Wong, E T Liu, J Bergh, V A Kuznetsov, L D Miller, [http:/dx.doi.org/10.1158/0008-5472.CAN-05-4414]1158/ 0008-5472.CAN-05-4414Cancer Res. 6621Ivshina AV, George J, Senko O, Mow B, Putti TC, Smeds J, Lindahl T, Pawitan Y, Hall P, Nordgren H, Wong JEL, Liu ET, Bergh J, Kuznetsov VA, Miller LD: Genetic reclassification of histologic grade delineates new clinical subtypes of breast cancer. Cancer Res 2006, 66(21):10292-10301, [http://dx.doi.org/10.1158/ 0008-5472.CAN-05-4414].
The humoral immune system has a key prognostic impact in node-negative breast cancer. M Schmidt, D Böhm, C Von Törne, E Steiner, A Puhl, H Pilch, H A Lehr, J G Hengstler, H Kölbl, M Gehrmann, [http:/dx.doi.org/10.1158/0008-5472.CAN-07-5206]Cancer Res. 6813Schmidt M, Böhm D, von Törne C, Steiner E, Puhl A, Pilch H, Lehr HA, Hengstler JG, Kölbl H, Gehrmann M: The humoral immune system has a key prognostic impact in node-negative breast cancer. Cancer Res 2008, 68(13):5405-5413, [http://dx.doi.org/10.1158/0008-5472.CAN-07-5206].
Optimizing molecular signatures for predicting prostate cancer recurrence. Y Sun, S Goodison, Prostate. 10Sun Y, Goodison S: Optimizing molecular signatures for predicting prostate cancer recurrence. Prostate. Jul 1; 2009, 69(10):1119-27.
NCBI GEO: archive for functional genomics data sets-10 years on. T Barrett, D B Troup, S E Wilhite, P Ledoux, C Evangelista, I F Kim, M Tomashevsky, K A Marshall, K H Phillippy, P M Sherman, R N Muertter, M Holko, O Ayanbule, A Yefanov, A Soboleva, [http:/dx.doi.org/10.1093/nar/gkq1184]Database issue):D1005-D1010. 39Barrett T, Troup DB, Wilhite SE, Ledoux P, Evangelista C, Kim IF, Tomashevsky M, Marshall KA, Phillippy KH, Sherman PM, Muertter RN, Holko M, Ayanbule O, Yefanov A, Soboleva A: NCBI GEO: archive for functional genomics data sets-10 years on. Nucleic Acids Res 2011, 39(Database issue):D1005-D1010, [http://dx.doi.org/10.1093/nar/gkq1184].
The Cancer Genome Atlas Research Network: Integrated genomic analyses of ovarian carcinoma. Nature. 474The Cancer Genome Atlas Research Network: Integrated genomic analyses of ovarian carcinoma. Nature 2011, 474:609 -615.
A new summarization method for Affymetrix probe level data. S Hochreiter, D A Clevert, K Obermayer, [http:/dx.doi.org/10.1093/bioinformatics/btl033]Bioinformatics. 228Hochreiter S, Clevert DA, Obermayer K: A new summarization method for Affymetrix probe level data. Bioinformatics 2006, 22(8):943-949, [http://dx.doi.org/10.1093/bioinformatics/btl033].
A comparison of normalization methods for high density oligonucleotide array data based on variance and bias. B Bolstad, Bioinformatics. 19Bolstad B: A comparison of normalization methods for high density oligonucleotide array data based on variance and bias. Bioinformatics 2003, 19:185-193.
KEGG for linking genomes to life and the environment. M Kanehisa, M Araki, S Goto, M Hattori, M Hirakawa, M Itoh, T Katayama, S Kawashima, S Okuda, T Tokimatsu, Y Yamanishi, [http:/dx.doi.org/10.1093/nar/gkm882]Nucleic Acids Res. 36Kanehisa M, Araki M, Goto S, Hattori M, Hirakawa M, Itoh M, Katayama T, Kawashima S, Okuda S, Tokimatsu T, Yamanishi Y: KEGG for linking genomes to life and the environment. Nucleic Acids Res 2008, 36(Database issue):D480-D484, [http://dx.doi.org/10.1093/nar/gkm882].
Pathway Commons, a web resource for biological pathway data. E G Cerami, B E Gross, E Demir, I Rodchenkov, O Babur, N Anwar, N Schultz, G D Bader, C Sander, [http:/dx.doi.org/10.1093/nar/gkq1039]Nucleic Acids Res. 39(Database issueCerami EG, Gross BE, Demir E, Rodchenkov I, Babur O, Anwar N, Schultz N, Bader GD, Sander C: Pathway Commons, a web resource for biological pathway data. Nucleic Acids Res 2011, 39(Database issue):D685- D690, [http://dx.doi.org/10.1093/nar/gkq1039].
KEGGgraph: a graph approach to KEGG PATHWAY in R and bioconductor. J D Zhang, S Wiemann, [http:/dx.doi.org/10.1093/bioinformatics/btp167]Bioinformatics. 2511Zhang JD, Wiemann S: KEGGgraph: a graph approach to KEGG PATHWAY in R and bioconductor. Bioinformatics 2009, 25(11):1470-1471, [http://dx.doi.org/10.1093/bioinformatics/btp167].
Affymetrix Human Genome U133 Set annotation data (chip hgu133a) assembled using data from public repositories. M Carlson, S Falcon, H Pages, N Li, Bioconductor version: Release (2.2.12Carlson M, Falcon S, Pages H, Li N: Affymetrix Human Genome U133 Set annotation data (chip hgu133a) assembled using data from public repositories. Bioconductor version: Release (2.2.12) 2009.
Improving disease gene prioritization using the semantic similarity of Gene Ontology terms. A Schlicker, T Lengauer, M Albrecht, Bioinformatics. 2618Schlicker A, Lengauer T, Albrecht M: Improving disease gene prioritization using the semantic similarity of Gene Ontology terms. Bioinformatics 2010, 26 (18):i561-i567.
Endeavour update: a web resource for gene prioritization in multiple species. Lctrbsysvvpvlbcbdmsay Moreau, Nucl. Acids Res. 362Moreau LCTRBSYSVVPVLBCBDMSAY: Endeavour update: a web resource for gene prioritization in multiple species. Nucl. Acids Res. 2008, 36 (suppl 2):W377-W384.
Annotating the human genome with Disease Ontology. J D Osborne, J Flatow, M Holko, S M Lin, W A Kibbe, L J Zhu, M I Danila, G Feng, R L Chisholm, [http:/dx.doi.org/10.1186/1471-2164-10-S1-S6]1186/1471-2164-10-S1-S6BMC Genomics. 10Suppl 1:S6Osborne JD, Flatow J, Holko M, Lin SM, Kibbe WA, Zhu LJ, Danila MI, Feng G, Chisholm RL: Annotating the human genome with Disease Ontology. BMC Genomics 2009, 10 Suppl 1:S6, [http://dx.doi.org/10. 1186/1471-2164-10-S1-S6].
Gene Ontology term overlap as a measure of gene functional similarity. M Mistry, P Pavlidis, [http:/dx.doi.org/10.1186/1471-2105-9-327]BMC Bioinformatics. 9327Mistry M, Pavlidis P: Gene Ontology term overlap as a measure of gene functional similarity. BMC Bioinformatics 2008, 9:327, [http://dx.doi.org/10.1186/1471-2105-9-327].
FunSimMat: a comprehensive functional similarity database. A Schlicker, M Albrecht, [http:/dx.doi.org/10.1093/nar/gkm806]Nucleic Acids Res. 36Schlicker A, Albrecht M: FunSimMat: a comprehensive functional similarity database. Nucleic Acids Res 2008, 36(Database issue):D434-D439, [http://dx.doi.org/10.1093/nar/gkm806].
Predicting transcription factor affinities to DNA from a biophysical model. H G Roider, A Kanhere, T Manke, M Vingron, [http:/dx.doi.org/10.1093/bioinformatics/btl565]Bioinformatics. 232Roider HG, Kanhere A, Manke T, Vingron M: Predicting transcription factor affinities to DNA from a biophysical model. Bioinformatics 2007, 23(2):134-141, [http://dx.doi.org/10.1093/bioinformatics/btl565].
BioMart Central Portal: an open database network for the biological community. J M Guberman, Ai J Arnaiz, O Baran, J Blake, A Baldock, R Chelala, C Croft, D Cros, A Cutts, R J Génova, A D Forbes, S Fujisawa, T Gadaleta, E Goodstein, D M Gundem, G Haggarty, B Haider, S Hall, M Harris, T Haw, R Hu, S Hubbard, S Hsu, J Iyer, V Jones, P Katayama, T Kinsella, R Kong, L Lawson, D Liang, Y Lopez-Bigas, N Luo, J Lush, M Mason, J Moreews, F Ndegwa, N Oakley, D Perez-Llamas, C Primig, M Rivkin, E Rosanoff, S Shepherd, R Simon, R Skarnes, B Smedley, D Sperling, L Spooner, W Stevenson, P Stone, K Teague, J Wang, J Wang, J Whitty, B Wong, D T Wong-Erasmus, M Yao, L Youens-Clark, K Yung, C Zhang, J Kasprzyk, A , [http:/dx.doi.org/10.1093/database/bar041]Database (Oxford). Guberman JM, Ai J, Arnaiz O, Baran J, Blake A, Baldock R, Chelala C, Croft D, Cros A, Cutts RJ, Génova AD, Forbes S, Fujisawa T, Gadaleta E, Goodstein DM, Gundem G, Haggarty B, Haider S, Hall M, Harris T, Haw R, Hu S, Hubbard S, Hsu J, Iyer V, Jones P, Katayama T, Kinsella R, Kong L, Lawson D, Liang Y, Lopez-Bigas N, Luo J, Lush M, Mason J, Moreews F, Ndegwa N, Oakley D, Perez-Llamas C, Primig M, Rivkin E, Rosanoff S, Shepherd R, Simon R, Skarnes B, Smedley D, Sperling L, Spooner W, Stevenson P, Stone K, Teague J, Wang J, Wang J, Whitty B, Wong DT, Wong-Erasmus M, Yao L, Youens-Clark K, Yung C, Zhang J, Kasprzyk A: BioMart Central Portal: an open database network for the biological community. Database (Oxford) 2011, 2011:bar041, [http://dx.doi.org/10.1093/database/bar041].
T Sing, O Sander, N Beerenwinkel, T Lengauer, [http:/dx.doi.org/10.1093/bioinformatics/bti623]ROCR: visualizing classifier performance in R. Bioinformatics 2005. 21Sing T, Sander O, Beerenwinkel N, Lengauer T: ROCR: visualizing classifier performance in R. Bioinfor- matics 2005, 21(20):3940-3941, [http://dx.doi.org/10.1093/bioinformatics/bti623].
A global test for groups of genes: testing association with a clinical outcome. J Goeman, S Van De Geer, F De Kort, H Van Houwelingen, Bioinformatics. 20Goeman J, van de Geer S, de Kort F, van Houwelingen H: A global test for groups of genes: testing association with a clinical outcome. Bioinformatics 2004, 20:93-99.
Prognostic Gene Signatures for Patient Stratification in Breast Cancer -Accuracy, Stability and Interpretability of Gene Selection Approaches Using Prior Knowledge on Protein-Protein Interactions. Y Cun, H Fröhlich, BMC Bioinformatics. RevisedCun Y, Fröhlich H: Prognostic Gene Signatures for Patient Stratification in Breast Cancer -Accuracy, Stability and Interpretability of Gene Selection Approaches Using Prior Knowledge on Protein- Protein Interactions. BMC Bioinformatics 2012. [Revised].
| []
|
[
"TRANSFINITE ALMOST SQUARE BANACH SPACES",
"TRANSFINITE ALMOST SQUARE BANACH SPACES"
]
| [
"Antonio Avilés ",
"Stefano Ciaci ",
"Johann Langemets ",
"Aleksei Lissitsin ",
"Abraham Rueda Zoca "
]
| []
| []
| It is known that a Banach space contains an isomorphic copy of c0 if, and only if, it can be equivalently renormed to be almost square. We introduce and study transfinite versions of almost square Banach spaces with the purpose to relate them to the containment of isomorphic copies of c0(κ), where κ is some uncountable cardinal. We also provide several examples and stability results of the above properties by taking direct sums, tensor products and ultraproducts. By connecting the above properties with transfinite analogues of the strong diameter two property and octahedral norms, we obtain a solution to an open question from[8].2020 Mathematics Subject Classification. 46B20, 46B03, 46B04, 46B26. | 10.4064/sm220517-4-11 | [
"https://arxiv.org/pdf/2204.13449v1.pdf"
]
| 248,426,753 | 2204.13449 | fe21d4909e051ffec28db47ee21b92ea541ef6c3 |
TRANSFINITE ALMOST SQUARE BANACH SPACES
28 Apr 2022
Antonio Avilés
Stefano Ciaci
Johann Langemets
Aleksei Lissitsin
Abraham Rueda Zoca
TRANSFINITE ALMOST SQUARE BANACH SPACES
28 Apr 2022arXiv:2204.13449v1 [math.FA]
It is known that a Banach space contains an isomorphic copy of c0 if, and only if, it can be equivalently renormed to be almost square. We introduce and study transfinite versions of almost square Banach spaces with the purpose to relate them to the containment of isomorphic copies of c0(κ), where κ is some uncountable cardinal. We also provide several examples and stability results of the above properties by taking direct sums, tensor products and ultraproducts. By connecting the above properties with transfinite analogues of the strong diameter two property and octahedral norms, we obtain a solution to an open question from[8].2020 Mathematics Subject Classification. 46B20, 46B03, 46B04, 46B26.
Introduction
Since the starting point of the study of Banach space theory, a considerable effort has been made in order to determine how the presence of an isomorphic copy of c 0 or ℓ 1 in a Banach space affects its structure. This makes interesting the search of properties which characterise the containment of the abovementioned spaces. In this sense, let us point out two characterisations of the containment of the spaces ℓ 1 and c 0 of geometric nature. In [13,Theorem II.4] it is proved that a Banach space X contains an isomorphic copy of ℓ 1 if, and only if, it admits an equivalent norm ||| · ||| which is octahedral, that is, given a finite-dimensional subspace Y ⊂ X and ε > 0, there is an element x ∈ S (X,||| · |||) such that ||| y + rx ||| (2 − ε)(||| y ||| + |r|) for all y ∈ Y and r ∈ R.
Concerning the containment of c 0 , a more recent characterisation was given in [7,Corollary 2.4]: a Banach space X contains an isomorphic copy of c 0 if, and only if, it admits an equivalent almost square (ASQ, for short) norm ||| · |||, that is, given a finite-dimensional subspace Y ⊂ X and ε > 0, there is an element x ∈ S (X,||| · |||) such that ||| y + rx ||| (1 + ε)(||| y ||| ∨ |r|) for all y ∈ Y and r ∈ R.
At this point, it is natural to look for geometric characterisations of the containment of non-separable versions of ℓ 1 and c 0 . In this spirit, as far as the containment of ℓ 1 (κ) is concerned, transfinite generalisations of octahedral norms were introduced in [8] in different directions and some characterisations of the containment of ℓ 1 (κ) were obtained in [5,8]. To mention the strongest known result, it is proved in [5,Theorem 1.3] that a Banach space X contains an isomorphic copy of ℓ 1 (κ), where κ is an uncountable cardinal, if, and only if, there exists an equivalent norm ||| · ||| such that (X, ||| · |||) fails the (−1)-ball covering property for cardinals < κ ((−1)-BCP <κ , for short), that is satisfying that, given any subspace Y ⊂ X such that dens(Y ) < κ, there exists x ∈ S (X,||| · |||) such that ||| y + rx | || = ||| y ||| + |r| for all y ∈ Y and r ∈ R.
Motivated by the above results, in the present paper we aim to introduce different transfinite versions of ASQ spaces in order to search for a characterisation of those spaces that contain isomorphic copies of c 0 (κ).
Let us now describe in more detail the content of the paper. In Section 2 we define transfinite ASQ spaces and call them ASQ <κ spaces, where κ is some fixed cardinal (see Definition 2.1) and we provide many examples of Banach spaces enjoying these properties. In Section 3 we consider the relations between being ASQ <κ , admitting an equivalent ASQ <κ renorming and containing isomorphic copies of c 0 (κ). One of the highlights of this section is Example 3.1, in which we find, for every uncountable cardinal κ, a Banach space X which is ASQ <κ , but such that X * fails to contain ℓ 1 (ω 1 ) and, in particular, X cannot contain c 0 (ω 1 ). This means that the property of being renormable to be ASQ <κ is not strong enough to characterise the Banach spaces that contains c 0 (κ) isomorphically. Hence, we consider a strengthening of ASQ <κ that we call SQ <κ spaces (see Definition 2.1) and which will contain isomorphic copies of c 0 (κ). Therefore we face the question whether every Banach space containing c 0 (κ), for some uncountable cardinal κ, admits an equivalent SQ <κ renorming. Even though we do not know the answer in complete generality, we prove in Theorem 3.6 that, if dens(X) = κ, then X admits an equivalent SQ <cf(κ) renorming, where cf(κ) stands for the cofinality of κ.
In Section 4 we study various stability results for (A)SQ <κ spaces through different operations of Banach spaces, in order to enlarge the class of the known examples of Banach spaces enjoying these properties. We mainly extend known results for ASQ spaces, but which, in their transfinite version, produce new surprising results. For instance, with respect to absolute sums, in Theorem 4.1 we are able to produce ℓ ∞ -sums of spaces which are ASQ <κ even though none of its components is ASQ <κ , which is a notable difference from the previously known results for finite sums of ASQ spaces. We also analyse (A)SQ <κ properties with respect to taking spaces of operators and tensor products. In Corollary 4.7 we prove that, if X is (A)SQ <κ and Y is non-trivial, then the injective tensor product X ⊗ ε Y is (A)SQ <κ . If we also require Y being (A)SQ <κ , then so is its projective tensor product X ⊗ π Y (see Theorem 4.4). Observe that the latter result is important, because most of the known examples of ASQ spaces come from some kind of ∞-norm, but the projective norm on a tensor product has dramatically different behaviour.
We end the study of stability results with ultraproducts which, as one might expect, provide a lot of examples of SQ <κ spaces. Indeed, in Theorem 4.8 we prove that, if a family {X α : α ∈ A } consists of ASQ <κ spaces and if we consider a countable incomplete ultrafilter U , then the ultraproduct (X α ) U is SQ <κ . Lastly, we show in Example 4.9 that the requirement of the factors being ASQ <κ is not necessary.
In Section 5 we investigate the connection of (A)SQ <κ properties with other properties of Banach space, such as the transfinite versions of octahedrality and diameter two properties. As a consequence of our work, we derive that if X is ASQ <κ (respectively, SQ <κ ), then X has the SD2P <κ (respectively, the 1-ASD2P <κ ) and, consequently, X * is < κ-octahedral (respectively, fails the (−1)-BCP <κ ) (see Proposition 5.3). As a consequence, in Remark 5.4 we provide, for every uncountable cardinal κ, an example of a Banach space X which is < κ-octahedral but which fails to contain an isomorphic copy of ℓ 1 (ω 1 ), giving a negative solution to [8,Question 1].
In Section 6 we introduce a parametric version of (A)SQ <κ spaces (see Definition 6.1) which includes all of the known versions of ASQ spaces. Moreover, we improve the known isomorphic characterization of Banach spaces containing c 0 (see Theorem 6.8).
Terminology. Throughout the paper we only consider real Banach spaces. Given a Banach space X, we denote the closed unit ball and the unit sphere of X by B X and S X , respectively. We also denote by X * the topological dual of X. Given two Banach spaces X and Y we denote by L(X, Y ) the space of linear bounded operators from X into Y . Given a subset A of X we denote by span(A) (respectively, span(A)) the linear span (respectively, the closed linear span) of the set A, whereas dens(X) denotes the density character of a topological space X, i.e. the smallest cardinal which is the cardinality of a dense set in X.
Given a set A, we denote by |A| its cardinality and by P κ (A) and P <κ (A) the sets of all subsets of A of cardinality at most κ and strictly less than κ, respectively, for some cardinal κ. By N 2 we denote the set {n ∈ N : n 2}.
Given an infinite set A and an uncountable cardinal κ, a non-principal ultrafilter U over A is said to be κ-complete if it closed with respect to < κ many intersections. It is immediate to see that a non-principal ultrafilter U is ℵ 1 -incomplete if, and only if, there is a function f : A −→ R so that f (α) > 0 for every α ∈ A and so that lim U f (α) = 0.
Transfinite almost square Banach spaces and first examples
Let us begin with the definition of an (A)SQ Banach space depending on a given cardinal.
Definition 2.1. Let X be a Banach space and κ a cardinal. We say that (a) X is < κ-almost square (ASQ <κ , for short) if, for every set A ∈ P <κ (S X ) and ε > 0, there exists y ∈ S X such that x ± y 1 + ε holds for all x ∈ A, (b) X is < κ-square (SQ <κ , for short) if, for every set A ∈ P <κ (S X ), there exists y ∈ S X such that x ± y 1 holds for all x ∈ A.
As a special case, let us also define ASQ κ and SQ κ spaces by considering A ∈ P κ (S X ), instead.
Notice that, if κ is infinite, then we can equivalently require just that x + y ≤ 1 + ε. Moreover, a standard argument shows that, for the SQ case, it is equivalent to require that x ± y = 1.
It is known that a Banach space X is ASQ if and only if, for every finitedimensional subspace Y ⊂ X and ε > 0, there is x ∈ S X such that y + rx ≤ (1 + ε)( y ∨ |r|) for all y ∈ Y and r ∈ R (see [3, Proposition 2.1]). Even more is true, in fact, a straightforward application of [3, Lemma 2.2] provides a description of the above notions via subspaces of density < κ, whenever κ is uncountable. The version for κ = ℵ 0 is unknown for the authors. (i) X is ASQ <κ (SQ <κ , respectively).
(ii) For every subspace Y ⊂ X with dens(Y ) < κ and ε > 0 (ε 0, respectively), there exists x ∈ S X such that
y + rx (1 + ε)( y ∨ |r|)
holds for every y ∈ Y and every r ∈ R.
Let us devote the rest of the section to provide various examples of transfinite (A)SQ spaces. Example 2.3. Let κ be an uncountable cardinal, and let ℓ c ∞ (κ) be the elements of ℓ ∞ (κ) whose support is at most countable. If X is a subspace of ℓ c ∞ (κ) containing c 0 (κ), then X is SQ <κ . Indeed, fix A ∈ P <κ (S X ). Since supp(f ) ⊆ κ is at most countable for every f ∈ A, we can find λ ∈ κ so that λ / ∈ f ∈A supp(f ). Clearly f + e λ = 1 holds for every f ∈ A.
Example 2.4. Fix a non-principal ultrafilter U in N. For every x ∈ ℓ ∞ , denote by lim(x) the limit of x(n) with respect to U and define the norm
||| x ||| := | lim(x)| ∨ n∈N |x(n) − lim(x)|.
The Banach space X := (ℓ ∞ , ||| · |||) was defined in [7] and proved to be ASQ. In the following we prove that it actually is SQ <ℵ 0 . Nevertheless, X cannot be ASQ ℵ 0 (see Theorem 3.3) since ℓ ∞ = C(βN).
Fix x 1 , . . . , x k ∈ S X and define, for every n ∈ N and m ∈ {1, . . . , k}, the set A n,m := {p ∈ N : |x m (p) − lim(x m )| < 1/n}.
By definition of ultralimit, A n,m ∈ U , therefore A n := k m=1 A n,m ∈ U for every n ∈ N. Since U is non-principal, each A n is infinite, hence, for every n ∈ N, we can find f (n) ∈ A n such that f (n) < f (n + 1). Notice that, since ∅ / ∈ U , either f (2N) or f (2N + 1) is not in U , say for example that f (2N) / ∈ U . Define the formal series
y := n∈2N (1 − 1/n)e f (n) ∈ ℓ ∞ .
Notice that y ∞ = 1 and lim(y) = 0, therefore y ∈ S X . For every i ∈ {1, . . . , k}, notice that
||| x i + y ||| = | lim(x i )| ∨ n∈N |x i (n) − lim(x i ) + y(n)| ≤ ≤ 1 ∨ n∈2N |x i (f (n)) − lim(x i ) + (1 − 1/n)| ∨ n∈N\f (2N) |x i (n) − lim(x i )| ≤ ≤ 1 ∨ (1 − 1/n + 1/n) ∨ 1 = 1. Therefore X is SQ <ℵ 0 .
The previous example yields a non-separable example of a SQ <ℵ 0 space. A natural question at this point is whether or not there is a separable Banach space which is SQ <ℵ 0 . The next example is a modification of [3, Example 6.4] and provides an affirmative answer.
Example 2.5. Given n ∈ N, consider
X n := {f ∈ C(S R n ) : f (s) = −f (−s) for all s ∈ S R n }.
Let us show that X n is SQ n . Fix f 1 , . . . , f n ∈ S Xn . By a corollary of Borsuk-Ulam theorem [4, p. 485, Satz VIII], we can find s 0 ∈ S R n such that f i (s 0 ) = 0 holds for every i ∈ {1, . . . , n}. Pick any function h ∈ S Xn such that h(s 0 ) = 1 and define
g(s) := 1 − n i=1 |f i (s)| h(s).
Notice that g ∈ X n and that g(s 0 ) = 1, therefore g = 1. For every i ∈ {1, . . . , n} and s ∈ S R n we have
|f i (s) ± g(s)| ≤ |f i (s)| + |g(s)| ≤ |f i (s)| + 1 − n j=1 |f j (s)| ≤ 1, as required.
Now define X := c 0 (N, X n ). It is obvious that X is separable. Moreover, since X n is SQ n for every n ∈ N, it is immediate to check that X is SQ <ℵ 0 . In fact, fix x 1 , . . . , x k ∈ S X and without loss of generality assume that x i (k) = 1 for all i ∈ {1, . . . , k}. Find y ∈ S X k such that x i (k) + y ≤ 1 holds for all i ∈ {1, . . . , k}, therefore x i + y · e k ≤ 1.
Spaces of (almost) universal disposition were introduced in [14]. Let us recall their definitions. Given a family of Banach spaces K, a Banach space X is of almost universal disposition for K if for every S ⊂ T in K, any isometric embedding f : S → X extends to an ε-isometric embedding F : T → X. Moreover, a Banach space X is of universal disposition for K if for every S ⊂ T in K, any isometric embedding f : S → X extends to an isometric embedding F : T → X.
Example 2.6. If X is of almost universal disposition (of universal disposition, respectively) for Banach spaces with density character strictly less then κ, then X is ASQ <κ (SQ <κ , respectively). We show the claim for the ASQ case only. Fix a subspace Y ⊂ X with dens(Y ) < κ and ε > 0. The inclusion Y → X extends to an ε-isometrical embedding T : Y ⊕ ∞ R → X. Find r ∈ R such that T (0, r) = 1. We can do so since T is injective by picking any s = 0 and setting r := s/T (0, s). Notice that
|r| = (0, r) ∞ ≤ (1 + ε) T (0, r) = 1 + ε. It is clear that, for every y ∈ S Y , y + T (0, r) ≤ (1 + ε) (y, 0) + (0, r) ∞ = (1 + ε)( y ∨ |r|) ≤ (1 + ε) 2 .
In the following we study C 0 (X) spaces. It is known that, given a locally compact Hausdorff space X, C 0 (X) is ASQ if and only if X is non-compact [6, Proposition 2.1]. In the following we provide a topological description of X so that C 0 (X) is ASQ <κ , whenever κ is uncountable and, as a byproduct, we find out that being SQ <κ and ASQ <κ are equivalent in C 0 (X) spaces, at least under a mild regularity assumption on X.
Theorem 2.7. Let X be a T 4 locally compact space. If κ is an uncountable cardinal, then the following are equivalent:
(i) C 0 (X) is SQ <κ , (ii) C 0 (X) is ASQ <κ , (iii) If K ∈ P <κ (P(X)) is a family consisting of compact sets in X, then K is not dense in X. Proof. (i) =⇒ (ii) is obvious. (ii) =⇒ (iii)
. Fix a family K ∈ P <κ (P(X)) consisting of compact sets in X and fix any K ∈ K . Since K is compact and X is locally compact, we can find a covering U 1 , . . . , U n for K consisting of open relatively compact sets. Define U := n i=1 U i and notice that X \U = ∅, otherwise we would get that X = U , which is compact, and this would contradict the fact that C 0 (X) is ASQ <κ . On the other hand, it is clear that K and X \ U are disjoint closed sets, therefore, since X is normal, there exists a Urysohn's function f K : X → [0, 1] such that f K | K = 1 and
f K | X\U = 0. Notice that the support of f K is contained in U, which is compact, thus f K ∈ S C 0 (X) . Since C 0 (X) is ASQ <κ , there is g ∈ S C 0 (X) satisfying f K ± g ∞ ≤ 3/2 for every K ∈ K . It is clear by construction that |g(x)| ≤ 1/2 holds for every x ∈ K . Therefore the non-empty open set {x ∈ X : |g(x)| > 1/2} is disjoint from K , hence K is not dense in X. (iii) =⇒ (i). Fix A ∈ P <κ (S C 0 (X) )
. For every f ∈ A and n ∈ N, there exists a compact set K f,n ⊂ X such that |f (x)| < 1/n holds for every x ∈ X \ K f,n . Define K := {K f,n : f ∈ A and n ∈ N} and notice that |K | ≤ |A| · ℵ 0 < κ since κ is uncountable. By assumption we can find a non-empty open set U which is disjoint from K and, without loss of generality, we can assume it to be relatively compact. Since X is normal, there exists a Urysohn's function g : X → [0, 1] such that g ∞ = 1 and g| X\U = 0. Notice that the support of g is contained in U , which is compact, thus g ∈ S C 0 (X) . It is clear by construction that f + g ∞ = 1 holds for every f ∈ A.
A closer look to the proof of Theorem 2.7 reveals that (ii) ⇐⇒ (iii) actually holds without the assumption that κ is uncountable. This corresponds to the already recalled result that C 0 (X) is ASQ if and only if X is noncompact.
Let us note that in the case κ = ℵ 1 , the property (iii) of Theorem 2.7 can be stated as "X does not admit a dense sigma-compact set".
Banach spaces which admit transfinite ASQ renorming
Let X be a Banach space. In this section we will analyse the relations between the following properties:
(a) X admits an equivalent ASQ <κ renorming, (b) X admits an equivalent SQ <κ renorming, (c) X contains an isomorphic copy of c 0 (κ). Observe that an easy transfinite induction argument reveals that, if X admits an equivalent SQ <κ renorming, then X contains an isomorphic copy of c 0 (κ). The situation is dramatically different if we replace the SQ norm with an ASQ norm.
Example 3.1. Let κ be an infinite cardinal. Then X := c 0 (N 2 , ℓ n (κ)) is ASQ <κ but X * does not contain any isomorphic copy of ℓ 1 (ω 1 ), in particular, X does not contain any isomorphic copy of c 0 (ω 1 ).
Proof. Let us initially suppose that κ > ℵ 0 . In order to prove that X is ASQ <κ , it is enough to note that, for every set A ∈ P <κ (S ℓn(κ) ), there is y ∈ S ℓn(κ) such that x + y 2 1 n for all x ∈ A (later, in Section 6, we will call this property 2 − 1 n , 2 − 1 n -SQ <κ , see Definition 6.1). Indeed, since every x ∈ A has a countable support, we can find λ ∈ κ such that x(λ) = 0 for all x ∈ A. Take y defined by y(µ) = δ λµ . Now x + y = ( x n + y n ) 1 n = 2 1 n for all x ∈ A, as required. In order to conclude, fix A ∈ P <κ (S X ) and ε > 0. Find n ∈ N such that 2 1 n < 1 + ε and find y ∈ S ℓn(κ) such that x(n) + y ≤ 2 1 n holds for all x ∈ A, then x + y · e n ≤ 1 + ε. If κ = ℵ 0 , the proof is similar, but, for every ε > 0, we can only manage to find y such that x + y (2 + ε) 1 n , which is still enough. Indeed, given a finite set A ⊂ S ℓn , we can find m ∈ N such that |x(m)| < δ for all x ∈ A, where δ > 0 is chosen such that (1 + δ) n < 1 + ε, thus we only need to define y := e m .
In order to prove the second part, observe that X * = ℓ 1 (N 2 , ℓ n * (κ)) where n * is the conjugate index of n. Since X * is a countable sum of reflexive Banach spaces, we derive that X * is weakly compactly generated. Consequently X * cannot contain ℓ 1 (ω 1 ), which even fails weaker properties (e.g. it is immediate to see that ℓ 1 (ω 1 ) fails the Corson property (C) by using [10,Theorem 12.42], which is inherited by closed subspaces).
Finally, to conclude that X does not contain c 0 (ω 1 ), observe that if X contained c 0 (ω 1 ), taking adjoint we would have that ℓ 1 (ω 1 ) would be isomorphic to a quotient of X * . Since ℓ 1 (ω 1 ) has the lifting property, we would conclude that ℓ 1 (ω 1 ) is isomorphic to a subspace of X * , which entails a contradiction with the previous point.
Remark 3.2. The same proof as in Example 3.1 gives that ℓ ∞ (N 2 , ℓ n (κ)) also is ASQ <κ . More in detail, using the terminology from Example 3.1,
since each ℓ n (κ) is 2 − 1 n , 2 − 1 n
-SQ <κ , the claim follows from a direct computation or from Theorem 4.1. This shows that there are dual (actually bidual) Banach spaces which are ASQ <κ , for every infinite cardinal κ. Let us point out that the importance of this result is that, for classical ASQ spaces, it was posed in [3] the question whether there is any dual ASQ space, which was positively solved in [1].
Observe that the situation for SQ spaces is different, because they are clearly incompatible with the existence of extreme points in the unit ball, so no dual Banach space can enjoy any SQ property.
We have seen that the ASQ <κ condition does not imply the containment of large copies of c 0 . However, this behaviour is impossible in spaces of continuous functions, as the following theorem shows. Theorem 3.3. Let K be a compact Hausdorff topological space. If C(K) admits any equivalent ASQ <κ norm, then it contains an isomorphic copy of c 0 (κ).
Proof. Call X := C(K) and assume that
1 M f ||| f ||| M f
holds for every f ∈ X. Find p ∈ N big enough and ε > 0 small enough so that p > 2M 2 (1 + ε) p . If (X, ||| · |||) is ASQ <κ , then, by transfinite induction, we can find {f α : α < κ} ⊆ S (X,||| · |||) so that
||| f + rf α ||| (1 + ε)(||| f ||| ∨ |r|)
holds for every f ∈ span{f β : β < α} and r ∈ R. Note that f α 1 M . Up to consider −f α instead, we can assume that the set
V α := x ∈ K : f α (x) > 1 2M
is non-empty, and it is clearly open. Suppose by contradiction that c 0 (κ) does not embed in C(K), we then derive that K satisfies the κ-chain condition [
(1 + ε) p > p i=1 f i 1 M p i=1 f i 1 M p i=1 f i (x) p 2M 2 ,
which is a contradiction. Now it is time to analyse the following question.
Question. Let κ be an infinite cardinal. If a Banach space contains an isomorphic copy of c 0 (κ), does it admit an equivalent SQ <κ renorming?
We do not know the answer to the above question in complete generality. However, we are able to give some partial positive answers. The first one deals with Banach spaces that contains c 0 . Proposition 3.4. If X is a dual Banach space containing an isomorphic copy of c 0 , then X admits an equivalent SQ <ℵ 0 renorming.
Proof. If X is a dual Banach space containing c 0 , then X contains an isomorphic copy of ℓ ∞ [20, Proposition 2.e.8]. Because of its injectivity, ℓ ∞ is complemented in X (c.f. e.g. [10, Proposition 5.13]). Consequently, there is a subspace Z of X so that X = ℓ ∞ ⊕ Z. Consider the norm ||| · ||| on ℓ ∞ described in Example 2.4. Now, consider on X the equivalent norm so that X = (ℓ ∞ , ||| · |||) ⊕ ∞ Z. Then X, endowed with this norm, is SQ <ℵ 0 , because (ℓ ∞ , ||| · |||) is SQ <ℵ 0 together with Corollary 4.2.
We do not know whether c 0 has an equivalent SQ <ℵ 0 renorming and, when κ > ℵ 0 , we do not know if ℓ ∞ (κ) has an equivalent SQ <κ renorming. The best that we can say in this direction is the following.
Proposition 3.5. Let κ and λ be uncountable cardinals. If there exists a κ-complete ultrafilter U on λ. Then ℓ ∞ (λ) admits an equivalent SQ <κ renorming.
Proof. Define an equivalent norm by the same formula as in Example 2.3
||| x ||| := | lim(x)| ∨ µ∈λ |x(µ) − lim(x)|,
where lim(x) denotes the limit through the ultrafilter U . Similarly as before, if we take X ∈ P <κ (S ℓ∞(λ) ), then the sets
A n,x = {µ ∈ λ : |x(µ) − lim(x)| < 1/n} ∈ U for all n ∈ N and x ∈ X. So A := {µ ∈ λ : x(µ) = lim(x)} = n,x A n,x ∈ U .
If we take µ ∈ A, then it is easily checked that ||| x + e µ ||| = 1 holds for all x ∈ X.
This statement is quite unsatisfactory because λ must be a large cardinal, at least the first measurable cardinal. Using a variation of this idea using multiple ultrafilters instead of just a fixed one, we obtain another general result which says that, when X contains c 0 (κ) and X/c 0 (κ) is somehow small, then X admits an equivalent SQ <κ norm. This is the main result of this section.
Theorem 3.6. Let κ be an infinite cardinal of uncountable cofinality. If a Banach space of density character κ contains an isomorphic copy of c 0 (κ), then it admits an equivalent SQ <cf(κ) renorming.
Proof. Without loss of generality we can suppose that the copy of c 0 (κ) is isometric. Let Y ⊂ X be a subspace together with an isometric isomorphism S : Y −→ c 0 (κ). By Hahn-Banach, there exists an norm-1 operator T :
X −→ ℓ ∞ (κ) such that T | Y = S.
Now we aim to define a suitable one-to-one mapping g : κ −→ B X * such that all g(α)'s vanish on Y . After doing so, we define the equivalent norm
||| x ||| := x X/Y ∨ α<κ |T α (x) − g(α)(x)|
First step: ||| · ||| defines an equivalent norm on X. In fact, it is clear that ||| · ||| is a norm and that ||| · ||| 2 · . Now suppose by contradiction that that we cannot obtain the opposite inequality with respect to any fixed constant, then we can find a sequence (x n ) n∈N ⊂ S X satisfying lim n ||| x n ||| = 0. This implies that lim n x n X/Y = 0, so we can find elements y n ∈ Y such that lim n x n − y n = 0. This, as before, shows that lim n ||| x n − y n ||| = 0.
Since lim n ||| x n ||| = 0, we conclude that lim n ||| y n ||| = 0, but, since y n ∈ Y , then ||| y n ||| = α<κ |T α (y n )| = y n , hence lim n y n = 0. This, together with lim n x n −y n = 0, implies that lim n x n = 0, which is a contradiction.
Second step: If T β (x) = g(β)(x), then x + tS −1 (e β ) = ||| x ||| ∨ |t|.
In fact, call u β := S −1 (e β ) and observe that
||| x ||| = x X/Y ∨ α =β |T α (x) − g(α)(x)|.
Thanks to the fact that g(β)(u β ) = 0, we deduce that |T β (x+tu β )−g(β)(x+ tu β )| = |t|. Notice also that T α (u β ) = g(α)(u β ) = 0. Therefore
||| x + tu β ||| = x X/Y ∨ |T β (x + tu β ) − g(β)(x + tu β )| ∨ α =β |T α (x) − g(α)(x)| = = ||| x ||| ∨ |t|.
Third step: If Z ⊂ X is a subspace with dens(Z) < κ, then, for all α ∈ κ, except for < κ many α's, there exist functionals g α ∈ B X * that vanish on Y and such that T α (x) = g α (x) holds for all x ∈ Z.
In fact, consider the continuous function φ : βκ −→ B Z * given by
φ(U )(z) = lim U T γ (z),
where the limit is taken with respect to γ, and the topology on B Z * is the weak * topology. Notice that, for α < κ, if a non-principal ultrafilter U α in κ satisfies that φ(U α ) = φ(α), then g α := lim Uα T γ satisfies the desired conditions. So it is enough to show that for all but less than κ many α < κ such non-principal ultrafilter exists. Suppose for contradiction, that this is not the case. This means that there is a set A ⊂ κ of cardinality κ such that φ −1 {φ(α)} contains no non-principal ultrafilters for all α ∈ A. This means that φ −1 {φ(α)} consists only of isolated points of βκ, but it is also a compact set by continuity. Hence each set φ −1 {φ(α)} is finite, for α ∈ A. This implies that {φ(α) : α ∈ A} has cardinality κ. Now we prove that each point φ(α) is an isolated point of the range φ(βκ) ⊂ B Z * . This is a contradiction with the fact that B Z * has weight less than κ since Z had density less than κ. So suppose that φ(α) is not isolated in that range. Since κ is dense in βκ, we must have
φ(α) ∈ {φ(β) : β < κ, φ(β) = φ(α)}. Consider F = {B ⊂ κ : ∃W neighborhood of φ(α) : κ ∩ φ −1 (W \ {φ(α)}) ⊂ B}.
This is a filter of subsets of κ that contains all complements of finite sets, and satisfies {φ(α)} = B∈F φ(B). There is a non-principal ultrafilter U on κ that contains F, and we have
φ(U ) = φ(lim U β) = lim U φ(β) = φ(α).
This contradicts that φ −1 {α} contained no non-principal ultrafilter.
Fourth step: Definition of the map g : κ −→ B X * . Let {X γ : γ < cf(κ)} be a family consisting of subspaces of X of density character strictly less than κ, such that every subspace of X with density character strictly less than cf(κ) is contained in some X γ . Using the previous step, for each γ < cf(κ), we can inductively choose α(γ) < κ and g γ ∈ B X * , such that g γ vanishes on Y , T α(γ) | Xγ = g α(γ) | Xγ and α(γ ′ ) = α(γ) for γ ′ < γ. It is now legitimate to define
g(α) := g α(γ) if α = α(γ) for some γ < cf(κ), 0 if α ∈ {α(γ) : γ < cf(κ)}
Finally, we can conclude the proof of the theorem. For this purpose, fix a subspace Z ⊂ X with dens(Z) < cf(κ) and find γ < cf(κ) such that Z ⊂ X γ . By construction, T α(γ) (x) = g α(γ) (x) holds for all x ∈ X γ . Thanks to the second step, we can find an element y ∈ S (X,||| · |||) such that ||| x + ty ||| ≤ ||| x ||| ∨ |t| holds for all x ∈ X γ and t ∈ R.
As an application of the above results we get the following.
Corollary 3.7. ℓ ∞ /c 0 admits a SQ <cf(c) equivalent norm. In particular, a SQ ℵ 0 equivalent norm.
Proof. ℓ ∞ /c 0 contains a subspace isometric to c 0 (c), coming from an almost disjoint family of cardinality c and cf(c) > ℵ 0 .
Stability results
In this section we aim to produce more examples of Banach spaces which are transfinite (A)SQ.
Direct sums.
It is known that the only possible sums which may preserve ASQ are the c 0 and the ℓ ∞ sums [15, Theorem 3.1]. Because of that, we will only focus on these two cases.
Theorem 4.1. Let {X α : α ∈ A } be a family of Banach spaces and κ an infinite cardinal. If, for every ε > 0, there exists β ∈ A such that, for every set A ∈ P κ (S X β ), there is y ∈ S X β satisfying
x + y ≤ 1 + ε for all x ∈ A,
then ℓ ∞ (A , X α ) is ASQ <κ . Moreover, if |A | < cf(κ) and λ |A | < κ holds for every cardinal λ < κ, then the converse holds too.
Proof. Fix A ∈ P <κ (S ℓ∞(A ,Xα) ) and ε > 0. Find β ∈ A as in the assumption. Then there exists y ∈ S X β satisfying
x(β) + y ≤ (1 + ε)( x(β) ∨ 1) = 1 + ε for all x ∈ A.
We conclude that
x + y · e β ∞ = α∈A \{β} x(α) ∨ x(β) + y ≤ 1 ∨ (1 + ε) = 1 + ε holds for every x ∈ A. Hence ℓ ∞ (A , X α ) is ASQ <κ .
For the additional part, suppose that ℓ ∞ (A , X α ) is ASQ <κ and, by contradiction, that there exists ε > 0 such that for every α ∈ A there exists a set A α ∈ P <κ (S Xα ) such that for every y ∈ S Xα there is x ∈ A α satisfying either x + y > 1 + ε or x − y > 1 + ε.
Notice that
α∈A A α ≤ sup α∈A |A α | |A | < κ,
where the last inequality follows from observing that sup α∈A |A α | < κ since |A | < cf(κ) and λ |A | < κ hold by hypothesis for every cardinal λ < κ. Since ℓ ∞ (A , X α ) is ASQ <κ , we can find y ∈ S ℓ∞(Xα) such that
x + y ∞ ≤ 1 + ε/2 for every x ∈ α∈A A α .
Find β ∈ A such that y(β) ≥ 1 − ε/2. Then, for every x ∈ α∈A A α , we get
1 + ε/2 ≥ x + y ∞ ≥ x(β) + y(β) ≥ x(β) + y(β) y(β) − y(β) − y(β) y(β) ≥ x(β) + y(β) y(β) − ε/2.
This implies that x(β)+y(β)/ y(β) ≤ 1+ε, which is a clear contradiction since it holds for every x(β) ∈ A β . Proof. Notice that, with the notation from the statement of Theorem 4.1, |A | = 2 < cf(κ) and that λ |A| = λ · λ = λ < κ holds for every λ < κ. Therefore we can apply both directions of Theorem 4.1, hence the ASQ case follows. For the SQ case, notice that the first half of the proof of Theorem 4.1 holds also if we choose ε = 0. Moreover, the second half of the proof of Theorem 4.1 holds for ε = 0 too if A is finite.
It is know that, given any sequence of Banach spaces {X n : n ∈ N}, the Banach space c 0 (N, X n ) is always ASQ [3, Example 3.1]. A transfinite generalisation of this result is the following. Proof. Fix A ∈ P <|A | (S c 0 (A ,Xα) ). For every x ∈ A, supp(x) is at most countable, therefore, since A is uncountable, x∈A supp(x) ≤ |A| · ℵ 0 < |A |.
Find some β ∈ A \ x∈A supp(x) and notice that x + e β = 1 holds for every x ∈ A.
4.2.
Tensor products. In this subsection we give examples of projective and injective tensor products of Banach spaces which are transfinite (A)SQ. Our motivation for this are the known stability results of regular ASQ by taking tensor products coming from [18,23].
Let us begin with the projective tensor product, of which we briefly recall some notions. Recall that, given two Banach spaces X and Y , the projective tensor product of X and Y , denoted by X ⊗ π Y , is the completion of X ⊗ Y under the norm given by
u := inf n i=1 x i y i : u = n i=1 x i ⊗ y i . It is known that B X ⊗ π Y = co(B X ⊗ B Y ) = co(S X ⊗ S Y ) [24, Proposi- tion 2.2]
. Moreover, given two Banach spaces X and Y , it is well known that (X ⊗ π Y ) * = L(X, Y * ) (see [24] for background on tensor products).
In [23,Theorem 2.1], it was proved that, if X and Y are ASQ, then X ⊗ π Y is ASQ. The proof is based on averaging techniques in Banach spaces. In the following result we will obtain a transfinite version, which will give us more examples of transfinite (A)SQ spaces. Proof. Let us prove the ASQ case only, the other is similar. To this end, let A ∈ P <κ (S X ⊗ π Y ) and ε > 0. Since S X ⊗ π Y = co(S X ⊗ S Y ), for every u ∈ A and n ∈ N, we can find m n ∈ N, λ u,n i ≥ 0, x u,n i ∈ S X and y u,n i ∈ S Y for i ∈ {1, . . . , m n } such that
u − mn i=1 λ u,n i x u,n i ⊗ y u,n i ≤ 1/n and mn i=1 λ u,n i = 1.
Since κ is uncountable, the sets {x u,n i : u ∈ A, n ∈ N and i ∈ {1, . . . , m n }} and {y u,n i : u ∈ A, n ∈ N and i ∈ {1, . . . , m n }} have cardinality < κ, therefore we can find x ∈ S X and y ∈ S Y satisfying x u,n i + x ≤ (1 + ε) 1/2 and y u,n i + y ≤ (1 + ε) 1/2 for all u ∈ A, n ∈ N and i ∈ {1, . . . , m n }. Thanks to [23, Lemma 2.2], x u,n i ⊗y u,n i +x⊗y ≤ 1+ε holds for every u ∈ A, n ∈ N and i ∈ {1, . . . , m n }.
It is clear that
u + x ⊗ y ≤ mn i=1 λ u,n i (x u,n i ⊗ y u,n i + x ⊗ y) + 1/n ≤ mn i=1 λ u,n i x u,n i ⊗ y u,n i + x ⊗ y + 1/n ≤ ≤ (1 + ε) mn i=1
λ u,n i + 1/n = 1 + ε + 1/n holds for every u ∈ A and n ∈ N. In other words, x ⊗ y + u ≤ 1 + ε holds for every u ∈ A, and the proof is finished.
Remark 4.5. In general, we cannot obtain that a projective tensor product X ⊗ π Y is ASQ <κ if we only require one factor to be ASQ <κ . Indeed, if we take X = c 0 (κ) and Y = ℓ p for 2 < p < ∞, we get, from [19,Theorem 3.8], that X ⊗ π Y fails to be ASQ (it even contains a convex combination of slices of diameter smaller than 2). Now we turn our attention to when a space of operators can be transfinite ASQ, a study that will cover the injective tensor product too. Let X and Y be Banach spaces. Given an infinite cardinal κ, denote by L κ (Y, X) := {T ∈ L(Y, X) : dens(T (Y )) ≤ κ}.
Using the ideas in [18, Theorem 2.6], we get the following.
Theorem 4.6. Let λ < κ be infinite cardinals, X and Y be non-trivial Banach spaces. Suppose that X is (A)SQ <κ .
(a) If H ⊂ L λ (Y, X) is a closed subspace such that Y * ⊗ X ⊂ H, then H is (A)SQ <κ . (b) If H ⊂ L λ (Y * , X) is a closed subspace such that Y ⊗ X ⊂ H, then H is (A)SQ <κ .
Proof. Let us prove only (a) in the ASQ case. Fix T ∈ P <κ (S H ) and ε > 0. Consider the subspace
T (Y ) := T ∈T T (Y )
and notice that dens(T (Y )) ≤ |T | · λ < κ. By assumption, there exists x ∈ S X satisfying z + rx ≤ (1 + ε)( z ∨ |r|) for every z ∈ T (Y ) and r ∈ R.
Fix any y * ∈ S Y * , then the element y * ⊗ x ∈ S H satisfies, for every T ∈ T and y ∈ S Y , (T + y * ⊗ x)(y) = T (y) + y * (y) · x ≤ (1 + ε)( T (y) ∨ |y * (y)|) ≤ 1 + ε.
If we pass to the sup on the left hand-side, we conclude that T + y * ⊗ x ≤ 1 + ε holds for every T ∈ T , as desired.
Recall that, given two Banach spaces X and Y , the injective tensor product of X and Y , denoted by X ⊗ ε Y , is the closure of the space of finite-rank operators from Y * to X. Taking this into account, the following corollary is clear from Theorem 4.6.
Corollary 4.7. Let κ be an uncountable cardinal, X and Y non-trivial Banach spaces. If X is (A)SQ <κ , then X ⊗ ε Y is (A)SQ <κ .
4.3.
Ultrapowers. In this subsection we will provide examples of ultrapowers of Banach spaces which are ASQ. Our motivation comes from [17] where, in our language, it is proved that the ultrapower of a Banach space X is SQ <ℵ 0 if, and only if, X is ASQ.
Let us start with a bit of notation. Given a family of Banach spaces {X α : α ∈ A }, for an infinite set A , we denote
ℓ ∞ (A , X α ) := f : A −→ α∈A X α : f (α) ∈ X α ∀α and α∈A f (α) < ∞ .
Given a non-principal ultrafilter U over A , consider c 0,
U (A , X α ) := {f ∈ ℓ ∞ (A , X α ) : lim U f (α) = 0}. The ultrapower of {X α : α ∈ A } with respect to U is the Banach space (X α ) U := ℓ ∞ (A , X α )/c 0,U (A , X α ).
We will naturally identify a bounded function f : A −→ α∈A X α with the element (f (α)) α∈A . In this way, we denote by (x α ) α,U or simply by (x α ) U , if no confusion is possible, the coset in (X α ) U given by (x α ) α∈A + c 0,U (A , (X α )).
From the definition of the quotient norm, it is not difficult to prove that (x α ) U = lim U x α holds for every (x α ) U ∈ (X α ) U . Now we are ready to prove the following theorem.
Theorem 4.8. Let A be an infinite set and {X α : α ∈ A } a family of ASQ <κ spaces. If U is a ℵ 1 -incomplete non-principal ultrafilter over A , then (X α ) U is SQ <κ .
Proof. Since U is ℵ1-incomplete, we can find a function f : A → ℝ so that f(α) > 0 holds for every α ∈ A and lim_U f(α) = 0.

Let us now prove that (X_α)_U is SQ<κ. To this end, fix a set A ∈ P<κ(S_{(X_α)_U}). Without loss of generality we can assume that ‖x(α)‖ = 1 holds for every α ∈ A and every x ∈ A. Now, since X_α is ASQ<κ, we can find, for every α ∈ A, an element y_α ∈ S_{X_α} so that

‖x(α) + y_α‖ ≤ 1 + f(α)

holds for every x ∈ A. Now consider (y_α)_U ∈ S_{(X_α)_U}, and let us prove that it satisfies the desired inequality. To this end, fix x ∈ A and notice that

‖x + (y_α)_U‖ = lim_U ‖x(α) + y_α‖ ≤ lim_U (1 + f(α)) = 1,

as requested.
It is natural, in view of the behaviour of ASQ under ultrapowers, to ask whether (X_α)_U being ASQ implies that X_α is ASQ for some α ∈ A. The following example shows that the answer is no.

Example 4.9. Let κ be an infinite cardinal and set X_n := ℓ_n(κ), where n ∈ ℕ, n ≥ 2. Let U be a non-principal ultrafilter over ℕ and consider X := (X_n)_U. Let us prove that X is SQ<κ in spite of X_n being reflexive for every n ≥ 2. Fix A ∈ P<κ(S_X) and assume that x(n) has norm one for each x ∈ A and n ≥ 2. By the same argument as in Theorem 3.1 we get elements y_n ∈ S_{X_n} so that ‖x(n) + y_n‖ ≤ 2^{1/n} holds for every x ∈ A and n ≥ 2. It is not difficult to prove, as before, that

‖x + (y_n)_U‖ = lim_U ‖x(n) + y_n‖ ≤ lim_U 2^{1/n} = 1

holds for every x ∈ A, and the proof is finished.
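For the reader's convenience, here is the computation behind the 2^{1/n} bound (our own remark, not part of the paper): when y_n can be chosen disjointly supported from every x(n), x ∈ A, the ℓ_n-norm adds the supports up:

```latex
% Disjointly supported norm-one vectors in \ell_n(\kappa):
\[
  \|x(n) + y_n\|
  = \Bigl(\sum_{\gamma<\kappa}\bigl|x(n)_\gamma + (y_n)_\gamma\bigr|^{\,n}\Bigr)^{1/n}
  = \bigl(\|x(n)\|^{\,n} + \|y_n\|^{\,n}\bigr)^{1/n}
  = 2^{1/n} \ \xrightarrow{\ n\to\infty\ }\ 1 .
\]
```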
Connections with other properties
It is known that almost square Banach spaces have deep connections with other properties of the geometry of Banach spaces such as diameter two properties, octahedrality and the intersection property (see [3]). The aim of the present section is to derive similar connections with transfinite counterparts of the abovementioned properties.
We will be especially interested in the connection between transfinite versions of almost squareness and octahedrality because, as a consequence of our work, we will solve an open question from [8]. In order to do so, let us start with the following definition from [8].

Definition 5.1 (see [8, Definitions 2.3 and 5.3]). Let X be a Banach space and κ an uncountable cardinal.
(a) We say that X is < κ-octahedral if, for every subspace Y ⊂ X with dens(Y) < κ and every ε > 0, there exists x ∈ S_X such that for all r ∈ ℝ and y ∈ Y we have ‖y + rx‖ ≥ (1 − ε)(‖y‖ + |r|).
(b) We say that X fails the (−1)-BCP<κ if, for every subspace Y ⊂ X with dens(Y) < κ, there exists x ∈ S_X such that for all r ∈ ℝ and y ∈ Y we have ‖y + rx‖ = ‖y‖ + |r|.
If κ = ℵ0, the analogous properties are defined by considering finite-dimensional subspaces Y ⊂ X instead.
It is known that if a Banach space X is ASQ, then X* is octahedral [3, Proposition 2.5]. In order to solve the abovementioned question, our aim will be to establish a transfinite version of this result. We will derive it, however, from a more general principle using transfinite versions of the strong diameter two property.

Definition 5.2 (see [9, Definitions 2.11 and 2.12]). Let X be a Banach space and κ an infinite cardinal.
(a) We say that X has the SD2P<κ if, for every A ∈ P<κ(S_X*) and ε > 0, there exist B ⊂ S_X and x* ∈ S_X* such that x*(x) ≥ 1 − ε holds for all x ∈ B and B (1 − ε)-norms A, that is, sup_{x∈B} y*(x) ≥ 1 − ε holds for every y* ∈ A.
(b) We say that X has the 1-ASD2P<κ if, for every A ∈ P<κ(S_X*), there exist B ⊂ S_X and x* ∈ S_X* such that x*(x) = 1 holds for all x ∈ B and B is norming for A, that is, B 1-norms A.
Recall that, for every infinite cardinal κ, a Banach space X has the SD2P<κ if, and only if, X* is < κ-octahedral, and that if X has the 1-ASD2P<κ, then X* fails the (−1)-BCP<κ [9, Theorem 3.2 and Proposition 3.6].
Proposition 5.3. Let X be a Banach space and κ an infinite cardinal. If X is ASQ <κ , then X has the SD2P <κ . Moreover, if κ is uncountable and X is SQ <κ , then X has the 1-ASD2P <κ .
Proof. We begin the proof by noticing that, if X is ASQ<κ, then it clearly satisfies a transfinite version of the symmetric strong diameter two property [2, 15]: namely, for every A ∈ P<κ(S_X*) and ε > 0 there exist B ⊂ S_X and y ∈ S_X such that B (1 − ε)-norms A and y ± B ⊂ (1 + ε)B_X.

From this property we now deduce that X has the SD2P<κ. For this purpose, fix A ∈ P<κ(S_X*) and ε > 0. Find B ⊂ S_X and y ∈ S_X such that B (1 − ε/3)-norms A and y ± B ⊂ (1 + ε/3)B_X. We claim that y + B also (1 − ε)-norms A. In fact, for every x* ∈ A we can find x ∈ B such that x*(x) ≥ 1 − ε/3, and therefore

1 = ‖x*‖ ≥ x*(x ± y) / (1 + ε/3) ≥ (1 − ε/3 ± x*(y)) / (1 + ε/3),

hence |x*(y)| ≤ 2ε/3. We conclude that x*(x + y) ≥ 1 − ε, which proves the claim. In order to conclude, we need to find x* ∈ S_X* such that x*(x + y) ≥ 1 − ε holds for every x ∈ B. Any x* ∈ S_X* that attains its norm at y satisfies the desired condition; in fact, for every x ∈ B, we have that

1 = ‖x*‖ ≥ x*(y ± x) / (1 + ε/3) = (1 ± x*(x)) / (1 + ε/3),

hence |x*(x)| ≤ ε/3 and therefore x*(x + y) ≥ 1 − ε/3. The additional part follows by repeating the same proof with ε = 0 and taking into account that, given any element x* ∈ S_X*, the set {x*} can be normed using a countable set.
Remark 5.4. In [8,Question 1] it was asked whether every < κ-octahedral Banach space must contain an isomorphic copy of ℓ 1 (κ). Observe that, by Proposition 5.3, the dual of the space exhibited in Example 3.1 provides a negative answer for every infinite cardinal κ.
Parametric ASQ spaces
In this last section we study a further generalization of ASQ spaces.

Definition 6.1. Let X be a Banach space, κ a cardinal, and r, s ∈ (0, 1]. We say that X is (r, s)-SQ<κ if, for every set A ∈ P<κ(S_X), there exists y ∈ S_X satisfying ‖rx ± sy‖ ≤ 1 for every x ∈ A.

We say that X is (< r, s)-SQ<κ if it is (t, s)-SQ<κ for all t ∈ (0, r), and a similar meaning is given to being (r, < s)-SQ<κ. As before, we put "κ" instead of "< κ" in the definitions above to mean the non-strict inequality on the cardinals. It is clear that being ASQ<κ coincides with being (< 1, < 1)-SQ<κ and that being SQ<κ corresponds to being (1, 1)-SQ<κ.

Remark 6.2. Let s ∈ (0, 1]. The quantitative version of almost squareness studied in [21], named s-ASQ, corresponds to the space being (< 1, < s)-SQ<ℵ0.

Before proceeding further, let us prove that every (r, s)-SQ<κ space can be described through subspaces of density character < κ.

Lemma 6.3. Let X be a Banach space, let x, y ∈ B_X and let r, s ∈ (0, 1] satisfy ‖rx + sy‖ ≤ 1. Then ‖r′x + s′y‖ ≤ 1 for every r′ ∈ (0, r] and s′ ∈ (0, s].

Proof. Suppose without loss of generality that s/s′ ≤ r/r′ and notice at first that

‖r′x + sy‖ = ‖(r′/r)(rx + sy) + (1 − r′/r)sy‖ ≤ r′/r + (1 − r′/r)s ≤ 1.

We just proved that ‖r′x + sy‖ ≤ 1 holds for every r′ ∈ (0, r]. We can now conclude, since

‖r′x + s′y‖ = (s′/s)‖(r′s/s′)x + sy‖ ≤ s′/s ≤ 1,

where we used the first part of the proof together with the fact that r′s/s′ ≤ r.

Notice that, thanks to Lemma 6.3, we can already conclude that (r, s)-SQ<κ implies (r′, s′)-SQ<κ whenever r′ ≤ r and s′ ≤ s.

Theorem 6.4. Let X be a Banach space, κ an uncountable cardinal and r, s ∈ (0, 1]. Then X is (r, s)-SQ<κ if, and only if, for every subspace Y ⊂ X with dens(Y) < κ, there exists x ∈ S_X satisfying ‖ry + stx‖ ≤ ‖y‖ ∨ |t| for all y ∈ Y and t ∈ ℝ.

Proof. One implication is obvious. For the converse, notice that we only need to prove the claim when t ≥ 0. For this purpose, fix a subspace Y ⊂ X with dens(Y) < κ and find x ∈ S_X such that ‖ry + sx‖ ≤ 1 holds for every y ∈ S_Y; such an x exists by a density argument. Fix t ≥ 0 and y ∈ Y (the case y = 0 being immediate) and notice that, thanks to Lemma 6.3,

‖ry + stx‖ = (‖y‖ ∨ t) ‖ r (‖y‖/(‖y‖ ∨ t)) (y/‖y‖) + s (t/(‖y‖ ∨ t)) x ‖ ≤ ‖y‖ ∨ t.
Now, let us point out some geometrical considerations.
Lemma 6.5. Let X be a Banach space.
(a) If X is (1, < 1)-SQ<ℵ0 (or just (1, k)-SQ_1 for some k ∈ (0, 1]), then B_X cannot contain any extreme point.
(b) If X is (< 1, 1)-SQ<ℵ0 (or just (k, 1)-SQ_1 for some k ∈ (0, 1]), then X is not strictly convex.

Proof. (a). Let x ∈ S_X. By our assumption we can find y ∈ B_X \ {0} such that ‖x ± y‖ ≤ 1. Notice that

x = (x + y)/2 + (x − y)/2.

Thus x is a middle point of two distinct elements of B_X and cannot be an extreme point.

(b). Let x ∈ kB_X \ {0}. By our assumption we can find y ∈ S_X such that ‖x ± y‖ ≤ 1. Observe that

1 = ‖y‖ = ‖ (y + x)/2 + (y − x)/2 ‖.

From this we deduce, through a simple contradiction argument, that ‖x ± y‖ = 1 and that y is a norm-one element which is a middle point of two distinct norm-one elements, thus proving the claim.
Let us state a simple but useful observation about how the (r, s)-SQ<κ properties pass from a component to the ∞-sum.

Proposition 6.6. Let X and Y be non-trivial Banach spaces, r, s ∈ (0, 1], and let κ be a cardinal. If X is (r, s)-SQ<κ, then X ⊕∞ Y is (r, s)-SQ<κ.

Proof. Fix a set {x_γ ⊕∞ y_γ}_{γ∈Γ} ⊂ S_{X⊕∞Y} with |Γ| < κ. Find z ∈ S_X such that ‖r (x_γ/‖x_γ‖) ± sz‖ ≤ 1 for all x_γ ≠ 0. By Lemma 6.3 we have ‖rx_γ ± sz‖ ≤ 1 (even when x_γ = 0), so that ‖r(x_γ ⊕∞ y_γ) ± s(z ⊕∞ 0)‖ ≤ 1 for all γ ∈ Γ. Now let us show some first easy examples.
Example 6.7. It is easy to check that c0 is (1, < 1)-SQ<ℵ0. In fact, for every x_1, ..., x_n ∈ S_{c0} and ε > 0 we can find m ∈ ℕ such that |x_i(m)| ≤ ε holds for i ∈ {1, ..., n}, and it is clear that ‖x_i ± (1 − ε)e_m‖ ≤ 1. Even more is true: given any sequence {X_n : n ∈ ℕ} of Banach spaces, c0(ℕ, X_n) is (1, < 1)-SQ<ℵ0. On the other hand, it is trivial to verify that c0 is not SQ<ℵ0 by considering the element x = ∑_{n=1}^{∞} n^{−1} e_n ∈ S_{c0}.
Following the same ideas as in [2, Theorem 2.5], the previous argument can also be exploited to prove, more generally, that somewhat regular subspaces of C0(X) spaces, where X is some non-compact, locally compact Hausdorff space, are (1, < 1)-SQ<ℵ0.
With similar ideas we can slightly improve the renorming result stated in [7, Theorem 2.3].
Theorem 6.8. A Banach space X contains an isomorphic copy of c 0 if and only if it admits an equivalent (1, < 1) -SQ <ℵ 0 norm.
Proof. Assume that X contains a subspace isometric to c0. Then there is a subspace Z of X** so that X** = ℓ∞ ⊕ Z. Consider the norm |||·||| on ℓ∞ described in Example 2.4, and consider on X** the equivalent norm |||·|||′ so that (X**, |||·|||′) = (ℓ∞, |||·|||) ⊕∞ Z. By Corollary 4.2, (X**, |||·|||′) is SQ<ℵ0 because (ℓ∞, |||·|||) is SQ<ℵ0.

Now let x_1 = (u_1, z_1), ..., x_k = (u_k, z_k) ∈ S_X and ε > 0. Keeping in mind the notation of Example 2.4, find n ∈ ℕ such that 1/n < ε and define y := (1 − ε)e_m ∈ B_{c0}, where m ∈ A_n. Then a calculation similar to the one in the proof of [7, Theorem 2.3] yields that the element (y, 0) ∈ (1 − ε)B_X and that ‖x_i + (y, 0)‖ ≤ 1 holds for every i ∈ {1, ..., k}. Hence X is (1, < 1)-SQ<ℵ0. For the converse, recall that every Banach space with a (1, < 1)-SQ<ℵ0 norm is ASQ, and every ASQ space is known to contain c0 by [3].
Notice that, if κ is an infinite cardinal and a Banach space X is (1, < 1) -SQ <κ , then a simple transfinite induction proves that X contains an isomorphic copy of c 0 (κ). Thus, in the case when κ is uncountable, the condition (1, < 1) -SQ <κ is different from ASQ <κ due to Example 3.1.
Before giving more examples, let us prove a variation of Theorem 4.1 that we will need later.

Theorem 6.9. Let {X_α : α ∈ A} be a family of Banach spaces and κ an infinite cardinal. If for every r ∈ (0, 1) there are infinitely many α ∈ A such that X_α is (r, r)-SQ<κ, then ℓ∞(X_α) is (< 1, 1)-SQ<κ.

Proof. Fix r ∈ (0, 1) and A ∈ P<κ(S_{ℓ∞(X_α)}). For every s ∈ (r, 1) we can find α(s) ∈ A and y_s ∈ S_{X_{α(s)}} satisfying ‖s x(α(s)) + s y_s‖ ≤ 1 for all x ∈ A.

By our assumption, we can assume that, if s ≠ s′, then α(s) ≠ α(s′). Define y ∈ S_{ℓ∞(X_α)} by y(α) := s y_s if α = α(s) for some s ∈ (r, 1), and y(α) := 0 otherwise.

Thanks to Lemma 6.3, we conclude that

‖rx + y‖∞ = r ∨ sup_{s∈(r,1)} ‖r x(α(s)) + s y_s‖ ≤ 1

holds for every x ∈ A.
Eventually we can present more examples of (r, s)-SQ<κ spaces, which will also show that these properties are genuinely distinct from the regular (A)SQ<κ ones.

Example 6.10. There exists a Banach space X which is M-embedded and strictly convex [16, p. 168]. Therefore X is ASQ [3, Corollary 4.3]; however, it is neither (1, < 1)-SQ<ℵ0 nor (< 1, 1)-SQ<ℵ0 by Lemma 6.5.

Example 6.11. In the proof of Example 3.1, it is shown that the Banach space ℓ_n(κ) is (2^{−1/n}, 2^{−1/n})-SQ<κ. Thus X := ℓ∞(ℓ_n(κ)) is (< 1, 1)-SQ<κ thanks to Theorem 6.9, but it is not (1, < 1)-SQ<ℵ0 by Lemma 6.5, since it is a dual space.
Proposition 2.2. Let X be a Banach space and κ an uncountable cardinal. The following are equivalent:

Corollary 4.2. Let X and Y be Banach spaces and κ an infinite cardinal. Then X ⊕∞ Y is (A)SQ<κ if and only if either X or Y is (A)SQ<κ.

Proposition 4.3. Let {X_α : α ∈ A} be an uncountable family of Banach spaces. Then the Banach space c0(A, X_α) is SQ<|A|.

Theorem 4.4. Let κ be an uncountable cardinal. If X and Y are (A)SQ<κ, then X ⊗π Y is (A)SQ<κ.
Acknowledgment. The research of A. Avilés
[1] T. A. Abrahamsen, P. Hájek and S. Troyanski, Almost square dual Banach spaces, J. Math. Anal. Appl. 487(2) (2020), 124003.
[2] T. A. Abrahamsen, O. Nygaard and M. Põldvere, New applications of extremely regular function spaces, Pacific J. Math. 301(2) (2019), 385-394.
[3] T. A. Abrahamsen, J. Langemets and V. Lima, Almost square Banach spaces, J. Math. Anal. Appl. 434(2) (2016), 1549-1565.
[4] P. Alexandroff and H. Hopf, Topologie I, Springer, Berlin-Heidelberg-New York, 1974.
[5] A. Avilés, G. Martínez-Cervantes and A. Rueda Zoca, A renorming characterization of Banach spaces containing ℓ1(κ), accepted in Publ. Mat.
[6] A. Avilés, G. Martínez-Cervantes and A. Rueda Zoca, Banach spaces containing c0 and elements in the fourth dual, J. Math. Anal. Appl. 508(2) (2022), 125911.
[7] J. Becerra Guerrero, G. López-Pérez and A. Rueda Zoca, Some results on almost square Banach spaces, J. Math. Anal. Appl. 438(2) (2016), 1030-1040.
[8] S. Ciaci, J. Langemets and A. Lissitsin, A characterization of Banach spaces containing ℓ1(κ) via ball-covering properties, accepted in Isr. J. Math.
[9] S. Ciaci, J. Langemets and A. Lissitsin, Attaining strong diameter two properties for infinite cardinals, J. Math. Anal. Appl. 513(1) (2022), 126185.
[10] M. Fabian, P. Habala, P. Hájek, V. Montesinos, J. Pelant and V. Zizler, Functional Analysis and Infinite-Dimensional Geometry, CMS Books in Mathematics, Springer-Verlag, 2001.
[11] J. Garbulińska and W. Kubis, Remarks on Gurarii spaces, Extracta Math. 26 (2011), 235-269.
[12] L. García-Lirola and A. Rueda Zoca, Unconditional almost squareness and applications to spaces of Lipschitz functions, J. Math. Anal. Appl. 451(1) (2017), 117-131.
[13] G. Godefroy, Metric characterization of first Baire class linear forms and octahedral norms, Studia Math. 95 (1989), 1-15.
[14] V. I. Gurarii, Spaces of universal placement, isotropic spaces and a problem of Mazur on rotations of Banach spaces (Russian), Sibirsk. Mat. Ž. 7 (1966), 1002-1013.
[15] R. Haller, J. Langemets, V. Lima and R. Nadel, Symmetric strong diameter two property, Mediterr. J. Math. 16(2) (2019), Paper No. 35, 17 pp.
[16] P. Harmand, D. Werner and W. Werner, M-ideals in Banach spaces and Banach algebras, Lecture Notes in Math. 1547, Springer-Verlag, Berlin-Heidelberg, 1993.
[17] J. D. Hardtke, Summands in locally almost square and locally octahedral spaces, Acta Comment. Univ. Tartu. Math. 22(1) (2018), 149-162.
[18] J. Langemets, V. Lima and A. Rueda Zoca, Almost square and octahedral norms in tensor product of Banach spaces, RACSAM 111 (2017), 841-853.
[19] J. Langemets, V. Lima and A. Rueda Zoca, Octahedral norms in tensor products of Banach spaces, Quarterly J. Math. 68 (2017), 1247-1260.
[20] J. Lindenstrauss and L. Tzafriri, Classical Banach Spaces I, Springer-Verlag, Berlin, 1977.
[21] E. Oja, N. Saealle and I. Zolk, Quantitative versions of almost squareness and diameter 2 properties, Acta Comment. Univ. Tartu. Math. 24(1) (2020), 131-145.
[22] H. P. Rosenthal, On injective Banach spaces and the spaces L∞(µ) for finite measures µ, Acta Math. 124 (1970), 205-247.
[23] A. Rueda Zoca, Almost squareness and strong diameter two property in tensor product spaces, RACSAM 114 (2020), article 84.
[24] R. A. Ryan, Introduction to Tensor Products of Banach Spaces, Springer Monographs in Mathematics, Springer-Verlag, London, 2002.
[25] J. W. Tukey, Convergence and Uniformity in Topology, Princeton Univ. Press, 1940.

(Avilés and Rueda Zoca) Universidad de Murcia, Departamento de Matemáticas, Campus de Espinardo, 30100 Murcia, Spain. Email address: [email protected]
(Ciaci, Langemets and Lissitsin) Institute of Mathematics and Statistics, University of Tartu, Narva mnt 18, 51009 Tartu, Estonia. Email address: [email protected]
| []
|
[
"One Picture is Worth a Thousand Words: A New Wallet Recovery Process",
"One Picture is Worth a Thousand Words: A New Wallet Recovery Process"
]
| [
"Hervé Chabanne [email protected] \nIDEMIA and Télécom Paris\nIDEMIA\nIDEMIA and Télécom Paris\n\n",
"Vincent Despiegel \nIDEMIA and Télécom Paris\nIDEMIA\nIDEMIA and Télécom Paris\n\n",
"Linda Guiga \nIDEMIA and Télécom Paris\nIDEMIA\nIDEMIA and Télécom Paris\n\n"
]
| [
"IDEMIA and Télécom Paris\nIDEMIA\nIDEMIA and Télécom Paris\n",
"IDEMIA and Télécom Paris\nIDEMIA\nIDEMIA and Télécom Paris\n",
"IDEMIA and Télécom Paris\nIDEMIA\nIDEMIA and Télécom Paris\n"
]
| []
| We introduce a new wallet recovery process. Our solution associates 1) visual passwords: a photograph of a secretly picked object (Chabanne et al., 2013) with 2) ImageNet classifiers transforming images into binary vectors and, 3) obfuscated fuzzy matching (Galbraith and Zobernig, 2019) for the storage of visual passwords/retrieval of wallet seeds. Our experiments show that the replacement of long seed phrases by a photograph is possible. | 10.1109/globecom48099.2022.10001064 | [
"https://arxiv.org/pdf/2205.02511v1.pdf"
]
| 248,524,868 | 2205.02511 | 3a1c9526bcb486fe5249998eb00501a09bcd3cd3 |
One Picture is Worth a Thousand Words: A New Wallet Recovery Process
Hervé Chabanne [email protected]
IDEMIA and Télécom Paris
IDEMIA
IDEMIA and Télécom Paris
Vincent Despiegel
IDEMIA and Télécom Paris
IDEMIA
IDEMIA and Télécom Paris
Linda Guiga
IDEMIA and Télécom Paris
IDEMIA
IDEMIA and Télécom Paris
One Picture is Worth a Thousand Words: A New Wallet Recovery Process
Index Terms - Cryptographic Obfuscation, ImageNet Classifier, Application of Machine Learning to Cryptocurrency Wallets
We introduce a new wallet recovery process. Our solution associates 1) visual passwords: a photograph of a secretly picked object (Chabanne et al., 2013) with 2) ImageNet classifiers transforming images into binary vectors and 3) obfuscated fuzzy matching (Galbraith and Zobernig, 2019) for the storage of visual passwords and retrieval of wallet seeds. Our experiments show that the replacement of long seed phrases by a photograph is possible.
I. Introduction
Cryptocurrency wallets store private keys and make use of them to perform transactions on blockchains. Their loss is identified in [14] as one of the three challenges associated with Bitcoin. Today, wallets mainly rely on a seed phrase for their recovery [26]. In 2021, the New York Times reported [1] that "20 percent of the existing 18.5 million Bitcoin or around 3.7 million BTCs appear to be lost due to forgotten passwords".
To alleviate the burden of remembering this long password, we alternatively rely on the concept of visual passwords introduced in 2013 by Chabanne et al. [11].
The underlying principle of visual passwords is, in the context of authentication, the following:

• At the registration step, you choose an object and take a photograph of it. Your choice has to remain secret.
• When you want to authenticate yourself, you take another photograph of the same object for an image-vs-image comparison with the reference.

While [11] focuses on a single type of object (Hamiltonian circuits on a cube, with a design enabling many possible configurations), we here, to ensure a good entropy, let the users choose among a great variety of different objects.
Special care is taken with the storage of references. We apply the work of Galbraith and Zobernig [17] to perform Hamming ball membership testing: determining in an obfuscated way whether a binary vector lies close to a predetermined center.

Our main contribution is the introduction of a novel wallet recovery system. Moreover, we show its feasibility through experiments transforming visual passwords, via state-of-the-art image processing algorithms, into suitable binary vectors, called templates in the following, for secure storage and seed retrieval.
The rest of the paper is organized as follows: in Sec. II we recall the techniques and security properties of obfuscated Hamming distance comparisons and show how to deliver a payload in case of a matching. In Sec. III, we describe how to transform visual password pictures into binary vector templates thanks to deep learning algorithms. In Sec. IV, we report our experiments. Sec. V details our proposal. Sec. VI concludes.
A. Related Works
For a general introduction to wallets in the context of Bitcoin, see, for instance, chapter 4 of [23].
While there are numerous other attempts to replace passwords using graphical interfaces and images [8], visual passwords [11] share a lot with biometric recognition. For instance, we use in Sec. IV the same tools to evaluate the accuracy of our proposal, namely:

• The False Acceptance Rate (FAR) measures the proportion of times an imposter can fool the system. The FAR is directly related to the security level.
• Conversely, the user's convenience is gauged by the False Reject Rate (FRR), which corresponds to genuine attempts being dismissed.

As one cannot win on both FAR and FRR at the same time, the Detection Error Tradeoff (DET) curve, which plots the false rejection rate against the false acceptance rate, is used for determining the Equal Error Rate (EER) of the system, where FAR and FRR are equal.
Major differences, however, distinguish biometrics from visual passwords. Biometrics are public and immutably linked to a person, while visual passwords are secret and easy to renew. For instance, biometrics need liveness anti-spoofing countermeasures to thwart impersonation attacks and, depending on the application, privacy enhancing technologies are necessary too.
Fuzzy matching has already been considered in the context of biometrics, around the notion of secure sketch introduced in [15]. Here the matching is realized thanks to an underlying error correcting code. As indicated in [17], parameters of secure sketch are thus "strongly constrained by the need for an efficient decoding algorithm". Moreover, when it comes to their implementation with real (biometric) data, their security is questionable [29], in particular regarding their reusability.
Cryptographic techniques such as a secret sharing mechanism [22] or multi-signature techniques [19] can also be envisaged for cryptocurrency wallets. Our proposal is complementary to them, in the same way that passphrases are. Another solution [27] relies on a Diffie-Hellman exchange between hardware wallets with a human visual verification to thwart man-in-the-middle attacks.
II. Obfuscated Fuzzy Hamming Distance Matching

In this section, we show how to store our reference binary vector templates in a way that enables Hamming distance comparisons while preserving their confidentiality. That is, we retrieve the wallet's seed when, and only when, a fresh template close to the reference is entered. For that, we make use of cryptographic obfuscation.
Obfuscation makes programs unintelligible while preserving their functionality. General obfuscation techniques are either impossible [7] or, despite major progress [21], ineffective. In 2014, [6] defines practical input-hiding obfuscation techniques for evasive functions including point functions "x == e", which return 1 when the input is equal to a predetermined constant e and 0 otherwise.
Example 1: [34] describes how to obfuscate variable comparisons "ax + b == y" where a, b are two k-bit constants. Let H stand for a preimage-resistant hash function with n-bit outputs, n > k. Choose at random t ∈ {0, 1}^{n−k} and r ∈ {0, 1}^k. Let h = H(r||t) and u = r + b. The values a, u, h are published. The obfuscated program then checks

H(ax + u − y || t) == h    (1)

while keeping the value b hidden. Note that when (1) is verified by inputs (x, y), b can be retrieved as b = y − ax.
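To make the mechanics concrete, here is a small Python sketch of this construction. The hash choice (SHA-256), the bit lengths, and all function names are our own illustrative assumptions, not prescriptions from [34]:

```python
# Sketch of the obfuscated comparison "a*x + b == y" from Example 1.
# b stays hidden inside (a, u, h, t); it is revealed only on matching inputs.
import hashlib
import secrets

K, N = 128, 256  # k-bit constants, n-bit hash output (n > k)

def H(msg: bytes) -> bytes:
    return hashlib.sha256(msg).digest()

def obfuscate(a: int, b: int):
    t = secrets.randbits(N - K).to_bytes((N - K) // 8, "big")
    r = secrets.randbits(K)
    h = H(r.to_bytes(K // 8, "big") + t)
    u = r + b
    return a, u, h, t  # the published program data

def run(a: int, u: int, h: bytes, t: bytes, x: int, y: int):
    v = a * x + u - y          # equals r exactly when a*x + b == y
    if not 0 <= v < 1 << K:
        return None
    if H(v.to_bytes(K // 8, "big") + t) == h:
        return y - a * x       # recover b, cf. (1)
    return None
```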
Relying on a number-theoretic computational assumption called the Modular Subset Product (MSP) problem (see Fig. 1), [17] defines a Hamming distance obfuscator which checks whether an n-bit binary vector x is within Hamming distance r of a predetermined c for

r ≤ n/2 − √(log(2) n λ)    (2)

where λ is a security parameter.
where λ is a security parameter. A vector c = (c 1 , . . . , c n ) ∈ {0, 1} n is encoded as
ENCODE(c) = ((p i ) i=1,...,n , q, C) (3) Fig. 2: An ImageNet Classifer where • C = n i=1 p ci i mod q; • (p i ) i=1,.
..,n are small distinct primes taken at random for each encoding; • q is a small safe prime verifing i∈I < q/2 for all I ⊂ 1, . . . , n with cardinality |I| < r. Typically, q ∼ (n log n) r . This encoding procedure keeps the vector c hidden when
r > log(2 √ 2πe) n log(nlog(n)) (4) A procedure DECODE is then defined s.t. DECODE((p i ) i=1,...,n , q, C, x) = c for each vector x which stands at Hamming distance d(c, x) < r. This procedure returns ⊥ for x s.t. d(c, x)
≥ r except when a false acceptance occurs. Note that, when r satisfies (4), this false acceptance cannot happen, see Sec. IV-C and [17] for details.
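A naive Python sketch of ENCODE follows, purely to illustrate the structure of (3); the prime-pool size and the search for a suitable safe prime q are simplistic stand-ins for the parameter generation analysed in [17], and are far too slow for real parameters such as n = 512:

```python
# Sketch of ENCODE(c) = ((p_i)_{i=1..n}, q, C): hide c as a subset product mod q.
import math
import random
from sympy import isprime, nextprime

def encode(c, r):
    n = len(c)
    assert r >= 2
    # draw n small distinct primes at random from a pool (fresh per encoding)
    pool, p = [], 2
    while len(pool) < 4 * n:
        pool.append(p)
        p = nextprime(p)
    primes = random.sample(pool, n)
    # safe prime q with prod(p_i, i in I) < q/2 whenever |I| < r; a crude
    # sufficient bound is twice the product of the r-1 largest primes in use
    q = nextprime(2 * math.prod(sorted(primes)[-(r - 1):]))
    while not isprime((q - 1) // 2):
        q = nextprime(q)
    C = 1
    for p_i, c_i in zip(primes, c):
        if c_i:
            C = C * p_i % q
    return primes, q, C
```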
The obfuscated program with embedded data (p_i)_{i=1,...,n}, q, C is executed as follows for an input x:

l1: c = DECODE((p_i)_{i=1,...,n}, q, C, x)
l2: If c = ⊥ Return 0
l3: Return the obfuscated point function comparison to c

In [17], the last line l3 eliminates false acceptances. Write now c = c1||c2 and, for b at random, ac1 + b = c2. In our proposal, we replace l3 by (using the notations introduced in Example 1):

l3': If (1) holds for inputs c1||c2 = c, Return b = c2 − ac1; else Return ⊥

We then obtain the RETRIEVESEED program comprised of the three lines l1, l2, l3'.
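For completeness, here is how DECODE and the seed-releasing line l3' might look in Python, reusing run(...) from the earlier sketch. This is a simplified illustration: the rational-reconstruction step assumes both subset products stay below √(q/2) and glosses over sign handling, cases which [17] treats in full; the c1||c2 bit-split is likewise our own hypothetical convention, and any candidate is confirmed by re-encoding:

```python
# Sketch of DECODE((p_i), q, C, x) and of RETRIEVESEED's payload release.
from math import isqrt

def rational_reconstruction(E, q):
    # find small (A, B) with A ≡ E * B (mod q) via a truncated extended Euclid
    bound = isqrt(q // 2)
    r0, r1, t0, t1 = q, E % q, 0, 1
    while r1 > bound:
        k = r0 // r1
        r0, r1, t0, t1 = r1, r0 - k * r1, t1, t0 - k * t1
    return r1, abs(t1)

def subset_product(primes, q, bits):
    C = 1
    for p_i, b in zip(primes, bits):
        if b:
            C = C * p_i % q
    return C

def decode(primes, q, C, x):
    X = subset_product(primes, q, x)
    A, B = rational_reconstruction(C * pow(X, -1, q) % q, q)
    c = list(x)
    for i, p_i in enumerate(primes):
        if A % p_i == 0:
            c[i] = 1   # a position where c_i = 1 but x_i = 0
        elif B % p_i == 0:
            c[i] = 0   # a position where c_i = 0 but x_i = 1
    if subset_product(primes, q, c) != C:  # verification, i.e. returning ⊥
        return None
    return c

def retrieve_seed(primes, q, C, a, u, h, t, x, split):
    c = decode(primes, q, C, x)                # line l1 (l2 on failure)
    if c is None:
        return None
    c1 = int("".join(map(str, c[:split])), 2)  # l3': interpret c = c1 || c2
    c2 = int("".join(map(str, c[split:])), 2)
    return run(a, u, h, t, c1, c2)             # b = c2 - a*c1 on a match
```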
III. Pictures Processing
In this section, we describe our choices for transforming photographs of visual passwords into binary vector templates. Our experiments are reported in the next section.
A. Templates Construction
Consider the architecture of an ImageNet classifier as in Fig. 2. It takes an image as input, extracts its features with a Convolutional Neural Network (CNN), and eventually outputs a classification. Similarly to the idea used in face recognition algorithms, the underlying representation is a good candidate feature for object recognition even if the objects were not in the training dataset. Consequently, in a first step, we remove the last classification layers to keep only the floating-point vectors of the internal representation. Finally, we binarize these vectors to obtain our templates.
B. Model Choice
We choose a model trained (with its parameters) among https://paperswithcode.com/sota/image-classification-on-imagenet.

After different trials (see Appendix A), we pick VGG-16 [30] as the underlying model to classify images. This leads to vectors with 4096 floating-point coordinates. To reduce the dimension to only 512 bits, we apply Locality Sensitive Hashing (LSH) [20]. For this, we generate a random sparse matrix of shape (4096, 512) thanks to the Scikit library: https://scikit-learn.org/stable/. We then multiply the original feature vector by the generated matrix. Lastly, we only keep the signs of the resulting vector elements so as to turn the floating values into binary ones. In the end, our overall architecture is similar to the perceptual hashing algorithm NeuralHash [2].
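The projection-and-sign step takes only a few lines with scikit-learn; the random vector below merely stands in for a real 4096-dimensional VGG-16 descriptor:

```python
# Sketch of the LSH binarization: 4096-d float features -> 512-bit template.
import numpy as np
from sklearn.random_projection import SparseRandomProjection

proj = SparseRandomProjection(n_components=512, random_state=0)

features = np.random.randn(1, 4096)        # stand-in for a VGG-16 feature vector
projected = proj.fit_transform(features)   # fit the random matrix once, then reuse
template = (projected > 0).astype(np.uint8)[0]  # keep only the signs: 512 bits
```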
IV. Experiments
A. Test Dataset
To validate our experiments, we use the Amsterdam Library of Object Images (ALOI) [18]. The ALOI dataset is made of 1,000 objects recorded under various viewing angles, illumination angles, and illumination colors (Fig. 3), yielding a total of 110,250 images for the collection.
B. Accuracy
To test the accuracy of our system, we select, for each object, 3 different views -corresponding to rotation angles of 0, 15 and 35 degrees -which seems realistic in terms of noise for the target scenario. We then obtain the resulting DET curves shown in Fig. 4.
From our observations, our binarization process only slightly degrades the overall accuracy. More sophisticated binarization methods, as in [32], could be used but, as the degradation is under control, our experiments stick to this simple method.
C. Implementation Details
With n = 512, we choose r = 140, placing ourselves at the rightmost part of Fig. 1. These parameters satisfy both inequalities (2) and (4).
Our implementation of the ENCODE (resp. RE-TRIEVESEED) procedure yields on our laptop an average encoding time (resp. decoding time) of 50 ms (resp. 10 ms). These timings are in line with the ones given by [17].
We then obtain an FAR of around 4·10^−4 for a rotation angle of 15 degrees (resp. 35 degrees), and a corresponding FRR of 1.8% (resp. 7%). Note that in our system, in opposition to biometric systems where a false reject can imply that a user is blocked at a gate, a false reject simply requires a new photograph of the referenced object to be taken.
Remark 1: A user can store several different objects. For each of them, its encoding enables us to hide a new secret k_i, i = 1, ..., m. By taking the exclusive OR of all of them, k_1 ⊕ ... ⊕ k_m = k, the resulting k is obtained by an illegitimate user when, and only when, each of the m objects he guesses leads to an encoding which matches the corresponding genuine stored object.
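One way to realize this combination is to hide XOR shares of k behind the m encodings; the helper functions below are a hypothetical illustration:

```python
# Sketch of Remark 1: k is recoverable only if all m matches succeed.
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_secret(k: bytes, m: int) -> list:
    shares = [secrets.token_bytes(len(k)) for _ in range(m - 1)]
    last = k
    for s in shares:
        last = xor_bytes(last, s)
    shares.append(last)   # XOR of all m shares equals k
    return shares         # share i becomes the payload of object i's encoding

def combine(shares: list) -> bytes:
    out = bytes(len(shares[0]))
    for s in shares:
        out = xor_bytes(out, s)
    return out
```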
V. Our Proposal
As pointed out by [5], there is a huge gap, e.g. for VGG-16, between the intrinsic dimension (i.e. the minimal number of parameters needed to describe a representation) of input images and that of the outputs of CNNs. We rely on that observation to mitigate the risk of an attack which looks for false positives.

We thus envisage implementing a proprietary algorithm, i.e. one with dedicated metaparameters and training, on a dedicated server to compute the templates corresponding to photographs of objects. This way, template requests can be recorded and a security policy can be established to limit their number, enforcing access control to templates.
To protect users against the server, we suggest sending visual passwords encrypted thanks to homomorphic encryption. Template computation is now possible directly in the encrypted domain [13] and new progress is announced [9], [28].
We now consider that a homomorphic encryption scheme is chosen [4] and that users generate their own private key for this scheme. Note that they do not have to keep them.
A. Detailed Description
Our system is made of:
• users;
• a dedicated server S in charge of computing templates for users from their visual passwords.

The different steps of our wallet recovery process are summarized in Fig. 5. More specifically, a user U is going to ENCODE his template (3) and perform RETRIEVESEED using his own device, e.g. his mobile phone. This device also enables him to capture and then encrypt his visual password (resp. decrypt his template) before sending it to S (resp. after receiving it from S). We consider that all these operations performed on the device of U are safe from attacks. The way his wallet's seed is used after its retrieval is out of scope of this paper.
Server S is considered honest-but-curious regarding the requests from the users. It protects the implementation of the proprietary model M and restricts templates computation.
B. Security Discussion
We consider three factors to be taken into account for the security of our wallet seed recovery process:
• storage of the obfuscated template;
• access to a template construction algorithm;
• choice diversity for visual passwords.

Our threat model is simple: we want to protect against an adversary who tries to retrieve the wallet's seed by presenting a binary vector close to the stored reference template.

Having access to the know-how for transforming visual passwords into templates enables this adversary to attempt to obtain a false acceptance. Otherwise, we rely on the security provided by the obfuscated fuzzy Hamming distance matching: searching in {0, 1}^n, n = 512, gives a probability of 1/2^λ, with λ = 87 for r = 140 (see (2)), of finding by chance a vector in the targeted Hamming ball. In contrast, knowledge of the underlying template subspace enables the adversary to drastically reduce his efforts, with a FAR of 4·10^−4 (see also Appendix B). The confidentiality of the model M is thus paramount for our wallet recovery process.
Regarding that point, besides server compromise, given oracle access to a neural network as in our proposal, model extraction attacks can be launched [31]. Today, the best attacks [10], [12], [24], [25], [33] against ImageNet classifiers seem impractical. Note also that a first defense strategy keeping the model's accuracy has been introduced in [16].
VI. Conclusion
We are confident that there is room for improvement in the accuracy of our system. For instance, facial recognition, which is an image processing problem of roughly the same difficulty as ours, obtains less than 1% FRR at a FAR of 10^−6 with systems working all around the world [3]. Our performances of Sec. IV-B look poor in comparison. As usual in big data, a huge dataset might enable us to improve our model. We are currently looking for a larger database of objects to work with. For instance, regarding the FAR we obtained, the presence of various objects which look similar (see, for instance, Fig. 6) among the 1,000 within ALOI tends to increase this rate in our experiments. A user has a much larger choice at his disposal. For instance, a museum collection often counts more than a million objects while offering long-term storage for them.
Appendix A Selected Model
To turn images into binary vectors, we use a neural network. We opt for the VGG architecture after various trials. For instance, we tried the pretrained EfficientNet architecture, which has a higher accuracy on ImageNet (see https://paperswithcode.com/sota/image-classification-on-imagenet). However, it yields a higher FRR for a given FAR on ALOI, as can be seen in Fig. 7. Similarly, in order to reduce the number of bits from 4,096 to 512, we select LSH even though other methods, such as Principal Component Analysis (PCA), exist. However, PCA depends on the training data, whereas LSH is independent of it. If the PCA is determined on Imagenette data rather than ALOI images, the resulting accuracy is lower than with LSH, as can be seen in Fig. 8.

Appendix B Probability of Randomly Selecting a Template in the Hamming Ball

[17] determines an upper bound for the probability of randomly selecting an element y ∈ {0, 1}^n in B_x(r), the Hamming ball of radius r and center x, taking also into account a security parameter λ.
In our case, for n = 512, r = 140 and λ = 87, we have Pr_{y∈{0,1}^512}[y ∈ B_x(140)] ≤ 1/2^87. Exploiting the inherent correlation between the coordinates of templates, we are now going to estimate an upper bound for

Pr_{y ∈ {0,1}^n ∩ templates subspace}[y ∈ B_x(r)],

corresponding to the case when an adversary restricts himself to searching among the templates subspace. This is the case when he has access to the model M. Moreover, in our experiments we restrict ourselves to image inputs and compute the templates of the 13,394 images coming from the Imagenette dataset (https://github.com/fastai/imagenette), searching for a false acceptance against one of the 1,000 templates of the ALOI dataset (without any rotation). We obtain that 4 among these 1,000 ALOI templates get a false acceptance (a distance less than 140) with, respectively, 2, 12, 1 and 2 templates coming from the other dataset. This leads to a ratio of 1.27×10^−6 of the comparisons. We lose two orders of magnitude from the FAR of Sec. IV-C due to the fact that we are here looking at images that can be quite different from the objects of ALOI.
Fig. 1: Hardness of the MSP Problem.

Fig. 3: Different Views of Object 197.

Fig. 5: Our Wallet Recovery Process. Enrollment: U picks his visual password; U generates his key for the underlying homomorphic encryption scheme and encrypts his visual password; U asks the dedicated server S for the template associated to his visual password; S computes U's template c, encrypted; U decrypts c, computes ENCODE(c) (3) and stores it at a place of his choice. Seed Recovery: U gets back ENCODE(c) and takes a new photograph of his visual password (optionally, if needed, U generates a new homomorphic encryption key), encrypts it and sends the ciphertext to S; U asks S for the template x corresponding to this new photograph; S computes x in an encrypted form; U decrypts x and retrieves his wallet's seed as RETRIEVESEED(ENCODE(c), x).

Fig. 6: Objects 196, 197, 198 side-by-side.

Fig. 7: Rotation of 15 degrees on EfficientNet.

Fig. 8: Rotation of 15 degrees with PCA on VGG-16.
Acknowledgements. This work was partly supported by the iMARS project (G.A. no 883356), funded by the European Union's Horizon 2020 research and innovation program.
[4] Martin Albrecht, Melissa Chase, Hao Chen, Jintai Ding, Shafi Goldwasser, Sergey Gorbunov, Shai Halevi, Jeffrey Hoffstein, Kim Laine, Kristin Lauter, Satya Lokam, Daniele Micciancio, Dustin Moody, Travis Morrison, Amit Sahai, and Vinod Vaikuntanathan. Homomorphic encryption security standard. Technical report, HomomorphicEncryption.org, 2018.
[5] Alessio Ansuini, Alessandro Laio, Jakob H. Macke, and Davide Zoccolan. Intrinsic dimension of data representations in deep neural networks. In NeurIPS, pages 6109-6119, 2019.
[6] Boaz Barak, Nir Bitansky, Ran Canetti, Yael Tauman Kalai, Omer Paneth, and Amit Sahai. Obfuscation for evasive functions. In TCC, volume 8349 of Lecture Notes in Computer Science, pages 26-51. Springer, 2014.
[7] Boaz Barak, Oded Goldreich, Russell Impagliazzo, Steven Rudich, Amit Sahai, Salil P. Vadhan, and Ke Yang. On the (im)possibility of obfuscating programs. In CRYPTO, volume 2139 of Lecture Notes in Computer Science, pages 1-18. Springer, 2001.
[8] Robert Biddle, Sonia Chiasson, and Paul C. van Oorschot. Graphical passwords: Learning from the first twelve years. ACM Comput. Surv., 44(4):19:1-19:41, 2012.
[9] Charlotte Bonte, Rosario Cammarota, Wei Dai, Joshua Fryman, Huijing Gong, Duhyeong Kim, Raghavan Kumar, Poornima Lalwaney, Kim Laine, Sanu Mathew, Nojan Sheybani, Anand Rajan, Andrew Reinders, Michael Steiner, Vikram Suresh, Sachin Taneja, Marc Trifan, Alexander Viand, Wei Wang, Wen Wang, Chris Wilkerson, and Jin Yang. Is revolutionary hardware for fully homomorphic encryption important? What else is needed? COSADE, 2021.
[10] Nicholas Carlini, Matthew Jagielski, and Ilya Mironov. Cryptanalytic extraction of neural network models. In CRYPTO (3), volume 12172 of Lecture Notes in Computer Science, pages 189-218. Springer, 2020.
[11] Hervé Chabanne, Jean-Michel Cioranesco, Vincent Despiegel, Jean-Christophe Fondeur, and David Naccache. Using hamiltonian totems as passwords. IACR Cryptol. ePrint Arch., page 751, 2013.
[12] Varun Chandrasekaran, Kamalika Chaudhuri, Irene Giacomelli, Somesh Jha, and Songbai Yan. Exploring connections between active learning and model extraction. In USENIX Security Symposium, pages 1309-1326. USENIX Association, 2020.
[13] Ilaria Chillotti, Marc Joye, and Pascal Paillier. Programmable bootstrapping enables efficient homomorphic inference of deep neural networks. In CSCML, volume 12716 of Lecture Notes in Computer Science, pages 1-19. Springer, 2021.
[14] Mauro Conti, Sandeep Kumar E, Chhagan Lal, and Sushmita Ruj. A survey on security and privacy issues of bitcoin. IEEE Commun. Surv. Tutorials, 20(4):3416-3452, 2018.
[15] Yevgeniy Dodis, Leonid Reyzin, and Adam D. Smith. Fuzzy extractors: How to generate strong keys from biometrics and other noisy data. In EUROCRYPT, volume 3027 of Lecture Notes in Computer Science, pages 523-540. Springer, 2004.
[16] Adam Dziedzic, Muhammad Ahmad Kaleem, Yu Shen Lu, and Nicolas Papernot. Increasing the cost of model extraction with calibrated proof of work. CoRR, abs/2201.09243, 2022.
[17] Steven D. Galbraith and Lukas Zobernig. Obfuscated fuzzy hamming distance and conjunctions from subset product problems. In TCC (1), volume 11891 of Lecture Notes in Computer Science, pages 81-110. Springer, 2019.
[18] Jan-Mark Geusebroek, Gertjan J. Burghouts, and Arnold W. M. Smeulders. The Amsterdam Library of Object Images. Int. J. Comput. Vis., 61(1):103-112, 2005. URL: https://aloi.science.uva.nl/.
[19] Jongbeen Han, Mansub Song, Hyeonsang Eom, and Yongseok Son. An efficient multi-signature wallet in blockchain using bloom filter. In SAC, pages 273-281. ACM, 2021.
[20] Piotr Indyk and Rajeev Motwani. Approximate nearest neighbors: Towards removing the curse of dimensionality. In STOC, pages 604-613. ACM, 1998.
[21] Aayush Jain, Huijia Lin, and Amit Sahai. Indistinguishability obfuscation from well-founded assumptions. In STOC, pages 60-73. ACM, 2021.
[22] Stanislaw Jarecki, Aggelos Kiayias, Hugo Krawczyk, and Jiayu Xu. Highly-efficient and composable password-protected secret sharing (or: How to protect your bitcoin wallet online). In EuroS&P, pages 276-291. IEEE, 2016.
[23] Arvind Narayanan, Joseph Bonneau, Edward W. Felten, Andrew Miller, and Steven Goldfeder. Bitcoin and Cryptocurrency Technologies - A Comprehensive Introduction. Princeton University Press, 2016.
[24] Seong Joon Oh, Bernt Schiele, and Mario Fritz. Towards reverse-engineering black-box neural networks. In Explainable AI, volume 11700 of Lecture Notes in Computer Science, pages 121-144. Springer, 2019.
[25] Tribhuvanesh Orekondy, Bernt Schiele, and Mario Fritz. Knockoff nets: Stealing functionality of black-box models. In CVPR, pages 4954-4963. Computer Vision Foundation / IEEE, 2019.
[26] Marek Palatinus, Pavol Rusnak, Aaron Voisine, and Sean Bowe. BIP-39. https://github.com/bitcoin/bips/blob/master/bip-0039.mediawiki, 2013.
[27] Hossein Rezaeighaleh and Cliff C. Zou. New secure approach to backup cryptocurrency wallets. In GLOBECOM, pages 1-6. IEEE, 2019.
[28] Nikola Samardzic, Axel Feldmann, Aleksandar Krastev, Srinivas Devadas, Ronald G. Dreslinski, Christopher Peikert, and Daniel Sánchez. F1: A fast and programmable accelerator for fully homomorphic encryption. In MICRO, pages 238-252. ACM, 2021.
[29] Koen Simoens, Pim Tuyls, and Bart Preneel. Privacy weaknesses in biometric sketches. In IEEE Symposium on Security and Privacy, pages 188-203. IEEE Computer Society, 2009.
[30] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
[31] Florian Tramèr, Fan Zhang, Ari Juels, Michael K. Reiter, and Thomas Ristenpart. Stealing machine learning models via prediction apis. In USENIX Security Symposium, pages 601-618. USENIX Association, 2016.
[32] Erkam Uzun, Carter Yagemann, Simon P. Chung, Vladimir Kolesnikov, and Wenke Lee. Cryptographic key derivation from biometric inferences for remote authentication. In AsiaCCS, pages 629-643. ACM, 2021.
[33] Binghui Wang and Neil Zhenqiang Gong. Stealing hyperparameters in machine learning. In IEEE Symposium on Security and Privacy, pages 36-52. IEEE Computer Society, 2018.
[34] Lukas Zobernig, Steven D. Galbraith, and Giovanni Russello. When are opaque predicates useful? In TrustCom/BigDataSE, pages 168-175. IEEE, 2019.
| [
"https://github.com/bitcoin/bips/blob/master/bip-00"
]
|
[
"Deep Object Centric Policies for Autonomous Driving",
"Deep Object Centric Policies for Autonomous Driving"
]
| [
"Dequan Wang ",
"Coline Devin ",
"Qi-Zhi Cai ",
"Yu Fisher ",
"Trevor Darrell "
]
| []
| []
| While learning visuomotor skills in an end-toend manner is appealing, deep neural networks are often uninterpretable and fail in surprising ways. For robotics tasks, such as autonomous driving, models that explicitly represent objects may be more robust to new scenes and provide intuitive visualizations. We describe a taxonomy of "object-centric" models which leverage both object instances and end-to-end learning. In the Grand Theft Auto V simulator, we show that object centric models outperform object-agnostic methods in scenes with other vehicles and pedestrians, even with an imperfect detector. We also demonstrate that our architectures perform well on real world environments by evaluating on the Berkeley DeepDrive Video dataset. | 10.1109/icra.2019.8794224 | [
"https://arxiv.org/pdf/1811.05432v1.pdf"
]
| 53,290,474 | 1811.05432 | 31c2c312c291008a9db1c888c4a41eed291b75f4 |
Deep Object Centric Policies for Autonomous Driving
Dequan Wang
Coline Devin
Qi-Zhi Cai
Yu Fisher
Trevor Darrell
Deep Object Centric Policies for Autonomous Driving
While learning visuomotor skills in an end-to-end manner is appealing, deep neural networks are often uninterpretable and fail in surprising ways. For robotics tasks, such as autonomous driving, models that explicitly represent objects may be more robust to new scenes and provide intuitive visualizations. We describe a taxonomy of "object-centric" models which leverage both object instances and end-to-end learning. In the Grand Theft Auto V simulator, we show that object-centric models outperform object-agnostic methods in scenes with other vehicles and pedestrians, even with an imperfect detector. We also demonstrate that our architectures perform well on real-world environments by evaluating on the Berkeley DeepDrive Video dataset.
I. INTRODUCTION
End-to-end approaches to visuomotor learning are appealing in their ability to discover which features of an observed environment are most relevant for a task, and to be able to exploit large amounts of training data to discover both a policy and a co-dependent visual representation. Yet, the key benefit of such approaches-that they learn from task experience-is also their Achilles heel when it comes to many real-world settings, where behavioral training data is not unlimited and correct perception of "long-tail" visual phenomena can be critical for robust performance.
Learning all visual parameters of a visuomotor policy from task reward (or demonstration cloning) places an undue burden on task-level supervision or reward. In autonomous driving scenarios, for example, an agent should ideally be able to perceive objects and vehicles with a wide range of appearance, even those that are not well represented in a behavioral training set. Indeed, for many visuomotor tasks, there exist datasets with supervision for perception tasks, such as detection or segmentation, that do not provide supervision for behaviour learning. Learning the entire range of vehicle appearance from steering supervision alone, while optimal in the limit of infinite training data, clearly misses the mark in many practical settings.
Classic approaches to robotic perception have employed separate object detectors to provide a fixed state representation to a rule-based policy. Multistage methods, such as those which first segment a scene, can avoid some aspects of the domain transfer problem [1], but do not encode discrete objects and thus are limited to holistic reasoning. End-to-end learning with pixel-wise attention can localize specific objects and provide interpretability, but throws away the existence of instances. We propose an object-centric perception approach to deep control problems, and focus our experimentation on autonomous driving tasks. Existing end-to-end models are holistic in nature; our approach augments policy learning with explicit representations that provide object-level attention.
In this work we consider a taxonomy of representations covering different levels of object-centricity, such as discreteness and sparsity. We define a family of approaches to object-centric models, and provide a comparative evaluation of the benefit of incorporating object knowledge either at a pixel or box level, with either sparse or dense coverage, and with either pooled or concatenated features.
We evaluate these aspects in a challenging simulated driving environment with many cars and pedestrians, as well as on real dash-cam data, as shown in Figure 1. We show that using a sparse and discrete object-centric representation with a learned per-object attention outperforms previous methods in on-policy evaluations and provides interpretability about which objects were determined most relevant to the policy.
II. RELATED WORK
Approaches to robot skill learning face bias/variance tradeoffs, including in the definition of a policy model. One extreme of this trade-off is to make no assumptions about the structure of the observations, such as end-to-end behavior cloning from raw sensory data [3], [4], [5]. At the opposite end, one can design a policy structure that is very specific to a particular task, e.g. for driving by calculating margins between cars, encoding lane following, and tracking pedestrians [6]. These modular pipelines with rule-based systems dominate the autonomous driving industry [7], [8], [9].

Fig. 2: The image is first passed through a 34-layer DLA convolutional network [2], which outputs RoI pooled features for each object along with globally pooled features for the whole image. The object-level attention layer then calculates the task-oriented importance score for each RoI. The linear policy layer takes both global and object features and predicts the action for the next step.

The first attempt at training an end-to-end driving policy from raw inputs traces back to the 1980s with ALVINN [10]. Muller et al. revisited this idea to help off-road mobile robots with an obstacle avoidance system [11]. Recently, Bojarski et al. demonstrated the appeal of foregoing structure by training a more advanced convolutional network to imitate demonstrated driving [3], [4]. Xu et al. advocate learning a driving policy from an uncalibrated crowd-sourced video dataset [5] and show that their model can predict the true actions taken by the drivers from RGB inputs. Codevilla et al. [12] leverage the idea of conditional imitation learning on high-level command input in order to resolve the ambiguity in the action space. These end-to-end models, which automatically discover and construct the mapping from sensory input to control output, reduce the burden of hand-crafting rules and features. However, these approaches have not yet been shown to work in complex environments, such as intersections with other drivers and pedestrians.
We address how to best represent images for robotics tasks such as driving. Muller et al. train a policy model from the semantic segmentation of images, which improves generalization from synthetic to real-world domains [1]. Chen et al. provide an additional intermediate stage for end-to-end learning, which learns the policy on top of ConvNet-based measurements, such as affordances of the road/traffic state for driving [13]. Sauer et al. combine the advantages of conditional learning and affordances [14]. Their policy module is built on a set of low-dimensional affordance measurements, together with the given navigation commands. We argue for an object-centric approach which allows objects to be handled explicitly by the model. Prior work has encoded objects as bounding box positions [15] for manipulation tasks, but does not use end-to-end training and discards all information about the objects except for their pixel positions. We expand upon this work and evaluate a taxonomy of "object-centric" neural network models on the driving task.
III. OBJECT-CENTRIC POLICIES
We describe a generic architecture that takes in RGB images and outputs actions. Our model expresses a series of choices that add different levels of object-centricity to the model. Our goal is to identify which aspects are important for visuomotor tasks such as autonomous driving.
A. Generic Architecture
The generic form of our model takes in an RGB image and outputs two sets of features: global image contextual features and an object-centric representation. The global contextual features are produced by a convolutional network over the whole image, followed by a global average pooling operation. The object-centric representation is constructed as described below to produce a fixed-length object-centric representation. The global features are concatenated with the object representation, and passed to a fully connected policy network which outputs a discretized action. For on-policy evaluation, a hard-coded PID controller converts the action to low-level throttle, steer, and brake commands.
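A minimal PyTorch sketch of this generic forward pass may make the data flow concrete. The module and variable names are our own illustration, not the authors' code; the backbone and object encoder are assumed to be supplied externally.

import torch
import torch.nn as nn

class ObjectCentricPolicy(nn.Module):
    # Illustrative sketch only: combines global and object-centric features.
    # Assumes the object encoder returns a feat_dim-sized vector per image.
    def __init__(self, backbone, object_encoder, feat_dim, num_actions=9):
        super().__init__()
        self.backbone = backbone              # e.g., a DLA-34 trunk -> B x C x H x W
        self.object_encoder = object_encoder  # fixed-length object representation, B x C
        self.policy = nn.Sequential(
            nn.Linear(2 * feat_dim, 256), nn.ReLU(),
            nn.Linear(256, num_actions),      # logits over the discretized actions
        )

    def forward(self, image):
        fmap = self.backbone(image)
        global_feat = fmap.mean(dim=(2, 3))                # global average pooling
        obj_feat = self.object_encoder(fmap, global_feat)  # Section III-B choices live here
        return self.policy(torch.cat([global_feat, obj_feat], dim=1))

At evaluation time, the argmax action would then be handed to the PID controller that produces the low-level throttle, steer, and brake commands.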
B. Objectness Taxonomy
What does it mean for an end-to-end model to be "object-centric"? In this section, we define a taxonomy of structures that leverage different aspects of "objectness". By defining this taxonomy and placing previous work within it, we evaluate which aspects bring the greatest gains in performance specifically for driving scenarios. The aspects discussed are countability, selection, and aggregation. Figure 3 visualizes the levels.
1) Countability: Discrete vs Continuous: An example of a continuous object-centric representation is a pixel-level attention map over an image, as used in [16]. In contrast, a discrete representation could be a bounding box or instance mask. The potential benefit of keeping a discrete object structure is that a model may need to reason explicitly over instances (such as cars navigating an intersection) rather than reasoning over the global vehicle "stuff". Our implementation of discrete objects applies a pre-trained FPN detector [17] to output bounding boxes for vehicles and pedestrians. We utilize an RoI-pooling layer [18] to extract a regional feature for each box. The boxes and their respective features are treated as a set of objects. In the discrete setting, we define O as the list of objects returned by the detector, and f(o_i) as the RoI features of the i-th object. We define G as the global features from the whole image.
2) Selection: Sparse vs Dense: Should the policy model reason over all objects at once (dense), or should it first select a fixed number (sparse) of salient objects and consider only those? The former allows more flexibility, but may, for example, distract the policy with cars that are very far away or separated from the agent by a median. To obtain a relevance score for each object, we train a task-specific selector jointly with the policy. The selector is a network that takes in the RoI features of each object concatenated with the global image features and outputs a scalar score, indicating the relevance of the object. The scores w are passed through a softmax to produce a weight between 0 and 1 for each object. In the sparse model, only the top k scoring objects are used in the policy.
3) Aggregation: Sum vs Concatenate: If using discrete objects, a decision needs to be taken about how to combine the objects into a single representation. One possible approach is to weight and sum the features of the objects, while another is to concatenate the features. The former is agnostic to the number of objects and is order-invariant, while the latter may allow for more nuanced computation about multi-object decisions. Our implementation of the concatenation approach is to sort the objects by their selector weights and concatenate the weighted features w'_i * f_i in order from largest w'_i to smallest.
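The following sketch shows one way the selection and aggregation choices compose. This is our own illustrative code; obj_feats denotes the RoI features and scores the raw selector outputs.

import torch

def aggregate_objects(obj_feats, scores, k=5, sparse=True, concat=False):
    # obj_feats: N x D RoI features; scores: N raw selector scores.
    w = torch.softmax(scores, dim=0)             # per-object weights in [0, 1]
    w, idx = torch.sort(w, descending=True)      # order objects by relevance
    if sparse:
        w, idx = w[:k], idx[:k]                  # sparse selection: keep only the top k
    weighted = w.unsqueeze(1) * obj_feats[idx]   # weight each object's features
    if concat:
        return weighted.reshape(-1)              # fixed-length only if k is fixed
    return weighted.sum(dim=0)                   # order- and count-invariant

Concatenation preserves per-object identity at the cost of a fixed object budget, whereas summation remains valid for any number of detections.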
IV. EXPERIMENTS
We evaluate our object-centric models on both a simulated environment and a real-world dataset. Specifically, we use the Grand Theft Auto V simulation [19] and the Berkeley DeepDrive Video dataset [5] for online and offline evaluation, respectively. All models are trained on a behavioral cloning objective.
A. Evaluation Setup

1) Online Driving Simulation: For the simulation experiments, 1.6 million training frames were collected by using the in-game navigation system as the expert policy. Following a DAgger-like [20] augmented imitation learning pipeline, noise was added to the control command every 30 seconds to generate diverse behavior. The noisy control frames and the following ∼7 frames were dropped during training to avoid replicating noisy behavior. The simulation was rendered at 12 frames per second. The training dataset was collected over 1000 random paths spanning 2 km in the game. The in-game times ranged from 8:00 am to 7:00 pm with the default cloudy weather. Each frame included control signals, such as speed, angle, throttle, steering, and brake, as well as ground-truth bounding boxes around vehicles and pedestrians. During our training and testing procedure we used a camera in front of the car which keeps a fixed 60° horizontal field of view (FoV). The maximum speed of all vehicles was set to 20 km/h. When training a policy, the expert's continuous action was discretized into 9 actions: (left, straight, right) × (fast, slow, stop). At evaluation time, we used a PID controller to translate the discrete actions into continuous control signals per frame.
For testing, we deployed the model in 8 locations unseen during training: 2 highway and 6 urban intersections. Figure 7 demonstrates some example scene layouts in our simulation environment. For each location, we tested the model for 100 minutes: the agent was run for 10 independent roll-outs lasting 10 minutes each. If the vehicle crashed or got stuck during a roll-out, the incident was recorded and the in-game AI intervened for at least 15 seconds until it recovered. An extreme accident which took more time to recover from would be penalized more in our metric, as the vehicle would travel less far overall; the frames during the intervention were not counted towards the total.
The models were evaluated with several metrics. For each roll-out, we calculated the total distance travelled, the number of collisions, and the number of interventions by the in-game AI. To compare across roll-outs, we computed the distance driven between AI interventions, the number of collisions and interventions per 100m traveled.
2) Real-world Offline Dataset: We used 2.2 million training frames and 0.2 million testing frames from a large-scale crowd-sourced dash-cam video dataset with diverse driving behaviors. Each frame was accompanied by raw sensory data from GPS, IMU, gyroscope, and magnetometer, as well as sensor-fused measurements like course and speed.
We follow the settings of the continuous-action driving model of [5]. For each frame, the model was trained to predict the expert's future linear and angular speeds. The predictions were made at intervals of 1/3 seconds during training.
For evaluation, we again follow the method in [5], which first discretized speed and angle into 30 bins each. We then mapped the joint distribution of speed and angle into 30 × 30 = 900 bins. We evaluated the 900-way classification model trained on this dataset by the perplexity of the model on withheld test data. Specifically, we calculated the value of softmax loss function as perplexity indicator, following the evaluation protocol of [5].
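As a sketch of this evaluation protocol (the bin edges and variable names here are assumptions on our part, not taken from [5]):

import numpy as np

def joint_action_class(speed, angle, speed_edges, angle_edges):
    # Map continuous (speed, angle) to one of 30 x 30 = 900 joint bins.
    s = np.clip(np.digitize(speed, speed_edges), 0, 29)
    a = np.clip(np.digitize(angle, angle_edges), 0, 29)
    return s * 30 + a

def perplexity_indicator(logits, labels):
    # Mean softmax cross-entropy on withheld frames, used as the perplexity indicator.
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(labels)), labels].mean()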
B. Implementation Details
The convolutional network was a 34-layer DLA model [2] pre-trained on ImageNet [21], implemented with the open-source framework PyTorch [22]. We use a Detectron model [23] trained on MSCOCO [24] to generate bounding boxes for moving objects, specifically vehicles and pedestrians. We used the Adam optimizer [25] for 3 epochs with initial learning rate 0.001, weight decay 10^-4, and batch size 128. We do not use any data augmentation, which differs from [12], [14]. All sparse models use k = 5 to keep the top 5 objects and discard the rest.
C. Results
We evaluate several baselines, prior methods, and ablations. The baseline method is based on the network by Xu et al. [5], which does not represent objects or use attention at inference time. The pixel attention method is the same as the baseline but with an additional pixel-level attention mechanism, learned end-to-end with the task. This is similar to [16]. Next, we evaluate several object-centric models drawn from our taxonomy. The results labeled dense object use a discrete and dense object representation with summation of the objects weighted by a learned selector. Sparse object is the same as dense object, but only looks at the top 5 objects in the scene, as scored by the learned selector. While the preceding models used the selector to weight object features before summing them, sparse object concat concatenates the features of the top 5 objects and passes the entire list to the fully connected policy. We also evaluate our selector by comparing to a heuristic selector: the size of the object's bounding box. The results using the heuristic selector in a sparse object model are labeled heuristic selector.
The results of the on-policy simulated driving are shown in Figure 4. We show several metrics: the number of collisions, the number of times the agent got stuck, and the distance driven between these. Each evaluation was repeated for two environments: urban (which has many intersections and cars/pedestrians) and highway (which is mostly driving straight). The object-centric methods consistently outperform the two object-agnostic methods in the urban evaluation, while the highway environment shows good performance for all attentional models.
The comparable performance between the evaluation with ground-truth boxes versus predicted boxes (from a detector trained on MSCOCO [24]) indicates that our method is robust to noisy detections. Figure 5 visualizes evaluation roll-outs along a map with collisions and interventions drawn in. These maps show how the object-centric models drive for longer without crashing or getting stuck, and how they end up farther from their start point than the baseline and pixel attention models. This is supported by the histograms of distance between interventions in Figure 6, which show how the sparse models especially drive farther between interventions.
To identify the benefits of using a learned selector over boxes, we compared the sparse object model against a heuristic selector, which assigns importance to objects based on their size. The motivation for this heuristic is that larger objects are likely to be closer, and therefore more important for the policy. Figure 4 shows that the model with a learned selector performs as well as or better than the heuristic on every metric. Although some other heuristic may work better, we conclude that learning the selector jointly with the policy is beneficial.
The final experiment in Table I is an off-policy evaluation on the real-world dataset that measures the perplexity of the learned model with respect to test data. When trained on only a subset of the data (from 5% to 50%), the sparse object models perform best, with concatenation overtaking summation in the medium-data regime. The concatenation model performs as well as the baseline once all the data has been seen, indicating that the sparse model is advantageous for low-data problems, and that the sparse concat model is ideal for medium to large data situations. The object prior that our models leverage allows them to learn quickly from little data without being distracted by irrelevant pixels. Figure 8 shows example scenes with our model's attention.

Fig. 7. Sample scenes from the Grand Theft Auto V simulation with our sparse model's learned object selector compared against a learned pixel-level attention. For rows 1 and 3, red indicates a high-scoring object, and blue is low-scoring (best viewed on screen). For rows 2 and 4, the pixel attention is shown by the brightness of the pixels. The actions output by each model are shown by the white squares in the corners: the accelerator is the top square, and the bottom squares are turn left, brake, and turn right, respectively. A single action may both turn and accelerate or brake. Rows 1 and 2 show both models performing well, while rows 3 and 4 show the pixel attention model ignoring pedestrians and deciding to accelerate towards them. The object-centric model is more conservative and attends strongly to the pedestrians, choosing to slow down instead of speeding up.
V. CONCLUSION
We defined a taxonomy over object-centric models and showed in an on-policy evaluation that sparse object models outperformed object-agnostic models according to our metrics of distance driven and frequency of collisions and interventions. Our results show that highway driving is significantly easier than navigating intersections; the necessity of navigating city environments showcases the advantages of representing objects. Overall, discreteness and sparsity, along with a learned selection mechanism, seem to be the most important aspects of object-centric models.
For simplicity, this work only considered the presence of vehicles and pedestrians and did not evaluate the policy's ability to follow the rules of the road. Using generic object detection rather than class-specific detection would hopefully lead to paying attention to streetlights, signage, and other objects relevant to driving. These types of objects are crucial for following the rules of the road, and we expect that object-centric policies would provide even more gains in future settings. Promising avenues for future work also include leveraging the 3D nature of objects and their temporal coherence.
Fig. 2. Overview of the object-centric architecture. The image is first passed through a 34-layer DLA convolutional network [2], which outputs RoI-pooled features for each object along with globally pooled features for the whole image. Then an object-level attention layer calculates the task-oriented importance score for each RoI. The linear policy layer takes both global and object features and predicts the action for the next step.
Fig. 3. An illustration of the representation taxonomy described in Section III-B. (a) shows a global image representation that does not leverage objects. (b) is a continuous (pixel-level) attention that selects salient parts of the image. (c) is a dense and discrete object representation that selects all objects in the scene. (d) is a discrete but sparse object representation that only selects the objects important for the task. (e) is a sparse representation that treats each object individually by concatenating instead of averaging the object features.
Algorithm 1: Computing the object-centric representation (the head of the loop, truncated in the source, is reconstructed here from the description in Section III-B):

    O := Detector(image)                         // discrete objects from the pre-trained detector
    G := GlobalFeatures(image)                   // globally pooled image features
    for each object o_i in O do
        f_i := RoIPool(o_i)                      // per-object RoI features
        w_i := Selector(f_i, G)                  // object score
    end for
    w'_0, ..., w'_N := Softmax(w_0, ..., w_N)
    f_i := w'_i * f_i for all i
    if sparsity then
        sort objects by w' and keep only the top k
    end if
    if concatenation then
        return concatenate(remaining f_i, sorted by w'_i)
    else if summation then
        return sum(remaining f_i)
    end if
Fig. 4. Driving performance. From left to right: driving distance between interventions, number of interventions per 100m, number of collisions per 100m. The top row shows results using a learned detection model, while the bottom row uses ground-truth bounding boxes. The object-centric models (green) overall perform better than the object-agnostic models (blue), with the sparse models being the best. The highway environment is easier to drive in than the urban environment. Comparing the heuristic selector with the learned selector used in the "sparse object" model, it is clear that learning a selector provides better results.
Fig. 5. Sample trajectories from the evaluation. Yellow dots indicate interventions while red dots indicate collisions (best viewed on screen). This example illustrates the reliability of the object-centric models over the baselines, with fewer collisions and interventions and longer distances travelled.

Fig. 6. Analysis of intervention frequency. On the left, the shaded region measures the proportion of interventions caused by collisions. In the highway environment, almost all interventions are caused by collisions, but in the more complex urban environment, the policy can get stuck at an intersection, as shown in the supplementary video. On the right, histograms show how far each model drove between interventions and collisions. We see that in the urban environment, the object-centric approaches drove farther between interventions than the pixel attention or the baseline. In the highway environment, the pixel attention performs slightly better, probably because this environment does not require much navigation between cars and pedestrians.
Fig. 8. Sample scenes from the Berkeley DeepDrive Video dataset with the sparse model's learned selector visualized. Red indicates a high-scoring object, and blue is low-scoring (best viewed on screen). Our method is robust to imperfect detections, such as overlapping bounding boxes, for both day and night scenes.
1 EECS Department, University of California, Berkeley. 2 CS Department, Nanjing University. * Work done while at UC Berkeley.

Fig. 1. Our method uses discrete objects as part of the policy model for driving in traffic. The learned selector identifies the objects most relevant to the policy, which is often the nearest car. Shown environments: Grand Theft Auto V and Berkeley DeepDrive.
TABLE I
SPARSE TRAINING REAL WORLD EVALUATION. TO EVALUATE THE MODELS TRAINED ON REAL IMAGES, WE MEASURE THE PERPLEXITY OF THE MODELS ON WITHHELD TEST DATA AS AN OFF-POLICY EVALUATION. LOWER PERPLEXITY INDICATES THAT THE DATASET WAS MODELED MORE ACCURATELY.

% data trained on       5%     10%    25%    50%    100%
baseline                2.52   2.40   2.29   1.94   1.80
pixel attention         2.70   2.33   2.15   1.96   1.84
dense object            2.34   2.24   2.07   2.06   2.01
heuristic selector      2.48   2.39   2.31   2.13   2.10
sparse object           2.31   2.23   2.19   2.07   2.10
sparse object concat    2.37   2.31   2.04   1.93   1.82
REFERENCES

[1] M. Müller, A. Dosovitskiy, B. Ghanem, and V. Koltun, "Driving policy transfer via modularity and abstraction," arXiv preprint arXiv:1804.09364, 2018.
[2] F. Yu, D. Wang, E. Shelhamer, and T. Darrell, "Deep layer aggregation," in CVPR, 2018.
[3] M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, et al., "End to end learning for self-driving cars," arXiv preprint arXiv:1604.07316, 2016.
[4] M. Bojarski, P. Yeres, A. Choromanska, K. Choromanski, B. Firner, L. Jackel, and U. Muller, "Explaining how a deep neural network trained with end-to-end learning steers a car," arXiv preprint arXiv:1704.07911, 2017.
[5] H. Xu, Y. Gao, F. Yu, and T. Darrell, "End-to-end learning of driving models from large-scale video datasets," in CVPR, 2017.
[6] B. Huval, T. Wang, S. Tandon, J. Kiske, W. Song, J. Pazhayampallil, M. Andriluka, P. Rajpurkar, T. Migimatsu, R. Cheng-Yue, et al., "An empirical evaluation of deep learning on highway driving," arXiv preprint arXiv:1504.01716, 2015.
[7] S. Thrun, M. Montemerlo, H. Dahlkamp, D. Stavens, A. Aron, J. Diebel, P. Fong, J. Gale, M. Halpenny, G. Hoffmann, et al., "Stanley: The robot that won the DARPA Grand Challenge," Journal of Field Robotics, 2006.
[8] C. Urmson, J. Anhalt, D. Bagnell, C. Baker, R. Bittner, M. Clark, J. Dolan, D. Duggins, T. Galatali, C. Geyer, et al., "Autonomous driving in urban environments: Boss and the Urban Challenge," Journal of Field Robotics, 2008.
[9] J. Ziegler, P. Bender, M. Schreiber, H. Lategahn, T. Strauss, C. Stiller, T. Dang, U. Franke, N. Appenrodt, C. G. Keller, et al., "Making Bertha drive: An autonomous journey on a historic route," ITSM, 2014.
[10] D. A. Pomerleau, "ALVINN: An autonomous land vehicle in a neural network," in NIPS, 1989.
[11] U. Muller, J. Ben, E. Cosatto, B. Flepp, and Y. L. Cun, "Off-road obstacle avoidance through end-to-end learning," in NIPS, 2006.
[12] F. Codevilla, M. Müller, A. Dosovitskiy, A. López, and V. Koltun, "End-to-end driving via conditional imitation learning," arXiv preprint arXiv:1710.02410, 2017.
[13] C. Chen, A. Seff, A. Kornhauser, and J. Xiao, "DeepDriving: Learning affordance for direct perception in autonomous driving," in ICCV, 2015.
[14] A. Sauer, N. Savinov, and A. Geiger, "Conditional affordance learning for driving in urban environments," arXiv preprint arXiv:1806.06498, 2018.
[15] C. Devin, P. Abbeel, T. Darrell, and S. Levine, "Deep object-centric representations for generalizable robot learning," in ICRA, 2017.
[16] J. Kim and J. Canny, "Interpretable learning for self-driving cars by visualizing causal attention," in ICCV, 2017.
[17] T.-Y. Lin, P. Dollár, R. B. Girshick, K. He, B. Hariharan, and S. J. Belongie, "Feature pyramid networks for object detection," in CVPR, 2017.
[18] R. Girshick, "Fast R-CNN," in ICCV, 2015.
[19] P. Krähenbühl, "Free supervision from video games," in CVPR, 2018.
[20] S. Ross, G. Gordon, and D. Bagnell, "A reduction of imitation learning and structured prediction to no-regret online learning," in AISTATS, 2011.
[21] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al., "ImageNet large scale visual recognition challenge," IJCV, 2015.
[22] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer, "Automatic differentiation in PyTorch," in NIPS-W, 2017.
[23] R. Girshick, I. Radosavovic, G. Gkioxari, P. Dollár, and K. He, "Detectron," https://github.com/facebookresearch/detectron, 2018.
[24] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, "Microsoft COCO: Common objects in context," in ECCV, 2014.
[25] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
Music Genre Classification using Machine Learning Techniques

Hareesh Bahuleyan
University of Waterloo, ON, Canada
[email protected]

Abstract

Categorizing music files according to their genre is a challenging task in the area of music information retrieval (MIR). In this study, we compare the performance of two classes of models. The first is a deep learning approach wherein a CNN model is trained end-to-end to predict the genre label of an audio signal, solely using its spectrogram. The second approach utilizes hand-crafted features, both from the time domain and the frequency domain. We train four traditional machine learning classifiers with these features and compare their performance. The features that contribute the most towards this classification task are identified. The experiments are conducted on the Audio Set dataset and we report an AUC value of 0.894 for an ensemble classifier which combines the two proposed approaches.
Introduction
With the growth of online music databases and easy access to music content, people find it increasingly hard to manage the songs that they listen to. One way to categorize and organize songs is based on the genre, which is identified by some characteristics of the music such as rhythmic structure, harmonic content and instrumentation (Tzanetakis and Cook, 2002). Being able to automatically classify and provide tags to the music present in a user's library, based on genre, would be beneficial for audio streaming services such as Spotify and iTunes. This study explores the application of machine learning (ML) algorithms to identify and classify the genre of a given audio file. The first model described in this paper uses convolutional neural networks (Krizhevsky et al., 2012), which is trained end-to-end on the MEL spectrogram of the audio signal. In the second part of the study, we extract features both in the time domain and the frequency domain of the audio signal. These features are then fed to conventional machine learning models, namely Logistic Regression, Random Forests (Breiman, 2001), Gradient Boosting (Friedman, 2001), and Support Vector Machines, which are trained to classify the given audio file. The models are evaluated on the Audio Set dataset (Gemmeke et al., 2017). We compare the proposed models and also study the relative importance of different features.
The rest of this paper is organized as follows. Section 2 describes the existing methods in the literature for the task of music genre classification. Section 3 is an overview of the dataset used in this study and how it was obtained. The proposed models and the implementation details are discussed in Section 4. The results are reported in Section 5.2, followed by the conclusions from this study in Section 6.
Literature Review
Music genre classification has been a widely studied area of research since the early days of the Internet. Tzanetakis and Cook (2002) addressed this problem with supervised machine learning approaches such as Gaussian mixture models and k-nearest neighbour classifiers. They introduced 3 sets of features for this task, categorized as timbral structure, rhythmic content and pitch content. Hidden Markov Models (HMMs), which have been extensively used for speech recognition tasks, have also been explored for music genre classification (Scaringella and Zoia, 2005; Soltau et al., 1998). Support vector machines (SVMs) with different distance metrics are studied and compared in Mandel and Ellis (2005) for classifying genre.
In Lidy and Rauber (2005), the authors discuss the contribution of psycho-acoustic features for recognizing music genre, especially the importance of STFT taken on the Bark Scale (Zwicker and Fastl, 1999). Mel-frequency cepstral coefficients (MFCCs), spectral contrast and spectral roll-off were some of the features used by (Tzanetakis and Cook, 2002). A combination of visual and acoustic features are used to train SVM and AdaBoost classifiers in Nanni et al. (2016).
With the recent success of deep neural networks, a number of studies apply these techniques to speech and other forms of audio data (Abdel-Hamid et al., 2014; Gemmeke et al., 2017). Representing audio in the time domain for input to neural networks is not very straightforward because of the high sampling rate of audio signals. However, it has been addressed in Van Den Oord et al. (2016) for audio generation tasks. A common alternative representation is the spectrogram of a signal, which captures both time and frequency information. Spectrograms can be considered as images and used to train convolutional neural networks (CNNs) (Wyse, 2017). A CNN was developed to predict the music genre using the raw MFCC matrix as input in Li et al. (2010). In Lidy and Schindler (2016), a constant-Q-transform (CQT) spectrogram was provided as input to the CNN to achieve the same task.
This work aims to provide a comparative study between 1) the deep learning based models which only require the spectrogram as input, and 2) the traditional machine learning classifiers that need to be trained with hand-crafted features. We also investigate the relative importance of different features.
Dataset
In this work, we make use of Audio Set, which is a large-scale human-annotated database of sounds (Gemmeke et al., 2017). The dataset was created by extracting 10-second sound clips from a total of 2.1 million YouTube videos. The audio files have been annotated on the basis of an ontology which covers 527 classes of sounds including musical instruments, speech, vehicle sounds, animal sounds and so on. This study requires only the audio files that belong to the music category, specifically having one of the seven genre tags shown in Table 1.
The number of audio clips in each category has also been tabulated. The raw audio clips of these sounds have not been provided in the Audio Set data release. However, the data provides the YouTube ID of the corresponding videos, along with the start and end times. Hence, the first task is to retrieve these audio files. For the purpose of audio retrieval from YouTube, the following steps were carried out:

1. A command line program called youtube-dl (Gonzalez, 2006) was utilized to download the video in the mp4 format.

2. The mp4 files are converted into the desired wav format using an audio converter named ffmpeg (Tomar, 2006) (command line tool).
Each wav file is about 880 KB in size, which means that the total data used in this study is approximately 34 GB.
Methodology
This section provides the details of the data preprocessing steps followed by the description of the two proposed approaches to this classification problem.
Data Pre-processing
In order to improve the Signal-to-Noise Ratio (SNR) of the signal, a pre-emphasis filter, given by Equation 1 is applied to the original audio signal.
$y(t) = x(t) - \alpha \cdot x(t-1)$    (1)

where x(t) refers to the original signal, y(t) refers to the filtered signal, and α is set to 0.97. Such a pre-emphasis filter is useful to boost amplitudes at high frequencies (Kim and Stern, 2012).
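As a minimal sketch, the filter is a one-liner in NumPy:

import numpy as np

def pre_emphasis(x, alpha=0.97):
    # y(t) = x(t) - alpha * x(t - 1); the first sample is passed through unchanged.
    return np.append(x[0], x[1:] - alpha * x[:-1])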
Deep Neural Networks
Using deep learning, we can achieve the task of music genre classification without the need for hand-crafted features. Convolutional neural networks (CNNs) have been widely used for the task of image classification (Krizhevsky et al., 2012). The 3-channel (RGB) matrix representation of an image is fed into a CNN which is trained to predict the image class. In this study, the sound wave can be represented as a spectrogram, which in turn can be treated as an image (Nanni et al., 2016) (Lidy and Schindler, 2016). The task of the CNN is to use the spectrogram to predict the genre label (one of seven classes).
Spectrogram Generation
A spectrogram is a 2D representation of a signal, having time on the x-axis and frequency on the y-axis. A colormap is used to quantify the magnitude of a given frequency within a given time window. In this study, each audio signal was converted into a MEL spectrogram (having MEL frequency bins on the y-axis). The parameters used to generate the power spectrogram using STFT are listed below; a code sketch follows the list:
• Sampling rate (sr) = 22050
• Frame/Window size (n fft) = 2048
• Time advance between frames (hop size) = 512 (resulting in 75% overlap)
• Window Function: Hann Window
• Frequency Scale: MEL
• Number of MEL bins: 96
• Highest Frequency (f max) = sr/2
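With these parameters, the spectrogram can be generated with librosa roughly as follows. The file path is a placeholder, and exact keyword names may differ slightly across librosa versions.

import librosa
import numpy as np

y, sr = librosa.load("clip.wav", sr=22050)      # placeholder path
S = librosa.feature.melspectrogram(
    y=y, sr=sr, n_fft=2048, hop_length=512,     # 75% frame overlap
    window="hann", n_mels=96, fmax=sr / 2,
)
S_db = librosa.power_to_db(S, ref=np.max)       # log-scaled power spectrogram for the CNN input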
Convolutional Neural Networks
From Figure 1, one can see that there exist some characteristic patterns in the spectrograms of the audio signals belonging to different classes. Hence, spectrograms can be considered as 'images' and provided as input to a CNN, which has shown good performance on image classification tasks. Each block in a CNN consists of the following operations (a minimal code sketch follows the list):
• Convolution: This step involves sliding a matrix filter (say of 3x3 size) over the input image, which is of dimension image width x image height. The filter is first placed on the image matrix and then we compute an element-wise multiplication between the filter and the overlapping portion of the image, followed by a summation to give a feature value. We use many such filters, the values of which are 'learned' during the training of the neural network via backpropagation.
• Pooling: This is a way to reduce the dimension of the feature map obtained from the convolution step, formally know as the process of down sampling. For example, by max pooling with 2x2 window size, we only retain the element with the maximum value among the 4 elements of the feature map that are covered in this window. We keep moving this window across the feature map with a predefined stride.
• Non-linear Activation: The convolution operation is linear, and in order to make the neural network more powerful, we need to introduce some non-linearity. For this purpose, we can apply an activation function such as ReLU on each element of the feature map.
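A single such block, written as a minimal Keras sketch (the layer sizes here are illustrative, not the paper's):

import tensorflow as tf

conv_block = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), padding="same", activation="relu",
                           input_shape=(216, 216, 3)),   # convolution + ReLU
    tf.keras.layers.MaxPooling2D((2, 2)),                # 2x2 max pooling (down-sampling)
])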
In this study, a CNN architecture known as VGG-16, which was the top performing model in the ImageNet Challenge 2014 (classification + localization task) was used (Simonyan and Zisserman, 2014). The model consists of 5 convolutional blocks (conv base), followed by a set of densely connected layers, which outputs the probability that a given image belongs to each of the possible classes.
For the task of music genre classification using spectrograms, we download the model architecture with pre-trained weights, and extract the conv base. The output of the conv base is then sent to a new feed-forward neural network which in turn predicts the genre of the music, as depicted in Figure 2. There are two possible settings while implementing the pre-trained model (a sketch of both follows the list):
1. Transfer learning: The weights in the conv base are kept fixed but the weights in the feed-forward network (represented by the yellow box in Figure 2) are allowed to be tuned to predict the correct genre label.
2. Fine tuning: In this setting, we start with the pre-trained weights of VGG-16, but allow all the model weights to be tuned during the training process.
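A hedged sketch of both settings with the Keras VGG-16 implementation; the 512-unit head, L2 penalty, and dropout rate mirror the description in Section 4.2.3, while everything else is illustrative.

import tensorflow as tf

conv_base = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=(216, 216, 3))
conv_base.trainable = False   # transfer learning; set to True for fine tuning

model = tf.keras.Sequential([
    conv_base,
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation="relu",
                          kernel_regularizer=tf.keras.regularizers.l2(0.001)),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(7, activation="softmax"),   # seven genre classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])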
The final layer of the neural network outputs the class probabilities (using the softmax activation function) for each of the seven possible class labels. Next, the cross-entropy loss is computed as follows:
$L = -\sum_{c=1}^{M} y_{o,c} \cdot \log(p_{o,c})$    (2)
where, M is the number of classes; y o,c is a binary indicator whose value is 1 if observation o belongs to class c and 0 otherwise; p o,c is the model's predicted probability that observation o belongs to class c. This loss is used to backpropagate the error, compute the gradients and thereby update the weights of the network. This iterative process continues until the loss converges to a minimum value.
Implementation Details
The spectrogram images have a dimension of 216 x 216. For the feed-forward network connected to the conv base, a 512-unit hidden layer is implemented. Over-fitting is a common issue in neural networks. In order to prevent this, two strategies are adopted:
1. L2-Regularization (Ng, 2004): The loss function of the neural network is augmented with the term $\frac{\lambda}{2} \sum_i w_i^2$, where w refers to the weights in the neural network. This method is used to penalize excessively high weights. We would like the weights to be diffused across all model parameters, and not concentrated among just a few. Also, intuitively, smaller weights correspond to a less complex model, thereby avoiding overfitting. λ is set to a value of 0.001 in this study.

2. Dropout (Srivastava et al., 2014): This is a regularization mechanism in which we shut off some of the neurons (set their weights to zero) randomly during training. In each iteration, we thereby use a different combination of neurons to predict the final output. This makes the model generalize without any heavy dependence on a subset of the neurons. A dropout rate of 0.3 is used, which means that a given weight is set to zero during an iteration with a probability of 0.3.
The dataset is randomly split into train (90%), validation (5%) and test (5%) sets. The same split is used for all experiments to ensure a fair comparison of the proposed models.
The neural networks are implemented in Python using Tensorflow; an NVIDIA Titan X GPU was utilized for faster processing. All models were trained for 10 epochs with a batch size of 32 with the ADAM optimizer (Kingma and Ba, 2014). One epoch refers to one iteration over the entire training dataset. Figure 3 shows the learning curves: the loss (which is being optimized) keeps decreasing as the training progresses. Although the training accuracy keeps increasing, the validation accuracy first increases and, after a certain number of epochs, starts to decrease. This shows the model's tendency to overfit on the training data. The model that is selected for evaluation purposes is the one that has the highest accuracy and lowest loss on the validation set (epoch 4 in Figure 3).
Baseline Feed-forward Neural Network
To assess the performance improvement that can be achieved by the CNNs, we also train a baseline feed-forward neural network that takes as input the same spectrogram image. The image, which is a 2-dimensional matrix of pixel values, is unwrapped or flattened into a 1-dimensional vector. Using this vector, a simple 2-layer neural network is trained to predict the genre of the audio signal. The first hidden layer consists of 512 units and the second layer has 32 units, followed by the output layer. The activation function used is ReLU and the same regularization techniques described in Section 4.2.3 are adopted.
Manually Extracted Features
In this section, we describe the second category of proposed models, namely the ones that require hand-crafted features to be fed into a machine learning classifier. Features can be broadly classified as time domain and frequency domain features. The feature extraction was done using librosa, a Python library.
Time Domain Features
These are features which were extracted from the raw audio signal.
1. Central moments: This consists of the mean, standard deviation, skewness and kurtosis of the amplitude of the signal.
2. Zero Crossing Rate (ZCR): A zero-crossing point refers to one where the signal changes sign from positive to negative (Gouyon et al., 2000). The entire 10-second signal is divided into smaller frames, and the number of zero-crossings present in each frame is determined. The frame length is chosen to be 2048 points with a hop size of 512 points. Note that these frame parameters have been used consistently across all features discussed in this section. Finally, the average and standard deviation of the ZCR across all frames are chosen as representative features.
3. Root Mean Square Energy (RMSE): The energy in a signal is calculated as

$\sum_{n=1}^{N} |x(n)|^2$    (3)

Further, the root mean square value can be computed as

$\sqrt{\frac{1}{N} \sum_{n=1}^{N} |x(n)|^2}$    (4)

RMSE is calculated frame by frame, and then we take the average and standard deviation across all frames.
4. Tempo: In general terms, tempo refers to how fast or slow a piece of music is; it is expressed in terms of Beats Per Minute (BPM). Intuitively, different kinds of music would have different tempos. Since the tempo of the audio piece can vary with time, we aggregate it by computing the mean across several frames. The functionality in librosa first computes a tempogram following (Grosche et al., 2010) and then estimates a single value for the tempo. A sketch of the time-domain feature extraction follows.
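The sketch below gathers the four time-domain features; the function name and dictionary layout are our own, and some librosa calls are named slightly differently in older versions (e.g., rmse vs. rms).

import librosa
import numpy as np
from scipy.stats import kurtosis, skew

def time_domain_features(y, sr, frame_length=2048, hop_length=512):
    zcr = librosa.feature.zero_crossing_rate(y, frame_length=frame_length,
                                             hop_length=hop_length)
    rms = librosa.feature.rms(y=y, frame_length=frame_length, hop_length=hop_length)
    tempo = librosa.beat.tempo(y=y, sr=sr)[0]        # single BPM estimate
    return {
        "amp_mean": np.mean(y), "amp_std": np.std(y),  # central moments of the amplitude
        "amp_skew": skew(y), "amp_kurt": kurtosis(y),
        "zcr_mean": zcr.mean(), "zcr_std": zcr.std(),
        "rms_mean": rms.mean(), "rms_std": rms.std(),
        "tempo": tempo,
    }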
Frequency Domain Features
The audio signal can be transformed into the frequency domain by using the Fourier Transform. We then extract the following features.

1. Chroma Features: This is a vector which corresponds to the total energy of the signal in each of the 12 pitch classes (C, C#, D, D#, E, F, F#, G, G#, A, A#, B) (Ellis, 2007). The chroma vectors are then aggregated across the frames to obtain a representative mean and standard deviation.

2. Mel-Frequency Cepstral Coefficients (MFCC): These were introduced by Davis and Mermelstein (1990). First, the Short-Time Fourier Transform (STFT) of the signal is taken with n_fft=2048, hop size=512, and a Hann window. Next, we compute the power spectrum and then apply the triangular MEL filter bank, which mimics the human perception of sound. This is followed by taking the discrete cosine transform of the logarithm of all filterbank energies, thereby obtaining the MFCCs. The parameter n_mels, which corresponds to the number of filter banks, was set to 20 in this study.

3. Spectral Centroid: For each frame, this corresponds to the frequency around which most of the energy is centered (Tjoa, 2017). It is a magnitude-weighted frequency, calculated as

$f_c = \frac{\sum_k S(k) f(k)}{\sum_k S(k)}$    (5)

where S(k) is the spectral magnitude of frequency bin k and f(k) is the frequency corresponding to bin k.
4. Spectral Band-width: The p-th order spectral band-width corresponds to the p-th order moment about the spectral centroid (Tjoa, 2017) and is calculated as

$\left[ \sum_k S(k) \, (f(k) - f_c)^p \right]^{1/p}$    (6)

For example, p = 2 is analogous to a weighted standard deviation.

5. Spectral Contrast: Each frame is divided into a pre-specified number of frequency bands, and within each frequency band, the spectral contrast is calculated as the difference between the maximum and minimum magnitudes (Jiang et al., 2002).
6. Spectral Roll-off: This feature corresponds to the value of frequency below which 85% (this threshold can be defined by the user) of the total energy in the spectrum lies (Tjoa, 2017).
For each of the spectral features described above, the mean and standard deviation of the values taken across frames are considered as the representative final features that are fed to the model.

The features described in this section are used to train the machine learning algorithms (refer to Section 4.4); a sketch of the extraction code follows. The features that contribute the most in achieving a good classification performance will be identified and reported.
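A minimal sketch of the frequency-domain extraction, using librosa functions that match the features above; the function name and aggregation helper are ours.

import librosa

def frequency_domain_features(y, sr, n_fft=2048, hop_length=512):
    # Each call returns a (bins x frames) matrix that is then aggregated
    # into per-bin mean and standard deviation, as described in the text.
    feats = {
        "chroma": librosa.feature.chroma_stft(y=y, sr=sr, n_fft=n_fft,
                                              hop_length=hop_length),
        "mfcc": librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20),
        "centroid": librosa.feature.spectral_centroid(y=y, sr=sr),
        "bandwidth": librosa.feature.spectral_bandwidth(y=y, sr=sr),
        "contrast": librosa.feature.spectral_contrast(y=y, sr=sr),
        "rolloff": librosa.feature.spectral_rolloff(y=y, sr=sr, roll_percent=0.85),
    }
    return {name: (m.mean(axis=1), m.std(axis=1)) for name, m in feats.items()}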
Classifiers
This section provides a brief overview of the four machine learning classifiers adopted in this study; a short training sketch follows the list.
1. Logistic Regression (LR): This linear classifier is generally used for binary classification tasks. For this multi-class classification task, the LR is implemented as a one-vs-rest method. That is, 7 separate binary classifiers are trained. During test time, the class with the highest probability from among the 7 classifiers is chosen as the predicted class.
2. Random Forest (RF): Random Forest is an ensemble learner that combines the predictions from a pre-specified number of decision trees. It works on the integration of two main principles: 1) each decision tree is trained with only a subset of the training samples, which is known as bootstrap aggregation (or bagging) (Breiman, 1996); 2) each decision tree is required to make its prediction using only a random subset of the features (Amit and Geman, 1997). The final predicted class of the RF is determined based on the majority vote from the individual classifiers.
3. Gradient Boosting (XGB): Boosting is another ensemble classifier that is obtained by combining a number of weak learners (such as decision trees). However, unlike RFs, boosting algorithms are trained in a sequential manner using forward stagewise additive modelling (Hastie et al., 2001). During the early iterations, the decision trees learnt are fairly simple. As training progresses, the classifier becomes more powerful because it is made to focus on the instances where the previous learners made errors. At the end of training, the final prediction is a weighted linear combination of the output from the individual learners. XGB refers to eXtreme Gradient Boosting, which is an implementation of boosting that supports training the model in a fast and parallelized manner.
4. Support Vector Machines (SVM): SVMs transform the original input data into a high-dimensional space using a kernel trick (Cortes and Vapnik, 1995). The transformed data can be linearly separated using a hyperplane; the optimal hyperplane maximizes the margin. In this study, a radial basis function (RBF) kernel is used to train the SVM because such a kernel is required to address this non-linear problem. Similar to the logistic regression setting discussed above, the SVM is also implemented as a one-vs-rest classification task.
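A sketch of training all four classifiers; hyperparameters here are illustrative defaults, not the paper's tuned values, and X_train / y_train stand for the hand-crafted feature matrix and genre labels.

from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from xgboost import XGBClassifier

models = {
    "LR": LogisticRegression(multi_class="ovr", max_iter=1000),  # one-vs-rest
    "RF": RandomForestClassifier(n_estimators=100),              # bagged decision trees
    "SVM": SVC(kernel="rbf", probability=True),                  # RBF kernel, one-vs-rest
    "XGB": XGBClassifier(),                                      # gradient boosting
}
for name, clf in models.items():
    clf.fit(X_train, y_train)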
Evaluation
Metrics
In order to evaluate the performance of the models described in Section 4, the following metrics will be used.
• Accuracy: Refers to the percentage of correctly classified test samples.
• F-score: Based on the confusion matrix, it is possible to calculate the precision and recall. The F-score is then computed as the harmonic mean of precision and recall.
• AUC: This evaluation criteria known as the area under the receiver operator characteristics (ROC) curve is a common way to judge the performance of a multi-class classification system. The ROC is a graph between the true positive rate and the false positive rate. A baseline model which randomly predicts each class label with equal probability would have an AUC of 0.5, and hence the system being designed is expected to have a AUC higher than 0.5.
Results and Discussion
In this section, the different modelling approaches discussed in Section 4 are evaluated based on the metrics described in Section 5.1. The values are reported in Table 2. The best performance in terms of all metrics is observed for the convolutional neural network model based on VGG-16, which uses only the spectrogram to predict the music genre. It was expected that the fine tuning setting, which additionally allows the convolutional base to be trainable, would enhance the CNN model when compared to the transfer learning setting. However, as Table 2 shows, there is no significant difference between transfer learning and fine tuning in our experiments. The baseline feed-forward neural network that uses the unrolled pixel values from the spectrogram performs poorly on the test set. This shows that CNNs can significantly improve the scores on such an image classification task.
Among the models that use manually crafted features, the weakest performer is the Logistic Regression model. This is expected since logistic regression is a linear classifier. SVMs outperform random forests in terms of accuracy. However, the XGB version of the gradient boosting algorithm performs the best among the feature-engineered methods.
Most Important Features
In this section, we investigate which features contribute the most during prediction in this classification task. To carry out this experiment, we chose the XGB model, based on the results discussed in the previous section. We rank the top 20 most useful features based on a scoring metric (Figure 4). The metric is calculated as the number of times a given feature is used as a decision node among the individual decision trees that form the gradient boosting predictor.
As can be observed from Figure 4, Mel-Frequency Cepstral Coefficients (MFCC) appear the most among the important features. Previous studies have reported MFCCs to improve the performance of speech recognition systems (Ittichaichareon et al., 2012). Our experiments show that MFCCs also contribute significantly to the task of music genre classification. The mean and standard deviation of the spectral contrasts at different frequency bands are also important features. The music tempo, calculated in terms of beats per minute, also appears in the top 20 useful features.
Next, we study how much performance, in terms of AUC and accuracy, can be obtained by using just the top N features while training the model. From Table 3 it can be seen that with only the top 10 features, the model performance is surprisingly good. In comparison to the full model, which has 97 features, the model with the top 30 features has only a marginally lower performance (2 points on the AUC metric and 4 points on the accuracy metric).
Confusion Matrix
A confusion matrix is a tabular representation which enables us to further understand the strengths and weaknesses of our model. Element a_ij in the matrix refers to the number of test instances of class i that the model predicted as class j; the diagonal elements a_ii correspond to the correct predictions. Figure 5 compares the confusion matrices of the best performing CNN model and XGB, the best model among the feature-engineered classifiers. Both models seem to be good at predicting the class 'Rock'. However, many instances of class 'Hip Hop' are often confused with class 'Pop' and vice-versa. Such behaviour is expected when the genres of music are very close. Some songs may fall into multiple genres, so much so that it may be difficult even for humans to recognize the exact genre.
Ensemble Classifier
Ensembling is a commonly adopted practice in machine learning, wherein the results from different classifiers are combined. This is done by either majority voting or by averaging scores/probabilities. Such an ensembling scheme, which combines the predictive power of different classifiers, makes the overall system more robust. In our case, each classifier outputs a prediction probability for each of the class labels. Hence, averaging the predicted probabilities from the different classifiers is a straightforward way to do ensemble learning.
The methodologies described in Sections 4.2 and 4.4 use very different sources of input, the spectrograms and the hand-crafted features respectively. Hence, it makes sense to combine the models via ensembling. In this study, the best CNN model, namely VGG-16 Transfer Learning, is ensembled with XGBoost, the best feature-engineered model, by averaging the predicted probabilities. As shown in Table 2, this ensembling is beneficial and is observed to outperform all individual classifiers. The ROC curve for the ensemble model is above those of VGG-16 Fine Tuning and XGBoost, as illustrated in Figure 6.
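As a sketch (the model variables are assumed to be the trained VGG-16 and XGBoost models, with held-out spectrograms and feature vectors):

import numpy as np

# Average the class-probability outputs of the two best models (soft voting).
p_cnn = cnn_model.predict(spectrogram_test)     # VGG-16 probabilities, shape (n, 7)
p_xgb = xgb_model.predict_proba(features_test)  # XGBoost probabilities, shape (n, 7)
p_ensemble = (p_cnn + p_xgb) / 2.0
y_pred = p_ensemble.argmax(axis=1)              # predicted genre per test clip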
Conclusion
In this work, the task of music genre classification is studied using the Audio Set data. We propose two different approaches to solving this problem. The first involves generating a spectrogram of the audio signal and treating it as an image. A CNN-based image classifier, namely VGG-16, is trained on these images to predict the music genre solely based on this spectrogram. The second approach consists of extracting time domain and frequency domain features from the audio signals, followed by training traditional machine learning classifiers based on these features. XGBoost was determined to be the best feature-based classifier; the most important features were also reported. The CNN-based deep learning models were shown to outperform the feature-engineered models. We also show that ensembling the CNN and XGBoost models proved to be beneficial. It is to be noted that the dataset used in this study consists of audio clips from YouTube videos, which are in general very noisy. Future studies can identify ways to pre-process this noisy data before feeding it into a machine learning model, in order to achieve better performance.

Figure 6: ROC Curves for the best performing models and their ensemble
Figure 1: Sample spectrograms for 1 audio signal from each music genre

Figure 2: Convolutional neural network architecture (Image Source: Hvass Tensorflow Tutorials)

Figure 3: Learning curves, used for model selection; epoch 4 has the minimum validation loss and highest validation accuracy

Figure 4: Relative importance of features in the XGBoost model; the top 20 most contributing features are displayed

Figure 5: Confusion matrices of the best performing models
Table 1: Number of instances in each genre class

     Genre            Count
1    Pop Music         8100
2    Rock Music        7990
3    Hip Hop Music     6958
4    Techno            6885
5    Rhythm Blues      4247
6    Vocal             3363
7    Reggae Music      2997
     Total            40540
Table 2: Comparison of performance of the models on the test set

Model                               Accuracy   F-score   AUC
Spectrogram-based models
  VGG-16 CNN Transfer Learning      0.63       0.61      0.891
  VGG-16 CNN Fine Tuning            0.64       0.61      0.889
  Feed-forward NN baseline          0.43       0.33      0.759
Feature Engineering based models
  Logistic Regression (LR)          0.53       0.47      0.822
  Random Forest (RF)                0.54       0.48      0.840
  Support Vector Machines (SVM)     0.57       0.52      0.856
  Extreme Gradient Boosting (XGB)   0.59       0.55      0.865
Ensemble Classifiers
  VGG-16 CNN + XGB                  0.65       0.62      0.894
Table 3 :
3Ablation Study: Comparing XGB performance keeping only top N featuresN AUC Accuracy
10 0.803
0.47
20 0.837
0.52
30 0.845
0.55
97 0.865
0.59
Table 4 :
4Comparison of Time Domain features and Frequency Domain features trix refers to the number of test instances of class i that the model predicted as class j. Diagonal elements a ii corresponds to the correct predictions.Model
AUC Accuracy
Time Domain only
0.731
0.40
Frequency Domain only 0.857
0.57
Both
0.865
0.59
The code has been opensourced and is available at https://github.com/HareeshBahuleyan/ music-genre-classification
https://research.google.com/audioset/ ontology/index.html
https://ujjwalkarn.me/2016/08/11/ intuitive-explanation-convnets/ 4 https://en.wikipedia.org/wiki/ Rectifier_(neural_networks)
https://en.wikipedia.org/wiki/F1_ score
Convolutional neural networks for speech recognition. Ossama Abdel-Hamid, Abdel-Rahman Mohamed, Hui Jiang, Li Deng, Gerald Penn, Dong Yu, IEEE/ACM Transactions on audio, speech, and language processing. 2210Ossama Abdel-Hamid, Abdel-rahman Mohamed, Hui Jiang, Li Deng, Gerald Penn, and Dong Yu. 2014. Convolutional neural networks for speech recogni- tion. IEEE/ACM Transactions on audio, speech, and language processing 22(10):1533-1545.
Shape quantization and recognition with randomized trees. Yali Amit, Donald Geman, Neural computation. 97Yali Amit and Donald Geman. 1997. Shape quantiza- tion and recognition with randomized trees. Neural computation 9(7):1545-1588.
Bagging predictors. Leo Breiman, Machine learning. 242Leo Breiman. 1996. Bagging predictors. Machine learning 24(2):123-140.
Random forests. Machine learning. Leo Breiman, 45Leo Breiman. 2001. Random forests. Machine learn- ing 45(1):5-32.
Supportvector networks. Corinna Cortes, Vladimir Vapnik, Machine learning. 203Corinna Cortes and Vladimir Vapnik. 1995. Support- vector networks. Machine learning 20(3):273-297.
Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences. B Steven, Paul Davis, Mermelstein, Readings in speech recognition. ElsevierSteven B Davis and Paul Mermelstein. 1990. Compar- ison of parametric representations for monosyllabic word recognition in continuously spoken sentences. In Readings in speech recognition, Elsevier, pages 65-74.
Chroma feature analysis and synthesis. Resources of Laboratory for the Recognition and Organization of Speech and Audio-LabROSA. Dan Ellis , Dan Ellis. 2007. Chroma feature analysis and synthe- sis. Resources of Laboratory for the Recognition and Organization of Speech and Audio-LabROSA .
Greedy function approximation: a gradient boosting machine. H Jerome, Friedman, Annals. Jerome H Friedman. 2001. Greedy function approx- imation: a gradient boosting machine. Annals of statistics pages 1189-1232.
Audio set: An ontology and human-labeled dataset for audio events. Jort F Gemmeke, P W Daniel, Dylan Ellis, Aren Freedman, Wade Jansen, Channing Lawrence, Manoj Moore, Marvin Plakal, Ritter, 2017 IEEE International Conference on. IEEE. Acoustics, Speech and Signal ProcessingJort F Gemmeke, Daniel PW Ellis, Dylan Freedman, Aren Jansen, Wade Lawrence, R Channing Moore, Manoj Plakal, and Marvin Ritter. 2017. Audio set: An ontology and human-labeled dataset for audio events. In Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on. IEEE, pages 776-780.
Youtube-dl: download videos from youtube. Ricardo Garcia Gonzalez, Ricardo Garcia Gonzalez. 2006. Youtube-dl: down- load videos from youtube. com.
On the use of zero-crossing rate for an application of classification of percussive sounds. Fabien Gouyon, François Pachet, Olivier Delerue, Proceedings of the COST G-6 conference on Digital Audio Effects (DAFX-00). the COST G-6 conference on Digital Audio Effects (DAFX-00)Verona, ItalyFabien Gouyon, François Pachet, Olivier Delerue, et al. 2000. On the use of zero-crossing rate for an ap- plication of classification of percussive sounds. In Proceedings of the COST G-6 conference on Digital Audio Effects (DAFX-00), Verona, Italy.
Cyclic tempograma mid-level tempo representation for musicsignals. Peter Grosche, Meinard Müller, Frank Kurth, Acoustics Speech and Signal Processing (ICASSP), 2010 IEEE International Conference on. IEEE. Peter Grosche, Meinard Müller, and Frank Kurth. 2010. Cyclic tempograma mid-level tempo representation for musicsignals. In Acoustics Speech and Sig- nal Processing (ICASSP), 2010 IEEE International Conference on. IEEE, pages 5522-5525.
The elements of statistical learnine. Trevor Hastie, Robert Tibshirani, Jerome Friedman, Trevor Hastie, Robert Tibshirani, and Jerome Fried- man. 2001. The elements of statistical learnine.
Speech recognition using mfcc. Chadawan Ittichaichareon, Siwat Suksri, Thaweesak Yingthawornsuk, International Conference on Computer Graphics, Simulation and Modeling (ICGSM'2012) July. Chadawan Ittichaichareon, Siwat Suksri, and Thaweesak Yingthawornsuk. 2012. Speech recognition using mfcc. In International Con- ference on Computer Graphics, Simulation and Modeling (ICGSM'2012) July. pages 28-29.
Music type classification by spectral contrast feature. Dan-Ning Jiang, Lie Lu, Hong-Jiang Zhang, Jian-Hua Tao, Lian-Hong Cai, Multimedia and Expo, 2002. ICME'02. Proceedings. 2002 IEEE International Conference on. IEEE. 1Dan-Ning Jiang, Lie Lu, Hong-Jiang Zhang, Jian-Hua Tao, and Lian-Hong Cai. 2002. Music type classi- fication by spectral contrast feature. In Multimedia and Expo, 2002. ICME'02. Proceedings. 2002 IEEE International Conference on. IEEE, volume 1, pages 113-116.
Powernormalized cepstral coefficients (pncc) for robust speech recognition. Chanwoo Kim, M Richard, Stern, Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on. IEEE. Chanwoo Kim and Richard M Stern. 2012. Power- normalized cepstral coefficients (pncc) for robust speech recognition. In Acoustics, Speech and Sig- nal Processing (ICASSP), 2012 IEEE International Conference on. IEEE, pages 4101-4104.
Adam: A method for stochastic optimization. P Diederik, Jimmy Kingma, Ba, arXiv:1412.6980arXiv preprintDiederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 .
Imagenet classification with deep convolutional neural networks. Alex Krizhevsky, Ilya Sutskever, Geoffrey E Hinton, Advances in neural information processing systems. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hin- ton. 2012. Imagenet classification with deep con- volutional neural networks. In Advances in neural information processing systems. pages 1097-1105.
Automatic musical pattern feature extraction using convolutional neural network. L H Tom, Antoni B Li, Chan, Chun, Proc. Int. Conf. Data Mining and Applications. Int. Conf. Data Mining and ApplicationsTom LH Li, Antoni B Chan, and A Chun. 2010. Auto- matic musical pattern feature extraction using con- volutional neural network. In Proc. Int. Conf. Data Mining and Applications.
Evaluation of feature extractors and psycho-acoustic transformations for music genre classification. Thomas Lidy, Andreas Rauber, ISMIR. Thomas Lidy and Andreas Rauber. 2005. Evaluation of feature extractors and psycho-acoustic transfor- mations for music genre classification. In ISMIR. pages 34-41.
Parallel convolutional neural networks for music genre and mood classification. Thomas Lidy, Alexander Schindler, Thomas Lidy and Alexander Schindler. 2016. Parallel convolutional neural networks for music genre and mood classification. MIREX2016 .
Song-level features and support vector machines for music classification. I Michael, Dan Mandel, Ellis, ISMIR. volume 2005. Michael I Mandel and Dan Ellis. 2005. Song-level fea- tures and support vector machines for music classi- fication. In ISMIR. volume 2005, pages 594-599.
Combining visual and acoustic features for music genre classification. Loris Nanni, M G Yandre, Alessandra Costa, Lumini, Young Moo, Seung Ryul Kim, Baek, Expert Systems with Applications. 45Loris Nanni, Yandre MG Costa, Alessandra Lumini, Moo Young Kim, and Seung Ryul Baek. 2016. Combining visual and acoustic features for music genre classification. Expert Systems with Applica- tions 45:108-117.
Feature selection, l 1 vs. l 2 regularization, and rotational invariance. Y Andrew, Ng, Proceedings of the twenty-first international conference on Machine learning. the twenty-first international conference on Machine learningACM78Andrew Y Ng. 2004. Feature selection, l 1 vs. l 2 regu- larization, and rotational invariance. In Proceedings of the twenty-first international conference on Ma- chine learning. ACM, page 78.
On the modeling of time information for automatic genre recognition systems in audio signals. Nicolas Scaringella, Giorgio Zoia, ISMIR. Nicolas Scaringella and Giorgio Zoia. 2005. On the modeling of time information for automatic genre recognition systems in audio signals. In ISMIR. pages 666-671.
Karen Simonyan, Andrew Zisserman, arXiv:1409.1556Very deep convolutional networks for large-scale image recognition. arXiv preprintKaren Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 .
Proceedings of the 1998 IEEE. Hagen Soltau, Tanja Schultz, Martin Westphal, Alex Waibel, Acoustics, Speech and Signal Processing. 2Recognition of music typesHagen Soltau, Tanja Schultz, Martin Westphal, and Alex Waibel. 1998. Recognition of music types. In Acoustics, Speech and Signal Processing, 1998. Pro- ceedings of the 1998 IEEE International Conference on. IEEE, volume 2, pages 1137-1140.
Dropout: A simple way to prevent neural networks from overfitting. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov, The Journal of Machine Learning Research. 151Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15(1):1929-1958.
Music information retrieval. Steve Tjoa, Steve Tjoa. 2017. Music information retrieval.
Converting video formats with ffmpeg. Suramya Tomar, Linux Journal. 14610Suramya Tomar. 2006. Converting video formats with ffmpeg. Linux Journal 2006(146):10.
Musical genre classification of audio signals. George Tzanetakis, Perry Cook, IEEE Transactions on speech and audio processing. 105George Tzanetakis and Perry Cook. 2002. Musical genre classification of audio signals. IEEE Trans- actions on speech and audio processing 10(5):293- 302.
| [
"https://github.com/HareeshBahuleyan/"
]
|
[
"Stock Price Prediction Using Temporal Graph Model with Value Chain Data",
"Stock Price Prediction Using Temporal Graph Model with Value Chain Data"
]
| [
"Chang Liu \nDepartment of Information Engineering and Computer Science\nUniversity of Trento\n\n",
"Sandra Paterlini \nDepartment of Economics and Management\nUniversity of Trento\n\n"
]
| [
"Department of Information Engineering and Computer Science\nUniversity of Trento\n",
"Department of Economics and Management\nUniversity of Trento\n"
]
| []
| Stock price prediction is a crucial element in financial trading as it allows traders to make informed decisions about buying, selling, and holding stocks. Accurate predictions of future stock prices can help traders optimize their trading strategies and maximize their profits. In this paper, we introduce a neural network-based stock return prediction method, the Long Short-Term Memory Graph Convolutional Neural Network (LSTM-GCN) model, which combines the Graph Convolutional Network(GCN)and Long Short-Term Memory (LSTM) Cells. Specifically, the GCN is used to capture complex topological structures and spatial dependence from value chain data, while the LSTM captures temporal dependence and dynamic changes in stock returns data.We evaluated the LSTM-GCN model on two datasets consisting of constituents of Eurostoxx 600 and S&P 500. Our experiments demonstrate that the LSTM-GCN model can capture additional information from value chain data that are not fully reflected in price data, and the predictions outperform baseline models on both datasets. | 10.48550/arxiv.2303.09406 | [
"https://export.arxiv.org/pdf/2303.09406v1.pdf"
]
| 257,557,365 | 2303.09406 | 819bbbe096be883c83f038bfbed7cab5a45e34f3 |
Stock Price Prediction Using Temporal Graph Model with Value Chain Data
Chang Liu
Department of Information Engineering and Computer Science
University of Trento
Sandra Paterlini
Department of Economics and Management
University of Trento
Stock Price Prediction Using Temporal Graph Model with Value Chain Data
1
Stock price prediction is a crucial element in financial trading as it allows traders to make informed decisions about buying, selling, and holding stocks. Accurate predictions of future stock prices can help traders optimize their trading strategies and maximize their profits. In this paper, we introduce a neural network-based stock return prediction method, the Long Short-Term Memory Graph Convolutional Neural Network (LSTM-GCN) model, which combines the Graph Convolutional Network(GCN)and Long Short-Term Memory (LSTM) Cells. Specifically, the GCN is used to capture complex topological structures and spatial dependence from value chain data, while the LSTM captures temporal dependence and dynamic changes in stock returns data.We evaluated the LSTM-GCN model on two datasets consisting of constituents of Eurostoxx 600 and S&P 500. Our experiments demonstrate that the LSTM-GCN model can capture additional information from value chain data that are not fully reflected in price data, and the predictions outperform baseline models on both datasets.
I. INTRODUCTION
Predicting financial time series has always been a highly sought-after topic among researchers and investors, as it allows for better decision-making in financial markets [1]. The accuracy of predicted returns is a crucial aspect of any portfolio construction model, as it directly affects the performance and profitability of the portfolio. Historically, research has primarily focused on the techniques of fundamental analysis and technical analysis [2]. However, in recent years, with the increasing availability of computational power, statistical models and machine learning (ML) algorithms have become more prevalent in financial forecasting. These algorithms can analyze large amounts of data and identify patterns that may not be discernible to human traders, allowing for more accurate predictions and better decision-making in financial markets. For instance, statistical models like autoregressive models (AR), vector autoregression models (VAR), autoregressive integrated moving average models (ARIMA) [3], and heterogeneous autoregressive models (HAR) [4], in conjunction with multivariate linear regression models arXiv:2303.09406v1 [q-fin.ST] 7 Mar 2023 2 [5], are commonly used as benchmarks to be compared with more sophisticated approaches. In fact, with the increasing prevalence of ML and deep learning (DL) models, they are becoming more common tools for financial predictions. Researchers have been using these techniques in recent years to replicate the success seen in other areas of research, such as natural language processing and image processing. However, predicting financial time series poses a unique challenge: compared to ML tasks mentioned above, there is no absolute or objective "ground truth" for the stock price. The value of a stock is ultimately determined by the collective beliefs and expectations of all market participants. These beliefs and expectations can be influenced by a wide range of factors, including economic conditions, company performance, news events, and investor sentiment [6]. Therefore, while predicting stock prices, the ultimate goal is to uncover hidden information from the market to produce excess returns from setting up appropriate investment strategies.
Machine Learning Models
Among different ML approaches [7], we now briefly introduce the ones we focus on in this study.
Convolutional Neural Networks (CNN) have proven to be very efficient in extracting features from visual images [8] [9].
Compared to the Multi-Layer Perceptron model (MLP) where all pixels are put into a 1-d tensor in the first step, CNN is better to capture the 2-d features. CNN applies a 2-d filter (i.e.convolutional kernel) to the input image by sliding it across the image and computing the dot product between the filter weights and the image pixels. The output of the filter is a feature map which is then used as input to the next layer in the network.
The Graph Neural Network (GNN) was inspired by CNN [10]. However, the non-Euclidean nature of graphs, where each node does have a different number of edges, makes the convolutions and filtering on graphs not as well-defined as on images.
Researchers have been working on how to conduct convolutional operations on graphs. Based on how the filter is applied, these can be grouped into spectral-based models and spatial-based models [11].
A recurrent neural network (RNN) is a deep learning model designed for predicting time series. It applies the same feedforward neural network to a series of data and keeps the cell state as well as the hidden state (memory). Literature on applying the Long Short-Term Memory model (LSTM) for stock prediction is ample, and it includes [12] [13], [14], [13], [15], [16]. Literature can also be found in applying the LSTM together with CNN, including [17] and [18]. In particular, LSTM networks are able to capture long-term dependencies in sequential data by using a memory cell that can store information for an extended period of time. This allows them to effectively model sequences that have dependencies that are far apart in time.
Just as CNNs and RNNs can be combined to process temporal visual data, GNNs and RNNs can be combined to process graph data with temporal node features. Graph data is characterized by a set of nodes and edges that define relationships between them. GNNs are a type of neural network specifically designed to operate on graph-structured data, where the nodes 3 and edges have associated features. Different models have been developed that incorporate both GNN and RNN architectures.
For example, in [19], a Chebyshev spectral graph convolutional operator is applied to the input hidden state of the LSTM cell. In [20], a Chebyshev spectral graph convolutional operator is applied to both the input signal and hidden state from an LSTM or a GRU cell. In [21], a graph convolutional operator [22] is applied to the input signal of a GNN. In particular, [23] combined the LSTM with a multi-graph attention network to predict the direction of movement of stock prices. In [24], a spatial-temporal graph-based model was deployed to forecasting global stock market volatility.
Value Chain Data
In addition to using state-of-the-art ML methods, there is an increasing focus on identifying non-standard sources of information that can be used to extract valuable patterns for financial predictions. This is because traditional financial data sources, such as stock prices and company financial statements, may not provide a complete picture of market trends and may not be sufficient to make accurate predictions. Therefore, researchers are exploring alternative data sources, such as social media sentiment, news articles, satellite imagery, and web traffic data, to supplement traditional financial data and improve predictive accuracy. This approach is often referred to as "alternative data" or "big data" in finance, and it has become an essential area of research in recent years. Among the alternative source of information, value chain Data has so far found some narrow applications. For example, [25] found evidence of return predictability across economically linked firms through supply-customer relationships. However, [25] have also discovered that stock prices do not immediately reflect news about related companies. This delayed correlation is difficult to capture using linear models because the lag for different companies can vary. In this paper, we employ deep learning methods to account for this lagged correlation and improve price prediction.
By incorporating value chain data into the prediction model, we assume that there is valuable information, that is not reflected in the existing price data, and that can be extracted to generate excess returns. Still, due to the delayed patterns, the quality and timing of the data are critical.
In recent years, more research can be found on stock prediction using GCN through graph representation learning. In order to construct the underlying graph, both nodes and edges need to be defined. While it is very straightforward to define the node as single stock or company and use the stock price or any derived signals from it e.g. technical indicators like Moving Average, Momentum, Relative Strength Index, or Moving Average Convergence Divergence as the node features, the construction of edges are more complex. In general, they can be divided into three groups [26]: relationships constructed by human knowledge, relationships extracted via knowledge graph, and relationships calculated with similarity. While the second and third approaches are extracting information for edge construction using textual data (Knowledge Graph) or price data (Calculated with Similarity), the approach with human knowledge provides direct and explicit relationships between 4 companies. These relationships can be that two companies are in the same industry ( [27], [28]), with the same business [29], or that they are in competition, collaboration, and strategic alliance ( [30] [31], [32]). Especially, [30] proposed an attributedriven graph attention network to model momentum spillovers, using industry category, supply chain, competition, customer, and strategic alliance for constructing graphs.
Our Proposal
To our knowledge, so far no research has been done on predicting explicit stock return by combining both the temporal feature from the price and the spatial feature of value chain data. In particular, we introduce the so-called LSTM-GCN model, combining LSTM and Graph Convolutional Network (GCN) such that topological information can be extracted from value chain data through the use of graph models. We use the value chain data to construct an undirected graph where each node is a stock and the historical price movements of single stocks are node features. For each snapshot of the graph, we apply GCN to extract spatial information. The time series of the spatial information is then put into LSTM layers to extract the temporal information. Our model differs from the models proposed by [23] that we apply GCN for feature extraction on all inputs (last cell state, last hidden state, and current node features) of the LSTM cell. In Section IV, we show that better performance can be achieved when applying GCN on all inputs of LSTM cell instead of on the node features only.
The paper is organized as follows. In Section II, we describe the LSTM-GCN model. In Section III, we explain the data and methodology used to test the model. Section IV presents the empirical results of the study, comparing LSTM-GCN model to the baseline models and demonstrating its superior properties. We also simulate the model's outcomes to generate a real financial portfolio and compute end-of-period cumulative returns. Finally, Section V provides concluding remarks.
II. THE LSTM-GCN MODEL
GCN is a type of neural network used for semi-supervised learning on graph-structured data. For GCNs, the goal is to learn a function of signal/features on a graph. It takes as input the feature matrix X and the adjacency matrix A. The feature matrix X has size (n × d) where n is the number of nodes and d is the number of features. The rows of X are x 1 , ..., x n ∈ R d and denote the features for each node i=1,..,n. An adjacency matrix A is a square matrix that represents the connections between the nodes or vertices in a graph. Let's denote the graph as G = (V, E), where V is the set of nodes and E is the set of edges.
The entries of the adjacency matrix are defined as A ij = 1 if (i, j) ∈ E otherwise A ij = 0. If the edges are weighted, then the value of A ij takes the value of the edge weights which are usually normalized to have values between 0 and 1. As we 5 are working with graphs constructed through human knowledge of supply-chain structure, we are not interested in updating the graph itself during training. Therefore, each neural network layer can then be written as a non-linear function [10]:
H (l+1) = f (H (l) , A),(1)
with H (0) = X and H (L) = Z where Z is the node-level output and L the number of layers.
In general, there are two types of the function f (·, ·) [10], each using spatial filtering [10] and spectral filtering [33]. Our model uses the spatial filtering proposed in [10], but we also test the model with the spectral approach as a baseline model for comparison.
Following the spatial filtering approach, the graph Laplacian matrix L from the adjacency matrix A is calculated:
L = D − 1 2 AD − 1 2 ,
where D is the degree matrix, which is a diagonal matrix that contains the degree (i.e., the number of neighbors) of each node.
We then define a weight matrix W and apply it to the input feature matrix X:
Z =D − 1 2ÃD − 1 2 XW,
whereà = A + I n is the adjacency matrix with added self-loops,D is the degree matrix ofÃ, and I n is the identity matrix of size n × n.
We can then apply a nonlinear activation function f to the weighted sum of the features of each node's neighbors:
H = f (Z).
This operation can be thought of as a graph convolution operation, where each node's feature vector is updated based on the features of its neighbors. We can stack multiple GCN layers to obtain a deep GCN. In particular, we use two-layer GCNs in our model, so that for each input node features X t we have:
H(X t ) = H 2 (X t ) = f (H 1 , A) = f (f (X t , A), A)
The core of our model is an LSTM cell with additional GCN combined with three GCN layers. As shown in Figure 2, hidden state c t−1 , cell state h t−1 and the current node feature matrix X t from the last cell output are processed by GCN 6 Fig. 1: Illustration of the GCLSTM cell, two Chebyshev spectral graph convolutional operators developed by [33] are applied to both the last hidden state and the last cell state.
firstly, before they enter the LSTM cell. The updating process for each step can then be written as:
f t = σ(W f · [H(h t−1 ),H(X t )] + b f ) i t = σ(W i · [H(hh t−1 ),H(X t )] + b i ) c t = f t H (c t−1 ) + i t tanh(W c · [H(h t−1 ),H(X t )] + b c ) o t = σ(W o · [H(h t−1 ),H(X t )] + b o ) h t = o t tanh(c t )
with f t as forget gate, i t the input gate, c t the cell state, o t the output gate and h t the hidden gate. The final model we rely on is shown in Figure 2. We used a rolling window approach to capture the returns of the past d days as the node feature.
Then, we constructed the graph A t using value chain data from the last day of the rolling window and used it as a GCN layer to update the input gates of the LSTM cells. We employed three LSTM cells, each with two layers. The final hidden states from the LSTM cells were flattened and subsequently fed into a MLP network to predict the final stock returns. It's worth noting that the number of final nodes in the MLP network didn't have to match the number of nodes in the input graph. In our experiments, we predicted the next day's returns for only a subset of the stocks that were used as nodes in the graph. Reuters Refinitiv excludes any supplier-customer relationship that has a confidence score below 20%.
B. Preprocessing
To mitigate survivorship bias, we initially consider all companies that were included in the respective index (i.e. Eurostoxx 600 or S&P500) and retrieve the value chain data for each of them. Companies lacking value chain data are excluded from the graph model. Subsequently, we obtain historical closing prices of these companies and their respective customers/suppliers from 2000-01-01 to 2022-09-30. To minimize the impact of sparsity, trading days with fewer than 50 prices are excluded, primarily arising from Sharia trading days in Islamic countries. Furthermore, companies with fewer than 4000 trading days are omitted to reduce the number of artificially filled NaNs with zeroes. The daily return data of these companies serve as input for the model. We employ a 60-day rolling window to forecast the daily return of the next day based on the previous 59 days' daily returns. We restrict the model output to stocks with at least 5000 trading days history to decrease the number of spurious zeros in the model output. In addition to that, we restrict the output stocks for Eurostoxx 600 to the ones from In this layout, components less connected to other components are placed in peripheral space. The graph of Eurostoxx 600 looks denser although the total number of edges in both graphs is comparable. This is due to the fact that the connections in S&P value chain data are more concentrated on fewer nodes in the inner space.
is less than the number of stocks predicted. Table I summarizes the attributes of both datasets and the network configuration.
We notice that the Eurostoxx 600 graph has a higher density than S&P 500 one, while also being more connected than S&P 500. The Eurostoxx 600 nodes can be divided into 799 disjoint non-connected components with a maximum size of 674 and a mean size of 1.9724, while the S&P contains more non-connected components with a smaller maximum (638) and mean size (1.8465) of them. Fig. 3 visualizes both graphs. In both Figures, non-connected components are placed in peripheral space. It can be seen that the edges from S&P 500 are denser in the inner space and have more non-connected components.
Rolling Window Set Up
We partition 80% of the rolling data for training and reserve 20% for the test dataset. Since we're using recurrent models, shuffling isn't applied, because the ordering of the data is critical for the LSTM cell to memorize long-term dependencies in sequential data. We establish the last update date as the initial date for a valid edge in the graph. We also use the confidence 10 score as the edge weight, which ranges from 0 to 100% and reflects the signal strength transmitted through the edge. All edges are set to be bidirectional.
C. Comparison of Models
We compare the following models 1) ARIMA: we apply a rolling window of 60 days; for each rolling window we rely on the package [34] to determine the optimal ARIMA parameters and use them for prediction.
2) FCL: Multi-Layer neural network with four layers (10 * n input stock, 5 * n input stock, 10 * n input stock, 10 * n output stock). The input of the model is the historical returns from the last ten days. The tensor with shape (10, n input stock) is flattened and fed into the model. 4) GCN: we apply GCN directly and put the output from GCN layers to a FCL network.
5) GCLSTM: Graph Convolutional Long Short Term
Memory Cell is a model developed by [35]. It differs from our model as it applies Chebyshev spectral graph convolutional operator [19] instead of GCN. 6) TGCN: Temporal Graph Convolutional Gated Recurrent Cell proposed by [36]. It differs from our model as it only applied GCN to the node feature matrix X t .
The model outputs are compared regarding their mean squared error (MSE) and mean average error (MAE). We also calculate the mean R 2 value of the predicted returns from all output nodes. R 2 usually takes values greater than 0 and less than 1.
However, when predicting stock returns, R 2 is always close to 0 due to the noise in the stock market [37]. Most models can only achieve a slightly better prediction power compared to just using zeroes or historical mean as predicted values. Besides the measures mentioned above, we also show the rates of predicting the correct direction of price movement for each model.
For better evaluation, we exclude the zeros from true values for comparison as they often refer to non-trading days.
We notice that both MAE and MSE might not be the best measures for comparing model predictions, e.g. having a predicted return of +3% where the true value is 1% might have a less negative impact on the overall portfolio return than having a predicted return of -1%, although both MSE and MAE are the same in both cases. For that reason, we also compared the models by running simulations with a simple portfolio. We deploy a naive market-neutral strategy. The portfolio is re-balanced 11 on daily basis: the next day's weight on a single stock is its predicted return. The predicted returns are capped to +/-50% to avoid single stock being over-weighted, thus diversifying the risk. Weights are normalized separately for long and short positions so that the sum of all positive weights equals 1 and the sum of all negative weights equals -1. In order to have investable portfolios, we also downloaded the historical components from Eurostoxx 600 and S&P 500. The portfolio is only allocated to stocks that are in the indices at the time of re-balancing. We then compare the simulation results regarding its performance using measures such as annualized returns, Sharpe ratios, and Sortino ratios. The Sharpe ratio [38] and Sortino ratio [39] are both risk-adjusted performance measures used to evaluate the return of an investment relative to its risk. The
Sharpe ratio measures the excess return of an investment compared to a risk-free asset per unit of risk (usually standard deviation), while the Sortino ratio measures the excess return of an investment compared to the downside risk (usually the standard deviation of negative returns). The formulas for Sharpe ratio and Sortino ratio are as follows:
Sharpe Ratio:
S = R p − R f σ p
with R p = average return of the investment, R f = risk-free rate of return and σ p = standard deviation of the investment's returns.
Sortino Ratio:
S Sortino = R p − R f σ D
with R p = average return of the investment, R f = risk-free rate of return, σ D = standard deviation of the investment's negative returns (or downside deviation).
Both ratios can be used to compare the risk-adjusted performance of different investments, with a higher value indicating a better risk-adjusted return. However, the Sortino ratio may be more appropriate for investments with asymmetric returns, as it only considers downside risk.
In our experiments, we use Euro OverNight Index Average (EONIA) for Eurostoxx 600 and the Overnight US Dollar USD Libor interest rate (US LIBOR US00O/N) for S&P 500, as risk-free rates of return.
IV. EMPIRICAL RESULTS
The cumulative performance of the market-neutral strategy for the two datasets can be found in Fig. 4 and Fig. 5. Notice how the LSTM-GCN model outperforms all the baseline models on both datasets. We also notice that all temporal graph models (GCLSTM, TGCN and LSTM-GCN) experienced high volatility in Q1 2020 during the COVID-19, however, TGCN 12 and LSTM-GCN recovered faster than GCLSTM, showing the advantage of spatial filtering over spectral filtering for extracting information from value chain data. Table II and Table III report the key summary statistics. LSTM-GCN does not only outperform the other models with respect to MAE or MSE, but it also shows the highest rates of predicting correct directness for Eurostoxx 600. Only in S&P 500, the rate of the correctness of LSTM-GCN is slightly lower than TGCN.
We notice that the R 2 of baseline models are negative. LSTM-GCN shows slightly positive R 2 values for both datasets.
The simulation shows the highest cumulative returns, Sharpe ratio, and Sortino ratio with the LSTM-GCN model. We also notice that the TGCN model performs well next to the LSTM-GCN model, which is not a surprise as both models share a similar structure.
When comparing results from both datasets, we notice that in general, the graph models (GCN, GCLSTM, TGCN, and LSTM-GCN) perform better on Eurostoxx 600 than S&P 500. One explanation is the sparsity of the graph from S&P 500 compared to Eurostoxx 600, which can be found in Table I. The sparsity may originate from the way we preprocess the data, or from the quality of the raw data. Another possible explanation is that the US market (S&P 500) as a whole is well researched, thus the hidden information from value chain data is already partially reflected in the historical price data. The
European market (Eurostoxx 600) is more segregated compared to the US market, thus higher excess returns can be found together with value chain data.
We evaluated the model's robustness by manipulating the number of node features within the range of ±10 and ±20 days.
The simulation results are presented in Fig. 6 and Fig. 7. We tested the significance level of R 2 > 0 using the t-Test. Results can be found in Table IV and Table V. For the S&P 500, we observed positive mean R 2 values for all output stocks within rolling window lengths of 50 and 60 days. In the case of Eurostoxx 600, we found positive R 2 values for all rolling windows except for a length of 40 days. For all rolling windows with positive R 2 , t-test results show that we can reject the null hypothesis R 2 ≤ 0 with a significance level of 0.1%. It is important to note that the same model parameters were applied to all rolling windows, and therefore, the model may not be optimal for longer rolling windows. Moreover, the lower mean R 2 value for a rolling window length of 40 days may be due to a lack of sufficient data points.
V. CONCLUSION
In this paper, we introduce a neural network-based approach LSTM-GCN for stock price prediction, which combines LSTM and GCN. We use a graph network to model the value chain data in which the nodes on the graph represent companies, the edges represent the supplier-customer relationships between companies. The historical performance of the stocks is used as the 13 Fig. 4: Performance of long-short strategy of all stocks with a restricted country list, stocks return limit at +/-50% nodes' feature/attribute. The GCN is used to capture the topological structure of the graph to obtain the spatial dependencies, while the LSTM model is used to capture the dynamic change of node features to obtain the temporal dependence.
Our experimental results on Eurostoxx 600 and S&P 500 datasets demonstrate the superiority of LSTM-GCN over the baseline models regarding MSE and MAE. We also evaluated the models by running simulations using predicted values for constructing a market-neutral strategy. Results show that our model results in the highest cumulative returns, Sharpe ration, 14 Fig. 5: Performance of long-short strategy of all stocks with a restricted country list, stocks return limit at +/-50% and Sortino ratio. We noticed that the performances of the models differ in the two datasets significantly. We discussed the reason. Overall, we show that for both datasets, even though the amount may vary, excess returns can be found by applying temporal graph models on the value chain data.
Furthermore, we ran a robustness test by varying the length of the rolling window. For each rolling window length, we ran a t-test to evaluate whether R 2 is significantly larger than zero. Results show that for Eurostoxx 600, rolling window 50, 60, 70, and 80 days, and for S&P 500, rolling window 50, and 60 days, the null hypothesis that R 2 ≤ 0 can be rejected, meaning that our model is significantly better than use mean values as predicted values.
Our findings suggest that even though the magnitude of excess returns generated may vary across different datasets, applying temporal graph models on value chain data can be an effective approach to identifying profitable investment opportunities in the stock market. Future research could explore further enhancements to our model, such as incorporating external data sources or applying it to different domains beyond stock price prediction. Especially, we are interested in including additional graphs to the model to have multi-modalities. These could be graphs constructed through data other than from human knowledge, e.g. graphs based on similarity calculated from price data, or heterogeneous graphs with additional entities as nodes other than listed companies e.g. investment managers with a focus on cash equity.
7 Fig. 2 :
72Illustration of the model with GCLSTM cells. The input for each GCLSTM cell is a rolling window of graphs with graph edges, edges weights and node features. The hidden states of the cells from each time step are concatenated. The concatenated tensors are then flattened to a 1-d tensor and used as input for an MLP. For each rolling window, we used the value chain data available on the last day to construct the graphIII. DATA AND METHODOLOGICAL SET UPA. DatasetsWe assess the prediction accuracy of the LSTM-GCN model on two empirical datasets, namely the Eurostoxx 600 and S&P 500. The Eurostoxx 600 and S&P 500 are stock market indices that gauge the performance of the stock markets in Europe and the United States, respectively. The Eurostoxx 600 comprises the largest 600 companies from 17 European countries, with the index weighted by market capitalization. Similarly, the S&P 500 encompasses the largest 500 publicly traded firms in the United States, spanning various sectors such as technology, healthcare, financials, and consumer goods. The main difference between these indices lies in their geographical focus and composition.We acquire the supplier-customer relationships of each company in the Eurostoxx 600 and S&P 500 from the value chain data provided by Thomson Reuters Refinitiv. For each company, we download the list of suppliers and customers for each constituent of the Eurostoxx 600 and S&P 500 and then create the supply chain networks, represented as a graph with nodes (i.e. companies) connected by edges that denote supplier and customer relationships. The value chain data of Reuters are8
Fig. 3 :
3the 19 European Countries (i.e. Austria, Belgium, Czech Republic, Denmark, Finland, France, Germany, Greece, Ireland, Italy, Luxembourg, the Netherlands, Norway, Poland, Portugal, Spain, Sweden, Switzerland, and the United Kingdom), and for S&P 500 the ones listed in US market. The impact of this step is larger for Eurostoxx 600 because a significant part of the stocks from its supplier-customer network is from non-EU countries e.g. USA. Consequently, the number of stocks used Value chain graphs of Eurostoxx600 and S&P 500 datasets. Nodes are the listed companies. Red nodes denote companies that are constituents of the corresponding indexes. Edges represent the supplier/customer relationships between two entities.
3 )
3LSTM: we apply a two layers LSTM model (59, 60, 6) on the input tensor. The output hidden state of the LSTM model is flattened and given to an MLP with two fully connected layers (6 * n input stock, 10 * n input stock, 1 * n output stock). The cell states of the LSTM cells are updated through the rolling window and used for the training dataset and then for the testing dataset.
15 Fig. 6 :
156Robustness test on dataset Eurostoxx 600. All simulations show positive annualized returns, Sharpe ratio, and Sortino ratio.
16 Fig. 7 :
167Robustness test on dataset S&P 500. All simulations show positive annualized returns, Sharpe ratio, and Sortino ratio.
TABLE I :
Iscore reflecting the degree of confidence that Thomson Reuters Refinitiv holds regarding the validity of the supplier-customer relationship. For each company pair, Thomson Reuters Refinitiv employs an algorithm to collect all detected relations (i.e., evidence snippets) from the source documents and estimates the likelihood of a valid supply-customer relationship between them. This estimation accounts for the source type (e.g., News, Filings) and all collected evidence snippets. Moreover, ThomsonSummary of the network properties of Eurostoxx 600 and S&P500
Eurostoxx 600
S&P 500
# nodes
1576
1694
# edges
2501
2446
density
8.105 10 −4
6.513 10 −4
# of connected components
799
912
maximum size of connected components
674
638
average size of connected components
1.9724
1.8465
# features of nodes
59
59
# output nodes
550
656
derived from company reports, wherein each supplier-customer relationship contains a last updated date and a confidence
TABLE II :
IIComparison of Models for Eurostoxx 600models
MAE
MSE
R 2
Correctness (%)
ann. Return (%) ann. Sharpe Ratio ann. Sortino Ratio
ARIMA
8.0733 10 −4
1.7691 10 −2
-0.1853
21.2756
-7.8119
-0.5536
-0.6806
FCL
7.2651 10 −4
1.6716 10 −2
-0.0123
50.3497
5.0726
0.5787
0.9276
LSTM
7.2118 10 −4
1.6618 10 −2
-0.0008
50.3231
-4.8658
-0.3605
-0.5827
GCN
7.2128 10 −4
1.6639 10 −2
-0.0017
50.7231
-0.9653
-0.0805
-0.1109
GCLSTM
7.2203 10 −4
1.6637 10 −2
-0.0022
50.4616
4.8830
0.2911
0.3366
TGCN
7.2111 10 −4
1.6614 10 −2
-0.0007
50.7679
5.5689
0.4153
0.5460
LSTM-GCN
7.1993 10 −4
1.6607 10 −2
0.0010
51.0170
16.3031
1.0759
1.6872
TABLE III :
IIIComparison of Models for S&P 500models
MAE
MSE
R 2
Correctness (%)
ann. Return (%) ann. Sharpe Ratio ann. Sortino Ratio
ARIMA
2.0477 10 −3
2.2001 10 −2
-0.1569
23.1054
-16.1552
-0.8375
-0.9020
FCL
1.8643 10 −3
2.1694 10 −2
-0.0594
50.5791
0.6716
0.0131
0.0164
LSTM
1.8383 10 −3
2.1024 10 −2
-0.0005
51.1683
5.3151
0.1890
0.2546
GCN
1.8372 10 −3
2.1024 10 −2
-0.0007
51.1574
-0.4178
-0.0859
-0.1224
GCLSTM
1.8745 10 −3
2.1223 10 −2
-0.0135
50.5980
-0.5192
-0.1629
-0.2497
TGCN
1.8367 10 −3
2.1032 10 −2
-0.0014
50.9658
4.6013
0.5100
0.7920
LSTM-GCN
1.8353 10 −3
2.1013 10 −2
0.0006
50.9596
5.8904
0.5345
0.8349
TABLE IV :
IVRobustness Test for Eurostoxx 600length of rolling window
ann. Return (%) ann. Sharpe Ratio
ann. Sortino Ratio
R 2
t-statistic
p-value
40
6.2434
0.5200
0.7345
-0.0004
-3.1303
-
50
11.8664
0.9407
1.2455
0.0004
3.4384
3.1476 10 −4
60
16.4957
1.0956
1.7176
0.0010
7.9738
4.4904 10 −15
70
2.0968
0.1456
0.1987
0.0005
3.8080
7.7959 10 −5
80
1.2250
0.0860
0.1099
0.0007
6.0850
1.0933 10 −9
TABLE V :
VRobustness Test for S&P 500length of rolling window
ann. Return (%) ann. Sharpe Ratio
ann. Sortino Ratio
R 2
t-statistic
p-value
40
3.2779
0.2592
0.3660
-0.0001
-1.4048
-
50
7.6080
0.6639
0.9638
0.0004
5.0718
2.5690 10 −7
60
5.8591
0.5307
0.8291
0.0006
6.1369
7.2860 10 −10
70
3.9570
0.3587
0.5565
-0.0008
-6.5909
-
80
3.4417
0.4180
0.5733
-0.0004
-8.2550
-
Systematic analysis and review of stock market prediction techniques. D P Gandhmal, K Kumar, Computer Science Review. 34100190D. P. Gandhmal and K. Kumar, "Systematic analysis and review of stock market prediction techniques," Computer Science Review, vol. 34, p. 100190, 2019. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S157401371930084X
A literature review of technical analysis on stock markets. R T Farias Nazário, J L Silva, V A Sobreiro, H Kimura, The Quarterly Review of Economics and Finance. 66R. T. Farias Nazário, J. L. e Silva, V. A. Sobreiro, and H. Kimura, "A literature review of technical analysis on stock markets," The Quarterly Review of Economics and Finance, vol. 66, pp. 115-126, 2017. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S1062976917300443
Stock price prediction using the arima model. A A Ariyo, A O Adewumi, C K Ayo, 2014 UKSim-AMSS 16th International Conference on Computer Modelling and Simulation. A. A. Ariyo, A. O. Adewumi, and C. K. Ayo, "Stock price prediction using the arima model," in 2014 UKSim-AMSS 16th International Conference on Computer Modelling and Simulation, 2014, pp. 106-112.
Trading volume and realized volatility forecasting: Evidence from the china stock market. M Liu, W.-C Choo, C.-C Lee, C.-C Lee, https:/onlinelibrary.wiley.com/doi/abs/10.1002/for.2897Journal of Forecasting. 421M. Liu, W.-C. Choo, C.-C. Lee, and C.-C. Lee, "Trading volume and realized volatility forecasting: Evidence from the china stock market," Journal of Forecasting, vol. 42, no. 1, pp. 76-100, 2023. [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.1002/for.2897
Clustering and regression techniques for stock prediction. B Bini, T Mathew, international Conference on Emerging Trends in Engineering, Science and Technology (ICETEST -2015). 24B. Bini and T. Mathew, "Clustering and regression techniques for stock prediction," Procedia Technology, vol. 24, pp. 1248-1255, 2016, international Conference on Emerging Trends in Engineering, Science and Technology (ICETEST -2015). [Online]. Available: https://www.sciencedirect.com/science/article/pii/S2212017316301931
Efficient capital markets: A review of theory and empirical work*. B G Malkiel, E F Fama, https:/onlinelibrary.wiley.com/doi/abs/10.1111/j.1540-6261.1970.tb00518.xThe Journal of Finance. 252B. G. Malkiel and E. F. Fama, "Efficient capital markets: A review of theory and empirical work*," The Journal of Finance, vol. 25, no. 2, pp. 383-417, 1970. [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1540-6261.1970.tb00518.x
M Prado, Machine Learning for Asset Managers, ser. Elements in Quantitative Finance. Cambridge University PressM. de Prado, Machine Learning for Asset Managers, ser. Elements in Quantitative Finance. Cambridge University Press, 2020. [Online]. Available: https://books.google.de/books?id=0D8LEAAAQBAJ
Gradient-based learning applied to document recognition. Y Lecun, L Bottou, Y Bengio, P Haffner, Proceedings of the IEEE. 8611Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, 1998.
Imagenet classification with deep convolutional neural networks. A Krizhevsky, I Sutskever, G E Hinton, 10.1145/3065386Commun. ACM. 606A. Krizhevsky, I. Sutskever, and G. E. Hinton, "Imagenet classification with deep convolutional neural networks," Commun. ACM, vol. 60, no. 6, p. 84-90, may 2017. [Online]. Available: https://doi.org/10.1145/3065386
Semi-supervised classification with graph convolutional networks. T N Kipf, M Welling, arXiv:1609.02907arXiv preprintT. N. Kipf and M. Welling, "Semi-supervised classification with graph convolutional networks," arXiv preprint arXiv:1609.02907, 2016.
Decision-making for financial trading: A fusion approach of machine learning and portfolio selection. F D Paiva, R T N Cardoso, G P Hanaoka, W M Duarte, Expert Systems with Applications. 115F. D. Paiva, R. T. N. Cardoso, G. P. Hanaoka, and W. M. Duarte, "Decision-making for financial trading: A fusion approach of machine learning and portfolio selection," Expert Systems with Applications, vol. 115, pp. 635-655, 2019. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0957417418305037
Novel deep learning model with cnn and bi-directional lstm for improved stock market index prediction. J Eapen, D Bein, A Verma, 2019 IEEE 9th Annual Computing and Communication Workshop and Conference (CCWC). J. Eapen, D. Bein, and A. Verma, "Novel deep learning model with cnn and bi-directional lstm for improved stock market index prediction," in 2019 IEEE 9th Annual Computing and Communication Workshop and Conference (CCWC), 2019, pp. 0264-0270.
Hybrid deep learning model for stock price prediction. M A Hossain, R Karim, R Thulasiram, N D B Bruce, Y Wang, 2018 IEEE Symposium Series on Computational Intelligence (SSCI). M. A. Hossain, R. Karim, R. Thulasiram, N. D. B. Bruce, and Y. Wang, "Hybrid deep learning model for stock price prediction," in 2018 IEEE Symposium Series on Computational Intelligence (SSCI), 2018, pp. 1837-1844.
A lstm-based method for stock returns prediction: A case study of china stock market. K Chen, Y Zhou, F Dai, 2015 IEEE International Conference on Big Data (Big Data. K. Chen, Y. Zhou, and F. Dai, "A lstm-based method for stock returns prediction: A case study of china stock market," in 2015 IEEE International Conference on Big Data (Big Data), 2015, pp. 2823-2824.
On stock volatility forecasting based on text mining and deep learning under high-frequency data. B Lei, Z Liu, Y Song, https:/onlinelibrary.wiley.com/doi/abs/10.1002/for.2794Journal of Forecasting. 408B. Lei, Z. Liu, and Y. Song, "On stock volatility forecasting based on text mining and deep learning under high-frequency data," Journal of Forecasting, vol. 40, no. 8, pp. 1596-1610, 2021. [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.1002/for.2794
An ensemble of lstm neural networks for high-frequency stock market classification. S Borovkova, I Tsiamas, https:/onlinelibrary.wiley.com/doi/abs/10.1002/for.2585Journal of Forecasting. 386S. Borovkova and I. Tsiamas, "An ensemble of lstm neural networks for high-frequency stock market classification," Journal of Forecasting, vol. 38, no. 6, pp. 600-619, 2019. [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.1002/for.2585
A graph-based convolutional neural network stock price prediction with leading indicators. J M Wu, Z Li, G Srivastava, M.-H Tasi, J. C.-W Lin, https:/onlinelibrary.wiley.com/doi/abs/10.1002/spe.2915Software: Practice and Experience. 513J. M.-T. Wu, Z. Li, G. Srivastava, M.-H. Tasi, and J. C.-W. Lin, "A graph-based convolutional neural network stock price prediction with leading indicators," Software: Practice and Experience, vol. 51, no. 3, pp. 628-644, 2021. [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.1002/spe.2915
| []
|
[
"Dynamical Zero Modes and Pure Glue QCD 1+1 in Light-Cone Field Theory",
"Dynamical Zero Modes and Pure Glue QCD 1+1 in Light-Cone Field Theory"
]
| [
"Alex C Kalloniatis \nMax-Planck-Institut fur Kernphysik Postfach\nDepartment of Physics\nThe Ohio State University\n10 39 80, 174 West 18th Avenue ColumbusD-69029, 43210Heidelberg 1Ohio\n",
"Hans-Christian Pauli \nMax-Planck-Institut fur Kernphysik Postfach\nDepartment of Physics\nThe Ohio State University\n10 39 80, 174 West 18th Avenue ColumbusD-69029, 43210Heidelberg 1Ohio\n",
"Stephen Pinsky \nMax-Planck-Institut fur Kernphysik Postfach\nDepartment of Physics\nThe Ohio State University\n10 39 80, 174 West 18th Avenue ColumbusD-69029, 43210Heidelberg 1Ohio\n"
]
| [
"Max-Planck-Institut fur Kernphysik Postfach\nDepartment of Physics\nThe Ohio State University\n10 39 80, 174 West 18th Avenue ColumbusD-69029, 43210Heidelberg 1Ohio",
"Max-Planck-Institut fur Kernphysik Postfach\nDepartment of Physics\nThe Ohio State University\n10 39 80, 174 West 18th Avenue ColumbusD-69029, 43210Heidelberg 1Ohio",
"Max-Planck-Institut fur Kernphysik Postfach\nDepartment of Physics\nThe Ohio State University\n10 39 80, 174 West 18th Avenue ColumbusD-69029, 43210Heidelberg 1Ohio"
]
| []
| We consider light-cone quantized QCD 1+1 on a 'cylinder' with periodic boundary conditions on the gluon fields. This is the framework of discretized light-cone quantization. We review the argument that the light-cone gauge A+ = 0 is not attainable. The zero mode is a dynamical and gauge invariant field. The attainable gauge has a Gribov ambiguity. We exactly solve the problem of pure glue theory coupled to some zero mode external sources. We verify the identity of the front and the more familiar instant form approaches. We obtain a discrete spectrum of vacuum states and their wavefunctions. | 10.1103/physrevd.50.6633 | [
"https://export.arxiv.org/pdf/hep-th/9403038v3.pdf"
]
| 45,058,413 | hep-th/9403038 | edaf817513279cebca663a47eedb64d1ab151514 |
Dynamical Zero Modes and Pure Glue QCD 1+1 in Light-Cone Field Theory
27 May 1994
Alex C Kalloniatis
Max-Planck-Institut fur Kernphysik Postfach
Department of Physics
The Ohio State University
10 39 80, 174 West 18th Avenue ColumbusD-69029, 43210Heidelberg 1Ohio
Hans-Christian Pauli
Max-Planck-Institut fur Kernphysik Postfach
Department of Physics
The Ohio State University
10 39 80, 174 West 18th Avenue ColumbusD-69029, 43210Heidelberg 1Ohio
Stephen Pinsky
Max-Planck-Institut fur Kernphysik Postfach
Department of Physics
The Ohio State University
10 39 80, 174 West 18th Avenue ColumbusD-69029, 43210Heidelberg 1Ohio
Dynamical Zero Modes and Pure Glue QCD 1+1 in Light-Cone Field Theory
27 May 1994. arXiv:hep-th/9403038v3
We consider light-cone quantized QCD$_{1+1}$ on a 'cylinder' with periodic boundary conditions on the gluon fields. This is the framework of discretized light-cone quantization. We review the argument that the light-cone gauge $A^+ = 0$ is not attainable. The zero mode is a dynamical and gauge invariant field. The attainable gauge has a Gribov ambiguity. We exactly solve the problem of pure glue theory coupled to some zero mode external sources. We verify the identity of the front and the more familiar instant form approaches. We obtain a discrete spectrum of vacuum states and their wavefunctions.
Introduction
Recently the Hamiltonian approach to field theory has been tackled with renewed interest.
The hope is that Dirac's 'front form' Hamiltonian scheme [1] is useful for confronting quantum chromodynamics (QCD). Often in the literature this is called 'light-cone', 'null-plane' or 'light-front' quantization. In the sequel we shall persist with the original Dirac nomenclature. This formulation uses $x^+ = \frac{1}{\sqrt{2}}(ct + z)$, called the light-cone time, as the 'time' evolution parameter rather than the conventional $x^0 = ct$. For an extensive bibliography the reader is referred to Refs. [2]. One reason for the modern phase of this approach is the apparent simplicity of the vacuum in front form theory. In the more familiar 'instant form' quantization the QCD vacuum contains an infinite number of soft particles. But then in front form field theory the question arises: where can long range phenomena of spontaneous symmetry breaking and perhaps even confinement appear in the apparent absence of any 'infrared' vacuum structure?
The specific approach of Discretized Light-Cone Quantization (DLCQ) is one setting in which one can answer these questions and hopefully pursue the program to a solution. Here the theory is defined in a finite 'spatial volume' with periodic or antiperiodic boundary conditions imposed on bosonic or fermionic fields, respectively. There are two appealing reasons for such a formulation. One obtains an infrared-regulated theory, and the discretization of momenta facilitates putting the many-body problem onto the computer.
The price one has to pay, shown actually some time ago [3], is that Fourier zero modes of the fields are often not independent dynamical quanta. Rather, by a constraint equation, they are dependent on them. Recent work on such a constrained zero mode in scalar $\phi^4_{1+1}$ theory has led to the insight that it gives rise to the phenomena of spontaneous symmetry breaking and field condensates [4], aspects normally attributed to non-trivial vacuum structure.

Our concern in this paper, however, is with zero modes that are true dynamical independent fields. One way they can arise is as follows. Due to the boundary conditions in gauge theory one cannot fully implement the traditional light-cone gauge $A^+ = 0$. The development of the understanding of this problem in DLCQ can be traced in Refs. [5].

The field $A^+$ turns out to have a zero mode which cannot be gauged away [6]. This mode is indeed dynamical, and is the object we study in this paper. It has its analogue in instant form approaches to gauge theory. For example, there exists a large body of work on Abelian and non-Abelian gauge theories in 1+1 dimensions quantized on a cylinder geometry [7]. There indeed this dynamical zero mode plays an important role.

We too shall concern ourselves in the present work with non-Abelian gauge theory in 1+1 dimensions, revisiting the model introduced by 't Hooft [8]. A DLCQ treatment of the theory, giving meson and baryon spectra and wavefunctions, was undertaken by Hornbostel [9]. Apart from a modified approach by Lenz et al. [10], zero modes have been neglected in previous DLCQ studies of QCD$_{1+1}$. This we rectify to some extent in the present paper.
The specific task we undertake here is to understand the zero mode subsector of the pure glue theory, namely where only zero mode external sources excite only zero mode gluons. We shall see that this is not an approximation but rather a consistent solution, a sub-regime within the complete theory. A similar framing of the problem lies behind the work of Lüscher [11] and van Baal [12] using the instant form Hamiltonian approach to pure glue gauge theory in 3+1 dimensions. The beauty of this reduction in the 1+1 dimensional theory is two-fold. First, it yields a theory which is exactly soluble. This is useful given the dearth of soluble models in field theory. Secondly, the zero mode theory represents a paring down to the point where the front and instant forms are manifestly identical, which is nice to know indeed. We solve the theory in this specific dynamical regime and find a discrete spectrum of states whose wavefunctions can be completely determined. These states have the quantum numbers of the vacuum. There is a summary and discussion of the results at the end of the paper. The appendix explains notation.
Gauge Fixing
We consider an SU(2) non-Abelian gauge theory in 1+1 dimensions with classical sources coupled to the gluons. The Lagrangian density is

$$\mathcal{L} = \tfrac{1}{2}\,\mathrm{Tr}\,(F_{\mu\nu} F^{\mu\nu}) + 2\,\mathrm{Tr}\,(J^\mu A_\mu)\,, \qquad (1)$$

where $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu - g\,[A_\mu, A_\nu]$. With a finite interval in $x^-$ from $-L$ to $L$, we impose periodic boundary conditions on all gauge potentials $A^\mu$.
We now show that the light-cone gauge $A^+ = 0$ cannot be reached. A gauge transformation $U$ bringing a gauge potential $B^\mu$, itself in some arbitrary gauge configuration, to some other gauge configuration $A^\mu$ is

$$g A^\mu = \partial^\mu U\, U^{-1} + g\, U B^\mu U^{-1}\,. \qquad (2)$$

Here $g$ is the coupling constant and $U$ is an element of SU(2). Clearly $U$ given by

$$U = P \exp\Big[ -g \int_{-L}^{x^-} dy^-\, B^+(y^-) \Big] \qquad (3)$$

will bring us to the gauge $A^+ = 0$.

We appear to have been successful in getting the light-cone gauge. However, the element $U$ through which we wish to achieve the gauge condition must satisfy $\mathbb{Z}_2$-periodic boundary conditions, as in [13], namely $U(x^-) = (\pm)\, U(x^- + 2L)$. Clearly Eq. (3) does not satisfy these boundary conditions. So in fact the attempt has failed.

With the notation of the appendix, a modification of Eq. (3) is

$$U(x^-) = e^{\,g x^- \mathring{B}^+}\; P \exp\Big[ -g \int_{-L}^{x^-} dy^-\, B^+(y^-) \Big]\,. \qquad (4)$$

Since $\mathring{B}^+$ is the zero mode of $B^+$, this is an allowed gauge transformation, but it does not completely bring us to the light-cone gauge. We find instead

$$A^+ = \mathring{B}^+\,. \qquad (5)$$
In other words, we cannot eliminate the zero mode of the gauge potential. The reason is evident: it is invariant under periodic gauge transformations. But of course we can always perform a rotation in color space. In line with other authors [14], we choose this so that $\mathring{A}^+_3$ is the only non-zero element, since in our representation only $\sigma_3$ is diagonal.

In addition, we can impose the subsidiary gauge condition

$$\mathring{A}^-_3 = 0\,. \qquad (6)$$

The reason is that there still remains freedom to perform gauge transformations that depend only on light-cone time $x^+$ and the color matrix $\sigma_3$. The above condition Eq. (6) can be reached from the arbitrary configuration $B^\mu$ by the group element

$$W = P \exp\Big[ -ig \int_{x^+_0}^{x^+} d\tilde{x}^+\, \mathring{B}^-_3(\tilde{x}^+)\, \frac{\sigma_3}{2} \Big]\,, \qquad (7)$$

where $x^+_0$ is some arbitrary but fixed light-cone time. It, moreover, does not 'undo' the previous gauge condition.

The above procedure would appear to have enabled complete fixing of the gauge. This is still not so. Gauge transformations

$$V = \exp\Big\{ i x^- \Big( \frac{n\pi}{2L} \Big) \sigma_3 \Big\} \qquad (8)$$
generate shifts, according to Eq. (2), in the zero mode component:

$$\mathring{A}^+_3 \;\to\; \mathring{A}^+_3 + \frac{n\pi}{gL}\,. \qquad (9)$$

All of these possibilities, labelled by the integer $n$, of course still satisfy $\partial_- A^+ = 0$, but, as one sees, $n = 0$ should not really be included. One can verify that the transformations $V$ also preserve the subsidiary condition, Eq. (6). One notes that the transformation is $x^-$-dependent and $\mathbb{Z}_2$ periodic. It is thus a simple example of a Gribov copy [15] in 1+1 dimensions. We follow the conventional procedure by demanding

$$\mathring{A}^+_3 \neq \frac{n\pi}{gL}\,, \qquad n = \pm 1, \pm 2, \ldots\,. \qquad (10)$$

This eliminates singularity points at the Gribov 'horizons', which in turn correspond to a vanishing Faddeev-Popov determinant [12].
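As a quick consistency check of Eq. (9), the shift generated by the Gribov copies $V$ can be verified symbolically. The sketch below is not part of the original paper; it only uses Eqs. (2) and (8) and the conventions of the appendix, and reads off the coefficient of $t_3 = i\sigma_3/2$ in the inhomogeneous term $\frac{1}{g}(\partial_- V)\, V^{-1}$ of the gauge transformation.

```python
# Minimal SymPy sketch: verify that V of Eq. (8) shifts the zero mode by n*pi/(g*L).
import sympy as sp

x, n, L, g = sp.symbols('x n L g', positive=True)   # x stands for x^-
phase = sp.I * x * n * sp.pi / (2 * L)
V = sp.diag(sp.exp(phase), sp.exp(-phase))          # V = exp(i x^- (n*pi/2L) sigma_3)
inhom = ((V.diff(x) * V.inv()) / g).applyfunc(sp.simplify)
t3_00 = sp.I / 2                                    # (1,1) entry of t_3 = i*sigma_3/2
print(sp.simplify(inhom[0, 0] / t3_00))             # -> pi*n/(L*g), i.e. Eq. (9)
```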
Equations of motion
Equations for Pure Glue Theory. Ultimately, the argument that the vacuum in front form field theory is trivial rests on the linearity of the Euler-Lagrange equations of motion in the light-cone time $x^+$. This itself stems from the expression for the D'Alembertian in light-cone coordinates, $\Box = \partial^+ \partial^-$, in one space dimension. It is the very same fact that causes most zero modes to be constrained when there are transverse dimensions: the space derivative kills the mode, thus eliminating the time derivative in the equation of motion. However, a careful examination of the equations can sometimes reveal double time derivatives $\partial^2_+$ due to the gauge structure. Thus there can still be dynamical zero mode degrees of freedom even in DLCQ which could, in principle, undermine the vacuum 'triviality' argument. This is what we now explore for SU(2).

The equations of motion for the theory are

$$[D_\mu, F^{\mu\nu}] = \partial_\mu F^{\mu\nu} - g\,[A_\mu, F^{\mu\nu}] = J^\nu\,. \qquad (11)$$

For our purposes it is convenient to break this equation up into color components $A^\mu_a$. Color will always be the lower index. Rather than the three color fields $A^\mu_1$, $A^\mu_2$ and $A^\mu_3$ we will use chiral notation, with $A^\mu_+ = A^\mu_1 + i A^\mu_2$ and $A^\mu_- = A^\mu_1 - i A^\mu_2$.

In terms of these components the equations of motion are

$$\partial_\mu \partial^\mu A^\nu_3 - \partial^\nu \partial_\mu A^\mu_3 + \frac{ig}{2}\, A_{\mu\,-}\, \overset{\leftrightarrow}{\partial^\nu} A^\mu_+ + \frac{ig}{2}\big( A^\nu_- \partial_\mu A^\mu_+ - A^\nu_+ \partial_\mu A^\mu_- \big) + ig\big( \partial_\mu A^\nu_-\, A^\mu_+ - \partial_\mu A^\nu_+\, A^\mu_- \big) + g^2 \Big[ -A^\mu_+ A_{\mu\,-} A^\nu_3 + \tfrac{1}{2} A_{\mu\,3}\big( A^\nu_+ A^\mu_- + A^\nu_- A^\mu_+ \big) \Big] = J^\nu_3 \qquad (12)$$

and

$$\partial_\mu \partial^\mu A^\nu_- - \partial^\nu \partial_\mu A^\mu_- + ig\, A_{\mu\,3}\, \overset{\leftrightarrow}{\partial^\nu} A^\mu_- + ig\big( A^\nu_3 \partial_\mu A^\mu_- - A^\nu_- \partial_\mu A^\mu_3 \big) + 2ig\big( \partial_\mu A^\nu_3\, A^\mu_- - \partial_\mu A^\nu_-\, A^\mu_3 \big) + g^2 \Big[ A^\mu_3 \big( A_{\mu\,-} A^\nu_3 - A_{\mu\,3} A^\nu_- \big) + \tfrac{1}{2} A^\mu_- \big( A^\nu_+ A_{\mu\,-} - A^\nu_- A_{\mu\,+} \big) \Big] = J^\nu_-\,, \qquad (13)$$

where we use the antisymmetric derivative $A \overset{\leftrightarrow}{\partial} B = A(\partial B) - (\partial A) B$. A third equation is the complex conjugate of Eq. (13).
Next we break these equations up into normal and zero mode components [6] (the zero mode $\mathring{\phi}$ and normal mode $\overset{n}{\phi}$ projections are defined in the appendix; brackets $[\,\cdot\,]_0$ and $[\,\cdot\,]_n$ denote the corresponding projections of products), and look at the equations for each Lorentz component $\nu = +, -$ and each color component $a = 3, +$. With the above gauge conditions the $\nu = +$ equations are

$$(i\partial^+)^2\, \overset{n}{A}{}^-_3 = \overset{n}{J}{}^+_3\,, \qquad (14)$$

$$0 = \mathring{J}^+_3\,, \qquad (15)$$

$$(i\partial^+ + g\,\mathring{A}^+_3)^2\, \overset{n}{A}{}^-_- = \overset{n}{J}{}^+_-\,, \qquad (16)$$

and

$$g^2\, (\mathring{A}^+_3)^2\, \mathring{A}^-_- = \mathring{J}^+_-\,. \qquad (17)$$

Observe that these equations exhibit no time derivatives $\partial_+$. Correspondingly, for $\nu = -$:

$$\partial^+ \partial^-\, \overset{n}{A}{}^-_3 - \frac{ig}{2}\big[ A^-_-\, \overset{\leftrightarrow}{\partial^+}\, A^-_+ \big]_n + g^2\, \mathring{A}^+_3\, \big[ A^-_+ A^-_- \big]_n = \overset{n}{J}{}^-_3\,, \qquad (18)$$

$$-(\partial^-)^2\, \mathring{A}^+_3 - \frac{ig}{2}\big[ A^-_-\, \overset{\leftrightarrow}{\partial^+}\, A^-_+ \big]_0 + g^2\, \mathring{A}^+_3\, \big[ A^-_+ A^-_- \big]_0 = \mathring{J}^-_3\,, \qquad (19)$$

$$-\partial^+ \partial^-\, \overset{n}{A}{}^-_- - ig\, \mathring{A}^+_3\, \partial^- \overset{n}{A}{}^-_- - 2ig\, (\partial^- \mathring{A}^+_3)\, \overset{n}{A}{}^-_- - ig\big[ A^-_3\, \partial^+ A^-_- \big]_n + ig\big[ (\partial^+ A^-_3)\, A^-_- \big]_n - g^2\, \mathring{A}^+_3 \big[ A^-_3 A^-_- \big]_n = \overset{n}{J}{}^-_-\,, \qquad (20,\,21)$$

and

$$-ig\, \mathring{A}^+_3\, \partial^- \mathring{A}^-_- - 2ig\, (\partial^- \mathring{A}^+_3)\, \mathring{A}^-_- - ig\big[ A^-_3\, \partial^+ A^-_- \big]_0 + ig\big[ (\partial^+ A^-_3)\, A^-_- \big]_0 - g^2\, \mathring{A}^+_3 \big[ A^-_3 A^-_- \big]_0 = \mathring{J}^-_-\,. \qquad (22,\,23)$$
Note the presence of both constraint and evolution equations.
The constrained nature of the first set of equations is not so much a property of the front form, but is rather the Gauss law exhibiting itself. The equations correspond to the fact that, in non-covariant gauges, the field $A^-$ is generally a non-dynamical field. In a Hamiltonian approach it plays the role of a Lagrange multiplier to the Gauss law. In the approach we shall take to the quantum theory, we shall implement these as 'strong', namely operator, constraints. However, special comment must be reserved for Eq. (15). It actually does not even occur, since we have gauged away $\mathring{A}^-_3$. (If the sources themselves were part of the dynamical problem, then this equation would have to be reintroduced as a 'weak' constraint, namely applied to physical states of the quantum Hilbert space. In the model we consider below, the sources are merely external classical fields, essentially just parameters, so the specific theory we consider there is only meaningful for $\mathring{J}^+_3 = 0$.)

Solution. We now consider a regime of the theory excited by sources that are purely time-dependent. The reader is referred to the final section for more discussion on these sources for this problem. Vanishing normal mode gluons are then a consistent solution to the above equations of motion in the normal mode sector. Only zero mode gluons occur. From the zero mode equations of motion there are then only two equations with non-trivial content. The last of the $\nu = +$ equations is simply solved to give

$$\mathring{A}^-_\pm = \frac{\mathring{J}^+_\pm}{g^2\, (\mathring{A}^+_3)^2}\,. \qquad (24)$$
From the $\nu = -$ equations we extract only one relevant equation,

$$-(\partial^-)^2\, \mathring{A}^+_3 + g^2\, \mathring{A}^+_3\, \mathring{A}^-_+\, \mathring{A}^-_- = \mathring{J}^-_3\,. \qquad (25)$$

We observe that the pure glue theory in 1+1 dimensions involves only a single genuine degree of freedom, the field $\mathring{A}^+_3$. Substituting our solutions Eq. (24) into the dynamical equation Eq. (25) we obtain

$$-(\partial^-)^2\, \mathring{A}^+_3 + \frac{\mathring{J}^+_+\, \mathring{J}^+_-}{g^2\, (\mathring{A}^+_3)^3} = \mathring{J}^-_3\,. \qquad (26)$$
From this we can see that this reduction of the theory is not equivalent to a perturbation around the free ($g = 0$) theory. For convenience we henceforth use the notation

$$\mathring{A}^+_3 = v\,, \qquad x^+ = t\,, \qquad w^2 = \frac{\mathring{J}^+_+\, \mathring{J}^+_-}{g^2}\,, \qquad \mathring{J}^-_3 = \frac{B}{2}\,. \qquad (27)$$

The dynamical equation can then be compactly written as

$$-\frac{\partial^2 v}{\partial t^2} + \frac{w^2}{v^3} = \frac{B}{2}\,. \qquad (28)$$
It can be solved by easy reduction to quadrature, with solution

$$\pm\, i\, t = \int^{v} \frac{y\, dy}{\sqrt{B y^3 + 2 w^2 G y^2 + w^2}}\,, \qquad (29)$$

where $G$ is an integration constant.
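The quadrature (29) is equivalent to the conservation of the energy density $\frac{1}{2}\dot{v}^2 + \frac{1}{2}\big(w^2/v^2 + Bv\big)$ along the classical motion. A minimal numerical sketch (with illustrative parameter values, not taken from the paper) integrates Eq. (28) and monitors this conserved quantity:

```python
# Integrate v'' = w^2/v^3 - B/2 (Eq. 28) and monitor the conserved energy density.
import numpy as np
from scipy.integrate import solve_ivp

w, B = 1.0, 0.5                                    # hypothetical source strengths

def rhs(t, y):
    v, vdot = y
    return [vdot, w**2 / v**3 - B / 2]

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0], rtol=1e-10, atol=1e-12)
v, vdot = sol.y
E = 0.5 * vdot**2 + 0.5 * (w**2 / v**2 + B * v)    # the integration constant of Eq. (29)
print(E.max() - E.min())                           # ~0 up to integrator tolerance
```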
The Solution to the Quantum Problem. We pursue a Hamiltonian formulation where, in the front form, the generator of $x^+$ translations, $P^-$, or light-cone energy operator, is taken as the Hamiltonian. The only conjugate momentum is

$$p \equiv \mathring{\Pi}^-_3 = \partial^-\, \mathring{A}^+_3 = \partial^- v\,. \qquad (30)$$

The Hamiltonian density $T^{+-} = \partial^- \mathring{A}^+_3\, \mathring{\Pi}^-_3 - \mathcal{L}$ leads to the Hamiltonian

$$H = \frac{1}{2} \Big[ p^2 + \frac{w^2}{v^2} + B v \Big] (2L)\,. \qquad (31)$$

Of course, Hamilton's equations of motion agree with Eq. (24) and Eq. (25). Quantization is achieved by imposing a commutation relation at equal light-cone time on the dynamical degree of freedom. Introducing the variable $q = 2Lv$, the appropriate commutation relation is

$$[\, q(x^+),\, p(x^+) \,] = i\,. \qquad (32)$$
Note that the zero mode v or q satisfies a field theory of one dimension less than the original field theory. In 1+1 dimensions the field theoretic problem reduces to quantum mechanics of a single particle as in Manton's treatment of the Schwinger model in Refs. [7].
One thus has to solve the Schrödinger equation

$$\frac{1}{2} \Big( -\frac{d^2}{dq^2} + \frac{(2Lw)^2}{q^2} + \frac{B q}{2L} \Big)\, \psi = \mathcal{E}\, \psi\,, \qquad (33)$$

with the eigenvalue $\mathcal{E} = E/(2L)$ actually being an energy density.
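Equation (33) is an ordinary one-dimensional eigenvalue problem and is easy to treat numerically. The sketch below (with assumed values for $g$, $L$, $w$; the Dirichlet boundary conditions anticipate the treatment of the Gribov horizon described further below) discretizes the first Gribov region $0 < q < 2\pi/g$ with a second-order finite-difference Laplacian:

```python
# Finite-difference solution of Eq. (33) on 0 < q < 2*pi/g with psi(0) = psi(2*pi/g) = 0.
import numpy as np
from scipy.linalg import eigh_tridiagonal

g, L, w, B = 1.0, 1.0, 1.0, 0.0                    # assumed values; B = 0 as below
N = 4000
q = np.linspace(0.0, 2 * np.pi / g, N + 2)[1:-1]   # interior grid points
h = q[1] - q[0]
diag = 1.0 / h**2 + 0.5 * ((2 * L * w)**2 / q**2 + B * q / (2 * L))
off = -0.5 / h**2 * np.ones(N - 1)
evals = eigh_tridiagonal(diag, off, select='i', select_range=(0, 3))[0]
print(evals)   # lowest energy densities; for B = 0 they reproduce Eq. (39)
```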
Before proceeding with the solution let us briefly show that exactly the same structure is obtained beginning in the instant form. Here we introduce the periodic boundary conditions on a finite interval of length $2L$ in $x^3$. The appropriate gauge choice is $\partial_3 A^3_a = 0$, and then a color rotation can single out the diagonal color component, $v = \mathring{A}^3_3$. The zero modes are of course now defined with respect to the $x^3$ direction. After the color diagonalization, one can gauge away $\mathring{A}^0_3$ and, by analogy to the above, set all normal mode sources to zero. With

$$F^a_{03} = \partial_0 v\, \delta_{a3} + g\, \epsilon_{ab3}\, A^0_b\, v\,, \qquad (34)$$

one gets $p = -\partial_0 v$ as the only conjugate momentum. The Hamiltonian is now taken as the generator of translations in $x^0$. Thus

$$H = \frac{1}{2} \Big[ p^2 - g^2\, (\mathring{A}^0_\alpha)^2 v^2 + 2\, \mathring{J}^0_\alpha\, \mathring{A}^0_\alpha + 2 v\, \mathring{J}^3_3 \Big] (2L)\,, \qquad \alpha = 1, 2\,. \qquad (35)$$

The Gauss law is

$$\mathring{A}^0_\alpha = \frac{\mathring{J}^0_\alpha}{g^2 v^2}\,, \qquad (36)$$

which upon substitution into the Hamiltonian yields

$$H = \frac{1}{2} \Big[ p^2 + \frac{(\mathring{J}^0_\alpha)^2}{g^2 v^2} + 2 v\, \mathring{J}^3_3 \Big] (2L)\,. \qquad (37)$$

With the same chiral color convention one has $(\mathring{J}^0_\alpha)^2 = \mathring{J}^0_+\, \mathring{J}^0_-$, and thus obviously the same Hamiltonian as in Eq. (31).
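The equivalence of Eqs. (35)-(37) is a one-line substitution, which can be confirmed symbolically. A small sketch (generic symbols, one color component $\alpha$ at a time):

```python
# SymPy check: inserting the Gauss law (36) into the Hamiltonian (35) gives Eq. (37).
import sympy as sp

g, v, J = sp.symbols('g v J', positive=True)
A = J / (g**2 * v**2)                       # Eq. (36)
term = -g**2 * A**2 * v**2 + 2 * J * A      # the A-dependent part of Eq. (35)
print(sp.simplify(term))                    # -> J**2/(g**2*v**2), as in Eq. (37)
```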
Let us return to solving the Schrödinger equation, Eq. (33). All eigenstates $\psi$ have the quantum numbers of the naive vacuum adopted in standard front form field theory: all of them are eigenstates of the light-cone momentum operator $P^+$ with zero eigenvalue. The true vacuum is now that state with lowest $P^-$ eigenvalue. In order to get an exactly soluble system we perform one more simplification: we eliminate the source $B = 2\, \mathring{J}^-_3$. One of the solutions to Eq. (33) is then $\psi(q) = \sqrt{q}\, Z_\nu(\sqrt{2\mathcal{E}}\, q)$, where, in the notation of [16], $Z_\nu$ is the Bessel function with $\nu^2 \equiv (2Lw)^2 + 1/4$. Note that $wL$ is independent of $L$ if $w$, which is proportional to the external source, scales in $L$ like a dynamical source [17].

The general solution is a superposition of the regular and irregular Bessel functions, that is,

$$\psi(q) = R\, \sqrt{q}\, J_\nu(\sqrt{2\mathcal{E}}\, q) + S\, \sqrt{q}\, J_{-\nu}(\sqrt{2\mathcal{E}}\, q)\,. \qquad (38)$$
The constants R and S need to be specified by boundary conditions, square-integrability and continuity of the first derivative. When ν > 1/2 square integrability leads to S = 0.
The boundary condition that is to be imposed comes from the treatment of the Gribov problem. Since the wave function vanishes at q = 0 we must demand that the wavefunctions vanish at the first Gribov horizon q = ±2π/g. The overall constant R is then fixed by normalization. Note that this requirement does not automatically ensure that the wavefunction vanishes at all horizons with arbitrary sources present. Therefore the pieces of the wavefunction for each Gribov region will not be exact copies of each other. For the source free case the wavefunctions for the different regions are indeed exact copies [13].
The most important feature is the consequence of the boundary condition at the Gribov horizon. This leads to the energy density assuming only the discrete values

$$\mathcal{E}^{(\nu)}_m = \frac{g^2}{8\pi^2}\, \big( X^{(\nu)}_m \big)^2\,, \qquad m = 1, 2, \ldots\,, \qquad (39)$$

where $X^{(\nu)}_m$ denotes the $m$-th zero of the $\nu$-th Bessel function $J_\nu$. In general, these zeroes can only be obtained numerically. Thus
$$\psi_m(q) = R\, \sqrt{q}\, J_\nu\Big( \sqrt{2 \mathcal{E}^{(\nu)}_m}\, q \Big) \qquad (40)$$

is the complete solution. The true vacuum is the state of lowest energy, namely with $m = 1$.
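Since $\nu$ is in general non-integer, the zeros $X^{(\nu)}_m$ are not covered by the usual integer-order routines; they can be obtained by bracketing sign changes of $J_\nu$ and refining with a root finder. The sketch below (illustrative parameter values) evaluates the spectrum of Eq. (39); for $B = 0$ it agrees with the finite-difference eigenvalues computed in the earlier sketch.

```python
# Compute zeros of J_nu for real nu and the discrete energy densities of Eq. (39).
import numpy as np
from scipy.special import jv
from scipy.optimize import brentq

g, L, w = 1.0, 1.0, 1.0
nu = np.sqrt((2 * L * w)**2 + 0.25)                 # nu^2 = (2Lw)^2 + 1/4

def bessel_zeros(nu, m_max, x_max=60.0, dx=0.01):
    xs = np.arange(dx, x_max, dx)
    ys = jv(nu, xs)
    idx = np.where(np.sign(ys[:-1]) * np.sign(ys[1:]) < 0)[0]
    return [brentq(lambda x: jv(nu, x), xs[i], xs[i + 1]) for i in idx[:m_max]]

X = np.array(bessel_zeros(nu, 4))
print(g**2 / (8 * np.pi**2) * X**2)                 # the E^(nu)_m of Eq. (39)
```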
Discussion and Perspectives
Let us first summarize the essential points. We analyzed pure glue non-Abelian gauge theory in a compact spatial volume with periodic boundary conditions on the gauge potentials.
Working in the front form Hamiltonian approach, we demonstrated how one carefully fixes the gauge. The equations of motion enabled identification of dynamical and constrained zero mode variables. We solved the quantum theory consisting of gluons excited only by pure time-dependent external sources. This reduction uncovered a basic regime of non-Abelian gauge theory where the front and the instant form approaches were seen to be identical. It also reduced a quantum field theory problem to a quantum mechanical one which could be solved for the Schrödinger representation wavefunction. With the explicit interaction term for the dynamical zero mode switched off, we exactly solved the theory in the first Gribov horizon.
The exact solution we obtained is genuinely non-perturbative in character. It describes vacuum-like states, since for all of these states $P^+ = 0$. Consequently, they all have zero invariant mass $M^2 = P^+ P^-$. The states are labelled by the eigenvalues of the operator $P^-$. We explain below why the non-zero sources are useful. But with them non-zero we have obtained a generalization of the result of Hetrick [13]. The linear dependence on $L$ in the result for the discrete energy levels is also consistent with what one would expect from a loop of color flux running around the cylinder. In the source-free case Hetrick [13] uses a wave function that is symmetric about $q = 0$. For our problem this corresponds to
$$\psi_m(q) = N\, \cos\big( \sqrt{2\epsilon_m}\, q \big)\,, \qquad (41)$$

where $N$ is fixed by normalization. At the first Gribov horizon $q = 2\pi/g$ one has $\psi_m = (-1)^m N$; thus $\sqrt{2\epsilon_m}\, 2\pi/g = m\pi$ and

$$\epsilon_m = \frac{g^2 m^2}{8}\,. \qquad (42)$$
Note that $m = 1$ is the lowest energy state and has, as expected, one node in the allowed region $0 \leq q \leq 2\pi/g$. Hetrick [13] discusses the connection to the results of Rajeev [7] and we will not comment on it here.
For the sources non-zero, the wavefunction automatically vanishes at the origin, since it is made up of the regular part $J_{+\nu}$. There is thus a discontinuous transition from the source-free to the non-free case. Of course the sources themselves are functions of time, so that as time evolves they may take the value zero. What this potentially discontinuous behaviour under time evolution means remains an open question. The manifest equivalence of the front and instant form treatments of this problem is presumably a consequence of the elimination of all but topological features, and in this respect the topology is identical in the two forms. In our picture, the two forms will begin to look different with the introduction of genuine dynamical content. However, the same physical content should be present.
This calculation offers the lesson that even in a front form approach, the vacuum might not be just the simple Fock vacuum. Dynamical zero modes do imbue the vacuum with a rich structure. However, the advantage of the front form is not severely lost. In higher dimensions we expect that the transverse gluon components are not dynamical but rather are constrained. If these constraints can be solved, the vacuum will not be inordinately beyond control. This is in sharp distinction to the instant form approach.
There is nonetheless one possible scenario in which a simple vacuum could be restored.
The inclusion of normal mode dynamics via the sources will build additional states on top of the vacua of the present work. One may be able to consistently perform subtractions to obtain a true vacuum state with the eigenvalue of $P^-$ identically zero. When the naive continuum limit $L \to \infty$ is taken, only the states built on the lowest level might remain. This is still under consideration.
We finish by briefly addressing the program for tackling the higher dimensional theory, and how our result will actually be valuable for the problem in 3+1 dimensions. A crucial observation is that, as zero modes are independent of at least one space coordinate, they satisfy a field theory in fewer space dimensions than the original. One can thus envisage undertaking a hierarchy of projections from $3 \to 2 \to 1 \to 0$ space dimensions, at each level extracting a zero mode theory within the previous higher dimensional theory. A similar idea lies behind the recent work of [17]. In our approach one arrives at a quantum mechanical problem of similar structure to the one we have solved in the present work.
The difference would be that the dynamical quanta of the higher dimensional theory - both fermions and gluons - will be the sources for the lower dimensional theory.
Our exact solution with non-vanishing sources provides for an eventual understanding of how constrained and other dynamical zero mode quanta come in at higher dimensions, and how they generate QCD spectroscopy in the real world of 3+1 dimensions.
Acknowledgments

… comments. This work was supported in part by grants from the U.S. Department of Energy and a NATO collaborative grant. ACK is supported by the DFG under contract DFG-Gz: Pa 450/1-1 and would like to thank The Ohio State University for its hospitality. SSP would like to acknowledge the hospitality of the Stanford Linear Accelerator Center and the Max-Planck-Institut für Kernphysik.
Appendix: Notation and Conventions
The convention for light-cone coordinates we employ is that of [18]: $x^\pm = (x^0 \pm x^3)/\sqrt{2}$. The dot product decomposes as $A \cdot B = A^+ B^- + A^- B^+$. Following Dirac [1], $x^+$ is taken as the time parameter. The time derivative is thus $\partial_+ \equiv \partial/\partial x^+$, and the metric tensor $g^{\mu\nu}$ implies $\partial_+ = \partial^-$. Correspondingly, $\partial_- = \partial/\partial x^- = \partial^+$ is the space derivative. We consider the theory 'compactified' in the space dimension: the light-cone space coordinate $x^- \in [-L, +L]$. Periodic boundary conditions are imposed. Thus a given field $\phi$ can be expanded in Fourier modes where the discrete momenta take the values

$$k^+ = n\, \frac{\pi}{L}\,, \qquad n = 1, 2, \ldots\,. \qquad (43)$$

The missing zero mode $n = 0$ is projected out by

$$\mathring{\phi} \equiv \phi_0 \equiv \frac{1}{2L} \int_{-L}^{+L} dx^-\, \phi(x^-)\,, \qquad (44)$$

while the sum of the remaining non-zero modes is the normal mode

$$\overset{n}{\phi} \equiv \overset{n}{\phi}(x^-) \equiv \phi(x^-) - \phi_0\,. \qquad (45)$$
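For concreteness, the projections (44)-(45) amount to subtracting the spatial average of a periodic field. A tiny numerical illustration (with a hypothetical test field):

```python
# Zero-mode / normal-mode split of Eqs. (44)-(45) for a field sampled on x^- in [-L, L).
import numpy as np

L = 1.0
x = np.linspace(-L, L, 256, endpoint=False)
phi = 0.7 + np.sin(np.pi * x / L) + 0.3 * np.cos(2 * np.pi * x / L)
phi0 = phi.mean()          # ring-phi = (1/2L) * integral of phi over x^-
phi_n = phi - phi0         # n-phi: the sum of the non-zero Fourier modes
print(phi0)                # -> 0.7, the zero mode of the test field
```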
We use the notation of Itzykson and Zuber [19] for writing the SU(2) gauge theory. The gauge potentials are represented by

$$A^\mu = A^\mu_a\, t_a\,, \qquad t_a = \frac{i\sigma_a}{2}\,, \qquad a = 1, 2, 3\,, \qquad (46)$$

where the $t_a$ are representation matrices satisfying the Lie algebra

$$[\, t_a,\, t_b \,] = -\epsilon_{abc}\, t_c\,, \qquad (47)$$

and the $\sigma_a$ are the Pauli matrices

$$\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. \qquad (48)$$

The following identities are useful:

$$\sigma_a \sigma_b = i\, \epsilon_{abc}\, \sigma_c + \delta_{ab}\,, \qquad (49)$$

$$\mathrm{tr}\,(t_a t_b) = -\tfrac{1}{2}\, \delta_{ab}\,. \qquad (50)$$
In component form, the field strength tensor can be written

$$F^a_{\mu\nu} = \partial_\mu A^a_\nu - \partial_\nu A^a_\mu + g\, \epsilon_{abc}\, A^b_\mu A^c_\nu\,, \qquad (51)$$

and

$$D^{ab}_\mu = \partial_\mu\, \delta_{ab} - g\, \epsilon_{abc}\, A^c_\mu \qquad (52)$$

is the covariant derivative in the adjoint representation.
[1] P. A. M. Dirac, Rev. Mod. Phys. 21, 392 (1949).
[2] S. J. Brodsky, G. McCartor, H. C. Pauli, S. S. Pinsky, Particle World 3, 109 (1993); S. J. Brodsky, H. C. Pauli, "Light-Cone Quantization of Quantum Chromodynamics", in Recent Aspects of Quantum Fields, eds. H. Mitter, H. Gausterer, Lecture Notes in Physics, Vol. 396 (Springer, Berlin, 1991).
[3] T. Maskawa, K. Yamawaki, Prog. Theor. Phys. 56, 270 (1976); R. S. Wittman, in Nuclear and Particle Physics on the Light Cone, edited by M. B. Johnson, L. S. Kisslinger (World Scientific, Singapore, 1989).
[4] T. Heinzl, S. Krusche, S. Simbürger, E. Werner, Z. Phys. C 56, 415 (1992); T. Heinzl, S. Krusche, E. Werner, B. Zellerman, University of Regensburg preprint TPR 92-17 (1992); Suzhou Huang, Wei Lin, Ann. Phys. (NY) 226, 248 (1993); D. Robertson, Phys. Rev. D47, 2549 (1993); C. M. Bender, S. S. Pinsky, B. van de Sande, Phys. Rev. D48, 816 (1993); S. Pinsky, Ohio State University preprint OHSTPY-HEP-TH-93-15 (1993); S. S. Pinsky, B. van de Sande, Phys. Rev. D49, 2001 (1994).
[5] G. McCartor, Z. Phys. C41, 271 (1988); G. McCartor, Z. Phys. C52, 611 (1991).
[6] T. Heinzl, S. Krusche, E. Werner, Phys. Lett. B272, 54 (1991); T. Heinzl, S. Krusche, E. Werner, Phys. Lett. B275, 410 (1992); T. Heinzl, S. Krusche, E. Werner, Nucl. Phys. A532, 4290 (1991).
[7] See, for example: N. Manton, Ann. Phys. (NY) 159, 220 (1985); J. E. Hetrick, Y. Hosotani, Phys. Rev. D38, 2621 (1988); J. E. Hetrick, Y. Hosotani, Phys. Lett. B230, 88 (1989); F. Palumbo, Phys. Lett. B243, 109 (1990); D. G. Gross, Lawrence Berkeley Laboratory preprint LBL 3323 (1992); S. Guruswamy, S. G. Rajeev, University of Rochester preprint UR-1283 (1992); S. G. Rajeev, Phys. Lett. B212, 203 (1988); E. Langmann, G. W. Semenoff, Phys. Lett. B303, 303 (1993); J. Hallin, University of Göteborg preprint ITP 93-8 (1993).
[8] G. 't Hooft, Nucl. Phys. B75, 461 (1974).
[9] K. Hornbostel, S. J. Brodsky, H. C. Pauli, Phys. Rev. D41, 3814 (1990).
[10] F. Lenz, M. Thies, S. Levit, K. Yazaki, Ann. Phys. 208, 1 (1991).
[11] M. Lüscher, Nucl. Phys. B219, 233 (1983); M. Lüscher, G. Münster, Nucl. Phys. B232, 445 (1984).
[12] P. van Baal, Nucl. Phys. B369, 259 (1992), and references therein.
[13] J. E. Hetrick, Nucl. Phys. B (Proc. Suppl.) 30, 228 (1993); J. E. Hetrick, UvA-ITFA 93-15 (hep-th/9305020).
[14] V. A. Franke, Y. V. Novozhilov, E. V. Prokhvatilov, Lett. Math. Phys. 5, 437 (1981); F. Lenz, H. W. L. Naus, M. Thies, "QCD in the Axial Gauge Representation", Erlangen preprint, to appear in Ann. Phys. (NY) (1994).
[15] V. N. Gribov, Nucl. Phys. B139, 1 (1978); H. Yabuki, Phys. Lett. B231, 271 (1989).
[16] I. S. Gradshtein, I. M. Ryzhik, Tables of Integrals, Series, and Products (Academic Press, New York, 1965).
[17] K. Demeterfi, I. R. Klebanov, G. Bhanot, Nucl. Phys. B418, 15 (1994), and references therein.
[18] J. B. Kogut, D. E. Soper, Phys. Rev. D1, 2901 (1970).
[19] C. Itzykson, J.-B. Zuber, Quantum Field Theory (McGraw-Hill, Singapore, 1985).
| []
|
[
"Secular dipole-dipole stability of magnetic binaries",
"Secular dipole-dipole stability of magnetic binaries"
]
| [
"C Aykroyd \nSYRTE\nObservatoire de Paris\nUniversité PSL\nCNRS\nUniversité, LNE\n61 avenue de l'Observatoire75014ParisSorbonneFrance\n",
"A Bourgoin \nSYRTE\nObservatoire de Paris\nUniversité PSL\nCNRS\nUniversité, LNE\n61 avenue de l'Observatoire75014ParisSorbonneFrance\n\nUniversité Paris-Saclay\nUniversité Paris Cité\nCEA\nCNRS\nF-91191Gif-sur-YvetteAIMFrance\n",
"C Le Poncin-Lafitte \nSYRTE\nObservatoire de Paris\nUniversité PSL\nCNRS\nUniversité, LNE\n61 avenue de l'Observatoire75014ParisSorbonneFrance\n",
"S Mathis \nUniversité Paris-Saclay\nUniversité Paris Cité\nCEA\nCNRS\nF-91191Gif-sur-YvetteAIMFrance\n",
"M.-C Angonin \nSYRTE\nObservatoire de Paris\nUniversité PSL\nCNRS\nUniversité, LNE\n61 avenue de l'Observatoire75014ParisSorbonneFrance\n"
]
| [
"SYRTE\nObservatoire de Paris\nUniversité PSL\nCNRS\nUniversité, LNE\n61 avenue de l'Observatoire75014ParisSorbonneFrance",
"SYRTE\nObservatoire de Paris\nUniversité PSL\nCNRS\nUniversité, LNE\n61 avenue de l'Observatoire75014ParisSorbonneFrance",
"Université Paris-Saclay\nUniversité Paris Cité\nCEA\nCNRS\nF-91191Gif-sur-YvetteAIMFrance",
"SYRTE\nObservatoire de Paris\nUniversité PSL\nCNRS\nUniversité, LNE\n61 avenue de l'Observatoire75014ParisSorbonneFrance",
"Université Paris-Saclay\nUniversité Paris Cité\nCEA\nCNRS\nF-91191Gif-sur-YvetteAIMFrance",
"SYRTE\nObservatoire de Paris\nUniversité PSL\nCNRS\nUniversité, LNE\n61 avenue de l'Observatoire75014ParisSorbonneFrance"
]
| []
| The presence of strong large-scale stable magnetic fields in a significant portion of early-type stars, white dwarfs, and neutron stars is well-established. Despite this, the origins of these fields remain a subject of ongoing investigation, with theories including fossil fields, mergers, and shear-driven dynamos. One potential key for understanding the formation of these fields could lie in the connection between magnetism and binarity. Indeed, magnetism can play a significant role in the long-term orbital and precessional dynamics of binary systems. In gravitational wave astronomy, the advanced sensitivity of upcoming interferometric detectors such as LISA and Einstein Telescope will enable the characterization of the orbital inspirals of compact systems, including their magnetic properties. A comprehensive understanding of the dynamics of magnetism in these systems is necessary for the interpretation of the gravitational wave signals and to avoid biases in the calibration of instruments. This knowledge can additionally be used to create new magnetic population models and provide insight into the nature and origins of their internal magnetic fields. The aim of this study is to investigate the secular spin precession dynamics of binary systems under pure magnetic dipole-dipole interactions, with a focus on stars with strong, stable, and predominantly dipolar fields. We employ an orbit-averaging procedure to the spin precession equations from which we derive an effective secular description. By minimizing the magnetic interaction energy of the system, we obtain the configurations of spin equilibrium and their respective stabilities. Finally, we also derive a set of conditions required for the validity of our assumptions to hold. We show that among the four states of equilibrium, there is a single secular state that is globally stable, corresponding to the configuration where the spin and magnetic axes of one star are reversed with respect to the companions', and orthogonal to the orbital plane. Our results are compared to traditional methods of finding instantaneous states of equilibrium, in which orbital motion is generally neglected. Finally, we provide analytical solutions in the neighbourhood of the stable configuration, that can be used to derive secular orbital evolution in the context of gravitational wave astronomy. | 10.1051/0004-6361/202346171 | [
"https://export.arxiv.org/pdf/2305.15429v1.pdf"
]
| 258,784,025 | 2305.15429 | 02da9dcca2b9dfa3411c5c96a1b1a13a2fb4f3a1 |
Secular dipole-dipole stability of magnetic binaries
C Aykroyd
SYRTE
Observatoire de Paris
Université PSL
CNRS
Université, LNE
61 avenue de l'Observatoire75014ParisSorbonneFrance
A Bourgoin
SYRTE
Observatoire de Paris
Université PSL
CNRS
Université, LNE
61 avenue de l'Observatoire75014ParisSorbonneFrance
Université Paris-Saclay
Université Paris Cité
CEA
CNRS
F-91191Gif-sur-YvetteAIMFrance
C Le Poncin-Lafitte
SYRTE
Observatoire de Paris
Université PSL
CNRS
Université, LNE
61 avenue de l'Observatoire75014ParisSorbonneFrance
S Mathis
Université Paris-Saclay
Université Paris Cité
CEA
CNRS
F-91191Gif-sur-YvetteAIMFrance
M.-C Angonin
SYRTE
Observatoire de Paris
Université PSL
CNRS
Université, LNE
61 avenue de l'Observatoire75014ParisSorbonneFrance
Secular dipole-dipole stability of magnetic binaries
The presence of strong large-scale stable magnetic fields in a significant portion of early-type stars, white dwarfs, and neutron stars is well-established. Despite this, the origins of these fields remain a subject of ongoing investigation, with theories including fossil fields, mergers, and shear-driven dynamos. One potential key for understanding the formation of these fields could lie in the connection between magnetism and binarity. Indeed, magnetism can play a significant role in the long-term orbital and precessional dynamics of binary systems. In gravitational wave astronomy, the advanced sensitivity of upcoming interferometric detectors such as LISA and Einstein Telescope will enable the characterization of the orbital inspirals of compact systems, including their magnetic properties. A comprehensive understanding of the dynamics of magnetism in these systems is necessary for the interpretation of the gravitational wave signals and to avoid biases in the calibration of instruments. This knowledge can additionally be used to create new magnetic population models and provide insight into the nature and origins of their internal magnetic fields. The aim of this study is to investigate the secular spin precession dynamics of binary systems under pure magnetic dipole-dipole interactions, with a focus on stars with strong, stable, and predominantly dipolar fields. We employ an orbit-averaging procedure to the spin precession equations from which we derive an effective secular description. By minimizing the magnetic interaction energy of the system, we obtain the configurations of spin equilibrium and their respective stabilities. Finally, we also derive a set of conditions required for the validity of our assumptions to hold. We show that among the four states of equilibrium, there is a single secular state that is globally stable, corresponding to the configuration where the spin and magnetic axes of one star are reversed with respect to the companions', and orthogonal to the orbital plane. Our results are compared to traditional methods of finding instantaneous states of equilibrium, in which orbital motion is generally neglected. Finally, we provide analytical solutions in the neighbourhood of the stable configuration, that can be used to derive secular orbital evolution in the context of gravitational wave astronomy.
The presence of strong large-scale stable magnetic fields in a significant portion of early-type stars, white dwarfs, and neutron stars is well-established. Despite this, the origins of these fields remain a subject of ongoing investigation, with theories including fossil fields, mergers, and shear-driven dynamos. One potential key for understanding the formation of these fields could lie in the connection between magnetism and binarity. Indeed, magnetism can play a significant role in the long-term orbital and precessional dynamics of binary systems. In gravitational wave astronomy, the advanced sensitivity of upcoming interferometric detectors such as LISA and Einstein Telescope will enable the characterization of the orbital inspirals of compact systems, including their magnetic properties. A comprehensive understanding of the dynamics of magnetism in these systems is necessary for the interpretation of the gravitational wave signals and to avoid biases in the calibration of instruments. This knowledge can additionally be used to create new magnetic population models and provide insight into the nature and origins of their internal magnetic fields. The aim of this study is to investigate the secular spin precession dynamics of binary systems under pure magnetic dipole-dipole interactions, with a focus on stars with strong, stable, and predominantly dipolar fields. We employ an orbit-averaging procedure to the spin precession equations from which we derive an effective secular description. By minimizing the magnetic interaction energy of the system, we obtain the configurations of spin equilibrium and their respective stabilities. Finally, we also derive a set of conditions required for the validity of our assumptions to hold. We show that among the four states of equilibrium, there is a single secular state that is globally stable, corresponding to the configuration where the spin and magnetic axes of one star are reversed with respect to the companions', and orthogonal to the orbital plane. Our results are compared to traditional methods of finding instantaneous states of equilibrium, in which orbital motion is generally neglected. Finally, we provide analytical solutions in the neighbourhood of the stable configuration, that can be used to derive secular orbital evolution in the context of gravitational wave astronomy.
I. INTRODUCTION
Close to 10% of early-type massive main sequence (MMS) stars host stable large-scale magnetic fields, ranging from $3 \times 10^{2}$ to $3 \times 10^{4}$ G [34, 41, 73]. Meanwhile, it is estimated that 20-25% of the white dwarf (WD) population is magnetic [7], with detected fields between $10^{3}$ and $10^{9}$ G. For neutron stars (NS), surface field strengths are gauged on the order of $10^{8}$-$10^{13}$ G in classical radio pulsars [66], and reach up to $10^{14}$-$10^{15}$ G in magnetars [47]. The origins of magnetic fields are highly debated for both main-sequence stars and for compact objects; we present below the prevailing theories for each respective class of stars.

In main-sequence late-type stars, it is considered established that external magnetic fields are driven by dynamo action in the outer convective zone [18, 19]. Conversely, this channel is less likely to be the case for hot, massive stars ($M > 1.5\,M_{\odot}$), which maintain a radiative envelope and inner convective core: any dynamo-based explanation must resolve the challenge of transporting the magnetic field towards the outer surface faster than the stellar evolution timescale [23]. In fact, in radiative stars, magnetic fields are believed to decay on diffusivity timescales estimated to be longer than their host's main-sequence lifetime [64]. Early direct evidence via Zeeman spectropolarimetry has long since shown that chemically peculiar Ap and Bp stars, which represent around 10% of early-type A/B stars [34], host strong secularly stable magnetic fields, with strengths uncorrelated with stellar rotation - as should be the case for dynamo-fed fields - and geometry largely captured by oblique dipole rotor models [see e.g. 9, 75]. These fields have therefore been suggested to have 'fossil' origin, remnants of a prior stellar evolution stage and effectively frozen into the plasma [61, 63]. The exact field formation process is a topic of debate, but multiple plausible mechanisms have been proposed, ranging from accumulated magnetic flux captured from the interstellar cloud at birth, to protostar mergers and pre-main-sequence dynamos [35]. More recent studies such as the 'B Fields in OB Stars' (BOB; [62]) and the 'Magnetism in Massive Stars' (MiMeS; [73, 82]) surveys also confirmed compatible magnetic incidence and properties in more general O/B-type massive stars. Numerical and semi-analytical magneto-hydrodynamic computations have established the existence of long-term-stable internal field configurations consistent with non-convective bodies such as radiative stars, NS and WDs, which favours the fossil field scenario [11, 14, 15, 32, 38]. These field configurations are composed of both toroidal and poloidal components that stabilise each other [3, 12, 77]; outside the star, the toroidal field is attenuated and mainly the poloidal component is visible. These results were found to reproduce the general characteristics of observations: the roughly off-center dipolar structure, the independence from stellar spin, and finally, the strong field amplitude [15, 31]. Nevertheless, the fossil field hypothesis is not without its challenges. For example, only a small fraction of MMS stars host observable fields, with a sharp dearth of weak-field objects; the precise mechanism for field formation, stability and evolution must explain this cutoff. It has been suggested that there are thresholds to field strength below which shear or convection instabilities develop [see e.g. 6, 39, 45, 46], or that, in some stars, the time needed to reach an equilibrium becomes longer than the age in the main sequence, due to the Coriolis force produced by rapid rotation [13]. An additional challenge to fossil fields is the great scarcity of magnetism in close binaries, as low as 2% incidence [4, 21]. In fact, there is a single known doubly-magnetic close binary to date, the ε Lupi system [65]. If the fossil scenario is indeed to be the main field formation channel, it is plausible to expect a similar magnetic incidence in binaries and in single stars.

However, [81] suggests that tidal instabilities in binary pairs can disrupt the magnetic fields via turbulent Joule diffusion within a few million years, potentially explaining the scarcity of strong-field binaries. Alternatively, it has been argued that interstellar clouds with strong magnetic fields are harder to fragment [24], yielding selection biases towards less magnetic binary systems. Other alternatives have also been suggested to address this challenge, such as the merger scenarios [34, 35, 69, 70]. In these scenarios, coalescing main sequence stars and/or protostars would generate strong enough shear to drive dynamo action, yielding a single magnetic byproduct star. Such hypotheses are in line with the prediction that around 8% of MMS stars originate from mergers [27], and naturally explain the lack of magnetic binaries. Nevertheless, at this stage, no channel can be completely favoured over another.

In the compact object community there is a somewhat analogous debate. On one side, classical fossil theories defend that magnetic white dwarfs (MWDs) and NS are derived from Ap/B and O-type stars respectively, and that their fields must persist from the main sequence or red giant phase [37, 78, 84]. Another possibility, raised by [7, 74], is that internal dynamos in the convective cores of intermediate-mass and massive stars, externally invisible during the main sequence phase, might develop into strong stable fields by flux compression as the stellar core collapses into a WD. These fields would then be slowly revealed as the WD sheds its outer layers, and decay on secular Ohmic timescales. On the other side of the debate, merger theories [36, 79] advocate an intimate link between magnetism and binarity; in such scenarios, two common-envelope stars would generate a magnetic field through differential rotation and merge to form a strong-field MWD. Closely interacting systems which failed to completely merge might instead develop into magnetic cataclysmic variables. Finally, alternative theories support the operation of some dynamo mechanism during the cooling of the WD, e.g. during the crystallisation convection of the core in a rapidly rotating WD [44, 71]. In support of the fossil hypothesis in WDs and NS is the striking similarity in magnetic flux between this group and the main sequence A/B/O stars, as well as the long field decay timescales, estimated to be on the order of tens or hundreds of billions of years [25, 33]. Conversely, the progenitors of non-magnetic WDs would be low-mass stars, which are known to harbor relatively weak dynamo-driven fields. This contrasting spectral origin is consistent with the observation that MWDs are on average more massive than their non-magnetic counterparts [7, 57]. However, merger scenarios would also naturally explain such a disparity. Against the fossil field hypothesis, it has been argued that there is an insufficient volume density of Ap/Bp stars to by itself account for the high occurrence of MWDs, as required for the classical fossil theory to hold [7, 37, 48]. Furthermore, many surveys have pointed out a sparsity of known detached binaries composed of a MWD plus a non-degenerate companion [33, 58], whereas conversely, magnetism amongst cataclysmic variables is ubiquitous, with about a quarter of these WDs reaching the high-field range ($B \geq 1$ MG). This has propelled the suggestion that magnetism and binarity in WD systems are intrinsically connected, leading to the advent of merger hypotheses. However, as pointed out by [7, 54], at the present moment, at least five such detached MWD binary systems are known, and this frequency may be higher than previously thought.
Evidently, current observational data are insufficient to completely rule out one formation channel or another, and indeed, multiple channels may be at work simultaneously. Further studies of magnetic binary interactions can provide crucial insights to resolve this debate, in which binarity has been shown to be a key element.
In particular, magnetism can play an important role in the dynamics of stellar, compact, and planetary systems, shaping the long-term evolution of their orbits [10,16]. In gravitational wave (GW) astronomy, the upcoming generation of interferometric detectors such as the Laser Interferometer Space Antenna (LISA; [5]) and the Einstein Telescope (ET; [60]) will provide enough sensitivity to probe the interactions of compact systems and to characterise their magnetic attributes. On one hand, this can enable the composition of new magnetic population models and bring insight to the nature and origin of internal fields. On the other hand, a careful understanding of the dynamics of magnetism in these systems is also required to avoid biases in the calibration of the instrument and in the interpretation of signals into physical parameters. Indeed, the secular (long-term) impact of magnetism on the orbits will manifest as a definitive signature on the GWs, which must be correctly accounted for [10,22,59,Savalle et al. in prep.].
It is thus imperative to study the secular evolution of the fields themselves, their binary coupling and the interplay with stellar orientation. In the case of stable, rigid fields, this translates to investigating the rotational dynamics of the stars, which may include their states of equilibrium. In purely tidally-driven systems, spin motion and stability have long since been determined by [42,43]. In the case of star-planet systems, [26] further explored the interaction between tides and magnetic braking, where pressure-driven stellar winds give rise to the loss of angular momentum, and [76] determined the relative strengths of the tidal and magnetic effects in magnetic star-planet interactions. The long-term effects of static dipole fields on stellar rotation, however, have yet to be completely explored. In this regard, the works of [65] and [49] investigate the spin equilibrium under purely dipolar interactions, but they neglect the coupling with orbital dynamics. In particular, they explore the cases where the obliquity between the dipole and spin axes is constrained to 0° (aligned) and 90° (perpendicular), respectively.
In this work, we direct our attention towards magnetic binary interactions -in particular, towards stars with strong, stable, and predominantly dipolar fields. We consider the magnetic moments of these stars to be aligned with the stellar spin, and investigate the secular evolution of each star's orientation due to the mutual magnetic torques, through an effective orbit-averaged description. We provide criteria for determining whether magnetism dictates the stellar rotational motion, which may then reflect on the secular evolution of the binary's orbits. Our study can be applied to any type of star system (MMS, WD, and NS constituents), as long as both components of the binary are magnetic and dominated by dipolar terms. In this way, it can be useful to gather further understanding of the formation processes of MMS stars as well as to ensure an efficient data processing of the LISA or ET observations in the context of compact-star binaries.
The paper is subdivided as follows. The complete physical setup and assumptions of our model are described in Sect. II. We derive, in Sect. III, the effective secular orientation dynamics of the spins of each star due to dipole-dipole interactions. We then provide in Sect. IV an analysis of the equilibrium states and their respective stability, developing a simple analytical solution for spin precession which is valid for quasi-stable systems. Our results are verified numerically and applied to a system possibly satisfying our requirements (Sect. V). Finally, we compare the contrasting results with respect to the traditional instantaneous equilibrium (Sect. VI), highlighting each method's advantages and differences.
a. Notations and conventions. We presently introduce the notation used throughout the paper. For each vector u element of some vector space U, we represent its norm in light typeface u = |u| and its direction by a hat û = u/u. Whenever two vectors are parallel, we symbolise this relationship by u ∥ v; similarly, two perpendicular vectors are denoted by u ⊥ v. We represent the vector space dual to U by a starred U*. In this setting, we denote by an underscore u ∈ U* the associated canonical dot-product covector, that is, the linear functional from U to R that satisfies u : v ↦ (u · v). For two vector spaces U and V, we represent their Cartesian product by U × V = { (u, v) ; u ∈ U, v ∈ V } and their tensor product by U ⊗ V = { u ⊗ v ; u ∈ U, v ∈ V }. Finally, whenever two vector subspaces are disjoint, U ∩ V = { 0 }, their sum is direct and is denoted by U ⊕ V = { u + v ; u ∈ U, v ∈ V }.
II. MAGNETIC BINARY MODEL
In this section, we present the physical setup and the assumptions used throughout the paper. Then, we proceed to rederive the instantaneous magnetically-driven precession equations that govern the rotational state of the system.
Consider an isolated binary system of point-like magnetised bodies, dominated by non-relativistic motion. We assign to each body an index ℓ ∈ {1, 2}, used throughout the paper, and we refer to them as the 'primary' and 'secondary', respectively. For each star we introduce a position x_ℓ, a mass m_ℓ, a radius R_ℓ, a magnetic field B_ℓ, and an intrinsic angular momentum (or spin) s_ℓ. We place ourselves in the reference frame of the centre-of-mass (CM) of the system, to which we attach a right-handed basis e_0 = (ê_x, ê_y, ê_z) spanning the Euclidean tangent space E_3 ≅ R^3. The elements of e_0 are chosen such that ê_x points towards the direction of closest approach, ê_z is orthogonal to the orbital plane, and ê_y completes the basis. In this frame, the system can be viewed as an effective one-body problem, parametrised by the relative separation r = x_2 − x_1. In the absence of non-Keplerian perturbations, the CM frame will be inertial and the elements of e_0 will be static. The setup is illustrated in Fig. 1.

Figure 1. Binary system in the reference frame of the centre-of-mass. To simplify the drawing, the primary is placed at the origin (CM). We include the spin axis ŝ_ℓ of each star (ℓ ∈ {1, 2}) and the basis e_0 = (ê_x, ê_y, ê_z).
We consider a stable magnetic field that is rigidly frozen into each star, compatible with general observations in massive stars and compact objects. The field is assumed to be predominantly dipolar, which captures most observed topologies (see the off-centred dipole model [1,9]), although quadrupoles and octupoles have been detected in some cases [8,29,52,55,56,83]. In this scenario, we model B_ℓ as a centred dipole, given as a function of some point x outside the surface of the star:

B_1(x) = (µ_0/4π) [ 3 (µ_1 · (x − x_1)) (x − x_1) / |x − x_1|^5 − µ_1 / |x − x_1|^3 ],   (1a)
B_2(x) = (µ_0/4π) [ 3 (µ_2 · (x − x_2)) (x − x_2) / |x − x_2|^5 − µ_2 / |x − x_2|^3 ],   (1b)
where µ_ℓ is the magnetic dipole moment of each star, and µ_0 is the vacuum permeability. The primary will feel the field B_2 of its companion at relative position x = −r, and the secondary at x = r. It is convenient to express these fields in terms of the linear map B_r : E_3 → E_3, which acts on the magnetic dipole moments according to:

B_1(r) = (µ_0/4π) B_r(µ_1),   B_2(−r) = (µ_0/4π) B_r(µ_2).   (2)
B_r can be identified with the symmetric tensor field with values in E_3 ⊗ E*_3,

B_r = (1/r^3) (3 r̂ ⊗ r̂ − I),   (3)
with I denoting the identity in E_3. As is observed in the known doubly-magnetic MMS binary system ε Lupi [65,72], we examine the particular case where the fields are symmetric about the star's axis of rotation, with alignment between the magnetic dipole moment and the spin (throughout the text, we shall assume µ_ℓ positive, but µ_ℓ < 0 is also allowed, with appropriate sign changes in the equations):

µ_ℓ(t) = µ_ℓ ŝ_ℓ(t).   (4)
In practice, the magnetic moment amplitudes may be expressed in terms of the dipolar field evaluated at the poles, B_p^ℓ ≡ B_ℓ(R_ℓ ŝ_ℓ) [cf. Eq. (1)], which can be observationally estimated via spectropolarimetry [see e.g. 72]. We can thus invert Eq. (1) to deduce:

µ_ℓ = (2π/µ_0) B_p^ℓ R_ℓ^3.   (5)
The magnetic field of each star will interact with the dipole of the companion, inducing the following torques:

Γ_B^1 = µ_1 × B_2(−r),   (6a)
Γ_B^2 = µ_2 × B_1(r).   (6b)
Simultaneous contributions due to gravity exist. We introduce the total torque felt by body ℓ,

Γ_ℓ = Γ_B^ℓ + Γ_fig^ℓ + Γ_tide^ℓ + (...),   (7)
where Γ_B^ℓ, Γ_fig^ℓ and Γ_tide^ℓ are the respective contributions from the magnetic interaction, figure effects (rigid extended-body interactions), and tides. In order to quantify the relative strength of the dipole-dipole interaction, we introduce the dimensionless parameter γ_ℓ:

γ_ℓ = |Γ_ℓ − Γ_B^ℓ| / |Γ_B^ℓ| ≤ γ_fig^ℓ + γ_tide^ℓ + (...),   (8)

where γ_fig^ℓ = |Γ_fig^ℓ|/|Γ_B^ℓ| and γ_tide^ℓ = |Γ_tide^ℓ|/|Γ_B^ℓ| are the contributions to γ_ℓ due to figure effects and due to tides, respectively. Our main interest lies in isolating the equilibrium dynamics of magnetic effects, and we shall therefore consider the regime where Γ_B^ℓ is dominant. More explicitly, we assume γ_fig^ℓ ≪ 1 and γ_tide^ℓ ≪ 1, for which we shall presently derive criteria. The first assumption concerns the strength of figure effects. For a deformed extended body, classical gravitational torques up to quadrupole order have magnitudes roughly around Γ_fig^ℓ ∼ (3/2) (G m_1 m_2 / r) J_2^ℓ (R_ℓ/a)^2, where J_2^ℓ is the dimensionless quadrupole moment, a is the semi-major axis of the orbit, and G is the gravitational constant [see e.g. 67]. The corresponding contribution to γ_ℓ is

γ_fig^ℓ = Γ_fig^ℓ/Γ_B^ℓ ∼ (3Gµ_0/2π) [ m_1 m_2 / (B_p^1 B_p^2 R_1^3 R_2^3) ] R_ℓ^2 J_2^ℓ.   (9)
Eq. (9) shows that the ratio γ_fig^ℓ is mainly scaled by the surface magnetic field strength, sphericity, and mean density of each stellar component. In this work, we shall be considering perfectly spherical stars with J_2^ℓ = 0, in which case we formally have no figure torques. In practice, the cutoff to J_2^ℓ below which rotation is driven by magnetism is given by

J_2^ℓ ≪ (2π/3Gµ_0) B_p^1 B_p^2 R_1^3 R_2^3 / (m_1 m_2 R_ℓ^2),   (10)
and can be used as a criterion for the domain of validity of our models. Similarly, we turn our attention to tidal interactions, which generate torques that scale as Γ_tide^ℓ ∼ 6 (G m_m^2 R_ℓ^5 / a^6)(k_2^ℓ/Q_ℓ), where m_m is the mass of the tide-inflicting body, k_2^ℓ is the gravitational Love number of body ℓ, and Q_ℓ is its tidal dissipation quality factor [see e.g. 67,76]. Then,

γ_tide^ℓ = Γ_tide^ℓ/Γ_B^ℓ ∼ (6Gµ_0/π) [ m_m^2 / (B_p^1 B_p^2 R_1^3 R_2^3) ] (R_ℓ^5/a^3)(k_2^ℓ/Q_ℓ),   (11)
and the corresponding cutoff criterion for k_2^ℓ/Q_ℓ is:

k_2^ℓ/Q_ℓ ≪ (π/6Gµ_0) [ B_p^1 B_p^2 R_1^3 R_2^3 / m_m^2 ] (a^3/R_ℓ^5).   (12)

Table I shows typical values for γ_fig and γ_tide in MMS, WD and NS systems. At such distance scales (a ∼ 10^8-10^9 km), the thresholds for k_2/Q are well within the range to allow magnetically-driven NS-NS systems, while tidal effects may have dominant contributions in MMS-MMS binaries, and WD-WDs lie somewhere in-between. However, even for the most magnetic systems (NS-NS binaries), in order for the rotational dynamics not to be dominated by figure effects, we require that the quadrupole moment J_2 be at most on the order of 10^-6.
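As a rough numerical illustration, the following sketch evaluates the cutoffs of Eqs. (10) and (12) in SI units for the NS-NS scenario of Table I; the helper names and the Kepler-III estimate of the separation are our own assumptions:

```python
import numpy as np

G, MU0, M_SUN = 6.674e-11, 4e-7 * np.pi, 1.989e30

def j2_cutoff(Bp1, Bp2, R1, R2, m1, m2, R_l):
    # Eq. (10): largest J2 for which magnetism still drives the rotation.
    return 2 * np.pi / (3 * G * MU0) * Bp1 * Bp2 * R1**3 * R2**3 / (m1 * m2 * R_l**2)

def k2q_cutoff(Bp1, Bp2, R1, R2, m_comp, a, R_l):
    # Eq. (12): analogous cutoff for the tidal parameter k2/Q.
    return np.pi / (6 * G * MU0) * Bp1 * Bp2 * R1**3 * R2**3 * a**3 / (m_comp**2 * R_l**5)

# NS-NS row of Table I: B_p = 1e15 G = 1e11 T, R = 15 km, m = 1.4 M_sun, P_orb = 1 h.
Bp, R, m = 1e11, 15e3, 1.4 * M_SUN
a = (G * 2 * m * (3600.0 / (2 * np.pi))**2) ** (1 / 3)  # Kepler III estimate
print(f"J2 cutoff   ~ {j2_cutoff(Bp, Bp, R, R, m, m, R):.0e}")    # ~ 2e-6
print(f"k2/Q cutoff ~ {k2q_cutoff(Bp, Bp, R, R, m, a, R):.0e}")   # ~ 7e6
```

The J2 output recovers the ~10^-6 threshold quoted above, and the very large k_2/Q cutoff confirms that tides are essentially irrelevant for NS-NS rotation in this regime.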
Regardless, under spherical, rigid-star assumptions, the spins will undergo precession due to purely magnetic torques Γ_B^ℓ; these torques are orthogonal to the intrinsic angular momentum, and hence the magnitude s_ℓ must be conserved. Substituting each term and normalising by s_ℓ, we obtain coordinate-free spin equations:
dŝ_1/dt = −α_1 B_r(ŝ_2) × ŝ_1,   (13a)
dŝ_2/dt = −α_2 B_r(ŝ_1) × ŝ_2,   (13b)

with α_ℓ = µ_0 µ_1 µ_2 / (4π s_ℓ). We denote Eq. (13) the 'instantaneous' precession system. The spin axes are constrained to the unit sphere S^2, yielding a total of four degrees of freedom for the coupled system of equations. Each spin precesses around a (time-dependent) axis ω̂_ℓ determined by the field direction, ω_ℓ ∝ B_r(ŝ_m), where m ∈ {1, 2}, m ≠ ℓ, is the index of the companion star, with Larmor frequency given by

ω_ℓ = |α_ℓ B_r(ŝ_m)| ∼ α_ℓ / r^3.   (14)
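To make the structure of Eq. (13) concrete, here is a small integration sketch with a frozen separation vector and arbitrary unit constants (an idealisation for illustration only, since in reality r evolves along the orbit); it checks that the spin norms are conserved under the purely precessional torques:

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha = (1.0, 0.5)                     # alpha_1, alpha_2 (arbitrary units)
r_vec = np.array([1.0, 0.0, 0.0])      # separation frozen for this snapshot

def B_r(r_vec):
    # Linear map of Eq. (3).
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    return (3 * np.outer(rhat, rhat) - np.eye(3)) / r**3

def rhs(t, y):
    # Instantaneous precession system of Eq. (13).
    s1, s2 = y[:3], y[3:]
    return np.concatenate([-alpha[0] * np.cross(B_r(r_vec) @ s2, s1),
                           -alpha[1] * np.cross(B_r(r_vec) @ s1, s2)])

y0 = np.array([0.0, 1.0, 0.0, 1.0, 0.0, 0.0])   # s1 perpendicular to s2
sol = solve_ivp(rhs, (0.0, 20.0), y0, rtol=1e-10, atol=1e-12)
print("max |s1| drift:", np.abs(np.linalg.norm(sol.y[:3], axis=0) - 1).max())
```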
Table I. Typical binary system physical parameters (high-field range), and the corresponding dimensionless parameters: γ_fig = Γ_fig/Γ_B, the ratio between figure effects and magnetic torques [cf. Eq. (9)]; γ_tide = Γ_tide/Γ_B, the ratio between tidal and magnetic torques [cf. Eq. (11)]; η = F_B/F_N, the ratio between forces [Eq. (18)]; and ε = P_orb/τ, the ratio between dynamical timescales [Eqs. (15)-(16)]. We assume a binary system with two components of the same kind, that is, MMS-MMS, WD-WD or NS-NS pairs.

| Scenario | m_1, m_2 | R_1, R_2 | B_p^1, B_p^2 | P_1, P_2 | P_orb | γ_fig = Γ_fig/Γ_B | γ_tide = Γ_tide/Γ_B | η = F_B/F_N | ε = P_orb/τ |
| MMS | 10 M_⊙ | 4.5 R_⊙ | 10^4 G | 5 d | 5 d | 2×10^8 J_2 | 2×10^6 k_2/Q | 5×10^-10 | 2×10^-9 |
| WD | 1 M_⊙ | 10^4 km | 10^9 G | 1 h | 10 h | 2×10^6 J_2 | 7×10^-1 k_2/Q | 1×10^-10 | 1×10^-7 |
| NS | 1.4 M_⊙ | 15 km | 10^15 G | 10 min | 1 h | 6×10^5 J_2 | 7×10^-8 k_2/Q | 7×10^-15 | 5×10^-7 |

The orbital dynamics will induce periodic fluctuations of the separation r, causing the precession axis and frequency
to oscillate in time with period P_orb. It is clear then that two distinct timescales will be involved in the dynamics of spin precession: (1) a timescale corresponding to the orbital period P_orb, manifesting in the 'wobbles' of the axis ω̂_ℓ and in the modulation of the frequency ω_ℓ; as well as (2) a timescale τ_ℓ due to an average precession rate ⟨ω_ℓ⟩, which we define as:
τ_ℓ = 2π/⟨ω_ℓ⟩ ≡ 2π b^3 / α_ℓ,   (15)
where b = √(r_min r_max) is the geometric mean between the separation at the pericentre and at the apocentre of the orbit. In an elliptical orbit, b corresponds to the semi-minor axis.
III. SECULAR PRECESSION
We are presently interested in determining the equilibrium configurations of the precession system and their stability. However, the instantaneous equilibrium obtained by equating (13) to zero does not take into account the orbital dynamics; the strong dependence of the torques on the orbital position of each star implies that the configurations of equilibrium may largely fluctuate as the orbit evolves, which occurs in the fast timescale P orb . Indeed, a configuration that was momentarily stable at some point in the orbit may be disrupted by the orbital motion, leading to instability. Conversely, there may exist configurations where the spins oscillate in the fast timescale P orb , but on a longer timescale can be seen to be stable, due to an effective cancellation of the fluctuations. It is therefore in our interest to search for states of secular (long-term) equilibrium and stability. To do this we must eliminate the effects of these oscillatory terms of short period P orb , which can be performed by employing an orbital averaging scheme to obtain the effective dynamics. Variants of this method are widely adopted for determining secular solutions for the orbital motion [10,40,67,68,85].
For the binary systems considered in this work (magnetic MMS, WDs and NS), the two timescales (P_orb and τ_ℓ) of Eq. (13) will be distinct enough that their effects can be isolated. Indeed, if the torque strength acting on a star is small enough, then its spin axis will not be significantly affected within a single orbital revolution. Intuitively, the impact of the torques on the spin axis is captured by the spin precession rate ω_ℓ. A larger precession rate (or smaller precession timescale τ_ℓ) implies faster changes to the axis ŝ_ℓ due to stronger torques. Thus, one may explicitly compute the characteristic time-ratio ε_ℓ between the orbital timescale P_orb and the precession timescale τ_ℓ:
ε_ℓ = P_orb/τ_ℓ ∼ (5π/2Gµ_0) [ B_p^1 B_p^2 R_1^3 R_2^3 / ((m_1 + m_2) m_ℓ R_ℓ^2) ] (P_ℓ/P_orb) (1 − e^2)^{−3/2},   (16)
where P_ℓ is the rotational period of body ℓ and e the eccentricity of the orbit. To derive the above order-of-magnitude relation, we have considered as a rough estimate that the mean separation is given by Kepler's third law, a^3 ∼ G(m_1 + m_2) P_orb^2/(4π^2), and b ∼ a √(1 − e^2). We stress that these relationships are still valid as order-of-magnitude estimates for relativistic systems. We have also estimated the spin magnitude for each star from that of a homogeneous sphere:
s_ℓ ∼ (4π/5) m_ℓ R_ℓ^2 / P_ℓ.   (17)
In all three types of systems considered (Table I) we obtain very low values for ε_ℓ, namely ε_MMS ∼ 10^-9, ε_WD ∼ 10^-7 and ε_NS ∼ 10^-6. We therefore place ourselves in the scenario ε_ℓ ≪ 1. As discussed, in this scenario the spin axes will suffer very little variation within the time-frame of a single orbit. We can therefore consider an effective precession dynamics which averages out these small orbital oscillations. Conversely, for systems with ε_ℓ ≳ 1 the two timescales cannot be decoupled in this manner. In this way, the time-ratio parameter ε_ℓ can be used as a criterion for the validity of the averaging procedure that follows.
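A quick way to reproduce these orders of magnitude is to evaluate Eq. (16) directly. The sketch below does so in SI units for the NS-NS row of Table I; the constants and helper names are illustrative choices of our own:

```python
import numpy as np

G, MU0, M_SUN = 6.674e-11, 4e-7 * np.pi, 1.989e30

def eps(Bp1, Bp2, R1, R2, m1, m2, m_l, R_l, P_l, P_orb, e=0.0):
    # Time-ratio of Eq. (16); all quantities in SI units.
    return (5 * np.pi / (2 * G * MU0)
            * Bp1 * Bp2 * R1**3 * R2**3 / ((m1 + m2) * m_l * R_l**2)
            * (P_l / P_orb) * (1 - e**2) ** -1.5)

# NS-NS row of Table I: B_p = 1e15 G, R = 15 km, m = 1.4 M_sun, P = 10 min, P_orb = 1 h.
m, R, Bp = 1.4 * M_SUN, 15e3, 1e11
print(f"eps_NS ~ {eps(Bp, Bp, R, R, m, m, m, R, 600.0, 3600.0):.0e}")  # ~ 5e-7
```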
For simplicity of the averaging model, we place ourselves in a classical Newtonian framework, although relativistic corrections are possible. A rough estimate of the impact of magnetic fields on the orbit can be obtained by comparing the magnetic force F B ∼ 3µ 0 µ 1 µ 2 /(4πa 4 ) and the gravitational force F N ∼ Gm 1 m 2 /a 2 :
η = F_B/F_N ∼ (3π/Gµ_0) [ B_p^1 B_p^2 R_1^3 R_2^3 / (m_1 m_2) ] (1/a^2).   (18)
Even for the most magnetic systems considered, this ratio is limited to F_B/F_N ≲ 10^-10 (Table I). We can therefore consider magnetism negligible in the orbital dynamics, both for MMS stars and compact systems. In this setting, the orbital frame basis (ê_x, ê_y, ê_z) is inertial, the orbits are elliptical, and the separation r = r r̂ can be parametrised as an ellipse in the centre-of-mass frame:
r̂(f) = ê_x cos f + ê_y sin f,   r(f) = a (1 − e^2) / (1 + e cos f),   (19)
where f is the true anomaly, a the semi-major axis and e the eccentricity of the orbit. The Keplerian solution determines the relationship between f and t (the time with respect to the reference pericenter passage):
n t ≡ E − e sin E  (mod 2π),   (20a)
E = arctan2( e + cos f, √(1 − e^2) sin f ),   (20b)

where arctan2 represents the 2-argument inverse tangent, E is the eccentric anomaly, and n = √(G(m_1 + m_2)/a^3) is the mean angular motion.
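For reference, the time-anomaly maps of Eq. (20) can be implemented in a few lines; the sketch below inverts Kepler's equation by Newton iteration (a standard celestial-mechanics method, not specific to this work). Note that numpy's arctan2 takes its arguments in the order (y, x), which is why the calls below are written with the sine term first:

```python
import numpy as np

def t_from_f(f, e, n):
    # Eqs. (20): time since pericentre passage from the true anomaly.
    E = np.arctan2(np.sqrt(1 - e**2) * np.sin(f), e + np.cos(f))
    return ((E - e * np.sin(E)) % (2 * np.pi)) / n

def f_from_t(t, e, n):
    # Invert Kepler's equation M = E - e sin E by Newton iteration,
    # then recover the true anomaly f from the eccentric anomaly E.
    M = (n * t) % (2 * np.pi)
    E = M
    for _ in range(50):
        E -= (E - e * np.sin(E) - M) / (1 - e * np.cos(E))
    return np.arctan2(np.sqrt(1 - e**2) * np.sin(E), np.cos(E) - e) % (2 * np.pi)

e, n = 0.28, 2 * np.pi / 4.56          # eccentricity; mean motion [rad / day]
f0 = 1.0
print(abs(f_from_t(t_from_f(f0, e, n), e, n) - f0))   # ~ 0 (round trip)
```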
To formalize our previous argument, consider the binary system at some given instant t = t 0 + h, with a short-timescale variation h ∈ [0, P orb ]. In essence, we are considering that h parametrises a single full orbital revolution of the binary, beginning at some time t 0 . In this timeframe, the variations in the spin axis are bounded by
|ŝ_ℓ(t_0 + h) − ŝ_ℓ(t_0)| = | ∫_0^h (dŝ_ℓ/dt)(t_0 + u) du |   (21a)
  ≤ sup_{u ∈ [0,h]} |(dŝ_ℓ/dt)(t_0 + u)| P_orb,   (21b)
where the supremum of the derivative of ŝ_ℓ can be obtained from the precession equation (13), with indices ℓ ∈ {1, 2} and m ∈ {1, 2}, where m ≠ ℓ:
|dŝ_ℓ/dt| = α_ℓ |B_r(ŝ_m) × ŝ_ℓ| ≤ (1/τ_ℓ) [ (1 + e)/(1 − e) ]^{3/2}.   (22)
It is easy to see [cf. Eq. (3)] that equality can be reached when ŝ_m = r̂, ŝ_ℓ ⊥ r̂, and r = r_min = a(1 − e). Eq. (13) may then be evaluated at time t = t_0 + h, and using the bounds obtained, expanded via ŝ_ℓ(t_0 + h) = ŝ_ℓ(t_0) + O(ε_ℓ), from whence:
dŝ_ℓ/dt (t_0 + h) = −α_ℓ B_{r(t_0+h)}( ŝ_m(t_0) ) × ŝ_ℓ(t_0) + [ α_ℓ / (a^3 (1 − e^2)) ] O(ε),   (23)

where we have explicitly included the temporal dependence of r = r(t_0 + h) in the subscript of B_r, and defined ε = max(ε_1, ε_2). Since the above equation is valid for any h ∈ [0, P_orb], integrating over a full orbit yields the secular spin equation
dŝ_ℓ/dt = −α_ℓ ⟨B_r⟩(ŝ_m) × ŝ_ℓ + [ α_ℓ / (a^3 (1 − e^2)) ] O(ε),   (24)
where the orbital averaging operator ⟨·⟩ is defined for some function of time ξ as

⟨ξ⟩ = (1/P_orb) ∫_0^{P_orb} ξ(t_0 + h) dh = (n/2π) ∫_0^{2π} ξ̃(f_0 + f) (dt/df) df,   (25)
where ξ̃(f(t)) = ξ(t) is the description of ξ in terms of the true anomaly, and the Jacobian factor is found via implicit differentiation of Eq. (20):

dt/df = (1/n) (1 − e^2)^{3/2} / (1 + e cos f)^2.   (26)
In the Keplerian scenario the orbits are fixed, and the linear map B_r, which depends purely on the radial separation r, is therefore P_orb-periodic. Consequently, the orbital average ⟨B_r⟩ will be constant, independent of the secular time t_0. Plugging the expressions of B_r [Eq. (3)] and of the separation r [Eq. (19)] into Eq. (25), we obtain the effective field tensor in the orbital frame basis (ê_x, ê_y, ê_z):

⟨B_r⟩ = [ 1 / (2a^3 (1 − e^2)^{3/2}) ] ( I − 3 ê_z ⊗ ê_z ).   (27)
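The closed form of Eq. (27) can be checked by carrying out the orbital average of Eq. (25) numerically. The following sketch (arbitrary a and e; the mean motion n cancels in the average) confirms the agreement component by component:

```python
import numpy as np
from scipy.integrate import quad

a, e = 1.0, 0.28

def integrand(f, i, j):
    # (1/2pi) * B_r[i, j] * (n dt/df), using Eqs. (3), (19), (26); n drops out.
    r = a * (1 - e**2) / (1 + e * np.cos(f))
    rhat = np.array([np.cos(f), np.sin(f), 0.0])
    B = (3 * np.outer(rhat, rhat) - np.eye(3)) / r**3
    return B[i, j] * (1 - e**2) ** 1.5 / (1 + e * np.cos(f)) ** 2 / (2 * np.pi)

avg = np.array([[quad(integrand, 0.0, 2 * np.pi, args=(i, j))[0]
                 for j in range(3)] for i in range(3)])
ez = np.array([0.0, 0.0, 1.0])
closed = (np.eye(3) - 3 * np.outer(ez, ez)) / (2 * a**3 * (1 - e**2) ** 1.5)
assert np.allclose(avg, closed, atol=1e-10)   # Eq. (27) recovered
```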
Note that we have essentially averaged out the orbital oscillations of the magnetic field due to the short-timescale elliptical movement, obtaining a corresponding 'average field' operator ⟨B_r⟩. As a consequence, the radial component of B_r has been suppressed, leaving a dipolar effective field with a predominant component in the orthogonal direction ê_z, plus a weaker component aligned with the spin direction.
We denote the normalised term on the right-hand side of Eq. (27) by B̄ = I − 3 ê_z ⊗ ê_z. The final secular form of the spin precession equations is obtained by absorbing the constants together:
dŝ_1/dt = −ν_1 B̄(ŝ_2) × ŝ_1,   (28a)
dŝ_2/dt = −ν_2 B̄(ŝ_1) × ŝ_2,   (28b)
where we have introduced the magnetic rotational frequencies ν_ℓ, coupling magnetism, spins and orbital parameters:

ν_ℓ = (µ_0 µ_1 µ_2 / 4π) (1/s_ℓ) [ 1 / (2a^3 (1 − e^2)^{3/2}) ].   (29)
The above system has four degrees of freedom, two for each spin, since the magnitudes of ŝ_ℓ are conserved quantities of Eq. (28). Additionally, the magnetic interaction energy is also conserved, as will be discussed in Sect. IV A. By performing a linear transformation on the secular time variable, t_0 → ν_1^{-1} t_0, it is possible to reduce the system to a single dimensionless parameter κ = ν_2/ν_1, which depends only on the ratio between the two spin magnitudes. The parameter κ will therefore completely control the dynamics of the system, producing a range of bounded trajectories such as illustrated in Fig. 2. These solutions are roughly epicyclic in nature, described by a predominant precessional motion around the axis ê_z plus an important nutation component. When the respective frequencies of these two motions align as rational multiples of one another, the solutions become periodic. In the following section, we analyse the states of equilibrium of the secular dynamical system. We then proceed to analyse their stability and approximate solutions for trajectories similar to those presented in Fig. 2.

Figure 2. Sample trajectories for the secular evolution of the spin axis of the primary ŝ_1, plotted against the unit sphere for different values of κ = ν_2/ν_1. The initial conditions are fixed, and shown for the primary as a dashed line from the origin to ŝ_1(0). The colours are interpolated between blue and red from initial time to t_0 = 50 ν_1^{-1}, respectively. The black axis represents the direction of ê_z. The behaviour for the secondary is analogous, up to a swap of initial conditions and κ → κ^{-1} = ν_1/ν_2.
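Trajectories of this kind are straightforward to reproduce by integrating Eq. (28) in the rescaled time just described. The sketch below is our own illustration (κ and the initial angles mirror the values used later in Sect. V B); it also verifies that the secular energy of Eq. (35), introduced in the next section, is conserved along the motion:

```python
import numpy as np
from scipy.integrate import solve_ivp

kappa = 0.3
B_bar = np.diag([1.0, 1.0, -2.0])      # matrix of I - 3 e_z (x) e_z

def rhs(t, y):
    # Secular precession system of Eq. (28), rescaled so that nu_1 = 1.
    s1, s2 = y[:3], y[3:]
    return np.concatenate([-np.cross(B_bar @ s2, s1),
                           -kappa * np.cross(B_bar @ s1, s2)])

def sph(theta, psi):
    return np.array([np.cos(psi) * np.sin(theta),
                     np.sin(psi) * np.sin(theta), np.cos(theta)])

y0 = np.concatenate([sph(np.radians(10.0), 0.0),
                     sph(np.radians(172.5), np.radians(50.0))])
sol = solve_ivp(rhs, (0.0, 50.0), y0, rtol=1e-10, atol=1e-12)
s1, s2 = sol.y[:3], sol.y[3:]
U_bar = -np.einsum('it,ij,jt->t', s1, B_bar, s2)   # secular energy, Eq. (35)
print("energy drift:", U_bar.max() - U_bar.min())  # ~ 0 (conserved)
```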
IV. EQUILIBRIUM STATES AND STABILITY
A 'secular' equilibrium state at time t_0 is a pair of spins defined on an orbital period, (ŝ_1, ŝ_2) : [t_0, t_0 + P_orb] → E_3 × E_3, such that the average change in intrinsic angular momentum is zero:

⟨dŝ_1/dt⟩(t_0) = ⟨dŝ_2/dt⟩(t_0) = 0.   (30)
For a purely dipolar magnetic torque, it is clear from Eq. (28) that this condition can only be achieved when the terms of each cross product are either parallel or zero, which we may write more concisely in the form:

B̄(ŝ_2) = λ_1 ŝ_1,   B̄(ŝ_1) = λ_2 ŝ_2.   (31)
This corresponds to a singular-value problem, which can be solved with the explicit matrix form of B̄ in the orbital-frame basis. Two classes of solutions can be determined from the singular vectors of B̄, as described below. In either class, the two spins must be parallel with each other, but they may point in the same direction or in reversed directions. Fig. 3 illustrates the full set of equilibrium configurations.
Case 1. Equilibrium configurations in the orbital plane
The first solution to the singular-value problem corresponds to an equilibrium configuration where the pair of spin axes are both contained inside the orbital plane:
ŝ_1 = σ_1 p̂,   ŝ_2 = σ_2 p̂,   (32)
for some unit vector p̂ in the plane of the orbit (i.e. p̂ · ê_z = 0). The two unitary parameters (σ_1, σ_2) ∈ {−1, 1} × {−1, 1} describe the relative alignment between the two spins: they can either point in the same direction or in reversed directions. Notice that the set of all pairs (ŝ_1, ŝ_2) which satisfy these conditions at a given moment in time forms a space of dimension 1.

Figure 3. Equilibrium configurations of the spin axes. These correspond to directions that are either inside the orbital plane, in some arbitrary direction (case 1, left column), or orthogonal to the orbital plane (case 2, right column). In each case, the spins may either be parallel (top) or anti-parallel (bottom). As discussed in Sect. IV B, the anti-parallel orthogonal configuration (bottom-right) is the only stable scenario.
Case 2. Equilibrium configurations orthogonal to the orbital plane
The second solution to the singular-value problem corresponds to a configuration where both spins are orthogonal to the orbital plane, in other words, parallel to the axis ê_z:

ŝ_1 = σ_1 ê_z,   ŝ_2 = σ_2 ê_z.   (33)
As in the previous case, the spins can be oriented in the same direction or in reverse directions according to the values of σ_ℓ.
A. Magnetic interaction energy between two dipoles
The interaction energy between two magnetic dipoles is given by the symmetric expression below [see e.g. 65]:

U_B(µ_1, µ_2) = −(µ_0/4π) µ_1 · B_r(µ_2).   (34)
We shall denote U_B the 'instantaneous magnetic energy'. Conversely, one may take the orbital average of U_B, normalising the resulting expression by a positive constant factor:

Ū_B(ŝ_1, ŝ_2) = −ŝ_1 · B̄(ŝ_2).   (35)
We denote Ū_B the 'secular magnetic energy'. As can be straightforwardly verified, Ū_B is a constant of motion of the secular system: its time derivative vanishes for any pair (ŝ_1, ŝ_2) of axes satisfying (28). Without considering additional forces or dissipation, the orbit-averaged precession system is conservative, and the motion is restricted to some level curve of constant energy Ū_B. In astrophysical systems, we expect dissipation due to radiation, tidal forces and internal frictions to bring the magnetic system to the lower energy states, by exchanging energy until eventually settling into a local minimum of Ū_B. As we shall see, this local minimum corresponds to a state of stability of the physical system. In Appendix A, we recall that any symmetric bilinear form constrained to the unit n-sphere, U : S^n × S^n → R, is bounded by its largest-magnitude eigenvalue. Applying the principle to the secular magnetic energy [cf. Eq. (35)], one obtains the bounds −2 ≤ Ū_B ≤ 2. The spin configurations in which these bounds are actually attained correspond to ŝ_ℓ along the direction of the associated eigenvector, that is, ŝ_ℓ ∥ ê_z. In fact, such configuration corresponds exactly to the equilibrium states orthogonal to the orbital plane as determined in the previous section [Eq. (33)]. In particular, the energy lower bound is reached in the anti-parallel case (σ_2 = −σ_1), which, as we shall see, is the most stable equilibrium point. In the following section, we analyse the local stability of all the computed equilibrium states from the standpoint of the Hessian form of Ū_B.
B. Stability tests
The stability of each equilibrium configuration can be analysed via the local convexity of the magnetic interaction energy. We evaluate the nature of each of the extrema -minimum, maximum or saddle point -by determining the sign of the Hessian at that point. We remind the reader of its definition.
Consider a real-valued function f of n real variables x = (x_1, ..., x_n) with all partial second-order derivatives. The Hessian of f is a matrix-valued function H : R^n → M_n(R) defined as follows:

H(x) = [ ∂²f/∂x_1²(x)  ···  ∂²f/∂x_1∂x_n(x) ; ⋮ ⋱ ⋮ ; ∂²f/∂x_n∂x_1(x)  ···  ∂²f/∂x_n²(x) ].   (36)
Evaluating the Hessian at some point x ∈ R^n provides a description of the local convexity of f at that point. If x* is a critical point (∇f(x*) = 0) then f can be locally approximated by a quadratic function

f(x* + u) = f(x*) + (1/2) uᵀ H(x*) u + O(|u|³).   (37)
The Hessian matrix at critical point x * can be decomposed into its eigenspace in order to obtain the principal directions of curvature. The sign of the corresponding eigenvalues determine whether each direction is stable or unstable.
For the problem at hand, we wish to study the critical points of the secular energy Ū_B as a function of spin direction. For the following, we define the relative alignment between the spins ŝ_1 and ŝ_2:

σ = σ_1 σ_2.   (38)
Case 1. Equilibrium configurations in the orbital plane
As we have seen, any pair of spin axes which are parallel to each other and contained within the orbital plane will correspond to a secular equilibrium state of the averaged system. Indeed, for any given p̂ · ê_z = 0 in Eq. (32) the energy takes the same value, Ū_B(σ_1 p̂, σ_2 p̂) = −σ, depending only on the relative orientation of the two spins. The sign of σ = σ_1 σ_2 will be positive if the spins are facing the same direction or negative if they are in opposite directions. This invariance with respect to a choice of p̂ in fact reflects the more general axisymmetry of the system with respect to rotations around the axis ê_z. Simultaneous rotations of both spins will leave the energy expression invariant. In the case of this particular equilibrium configuration, the act of choosing one p̂ lying in the orbital plane over another p̂′ corresponds to rotating both spins simultaneously by the angle that takes p̂ → p̂′. The set of all unit vectors p̂ in the orbital plane corresponds to a one-dimensional level-set of constant energy (a circle).
To study the stability of this level-set we can reduce the degrees of freedom of the system from four (two for each spin axis, i.e. S 2 × S 2 ) to three (by removing the rotational degree of freedom). In this reduced three-dimensional manifold, the circle becomes a point, and the remaining three directions of the tangent space determine the convexity of the energy.
As a first step, consider the spins parameterised in spherical coordinates,

ŝ_ℓ = (cos ψ_ℓ sin θ_ℓ, sin ψ_ℓ sin θ_ℓ, cos θ_ℓ),   (39)

with azimuth ψ_ℓ and angle θ_ℓ measured from the north pole. In these coordinates, the secular energy takes the form
Ū_B = 2 cos θ_1 cos θ_2 − cos(ψ_1 − ψ_2) sin θ_1 sin θ_2.   (40)
As discussed above, we can see clearly that simultaneous rotations in the azimuths ψ_1 and ψ_2 do not affect the energy. We can therefore look at the difference ψ_1 − ψ_2 ≡ ∆ψ and discard the redundant degree of freedom ψ_1 + ψ_2. At the equilibrium, the polar angles are θ_1 = θ_2 = π/2, and the azimuth is either ∆ψ = 0 or ∆ψ = π. We consider therefore the three directions of the tangent space of this reduced manifold. In this case, the Hessian matrix of the energy Ū_B with respect to the three variables (∆ψ, θ_1, θ_2) has value:
H_1 = [ σ 0 0 ; 0 σ 2 ; 0 2 σ ].   (41)
The sign of σ = ±1 corresponds to the two cases of the spin directions being aligned or anti-aligned. For either sign, the Hessian has both positive and negative eigenvalues (−1, 1 and 3σ). This implies a saddle point, which is unstable.
Case 2. Equilibrium configurations orthogonal to the orbital plane
In the second case, we consider both spins to be orthogonal to the plane of the orbit. The spin axes ŝ_ℓ therefore will be lying in the north or south pole of their corresponding unit spheres. A natural choice of coordinates for each spin in the neighbourhood of the poles is the Cartesian pair x_ℓ, y_ℓ, which with the unit-norm condition gives

ŝ_ℓ = ( x_ℓ, y_ℓ, σ_ℓ √(1 − x_ℓ² − y_ℓ²) ).   (42)
In these coordinates, the secular magnetic energy can be expressed in the form:

Ū_B = −x_1 x_2 − y_1 y_2 + 2σ √(1 − x_1² − y_1²) √(1 − x_2² − y_2²).   (43)
The corresponding basis for the tangent space is the coordinate basis dx_ℓ and dy_ℓ. With respect to these coordinates the Hessian of the energy takes the value:

H_2 = − [ 2σ 0 1 0 ; 0 2σ 0 1 ; 1 0 2σ 0 ; 0 1 0 2σ ].   (44)
The nature of the eigenvalues of the Hessian changes according to the sign of σ: when the spins are aligned (σ = +1), the Hessian is negative definite, with eigenvalues {−3, −3, −1, −1}, implying a point of maximum energy and therefore instability; when the spins are in opposing directions (σ = −1), the eigenvalues are {1, 1, 3, 3} and the Hessian is positive definite, which implies a point of minimum energy and hence stability. This suggests that, given enough time and in the absence of stronger torques, magnetically interacting systems will naturally converge towards the stable anti-aligned orthogonal configuration. The timescale of this convergence will depend on the strength of the dissipation effects involved.
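These spectra can be confirmed in a few lines; the sketch below simply evaluates the eigenvalues of H_1 [Eq. (41)] and H_2 [Eq. (44)] for both signs of σ:

```python
import numpy as np

for sigma in (+1, -1):
    H1 = np.array([[sigma, 0, 0], [0, sigma, 2], [0, 2, sigma]], float)
    H2 = -np.array([[2 * sigma, 0, 1, 0], [0, 2 * sigma, 0, 1],
                    [1, 0, 2 * sigma, 0], [0, 1, 0, 2 * sigma]], float)
    print(sigma, np.linalg.eigvalsh(H1), np.linalg.eigvalsh(H2))
# sigma = +1: H1 has mixed signs (saddle); H2 is negative definite (maximum).
# sigma = -1: H1 has mixed signs (saddle); H2 is positive definite (stable minimum).
```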
C. Solutions near equilibrium
In Appendix B, we determine solutions near the stable equilibrium state. These solutions can be plugged in to derive complete secular orbital dynamics on systems which present strong magnetic torques, such as WD or NS binaries in the context of gravitational wave emission [for further details, see e.g. 10,59]. We present below the main results:
The spins can be decomposed as ŝ_ℓ = x_ℓ ê_x + y_ℓ ê_y + z_ℓ ê_z, where the orbital-plane components satisfy

x_ℓ(t) = ρ_ℓ⁺ cos(ω_p⁺ t + φ_ℓ⁺) + ρ_ℓ⁻ cos(ω_p⁻ t + φ_ℓ⁻),   (45a)
y_ℓ(t) = ρ_ℓ⁺ sin(ω_p⁺ t + φ_ℓ⁺) + ρ_ℓ⁻ sin(ω_p⁻ t + φ_ℓ⁻),   (45b)
and the orthogonal component satisfies
z_1(t) = (1/2ν_2) [ ζ + Q̄_z + ρ_z cos(ω_z t + φ_z) ],   (46a)
z_2(t) = (1/2ν_1) [ ζ − Q̄_z − ρ_z cos(ω_z t + φ_z) ],   (46b)
with real parameters ρ_ℓ^±, φ_ℓ^±, ρ_z, φ_z, dependent on initial conditions. The expressions of the constants ζ and Q̄_z and of the frequencies ω_z, ω_p⁺ and ω_p⁻ are given in the appendix.
V. NUMERICAL VALIDATION
We present in this section a numerical verification of the results that were obtained in Sect. IV. In the first part, we demonstrate the derived (in-)stability of each equilibrium configuration. In the second part, we compare the obtained analytical solutions to numerical integration, in the neighbourhood of the stable equilibrium.
A. Stability of the equilibrium configurations
We artificially introduce dissipation into the dynamics of the secular system and numerically show that it is driven towards the stable states. Such a dissipative effect must preserve the norm condition on unit vectors and manifest as a friction when orientation changes. For this we include a time-delay term into the magnetic field that is felt by each companion star. By our previous arguments in Sect. III, it is direct to see that this term will propagate to the secular scale as follows:
dŝ_1/dτ (τ) = ŝ_1(τ) × B̄( ŝ_2(τ − ∆τ) ),   (47a)
dŝ_2/dτ (τ) = κ ŝ_2(τ) × B̄( ŝ_1(τ − ∆τ) ),   (47b)
where we have used the re-scaled dimensionless time τ → ν −1 1 τ and κ = ν 2 /ν 1 = s 1 /s 2 presented at the end of Sect. III. For convenience, we consider the delay to be an order of magnitude smaller than the average torque timescale, ∆τ ∼ 0.1 κ −1/2 . This provides us with dissipative effects visible on the simulation timescales.
In order to assess stability, we consider two metrics: the secular magnetic interaction energy Ū_B [see Eq. (35)]; and the angular distance to the stable equilibrium point (ê_z, −ê_z), which we define as:

d_1 = arg(ŝ_1, +ê_z),   (48a)
d_2 = arg(ŝ_2, −ê_z),   (48b)

where arg(u, v) = arctan2(|u × v|, u · v) is the angle between any two vectors u and v.
The dimensionless system is then integrated until convergence, for four sets of initial spin conditions (Table II), and with spin ratio κ = 0.3. Each initial condition corresponds to an unstable equilibrium state plus a small perturbation on the order of ∼1°. For completeness, we also include a perturbation of the stable state (∼10°). The resulting trajectories are plotted in Fig. 4, together with the time evolution of the secular energy Ū_B and of the angular distances d_1 and d_2. Even for a small initial angular perturbation, the spin axes diverge from their original unstable equilibrium and converge towards the stable, anti-aligned orthogonal configuration.

Figure 4. Evolution of the binary system on a secular timescale with dissipation introduced. Each column corresponds to a given initial condition from Table II, chosen in a neighbourhood of an equilibrium point. On the top row: the trajectory of each spin axis ŝ_ℓ is plotted against the unit sphere, in blue for the primary and green for the secondary. Colours are interpolated towards red from initial time to final time of convergence. Initial conditions for the spin axes are portrayed as dashed lines from the origin to ŝ_ℓ(0). The orbital plane is represented in a darker shade with black contours, and the axis ê_z as a vertical black arrow. On the middle row: the secular magnetic interaction energy. On the bottom row: the angular distance of each spin axis with respect to the stable equilibrium point (ê_z, −ê_z).

Table II. Initial conditions in four distinct stability simulations (a-d). Each spin axis is parametrised in polar coordinates with azimuth ψ_ℓ and angle θ_ℓ measured from the north pole; the columns list θ_1 (°), θ_2 (°), ψ_1 (°), ψ_2 (°).
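A minimal version of this numerical experiment is sketched below: a fixed-step Euler integration of the delayed system of Eq. (47) with renormalisation onto the unit sphere. The step size, delay handling and initial perturbation are our own illustrative choices, not the exact setup used for Fig. 4:

```python
import numpy as np

kappa, dt = 0.3, 3e-3
lag = int(0.1 / np.sqrt(kappa) / dt)          # delay ~ 0.1 kappa^(-1/2)
B_bar = np.diag([1.0, 1.0, -2.0])

def sph(theta, psi):
    return np.array([np.cos(psi) * np.sin(theta),
                     np.sin(psi) * np.sin(theta), np.cos(theta)])

# Perturbed aligned orthogonal state (unstable, sigma = +1).
s1 = [sph(np.radians(1.0), 0.0)]
s2 = [sph(np.radians(1.5), np.radians(40.0))]
for k in range(100_000):
    j = max(0, k - lag)                        # delayed companion state
    for s, other, nu in ((s1, s2, 1.0), (s2, s1, kappa)):
        v = s[-1] + dt * nu * np.cross(s[-1], B_bar @ other[j])
        s.append(v / np.linalg.norm(v))        # keep on the unit sphere
d1 = np.degrees(np.arccos(np.clip(s1[-1] @ np.array([0, 0, 1.0]), -1, 1)))
d2 = np.degrees(np.arccos(np.clip(s2[-1] @ np.array([0, 0, -1.0]), -1, 1)))
print(f"d1 = {d1:.1f} deg, d2 = {d2:.1f} deg")
```

Here d_2 starting near 178° and decreasing over the run signals the drift away from the unstable aligned state and towards (ê_z, −ê_z); longer integrations would show full convergence.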
B. Solutions near equilibrium
In this part, we present a brief application of the analytical solutions of the secular equations that were presented in Sect. IV C. The equations are expressed in dimensionless time, and we adopt a spin ratio of κ = 0.3. The following initial conditions are considered (polar coordinates): inclinations θ_1 = 10° and θ_2 = 172.5° from the north pole, and azimuths ψ_1 = 0 and ψ_2 = 50°. Figure 5 compares the obtained analytical expression of the primary to numerical integration, decomposed in the Cartesian basis ŝ_1(τ) = x_1(τ) ê_x + y_1(τ) ê_y + z_1(τ) ê_z. Matching peaks can be observed in the power spectra of both solutions, at the obtained angular frequencies (in dimensionless units) ω_p⁻ = 0.47 rad and ω_p⁺ = 1.86 rad for the orbital components (x_1, y_1), and at ω_z = 2.34 rad for the orthogonal component z_1 (cf. Appendix B).

Figure 5. Comparison between the analytical precession model (blue) and numerical integration (red), close to stable equilibrium. On the leftmost column: the components of the spin of the primary, in the time domain. The corresponding Discrete Fourier Transform F{ŝ_1} is given in absolute value (power spectrum in decibels, middle column), and in complex argument (rightmost column). The time and frequency axes are expressed in the re-scaled dimensionless units.
VI. DISCUSSION
We have so far defined the concept of a secular equilibrium state and applied it to obtain the equilibrium dynamics of binary systems with two magnetic components. In this section, we begin by discussing the fine points between secular and instantaneous equilibrium. Then, we apply our results to a real astrophysical scenario, the ε Lupi magnetic binary.
A. Comparison between instantaneous and secular equilibrium states
In Section IV, we defined the secular equilibrium as the spin configurations where the net torque over an orbital period is effectively zero. These states contrast with the instantaneous equilibrium, the configurations of zero torque on the instantaneous precession system [Eq. (13)]. In the latter scenario, the spin dynamics are considered at a single moment in time and at a fixed orbital position. When orbital motion is introduced, an instantaneous equilibrium state may be destabilised. This can be seen in the following manner. Consider a bounded orbit parametrised by the osculating true anomaly f = f(t). Expressing the spins in spherical coordinates, the instantaneous magnetic energy U_B [Eq. (34)] takes the form:

U_B(t) = [ µ_0 µ_1 µ_2 / (4π r^3) ] [ µ̂_1 · µ̂_2 − 3 sin θ_1 sin θ_2 cos(ψ_1 − f(t)) cos(ψ_2 − f(t)) ].   (49)

Whereas the dot-product term on the right-hand side is invariant to an orbital translation of the bodies, the cosine terms will oscillate with the orbital dynamics. Take two instants within the same orbital revolution, t_1 and t_2, such that the true anomaly at each instant equals f(t_1) ≡ (ψ_1 + ψ_2)/2 and f(t_2) ≡ (ψ_1 + ψ_2 + π)/2. Then the difference in energy between those two instants will be roughly

U_B(t_2) − U_B(t_1) ∼ [ 3 µ_0 µ_1 µ_2 / (4π a^3) ] sin θ_1 sin θ_2,   (50)
where we have substituted r ∼ a. We conclude that instantaneous equilibrium positions will indeed develop large energy oscillations on the timescale t ∼ P_orb, particularly when the polar angles θ_ℓ are large (i.e. spin axes close to orbital-plane alignment). For rapidly orbiting systems, this energy fluctuation occurs very quickly and destabilises the equilibrium of the point. There is a direct analogy between the states of secular equilibrium and those of instantaneous equilibrium. Recall the expression of the secular magnetic energy:
Ū_B = −ŝ_1 · B̄(ŝ_2),   B̄ = I − 3 ê_z ⊗ ê_z.
Such expression has been normalised by the positive constant µ_0 µ_1 µ_2/(8π b^3), as discussed in Sect. IV. We similarly normalise the expression of the instantaneous magnetic energy [cf. Eq. (34)] by the scalar µ_0 µ_1 µ_2/(4π r^3) and obtain:

U_B ∝ +ŝ_1 · B̂_r(ŝ_2),   B̂_r = I − 3 r̂ ⊗ r̂.
Observe that the secular averaging procedure effectively produced a flip in the sign of the energy as well as a switch r̂ ↔ ê_z. Analogously to how the secular equilibrium states of the binary are given by the singular vectors of B̄ (cf. Sect. IV), the instantaneous states are given by the singular vectors of B̂_r. Consequently, each state of instantaneous equilibrium has a secular counterpart. These correspond to spin axes aligned with the radial direction r̂ (resp. ê_z) or perpendicular to r̂ (resp. ê_z). The stability of each state depends on the local convexity of the energy U_B (resp. Ū_B). We present in Table III a comparison between the obtained states for the two types of equilibrium.
Table III. Comparison between the stability of each type of equilibrium state. The instantaneous equilibrium states correspond to the spins either parallel or perpendicular to the orbital separation r̂. Analogously, the secular states correspond to the spins either parallel or perpendicular to the axis ê_z. The energies U_B and Ū_B are given in dimensionless units; σ = +1 denotes parallel spins and σ = −1 anti-parallel spins.

| Instantaneous: Config. | σ | U_B | Stable | Secular: Config. | σ | Ū_B | Stable |
| ŝ_ℓ ∥ r̂ | +1 | −2 | yes | ŝ_ℓ ∥ ê_z | +1 | +2 | no |
| ŝ_ℓ ⊥ r̂ | −1 | −1 | no | ŝ_ℓ ⊥ ê_z | −1 | +1 | no |
| ŝ_ℓ ⊥ r̂ | +1 | +1 | no | ŝ_ℓ ⊥ ê_z | +1 | −1 | no |
| ŝ_ℓ ∥ r̂ | −1 | +2 | no | ŝ_ℓ ∥ ê_z | −1 | −2 | yes |

B. ε Lupi

We consider as a potential application case the ε Lupi inner binary system. ε Lupi is a ternary system composed of two close-range B-type companions Aa and Ab, plus a third distant companion dubbed ε Lupi B. Both stars of the ε Lupi A inner system are magnetic, making it the first and only currently known massive binary that has two magnetic components. Furthermore, the field of each star can be captured by a dipolar model with axes roughly parallel to the spins [72].
The system also has a short orbital period of P_orb ∼ 4.56 d, making it an excellent example to apply our model. [65,80] obtained estimates for relevant stellar and orbital parameters, from which we adopt the values for the semi-major axis a = 29.5 R_⊙, eccentricity e = 0.28, inclination ι = 21°, stellar masses of m_1 = 9.0 M_⊙ for the primary and m_2 = 7.9 M_⊙ for the secondary, and radii R_1 = R_2 = 4.5 R_⊙.
[72] reported a dipolar field strength of at least B p 1 = 600 G and B p 2 = 900 G at the surface poles of the star, and projected rotational velocities at the equator v 1 sin i 1 = 37 km s −1 for the primary and v 2 sin i 2 = 27 km s −1 for the secondary. By adopting the rotational inclination for each star equal to the orbital plane inclination ι ∼ 21 • , we obtain rotational periods on the order of P 1 = 2.2 d and P 2 = 3.0 d.
From these physical parameters we may calculate the corresponding dimensionless ratios. The impact of figure effects can be assessed via γ_fig = 2 × 10^10 J_2, whereas for tides γ_tide^1 = 2.5 × 10^8 k_2^1/Q_1 and γ_tide^2 = 3.5 × 10^8 k_2^2/Q_2. These expressions suggest that if ε Lupi is both highly symmetrical, with J_2 ≲ 6 × 10^-11, and has tidal parameters k_2/Q ≲ 3 × 10^-9, then the system's rotation is likely driven by magnetism. In this case, the smallness of η = 5 × 10^-12 and ε = 4 × 10^-11 predicts that the system will be driven towards the secular stable equilibrium on a timescale set by the energy dissipation rates. This outcome corresponds well to the expected state of ε Lupi based on observational data [65].
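These numbers follow directly from Eqs. (9), (16) and (18); the sketch below reproduces them in SI units from the adopted parameters (the variable names and unit conversions are our own):

```python
import numpy as np

G, MU0 = 6.674e-11, 4e-7 * np.pi
M_SUN, R_SUN, DAY = 1.989e30, 6.957e8, 86400.0

m1, m2 = 9.0 * M_SUN, 7.9 * M_SUN
R1 = R2 = 4.5 * R_SUN
Bp1, Bp2 = 600e-4, 900e-4               # 600 G and 900 G, in tesla
a, ecc = 29.5 * R_SUN, 0.28
P1, Porb = 2.2 * DAY, 4.56 * DAY

BR = Bp1 * Bp2 * R1**3 * R2**3          # recurring field-radius combination
gamma_fig = 3 * G * MU0 / (2 * np.pi) * m1 * m2 / BR * R1**2       # Eq. (9), per J2
eta = 3 * np.pi / (G * MU0) * BR / (m1 * m2 * a**2)                # Eq. (18)
eps1 = (5 * np.pi / (2 * G * MU0) * BR / ((m1 + m2) * m1 * R1**2)
        * (P1 / Porb) * (1 - ecc**2) ** -1.5)                      # Eq. (16)
print(f"gamma_fig ~ {gamma_fig:.0e} * J2")   # ~ 2e10 * J2
print(f"eta       ~ {eta:.0e}")              # ~ 5e-12
print(f"eps_1     ~ {eps1:.0e}")             # ~ 4e-11
```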
VII. CONCLUSIONS
This paper has presented an analysis of the secular precession dynamics of binary systems under pure magnetic dipole-dipole interactions, considering an effective description with orbit-averaged motion. In particular, we have supposed spin-dipole alignment, perfect sphericity and tidal rigidity, and then derived criteria for assessing the validity of these assumptions, as well as the relative strengths of magnetic dipole interactions, tidal torques and figure effects. We have shown that this effective long-term description predicts a set of states of secular equilibrium which confront the traditional states of instantaneous magnetic equilibrium, where orbital dynamics are in fact neglected. Indeed, we have determined that there is a single secular state that is globally stable, corresponding to the configuration ±(ê_z, −ê_z) where the spin axes are reversed with respect to each other and orthogonal to the orbital plane. Conversely, the instantaneous state of radial alignment ±(r̂, r̂) is in fact only momentarily stable, since the orbital motion generates energy fluctuations and destabilises the configuration. Our work can also be used to derive the long-term evolution of binary orbits, providing an expected spin evolution in the absence of strong additional torques.
Our results hold for typical early-type MMS (such as the observed ε Lupi system), WD and NS binaries hosting dipolar fossil-like fields, where we expect long-term convergence towards the secularly stable state. Another interesting case of application is that of M dwarfs. For masses lower than 0.35 M_⊙, M dwarfs are fully convective, unlike any other main-sequence stars, which renders the dynamo-driven topology unique [e.g. 17,28]. These low- and mid-mass M dwarfs are possible targets of application for our formalism since their magnetic fields often display intense dipolar components [e.g. 30], although intermittent higher-order multipolar components might also be present [e.g. 50, and references therein]. Moreover, they have been observed forming binaries with two magnetic components [e.g. 51,53]. An interesting example of such binary systems is that of the YY Gem system [53], where each star has both dipolar and multipolar components. The Zeeman-Doppler imaging analysis indeed revealed moderately complex global fields with a typical strength of 200-300 G, with dipolar components that are anti-aligned, as predicted for fossil-type fields. These considerations hint that our work might be applied to any type of dipolar magnetic field, either of fossil origin or triggered by a dynamo action, as long as its variation timescales are longer than the precession timescales. However, as mentioned in [53], the Zeeman intensification analysis suggests that the global fields of YY Gem may only comprise a few percent of their total magnetic fields, which highlights the need to consider multipoles in future studies. In addition, we point out that in typical M dwarf binary systems, gravitational interactions may control the orientation of the spins themselves. We can in fact compute (for a typical magnetic M dwarf: m ∼ 0.35 M_⊙, B ∼ 1 kG) the dimensionless parameters that measure the strength of extended-body gravitational interactions with respect to magnetic torques: γ_fig = 6 × 10^11 J_2 and γ_tide = 7 × 10^7 k_2/Q [Eqs. (9), (11)]. Subsequently, we obtain the (quite tight) cutoffs J_2 ≤ 10^-12 for the dimensionless quadrupole moment [Eq. (10)] and k_2/Q ≤ 10^-8 for the tidal parameters [Eq. (12)] as the necessary parameters for magnetism to control the spin evolution.
Another key assumption from this work that warrants discussion is the alignment between the spin and magnetic axes of each star, since many misaligned systems are observed in nature [e.g. 73]. In the case where the spin and magnetic dipole are anti-aligned [µ_ℓ < 0 in Eq. (5)], our general results still hold, but spin directions must be flipped accordingly in the equations. More generally, this alignment constraint must be relaxed in future studies to explore the more general scenario of misaligned spin and magnetic axes. Assuming a rigid description where the magnetic axes rotate around the spins, a potentially direct case could be when the stellar rotation period P_ℓ is much shorter than the orbital period P_orb. In such a regime, the problem may be hierarchically split into three distinct timescales, P_ℓ ≪ P_orb ≪ τ_ℓ, where τ_ℓ is the 'spin precession timescale' as described in Sect. II [Eq. (15)]. The dynamics could then be formulated as an effective description when seen from the longer orbital timescale P_orb, potentially reducing to the one explored in this work. Accordingly, one could expect our main results to still hold.
Further extensions of this work will also include abandoning the magnetostatic description and considering internal coupling of the fields with matter [20]. Finally, the precession dynamics and equilibrium states of the system may be investigated by directly taking into account not only magnetic forces but also competing figure effects and dynamical tides [see example of such a combined study in 2].

ACKNOWLEDGMENTS

C.A. acknowledges the joint financial support of Centre National d'Études Spatiales (CNES) and École Doctorale Astronomie et Astrophysique d'Ile de France (ED127 AAIF). This work was also supported by the Programme National GRAM, by PNPS (CNRS/INSU), by INP and IN2P3 co-funded by CNES, and by CNES LISA grants at CEA/IRFU. The authors are also grateful to S. Bouquillon for fruitful discussions. We are thankful to the anonymous reviewer for their constructive comments, which allowed us to improve our article.
Appendix A: Optimization of bilinear forms

In this appendix we recall the variational characterisation of the singular value decomposition. Consider the bilinear form U taking two unit vectors, U : S^n × S^n → R, represented under a basis ê_1, ..., ê_n via some matrix U_ij:

U(û, v̂) = U_ij u_i v_j,   (A1)

where û = u_i ê_i, v̂ = v_i ê_i, and sum over repeated indices is presupposed. Since U is continuous on the compact domain S^n × S^n, it attains both a maximum and a minimum value. Indeed, consider the Lagrange function

g(û, v̂) = U(û, v̂) − λ_1 (û · û − 1) − λ_2 (v̂ · v̂ − 1),   (A2)

with multipliers λ_1 and λ_2. The extrema û* and v̂* necessarily satisfy the stationary condition ∇g(û*, v̂*) = 0, which implies that for any tangent vectors h_1, h_2,

U(h_1, v̂*) = 2λ_1 (û* · h_1),   U(û*, h_2) = 2λ_2 (v̂* · h_2).   (A3)

From the matrix representation of U [Eq. (A1)], we see that the solutions û* and v̂* correspond respectively to the left and right singular vectors of U_ij, that is, the unique set of vectors that satisfy

U_ij v_j* = λ u_i*,   U_ij u_i* = λ v_j*,   (A4)

where λ = 2λ_1 = 2λ_2 is the corresponding singular value of U_ij. It is direct to see that the attained value is U* = U(û*, v̂*) = λ. From all singular-vector pairs (û*, v̂*), the global maximum (or minimum) of U is therefore reached by the candidate pair that has the highest (or lowest) corresponding singular value λ. Note that whenever U is symmetric, the singular vectors and eigenvectors of U_ij coincide, and the singular values are equal to the magnitude of the eigenvalues.

Appendix B: Linearized Solutions

In this section, we formally derive secular solutions to the spin-spin equations

dŝ_1/dt = −ν_1 B̄(ŝ_2) × ŝ_1,   dŝ_2/dt = −ν_2 B̄(ŝ_1) × ŝ_2,   (B1)

in a neighbourhood of the stable equilibrium point. The solution obtained here is in fact more generally valid for any configuration close to the poles, (ŝ_1, ŝ_2) ≈ (σ_1 ê_z, σ_2 ê_z), including the unstable aligned orthogonal case (see Sect. IV). As in the main text, we consider two indices ℓ, m ∈ {1, 2}, with m ≠ ℓ, representing the pair of binaries permuted in some order.

In order to solve the system (B1), we take advantage of the axial symmetry of the physical system around ê_z. We split the Euclidean vector space E_3 into an orbital plane Π = { x ê_x + y ê_y ; (x, y) ∈ R² } plus a normal line Λ = { z ê_z ; z ∈ R }, such that E_3 = Π ⊕ Λ. We identify the line Λ with the reals and the orbital plane Π with the complex plane by introducing the linear isomorphism Φ : Π × Λ → C × R, which satisfies

Φ(x ê_x + y ê_y, z ê_z) = (x + i y, z),

where i is the imaginary number. (The choice of real and imaginary axes is arbitrary; there is nothing special about ê_x and ê_y. We could have equally taken any other pair of orthogonal axes in the orbital plane and obtained equivalent results.) Eq. (B1) can be expressed in the space C × R by declaring a new set of spin variables, which we define uniquely from Φ(ŝ_ℓ) = (p_ℓ, z_ℓ). More explicitly, p_ℓ corresponds to the orbital component of ŝ_ℓ, and z_ℓ corresponds to the projection of ŝ_ℓ on the basis element ê_z. This identification allows us to leverage the rotational symmetries of Π through the algebraic structures of the complex numbers. In particular, through the isomorphism, vector dot- and cross-products in E_3 can be described in terms of complex multiplication and conjugation; for example, for two vectors contained in the orbital plane, u, v ∈ Π, the dot- and cross-product operations under the isomorphism are simply u · v = Re(u* v) and u × v = Im(u* v) ê_z, where Re, Im are the real and imaginary parts, and the operation p → p* denotes complex conjugation. For each body, we obtain the new and equivalent form of the spin precession equations:

dp_ℓ/dt = i ν_ℓ (2 z_m p_ℓ + z_ℓ p_m),   (B5)
dz_ℓ/dt = −ν_ℓ Im(p_m* p_ℓ).   (B6)

These equations are coupled by the four parameters (p_1, p_2, z_1, z_2) with values in C² × R², for a total dimensionality of 6.
As discussed in Sect. III, two degrees of freedom are redundant, constrained by the unit-norm conditions |p_ℓ|² + z_ℓ² = 1, and the magnetic interaction energy defines a conserved quantity, Ū_B = 2 z_1 z_2 − Re(p_1* p_2). The anti-symmetry of equation (B6) with respect to a swap of indices ℓ ↔ m allows us to determine another first integral for the problem, namely the z-component of the total intrinsic angular momentum, S_z = s_1 z_1 + s_2 z_2. We introduce the quantity

ζ = ν_2 z_1 + ν_1 z_2.

Since for any two complex numbers Im(u* v + v* u) = 0, it follows that the derivative of ζ vanishes. We also introduce the anti-symmetric spin

Q_z = ν_2 z_1 − ν_1 z_2,

which is not a conserved quantity. The spins of the two bodies can be decoupled in Eqs. (B5) and (B6) via differentiation and substitution of the constraints, which yields a second-order system of equations [Eqs. (B11)-(B13)], purely in terms of the variables p_1, p_2 and Q_z, with α⁺, α⁻, β⁺, β⁻ four functions of Q_z and two constants a ∈ R, b ∈ R⁺ fixed by the initial conditions. To solve the full system of equations, we must first solve for Q_z and then plug the obtained solution into the expressions of α± and β± in order to determine p_1 and p_2. In fact, there is an exact analytical solution for Q_z in terms of elliptic functions. The resulting expression for Q_z is somewhat lengthy, and subsequently solving (B11), (B12) in this scenario proves to be challenging. Instead, we opt to present simpler and physically meaningful expressions for all the parameters, with domain of validity close to the poles.

Consider the expansion of Eqs. (B11)-(B13) around some origin Q̄_z, with Q_z(t) = Q̄_z + δQ_z(t). For a good choice of Q̄_z, the resulting system may be truncated at low orders in δQ_z to yield low-order solutions Q_z^(0), Q_z^(1), etc. In particular, we constrain Q̄_z to the domain of Q_z by choosing Q̄_z = Q_z(t_0) for a reference time t_0. The error incurred from this expansion will depend on the largeness of the variations δQ_z, which will be considerably small in the neighbourhood of the poles. To understand this, consider a spin variation from t_0 to t, δŝ_ℓ(t) = ŝ_ℓ(t) − ŝ_ℓ(t_0). Due to the geometry of the unit sphere, this variation will propagate along the z component to some |δz_ℓ(t)| ≤ |δŝ_ℓ(t)| sin θ_ℓ ≤ 2 sin² θ_ℓ, where θ_ℓ is the maximum attained polar inclination, that is, sin θ_ℓ = sup_t |ŝ_ℓ(t) × ê_z|. For a solution close to the poles, we consider the polar angle θ_ℓ a small parameter. In this case, the perturbation δQ_z(t) = ν_2 δz_1(t) − ν_1 δz_2(t) will also remain similarly bounded. With these considerations, we present below the solutions for low orders of δQ_z.

Orthogonal component

As discussed, we begin by determining an expression for the decoupled variable Q_z [Eq. (B13)]. Until this point arbitrary, we fix the choice of Q̄_z to best fit (B13) and minimise δQ_z. A good expansion parameter Q̄_z is in fact the zero-order constant solution Q_z^(0), the unique real value satisfying a third-degree algebraic equation in the coefficients above; the first-order solution Q_z^(1) then adds a harmonic oscillation of frequency ω_z around this constant. We see a posteriori that the choice Q̄_z = Q_z^(0) is in fact the average of the first-order solution Q_z^(1).
Orbital component

For the orbital part, we consider a zero-order approximation in δQ_z. In this scenario, each component p_ℓ^(0) obeys a constant-coefficient linear ordinary differential equation. The corresponding orbital-plane solutions are given by superposed complex rotations at two oscillating frequencies ω_p⁺ and ω_p⁻, with four constants c_1⁺, c_1⁻, c_2⁺, c_2⁻ in the unit complex disk D = { z ∈ C : |z| ≤ 1 }, which can be computed from the initial conditions. The calculations may naturally be extended to first-order solutions p_ℓ^(1), which include a higher number of harmonic frequencies.

We now return to the original Euclidean space E_3. The spin can be broken down into the corresponding components of the basis e_0 by inverting the isomorphism Φ:

ŝ_ℓ = Φ^{−1}(p_ℓ, z_ℓ) = Re(p_ℓ) ê_x + Im(p_ℓ) ê_y + z_ℓ ê_z.

From the above solutions, the component z_ℓ of the spins can then be retrieved from (B18) via the relations

z_1^(1)(t) = (1/2ν_2) [ ζ + Q_z^(1)(t) ],   z_2^(1)(t) = (1/2ν_1) [ ζ − Q_z^(1)(t) ],   (B25)

and the two orbital components:

Re(p_ℓ(t)) = ρ_ℓ⁺ cos(ω_p⁺ t + φ_ℓ⁺) + ρ_ℓ⁻ cos(ω_p⁻ t + φ_ℓ⁻),   (B26a)
Im(p_ℓ(t)) = ρ_ℓ⁺ sin(ω_p⁺ t + φ_ℓ⁺) + ρ_ℓ⁻ sin(ω_p⁻ t + φ_ℓ⁻),   (B26b)

with ρ_ℓ^± = |c_ℓ^±| ∈ [0, 1] and φ_ℓ^± = arg c_ℓ^± ∈ [0, 2π].
| []
|
[
"Large-Scale Chemical Language Representations Capture Molecular Structure and Properties Data availability",
"Large-Scale Chemical Language Representations Capture Molecular Structure and Properties Data availability"
]
| [
"Jerret Ross \nIBM Research\n10598Yorktown HeightsNYUSA\n",
"Brian Belgodere \nIBM Research\n10598Yorktown HeightsNYUSA\n",
"Vijil Chenthamarakshan \nIBM Research\n10598Yorktown HeightsNYUSA\n",
"Inkit Padhi \nIBM Research\n10598Yorktown HeightsNYUSA\n",
"Youssef Mroueh \nIBM Research\n10598Yorktown HeightsNYUSA\n",
"Payel Das \nIBM Research\n10598Yorktown HeightsNYUSA\n"
]
| [
"IBM Research\n10598Yorktown HeightsNYUSA",
"IBM Research\n10598Yorktown HeightsNYUSA",
"IBM Research\n10598Yorktown HeightsNYUSA",
"IBM Research\n10598Yorktown HeightsNYUSA",
"IBM Research\n10598Yorktown HeightsNYUSA",
"IBM Research\n10598Yorktown HeightsNYUSA"
]
| []
| Predicting the properties of a chemical molecule is of great importance in many applications, including drug discovery and material design. Machine learning-based models promise to enable more accurate and faster molecular property predictions than the current state-of-the-art techniques, such as Density Functional Theory calculations or wet-lab experiments. Various supervised machine learning models, including graph neural nets, have demonstrated promising performance in molecular property prediction tasks. However, the vast chemical space and the limited availability of property labels make supervised learning challenging, calling for learning a general-purpose molecular representation. Recently, unsupervised transformer-based language models pre-trained on large unlabeled corpus have produced state-of-the-art results in many downstream natural language processing tasks. Inspired by this development, we present molecular embeddings obtained by training an efficient transformer encoder model, MoLFormer, which uses rotary positional embeddings. This model employs a linear attention mechanism, coupled with highly distributed training, on SMILES sequences of 1.1 billion unlabeled molecules from the PubChem and ZINC datasets. Experiments show that utilizing the learned molecular representation outperforms existing baselines on downstream tasks, including supervised and self-supervised graph neural net baselines and language models, on several classification and regression tasks from ten benchmark datasets while performing competitively on two others. Further analyses, specifically through the lens of attention, demonstrate that MoLFormer trained on chemical SMILES indeed learns the spatial relationships between atoms within a molecule. These results provide encouraging evidence that the large-scale molecular language models can capture sufficient chemical and structural information to predict various distinct molecular properties, including quantum-chemical properties. | 10.1038/s42256-022-00580-7 | [
"https://export.arxiv.org/pdf/2106.09553v3.pdf"
]
| 254,636,625 | 2106.09553 | df59d0098c1b2c1ee8995da802dd6b12d158c2b8 |
Large-Scale Chemical Language Representations Capture Molecular Structure and Properties
Jerret Ross
IBM Research
10598Yorktown HeightsNYUSA
Brian Belgodere
IBM Research
10598Yorktown HeightsNYUSA
Vijil Chenthamarakshan
IBM Research
10598Yorktown HeightsNYUSA
Inkit Padhi
IBM Research
10598Yorktown HeightsNYUSA
Youssef Mroueh
IBM Research
10598Yorktown HeightsNYUSA
Payel Das
IBM Research
10598Yorktown HeightsNYUSA
Main

Machine Learning (ML) has emerged as an appealing, computationally efficient approach for predicting molecular properties, with implications in drug discovery and material engineering. ML models for molecules can be trained directly on pre-defined chemical descriptors, such as unsupervised molecular fingerprints 1 , or hand-derived derivatives of geometric features such as a Coulomb Matrix (CM) 2 . However, more recent ML models have focused on automatically learning the features either from the natural graphs that encode the connectivity information or from the line annotations of molecular structures, such as the popular SMILES 3 (Simplified Molecular-Input Line Entry System) representation. SMILES defines a character string representation of a molecule by performing a depth-first pre-order spanning tree traversal of the molecular graph, generating symbols for each atom, bond, tree-traversal decision, and broken cycles. Therefore, the resulting character string corresponds to a flattening of a spanning tree of the molecular graph. Learning on SMILES has been widely adopted for molecular property prediction 4-7 as SMILES is generally more compact than other methods of representing structure, including graphs. Additionally, meaningful substructures such as branches, cyclic structures, and chirality information are explicitly represented in SMILES strings, which is not the case for the graph representation.

However, the SMILES grammar is complex and restrictive; most sequences over the appropriate character set do not belong to well-defined molecules. Alternative string-based representations exist, such as SMARTS 8 and SELFIES 9 . Comparing the benefits of these alternative representations with respect to SMILES is an active area of research. For example, reference 10, focusing on molecular optimization tasks on the learned representation space, suggested no obvious shortcoming of SMILES with respect to SELFIES in terms of optimization ability and sample efficiency, particularly when the language model is more advanced. Nevertheless, string-based representations are thought to not be topologically-aware, while graphs are. Due to these limitations, deep chemical language models may focus on learning the grammar of molecular strings and not the implicit topological structure of the molecular graphs. Accordingly, while string-based deep neural nets have been employed in predicting molecular properties 5-7, 11 , they are typically outperformed by graph neural networks (GNNs) 12 and their variants 13-21 . GNN frameworks can be generally viewed as "message passing", which includes local neighborhood information aggregation and information updates across different levels of granularity, e.g., nodes, edges, or the full graph, according to the graph's connectivity structure.
One challenge with supervised training of GNNs and language models for molecular property prediction is the scarcity of labeled data. Label annotation of molecules is typically expensive, and this problem is compounded by the fact that the size of the space of plausible chemicals in need of annotation is astronomically large ($10^{60}$ to $10^{100}$) 22 . Such a scenario creates the need for molecular representation learning which can be generalizable to various property prediction tasks in an un-/self-supervised setting. The recent success of large transformer-based 23 foundation models 24 , using the paradigm of learning a task-agnostic language representation, obtained by pre-training on large unlabeled corpora and subsequently using it for fine-tuning on downstream tasks of interest, has been extended to other domains.
Pre-trained Language Models (LMs) 25 and GNNs 26 have only recently started to emerge for predicting molecular properties. However, to what extent pre-trained LMs, trained on a large corpus of billions of molecules, are able to capture the molecule-property relationships across various downstream tasks remains unexplored.
Towards this direction, here we present molecular SMILES transformer models referred to as MOLFORMER (Molecular Language transFormer). We name our best performing MOLFORMER variant MOLFORMER-XL. MOLFORMER-XL was obtained using an efficient linear attention mechanism trained on a large corpus of 1.1 billion molecules (see Figure 1). Results show, for the first time, that pre-trained transformer encoders of molecular SMILES perform competitively with existing supervised or unsupervised LM and GNN baselines on predicting a wide variety of molecular properties, including quantum-mechanical properties.
Our main contributions are:
• We train a large-scale and efficient Molecular Language model transFormer (MOLFORMER) on over a billion molecules, with relatively limited hardware resources (up to 16 V100 GPUs). We owe our scalability and speedups to efficient linear time attention, adaptive bucketing of batches, and open-source parallelization provided in PyTorch Lightning and NCCL.
With the combination of bucketing and linear attention we are able to achieve a batch size of 1600 molecules per GPU. Using 16 GPUs we need 208 hours to complete 4 epochs of pre-training for MOLFORMER-XL. To complete training in the same amount of time without bucketing and linear attention we would be limited to less than 50 molecules per GPU and require over 1000 GPUs for the task.
• We explore the difference between absolute and relative position embeddings in representing molecular SMILES. We also provide a new, efficient, and accurate linear attention approximation of the recently proposed relative position RoFormer 27 .
• We perform extensive experimentation and ablation studies on several classification and regression tasks from 10 benchmark datasets, covering quantum mechanical, physical, biophysical, and physiological property prediction of small molecule chemicals from MoleculeNet 28 .
• Our results provide encouraging evidence that MOLFORMER representations can accurately capture sufficient chemical and structural information to predict a diverse range of chemical properties. Furthermore, the performance of MOLFORMER is either better or on par with state-of-the-art GNNs that learn from precise graph topology information and beyond (e.g., bond distances).
• We provide further analyses to demonstrate that MOLFORMER can capture substructures, as well as spatial interatomic distances within a molecule from SMILES annotations only.
To our knowledge, the present study is the first one that explores the representational power of pre-trained chemical language models on predicting a broad range of downstream molecular properties from quantum chemical to physiological. In particular, predicting quantum-chemical properties from SMILES strings alone is non-trivial, as those properties are largely dependent on the accurate 3D molecular geometric information, which is considered privileged information and not available in general.
Results and Discussion
MoLFormer Framework
The goal of MOLFORMER is to learn a universal molecular representation from large-scale chemical SMILES data and then evaluate the representation on various downstream molecular property prediction tasks, as shown in Figure 1. To do so, the MOLFORMER model is developed using the masked language model framework 29,30 , which randomly masks a certain percentage of tokens within a SMILES sequence during training and then predicts those tokens. The masked language modeling thus exploits self-supervision and enables contextual learning. To allow better contextual learning and faster training, rotary positional embeddings 27 were used instead of absolute positional embeddings, along with linear attention 31 (see Methods and Supplementary Information for further details on model architecture and training). We saw increased stability and faster convergence in training loss behavior when pre-training using rotary embeddings in contrast to absolute embeddings, as observed in Figure 2. To demonstrate the effectiveness of the pre-trained MOLFORMER as a universal and task-agnostic molecular representation, we benchmarked its adaptation performance on numerous challenging classification and regression tasks from MoleculeNet 28 . Details of the benchmark datasets can be found in SI Section C.
Derivation of MOLFORMER Embeddings
We encode a chemical SMILES by extracting the mean of all embeddings of the last hidden state from the encoder model. The resulting embedding is used for all downstream tasks. The downstream tasks themselves can be divided into two settings: Frozen and Fine-tuned. The Frozen setting is defined by training a fully connected model for each task while keeping the encoder embeddings fixed. The second setting, Fine-tuned, involves fine-tuning the weights of the encoder model jointly with the fully connected model for each downstream task. The ideal configuration and hyperparameters for the frozen strategy are discovered through a grid search as described in SI Table 1. For the fine-tuned strategy, we use a 2-layer fully connected network with a hidden dimension of 768 (matching the encoder embedding) with Dropout (set to 0.1) and GELU layers in between, on top of a final single output dimension for regression tasks.
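As an illustration of these two settings, the sketch below mean-pools the encoder's last hidden state into a fixed-size embedding and attaches the prediction head described above. Excluding padding tokens from the pooling and the exact head wiring are our assumptions for illustration, not a verbatim excerpt of the MOLFORMER code.

```python
import torch
import torch.nn as nn

def pool_embeddings(last_hidden_state, attention_mask):
    """Mean-pool the encoder's last hidden state over non-padding tokens."""
    mask = attention_mask.unsqueeze(-1).float()       # (B, L, 1)
    summed = (last_hidden_state * mask).sum(dim=1)    # (B, D)
    counts = mask.sum(dim=1).clamp(min=1.0)           # (B, 1)
    return summed / counts

class RegressionHead(nn.Module):
    """2-layer fully connected head: width 768, GELU, dropout 0.1."""
    def __init__(self, dim=768, dropout=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim), nn.GELU(), nn.Dropout(dropout),
            nn.Linear(dim, dim), nn.GELU(), nn.Dropout(dropout),
            nn.Linear(dim, 1),   # single output for regression tasks
        )

    def forward(self, pooled):                        # (B, D) -> (B, 1)
        return self.net(pooled)
```

In the Frozen setting only the head parameters would be optimized while the encoder embeddings stay fixed; in the Fine-tuned setting the encoder weights are updated jointly with the head.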
Performance of MOLFORMER Embeddings on Downstream Tasks
We evaluate the performance of MOLFORMER embeddings and compare them with existing baselines on six classification and five regression tasks from the MoleculeNet benchmark 28 , as discussed below. We refer to MOLFORMER which has been pre-trained on the entire training set comprised of ≈ 1.1 B molecules (all molecules from both PubChem and Zinc) as MOLFORMER-XL. Unless stated otherwise, MOLFORMER-XL is trained with linear attention using rotary positional embeddings, and the performance reported is that of the model fine-tuned on the downstream task (see Methods for details). To predict various properties on the downstream tasks, we fine-tuned the model as described in the previous section. We use the training, validation and testing data split as defined by the MoleculeNet benchmark for all tasks (see SI C).
Classification Tasks
We choose six classification tasks from the MoleculeNet benchmark with nine total baselines, four supervised and five self-supervised, for comparison against MOLFORMER-XL. The supervised baselines consist of shallow machine learning models trained on molecular fingerprints (RF and SVM in Table 1) and graph neural nets. Among the pre-trained/self-supervised baselines, Hu et al. 32 pre-train a Graph Isomorphism Network (GIN, a GNN that uses an MLP and weighted sum of node features in the aggregation) on molecular graphs that include edge features involved in aggregation. N-gram graph 33 uses a simple unsupervised representation for molecules by first embedding the nodes in a graph and then constructing a compact representation of the graph by assembling the vertex embeddings in short walks in the graph. MolCLR 26 is a self-supervised learning framework based on GIN, which uses contrastive loss 34,35 . GraphMVP-C is the Graph Multi-View Pre-training (GraphMVP) framework proposed by reference 36 , where self-supervised learning (SSL) is performed by leveraging the correspondence and consistency between 2D topological structures and 3D geometric views. We have considered three other geometry-aware GNN baselines, one supervised (DimeNet 37 ), and two self-supervised (GeomGCL 36 and GEM 38 ). ChemBERTa 25 is a pre-trained molecular language model trained on a smaller chemical dataset. Table 1 documents the performance comparison of MOLFORMER with these baselines on six classification benchmarks using the MoleculeNet scaffold data splits. MOLFORMER-XL outperforms all baselines in three (BBBP, ClinTox, and SIDER) out of six benchmarks and comes a close second in the other three (Tox21, HIV, and BACE).
Regression Tasks

Next, we evaluate MOLFORMER-XL on more challenging regression tasks from MoleculeNet. We report our performance on five regression benchmarks, namely QM9, QM8, ESOL, FreeSolv, and Lipophilicity, in Table 2. In particular, QM9 and QM8 involve predicting several quantum chemical measures, which is considered challenging without having access to privileged 3D geometric information. Again we use the train, validation and test split as suggested in 28 for these tasks. The baselines considered are a molecular graph convolutional network (GC, a GNN that utilizes a mean-pooling over the node and its neighbors before the linear transformation) 39 , the attentive-FP (A-FP) model 40 , and an MPNN variant 18 that learns edge features such as pairwise interatomic distances. Results show that MOLFORMER-XL upon task-specific fine-tuning outperforms the existing supervised GNN baselines, specifically GC, A-FP, and MPNN (augmented with bond distances for QM8 and QM9), on all five tasks. Table 7 further shows MOLFORMER outperforming geometry-aware GNNs (DimeNet, GeomGCL, and GEM) on three physical property regression benchmarks. These results, combined with MOLFORMER-XL performance on the classification benchmarks, confirm its generalizability.

A Closer Look at QM9

Table 9 further compares MoLFormer-XL performance on the QM9 atomization energies and enthalpy (internal energy/enthalpy corrected for reference atomic energy, in eV) prediction tasks with two exemplary supervised 3D GNNs, SchNet 41 and DimeNet 37 . MOLFORMER-XL trained on SMILES alone is outperformed by both those models in all of the four tasks. However, SchNet and DimeNet, which directly encode 3D information with specialized architectures for modeling quantum interactions, beat MOLFORMER-XL only by roughly a factor of 8 and by roughly a factor of 10, respectively. This result, along with Tables 1 and 2, reinstates the power of learning a universal molecular representation from readily available information, such as SMILES, at a broader scale, while confirming the crucial role of privileged geometric information for quantum-chemical energy prediction. Further, the results from this comparison open up the door for future investigations, with the goal of estimating the emergence of geometric awareness in MoLFormer (see later sections) or how the expressiveness of SMILES-only MoLFormer can be further enhanced by adding partial or complete 3D geometric information.
Ablation Studies
In this section we discuss several different ablations of MOLFORMER-XL in an attempt to provide insights into its impressive performance. The ablations we performed can be broadly divided into the following three categories: (1) the effect of the size and nature of the pre-training data and of model depth, (2) the results without (frozen) and with (fine-tuned) fine-tuning of the model on the downstream data, and (3) the effect of absolute versus rotary positional embeddings.
Data/Model Size First we investigate how pre-training dataset size affects the performance of MOLFORMER-XL on several downstream tasks from the MoleculeNet benchmark. To accomplish this we chose 3 different weighted combinations of the PubChem and Zinc datasets, specifically a set consisting of 10% of Zinc and 10% of PubChem, another with 100% of PubChem mixed with 10% of Zinc, and then one with 100% Zinc molecules and 0% PubChem. We also investigate the influence of model depth by pre-training a 6-layer model, named MOLFORMER-Base, on the complete Zinc and PubChem dataset. All models are pre-trained with rotary embeddings and linear attention and then compared to MOLFORMER-XL. Identical learning rates, data splits, optimization, etc. are used for pre-training and fine-tuning. Tables 1 and 2 summarize these results. While MOLFORMER-XL performs better on average, we report two interesting observations. The first is that the model pre-trained on the second biggest dataset, 100% Zinc, consistently performs worse than all other pre-trained models. A possible explanation for the poor performance of the model trained on only Zinc is that the Zinc dataset has a much smaller vocabulary than all other dataset combinations, as well as much shorter molecules with little variance in molecule length. The other point of interest is that when MOLFORMER-XL falls behind, it is only by a very small margin (see performance on the ESOL, QM8, and FreeSolv benchmarks in Table 2). Tables 1 and 2 further show that MOLFORMER-Base has a weaker performance than MOLFORMER-XL in the majority of tasks, implying that a deeper model helps in learning.

Fine-tuned versus Frozen Table 3 further summarizes the two remaining ablation experiments using the QM9 benchmark. The fine-tuned experiments achieve such a convincing win over the frozen experiments on all pre-training dataset sizes that, for simplicity, we opted to only investigate fine-tuning for all other benchmarks. These results provide empirical insights into the neural and data scaling behavior of MOLFORMER.
Position embeddings The positional embeddings ablation results are collected in Table 3, which shows that MOLFORMER with Rotary embeddings and fine-tuning is behind the Absolute positional embedding model for the smaller datasets, but then wins as the dataset size passes 1 billion molecules.
Insights into MOLFORMER
Molecular Similarity Recovery
Next, we analysed the correlation between pairwise similarities estimated using the Tanimoto distance, a popular measure of pairwise distance between chemicals, on the molecular fingerprints and those estimated using the Euclidean distance on the MOLFORMER-XL embeddings. We further looked into the correlation between the number of atoms in the maximum common subgraph of a pair of molecules with their corresponding euclidean distance in the embedding space for a set of random molecules picked from PubChem. The results are summarized in Table 4 and show that MOLFORMER-XL embeddings are better correlated with known molecule similarity measures when compared to ChemBERTa. These results are suggestive of MOLFORMER embeddings being informative of chemical structure similarity.
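A minimal sketch of this analysis follows; it assumes RDKit is available, uses Morgan (ECFP-like) fingerprints as one concrete fingerprint choice, and takes `embed` to be any function mapping a SMILES string to a vector (e.g., a frozen encoder). The fingerprint parameters are illustrative.

```python
import numpy as np
from scipy.stats import pearsonr
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def similarity_correlation(smiles_pairs, embed):
    """Correlate fingerprint Tanimoto similarity with embedding distance.

    Assumes every SMILES string in `smiles_pairs` is parseable.
    """
    tanimoto, euclid = [], []
    for sa, sb in smiles_pairs:
        ma, mb = Chem.MolFromSmiles(sa), Chem.MolFromSmiles(sb)
        fa = AllChem.GetMorganFingerprintAsBitVect(ma, 2, nBits=2048)
        fb = AllChem.GetMorganFingerprintAsBitVect(mb, 2, nBits=2048)
        tanimoto.append(DataStructs.TanimotoSimilarity(fa, fb))
        euclid.append(float(np.linalg.norm(embed(sa) - embed(sb))))
    # Similar molecules should lie close together in embedding space,
    # so a strongly negative correlation is the expected signal.
    return pearsonr(tanimoto, euclid)
```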
Attention Analyses
Finally, we inspect the average pooled attention matrices of MOLFORMER-XL to explore the chemical information embedded in them. For this purpose, we utilize the cosine similarities between attention values and the spatial distances between atoms within a molecule from the QM9 test set. Spatial distances are obtained from the corresponding energy-minimized geometries provided within the QM9 benchmark 28 . MOLFORMER-XL is compared with a MOLFORMER variant trained with full attention and rotary embeddings on the entire PubChem+Zinc dataset. Note that the MOLFORMER models here are not fine-tuned for the QM9 dataset. The frozen MOLFORMER with full attention shows a much higher average MAE (≥ 12) on QM9 downstream tasks; performance is particularly worse on internal energies (U and $U_0$), enthalpy (H), and free energy (G). We present attention results separately for three different categories of interatomic spatial distances: short (≤ 2 Å; mostly reflective of typical covalent bonds in the molecule, the C-C single bond distance being 1.5 Å), medium (2-4 Å), and long (≥ 4 Å), and summarize them in Table 3. Interestingly, attentions in MOLFORMER with linear or full attention (and rotary positional embeddings) show strong similarity with interatomic distances in both the short and medium categories, while revealing a weak (around 0.2) similarity with longer interatomic distances. This is an interesting observation, indicating that MOLFORMER is able to capture spatial relations between atomic tokens that are not necessarily neighbors in the SMILES sequence. The observed attentions in MOLFORMER-XL are slightly more in line with medium and long range distances when compared to MOLFORMER with full attention. This observation suggests that MOLFORMER-XL, with linear attention, does in fact capture spatial relations between atoms more effectively. We visualize attention maps for example molecules in Figures 5 and 6 in SI. We chose two molecules from the QM9 test set whose attention values show a high cosine similarity with the medium range spatial distances for this visualization. Visual inspection indicates that an aggregation of heads on the intermediate rotary attention layer corresponds well to the covalent bonding pattern, while also capturing the signature of the spatial relations between non-bonded atoms within a molecule. These attention analysis results suggest that MOLFORMER-XL is able to recover molecular structural information from the corresponding SMILES sequence to a significant extent. This capability likely stems from pre-training on a large corpus of chemical SMILES, which also allows MOLFORMER-XL to learn fundamental properties of chemicals, including structural information and various downstream properties, ranging from quantum chemical to physiological. A similar observation has been reported in recent work on protein sequence modeling 42, 43 . To our knowledge, this is the first confirmation that structural and diverse property information emerges in the representation learned by a chemical language model pre-trained on large-scale data.
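One plausible way to compute such a distance-banded similarity is sketched below, assuming the attention matrix has already been head-averaged and restricted to atom tokens aligned with the molecule's atoms; using an indicator of each distance band as the comparison target is our reading of the analysis, not a detail spelled out in the paper.

```python
import numpy as np

def banded_cosine(attn, dist, lo, hi):
    """Cosine similarity between an attention map and a distance band.

    attn : (n_atoms, n_atoms) head-averaged attention between atom tokens
    dist : (n_atoms, n_atoms) interatomic distances in Angstrom
    """
    band = ((dist >= lo) & (dist < hi)).astype(float)  # band indicator
    a, b = attn.ravel(), band.ravel()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

# Short, medium, and long categories as in Table 3:
# scores = [banded_cosine(A, D, lo, hi) for lo, hi in [(0, 2), (2, 4), (4, 10)]]
```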
Conclusion
In this work, we have explored the power of unsupervised large-scale pre-trained molecular language models at various molecular property prediction tasks. Unlike graphs, molecular languages such as SMILES do not explicitly encode molecular topology. However, with well-designed self-supervised training on a large-scale corpus and with an expressive architecture, such as a contextualized transformer-based language model with a linear attention mechanism, and a parallelized training protocol, our MOLFORMER can efficiently learn implicit rich structure-property relationship information.
Specifically, MOLFORMER outperforms existing graph-based baselines on a wide variety of molecular regression and classification benchmarks. To our knowledge, this is the first work that validates the power of large-scale self-supervised pre-trained molecular language models on predicting molecular properties across the entire range from quantum chemical to physiological. Further, by analysing the learned attentions, we show that MOLFORMER trained on SMILES sequences indeed is aware of interatomic relations within a molecule, even beyond the 2D topology. Finally, on the large-scale learning end, we showcased with MOLFORMER an efficient and environment-friendly use of computational resources, reducing the number of GPUs needed to perform the training by a factor of 60 (1000 vs. 16).
MOLFORMER has immediate potential for faster in silico screening of molecules across diverse targets, which is important for material design and drug discovery applications with positive societal impact. However, it should be noted that misuse of such technology without a proper experimental and scientific validation in a wet lab can have harmful implications. Further, it has been shown that accurate property prediction models (for example., for predicting toxicity) along with generative models can be exploited for designing highly toxic molecules 44 . This highlights the need for a responsible framework around the use of these emerging powerful technologies. In addition, the present work calls for further exploration of the representational power of MOLFORMER in the context of its ability to learn structural molecular information directly from chemical language and can be extended beyond the small organic molecules studied in this work. Future work will also aim to improve MOLFORMER by employing larger models and larger training data, using improved and/or domain-specific self-supervised tasks, and using other string-based representations like SELFIES 9 .
Methods
Model Details
As we aim to train a large scale masked language model of chemical SMILES efficiently and effectively, while utilizing relatively limited hardware resources, we leveraged transformer-based neural nets 23 . Transformers process inputs through a series of blocks alternating between self-attention and feed-forward connections. Transformers encode the position in the sequence via a positional embedding, termed the absolute positional embedding. The input feature at a position m is therefore concatenated with its corresponding absolute position embedding. Self-attention enables the network to construct complex representations that incorporate context from across the sequence. Attention mechanisms transform the features in the sequence into queries (q), keys (k), and value (v) representations. These representations produce the output of the attention at position m as follows:
$$\mathrm{Attention}_m(Q, K, V) = \frac{\sum_{n=1}^{N} \exp(\langle q_m, k_n \rangle)\, v_n}{\sum_{n=1}^{N} \exp(\langle q_m, k_n \rangle)} .$$
A well-known computational bottleneck of the vanilla transformer architecture 23 is that the attention mechanism suffers from a quadratic computational cost with respect to the sequence length. Linear complexity attention models 31, 45 have tackled this issue utilizing kernel approximations and random feature approximation variants. This led us to design MOLFORMER, which utilizes an encoder based on a transformer with linear attention 31 . MOLFORMER with linear attention consists of 12 layers, 12 attention heads per layer, and has a hidden state size of 768. A Generalized Feature map 31 for the linear attention was chosen (see SI Section A.1.1 for details).
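The sketch below illustrates the linearization itself. For readability it uses the simple elu-based feature map from the linear-attention literature rather than the Generalized (random) Features of size 32 actually chosen for MOLFORMER; the point is the O(N) factorization of the attention.

```python
import torch

def elu_feature_map(x):
    """A simple positive feature map; MOLFORMER's Generalized Features
    (a random-feature map of size 32) would be dropped in here instead."""
    return torch.nn.functional.elu(x) + 1.0

def linear_attention(q, k, v, feature_map=elu_feature_map, eps=1e-6):
    """O(N) attention: softmax(QK^T)V is replaced by phi(Q) (phi(K)^T V).

    q, k: (B, H, N, d), v: (B, H, N, d_v)
    """
    q, k = feature_map(q), feature_map(k)
    kv = torch.einsum("bhnd,bhne->bhde", k, v)            # sum_n phi(k_n) v_n^T
    z = 1.0 / (torch.einsum("bhnd,bhd->bhn", q, k.sum(dim=2)) + eps)
    return torch.einsum("bhnd,bhde,bhn->bhne", q, kv, z)  # normalized output
```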
As mentioned above, in a transformer architecture the dependency between tokens at different position of a (chemical) sequence is modeled under the supervision of position encoding. The seminal work of 23 investigated absolute position embeddings to encode the position of a token in the sequence. More recent work 46-48 showed that use of relative position embeddings between tokens results in improved performance. Rotary position embeddings were introduced in RoFormer 27 as a means to enhance the relative encoding via position dependent rotations R m of the query and the keys at a position m. These rotations can be efficiently implemented as pointwise multiplications and do not result in a dramatic computational increase.
In order to leverage Rotary embeddings with linear transformers, the use of the following approximation was proposed in 27 :
$$\mathrm{Attention}_m(Q, K, V) = \frac{\sum_{n=1}^{N} \langle R_m \phi(q_m), R_n \phi(k_n) \rangle\, v_n}{\sum_{n=1}^{N} \langle \phi(q_m), \phi(k_n) \rangle},$$
where Q, K, V are the query, key, and value, respectively, and ϕ is a random feature map. After preliminary experimentation with this linear RoFormer, we found it performed worse than its absolute position counterpart. We propose the following modification to RoFormer that we found to train more gracefully (the training loss falls faster and lower) than the original RoFormer, while also achieving better performance than the model using absolute embeddings:
$$\mathrm{Attention}_m(Q, K, V) = \frac{\sum_{n=1}^{N} \langle \phi(R_m q_m), \phi(R_n k_n) \rangle\, v_n}{\sum_{n=1}^{N} \langle \phi(R_m q_m), \phi(R_n k_n) \rangle} .$$
Compared with 27 , we rotate the original keys and queries instead of the ones already transformed with the feature map ϕ.
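A sketch of this modification is given below, using the common rotate-half formulation of rotary embeddings; the precomputed cosine/sine tables and the feature map are placeholders. The essential difference from the approximation of 27 is that the rotation $R_m$ is applied to q and k before the feature map ϕ, in both the numerator and the denominator.

```python
import torch

def rotate_half(x):
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

def apply_rotary(x, cos, sin):
    """Position-dependent rotation R_m x as pointwise products.
    cos, sin: (N, d) tables precomputed from the rotary frequencies."""
    return x * cos + rotate_half(x) * sin

def rotary_linear_attention(q, k, v, cos, sin, feature_map, eps=1e-6):
    """Rotate first, then apply the feature map: phi(R_m q_m), phi(R_n k_n)."""
    q = feature_map(apply_rotary(q, cos, sin))
    k = feature_map(apply_rotary(k, cos, sin))
    kv = torch.einsum("bhnd,bhne->bhde", k, v)
    z = 1.0 / (torch.einsum("bhnd,bhd->bhn", q, k.sum(dim=2)) + eps)
    return torch.einsum("bhnd,bhde,bhn->bhne", q, kv, z)
```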
Datasets and Tokenization
We constructed several datasets for pre-training by combining the PubChem 49 and ZINC 50 datasets with varying proportions from each. The PubChem dataset consists of 111 million molecules, while the much larger ZINC dataset contains over 1 billion molecules. To construct a vocabulary, we utilize the tokenizer from 51 . All molecules from both PubChem and ZINC are converted to a canonical format utilizing RDKit 52 and then tokenized. All unique tokens extracted from the resulting output give us a vocabulary of 2357 tokens plus 5 special tokens, resulting in a total of 2362 vocabulary tokens, which are used for all pre-trained models considered in this paper, irrespective of pre-training dataset size. In other words, all models have the same embedding capacity with a fixed vocabulary size. However, the total unique tokens that they are pre-trained on might only cover a subset of the model vocabulary capacity. The post-tokenization sequence length of the molecules ranges from 1 to just over 2000 tokens. We decided to restrict the sequence length range from 1 token to 202 tokens, special tokens inclusive, to reduce computation time. Since over 99.4 percent of all molecules from our dataset contain fewer than 202 tokens, we hypothesize that the removal of molecules with more than 202 tokens would have minimal negative impact on pre-training.
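A minimal sketch of this preprocessing, assuming a regex-based SMILES tokenizer in the spirit of reference 51; the exact pattern, the special-token names, and the helper itself are illustrative rather than MOLFORMER's actual vocabulary code.

```python
import re
from rdkit import Chem

# Regex-based SMILES tokenizer in the spirit of reference 51 (illustrative).
SMILES_REGEX = re.compile(
    r"(\[[^\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\.|=|#|-|\+|\\|\/|:"
    r"|~|@|\?|>>?|\*|\$|\%[0-9]{2}|[0-9])"
)

def canonicalize_and_tokenize(smiles, max_tokens=202):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:                        # skip unparsable molecules
        return None
    tokens = SMILES_REGEX.findall(Chem.MolToSmiles(mol))  # canonical form
    # Keep molecules of at most 202 tokens, special tokens inclusive,
    # which covers over 99.4% of the PubChem+ZINC corpus.
    if len(tokens) + 2 > max_tokens:       # +2 for the two special tokens
        return None
    return ["<bos>"] + tokens + ["<eos>"]  # placeholder special tokens
```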
Large Scale Training and Parallelization
For pre-training we use the masked language model method defined in 30 . Initially, 15% of the tokens are selected for possible denoising. From that selection, 80% of the tokens will be randomly selected and replaced with the [MASK] token, 10% of the tokens will be randomly selected to be replaced with a random token, while the remaining 10% of the tokens will be left unchanged. Training was performed for 4 epochs through the entire PubChem+ZINC dataset with a fixed learning rate of $1.6 \times 10^{-4}$ and a batch size of 1600 molecules per GPU on a total of 16 GPUs over 2 servers connected via InfiniBand fabric. It should be noted that as the number of GPUs utilized increased, we found an increase in learning rate was necessary, up to a factor of 8.
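The 15%/80%/10%/10% masking scheme can be implemented as follows; the -100 ignore index and the exclusion of special tokens follow common PyTorch masked-language-modeling practice and are assumptions rather than MOLFORMER specifics.

```python
import torch

def mask_tokens(input_ids, mask_token_id, vocab_size, special_ids,
                mlm_prob=0.15):
    """Select 15% of tokens; replace 80% of them with [MASK], 10% with a
    random token, and leave the remaining 10% unchanged."""
    input_ids = input_ids.clone()
    labels = input_ids.clone()
    prob = torch.full(labels.shape, mlm_prob)
    for sid in special_ids:                       # never denoise specials
        prob[input_ids == sid] = 0.0
    selected = torch.bernoulli(prob).bool()
    labels[~selected] = -100                      # ignored by the loss

    masked = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & selected
    input_ids[masked] = mask_token_id

    randomized = (torch.bernoulli(torch.full(labels.shape, 0.5)).bool()
                  & selected & ~masked)           # half of the remainder
    random_ids = torch.randint(vocab_size, labels.shape,
                               dtype=input_ids.dtype)
    input_ids[randomized] = random_ids[randomized]
    return input_ids, labels                      # rest stays unchanged
```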
In order to scale our training to large datasets (1 Billion+ data points), we relied on adaptive bucketing of mini-batches by sequence length, as well as parallelization via distributed training (see Supplementary Information (SI) A for details). Using Linear attention and bucketing allowed us to reduce the number of GPUs needed from roughly 1000 for quadratic attention with no bucketing to 16.
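A simplified version of the bucketing follows; the bucket width and the absence of shuffling are illustrative simplifications.

```python
from collections import defaultdict

def bucket_batches(tokenized_samples, batch_size, bucket_width=16):
    """Group samples of similar token length so that each mini-batch needs
    little padding, which is what makes 1600 molecules per GPU feasible."""
    buckets = defaultdict(list)
    for sample in tokenized_samples:
        buckets[len(sample) // bucket_width].append(sample)
    for bucket in buckets.values():
        for i in range(0, len(bucket), batch_size):
            yield bucket[i:i + batch_size]
```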
Data availability
Datasets used for model pre-training and finetuning on benchmark tasks are available at https://github.com/IBM/molformer.
Code availability
Python codes for MoLFormer training and fine-tuning, and python notebooks for MoLFormer attention visualization, as well as instances of pre-trained models are available at https://github.com/IBM/molformer. For other enquiries contact the corresponding authors.

Figures and Tables

Figure 1. Overview of MoLFormer pipeline. The transformer neural network based model is trained on the SMILES sequences corresponding to a large collection of chemical molecules from PubChem and Zinc, two public chemical databases, in a self-supervised fashion. MOLFORMER was designed with an efficient linear attention mechanism and relative positional embeddings, with the goal of learning a meaningful and compressed representation of chemical molecules. This foundation model was then adopted to different downstream molecular property prediction tasks via fine-tuning on task-specific data. The representative power was further tested by recovering molecular similarity using the MOLFORMER encodings, as well as by analyzing the correspondence between the interatomic spatial distance and attention value for a given molecule.

Table 1. Comparison of fine-tuned MOLFORMER with existing supervised and pre-trained/self-supervised baselines on multiple classification benchmarks. Bold indicates the top-performing model. All models were evaluated by AUC-ROC on scaffold splits. Baseline performances are adopted from references 25, 26, 36; '-' signifies that the values were not reported for the corresponding task.

Table 2. Performance of fine-tuned MOLFORMER and other supervised GNN baselines on QM9, QM8, ESOL, FreeSolv, and Lipophilicity regression benchmarks. For QM9 and QM8, we report average MAE, while RMSE is reported for the remaining tasks. Baseline performances are taken from references 28, 40. Bold indicates the top-performing model.

Table 3. Comparison of MOLFORMER models with respect to cosine similarity between the interatomic spatial distance map and the attention map, across three different distance categories for 7806 molecules from the QM9 test set. Short, Medium, and Long distance categories are defined with interatomic distances in the range of ≤2, 2-4, and 4-10 Å, respectively. Bold indicates the top-performing model.

Table 4. Correlation with structural similarity metrics on 10000 randomly selected pairs of molecules from the PubChem dataset. Reported correlations are between (1) the pairwise similarities estimated using molecular Fingerprints and those using MOLFORMER-XL (or ChemBERTa) embeddings and (2) the number of atoms in the maximum common subgraph (MCS) of two molecules and their corresponding Euclidean distance in the embedding space.
Related Work
Large Scale Training of Language Models The recent advancements of transformer-based masked language models (MLMs) 29, 30 and prefix language models (PLMs) 55 have shown remarkable performance on various natural language understanding tasks. Self-supervised pre-trained representation learning of sequences through MLMs randomly masks input tokens during training and predicts these masked tokens, whereas PLMs require adding task-specific text tags to the input sequences. These language models show substantial performance improvements on downstream tasks when increasing transformer model size and pre-training on large-scale data corpora. Recent efforts have addressed the resulting cost and memory challenges encountered when scaling up models and data. One such effort is the linear-time attention transformers introduced in 45, 56-58, which address the quadratic memory challenge within the attention mechanism and allow for more efficient training of MLMs.
Molecular Representation Learning
To represent molecules in vector space, traditional chemical fingerprints such as ECFP 1 have been used. Deep neural nets were further trained on chemical fingerprints for supervised learning. Recurrent Neural Network (RNN) based models have been used for molecular representation learning using SMILES and other linear molecular annotations as inputs 59 . At the same time, graph convolutional networks have been used to learn the neural fingerprints of molecules 12,60 . Previous work 18 implemented a single common framework to learn from graphs, referred to as a message passing framework, which computes node embeddings by aggregating neighborhood information during the message passing phase and computes a feature vector of the graph during the readout phase. Many attempts to extend GNNs have been made, which include variations of the original message passing concept to learn non-local effects; for instance, in 40 an attention mechanism was introduced. One challenge faced by GNNs is achieving expressivity that can distinguish between two given graphs at higher levels of the Weisfeiler-Lehman (WL) graph isomorphism hierarchy, while maintaining scalability. It has been shown that typical message passing models have limited expressiveness and are not better than the first WL test (1-WL) 61 . Powerful deep models that represent higher-order interactions between graph nodes have been suggested 61,62 , but with a large increase in computational cost. Molecular graphs can be further augmented with the 3D coordinates of atoms. Such augmentation is considered privileged information due to the cost associated with deriving the 3D molecular geometry. To better model the spatial interactions among atoms, the message passing framework was extended in 18 to include pairwise interatomic distances as edge features when geometric information was available. More recently, variations of the message passing networks (MPNN) were proposed to better model the spatial interactions within molecules and increase the models' expressive power, e.g., by using continuous filter convolutional layers (SchNet) 41 or by using directional message passing (DimeNet) 37 , but at the cost of increased computational complexity. However, those models are not generalizable to settings where 3D structural information is not readily available and/or is expensive to compute (e.g. for larger molecules). Since the goal of this work is to learn a generalizable molecular representation from a large amount of unlabeled data without relying on expensive 3D information, we mainly focus on comparing the proposed MOLFORMER with existing supervised and un/self-supervised baselines that utilize different input representations (SMILES, graphs, fingerprints) and can be generalizable to a wide variety of tasks, from quantum mechanical to physiological.
Pre-trained Molecular Language and Graph Models
The recent success of language representation models in downstream NLP tasks has inspired extending this paradigm to other domains. By combining the power of pre-training on large unlabeled corpora with contextual language models (LMs) based on advanced neural nets, such as transformers, a domain-specific "language" embedding is obtained and serves as the exclusive input for several downstream tasks.
Examples include understanding the language of life through advanced LMs trained on protein sequences. Here, features extracted by LMs directly from single protein sequences reach state-of-the-art performance in downstream prediction tasks, even when used without evolutionary information 42, 43, 63. Similar large-scale unsupervised pre-training on SMILES sequences has been explored for molecular property prediction 25, 64-67; however, those models did not attempt to predict a diverse range of molecular properties while exploiting the available chemical sequences at scale. Unsupervised/semi-supervised representation learning has been tested on molecular graphs as well 26, 33, 68. A more recent line of work has leveraged the power of contrastive self-supervised pre-training using 2D graph topology and 3D conformal geometry 36 (referred to as GeomGCL), which showed performance improvements on molecular regression and classification tasks compared to prior pre-training baselines.
Previous work 69 has further considered the use of motif prediction during self-supervised pre-training on molecular graphs. Another study 70 has investigated the effect of including substructure information, as well as cheminformatics measures, in a molecular contrastive learning framework. References 71, 72 have shown the benefit of pre-training by maximizing information between different molecular views, e.g., 1D SMILES and 2D graph, or 2D graph and 3D geometry. Fang et al. 38 have also explored the advantages of geometry-based pre-training, as well as of including higher-order information in learning through modeling the atom-bond-angle relations in the graph (referred to as GEM).
A Model and Methods
A.1 MOLFORMER Model and Pre-Training Details
In this section we include additional details and insights on MOLFORMER pre-training.
A.1.1 Optimizer
For optimization we used the Fused LAMB optimizer from 73, as implemented in APEX, due to LAMB's superior behavior in several aspects of training. For example, learning-rate warm-ups were found to be unnecessary, and training remained robust when large batch sizes were used. All other optimizers we tried were unable to maintain stability without substantial modification whenever the training configuration changed.
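A minimal sketch of wiring up this optimizer is shown below. It assumes APEX is installed with its CUDA extensions and exposes `apex.optimizers.FusedLAMB`; the model, learning rate, and weight decay shown are placeholders rather than the exact values used for MOLFORMER.

```python
import torch.nn as nn
from apex.optimizers import FusedLAMB  # requires NVIDIA APEX with CUDA extensions

# Placeholder encoder standing in for the MOLFORMER transformer stack.
model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=768, nhead=12), num_layers=12
)
# LAMB tolerates large batches without a learning-rate warm-up schedule.
optimizer = FusedLAMB(
    model.parameters(),
    lr=1.6e-4,          # placeholder value, not taken from the paper
    betas=(0.9, 0.99),  # beta2 = 0.99, as used for fine-tuning in this paper
    weight_decay=0.0,
)
```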
A.1.2 Linear Attention
Preliminary experiments showed that generalized features strike an acceptable balance between computation speed and a minimal performance deficit when compared to the FAVOR 45 feature map; generalized features are a simplification of the feature map in FAVOR 45. The feature map size we settled on is 32.
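For illustration, the following sketch shows the core of feature-map-based linear attention: queries and keys are passed through a non-negative feature map φ, and attention is computed in linear time via the associativity of matrix products. The ELU-based feature map here is a common generic choice standing in for the generalized/FAVOR feature maps discussed above, which we do not reproduce exactly.

```python
import torch
import torch.nn.functional as F

def phi(x):
    # A simple non-negative feature map (stand-in for generalized features).
    return F.elu(x) + 1.0

def linear_attention(q, k, v, eps=1e-6):
    """O(n) attention: softmax(QK^T)V is approximated by
    phi(Q) (phi(K)^T V) / (phi(Q) phi(K)^T 1)."""
    q, k = phi(q), phi(k)                      # (batch, seq, dim)
    kv = torch.einsum("bnd,bne->bde", k, v)    # sum_n phi(k_n) v_n^T
    z = 1.0 / (torch.einsum("bnd,bd->bn", q, k.sum(dim=1)) + eps)
    return torch.einsum("bnd,bde,bn->bne", q, kv, z)
```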
A.1.3 Rotary versus Absolute position embeddings
We show in Figure 1 that MOLFORMER with linear attention and rotary embeddings reaches a better validation loss than its absolute-position counterpart. This observation led us to adopt MOLFORMER with linear attention and rotary embeddings throughout the paper.
A.2 Parallelization and Computing Environment
All experiments were performed on a GPU cluster where each node contains either 8 NVIDIA Tesla V100 (32GB) or 8 Ampere A100 (40GB) GPUs connected via NVLink. The V100 nodes are equipped with dual 28-core (Intel Xeon Gold 6258R) CPUs, the A100 nodes are equipped with dual 64-core (AMD EPYC 7742) CPUs, and all nodes are connected by 2 non-blocking EDR InfiniBand (100Gbps) network adapters as well as 2 100Gbps Ethernet adapters. All nodes are installed with RHEL 8.3, CUDA 10.2, and cuDNN 7.5.
Due to the size of the datasets utilized in pre-training, our training environment relies on the Distributed Data Parallel functionality provided by PyTorch and PyTorch Lightning with the NCCL backend. By utilizing RDMA to enable GPU-direct technology, we were able to efficiently scale to multi-node multi-GPU training. Additionally, we utilized HuggingFace Datasets to localize the data onto the machines where pre-training took place, improving pre-training performance. Our pre-training task consists of training on the full dataset for 4 epochs. Training a single epoch of just PubChem on a single NVIDIA V100 GPU would take approximately 60 hours. Utilizing Distributed Data Parallel, pre-training on the full PubChem dataset alone took approx. 22 hours on 16 NVIDIA V100 GPUs, which averages to about 5.5 hours per epoch; parallelizing training to 16 GPUs thus gave us a speedup factor of 10.9. Pre-training for 4 epochs on the combined PubChem+ZINC datasets took approx. 208 hours on 16 NVIDIA V100 GPUs, which averages to about 52 hours of compute for a single epoch. All fine-tuning tasks could be performed on single GPUs (either V100 or A100) and completed in approx. 12 hours.
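A minimal sketch of such a multi-node setup with PyTorch Lightning follows; the toy module and dataset are stand-ins for the actual MOLFORMER LightningModule and tokenized corpus, and exact Trainer arguments may differ slightly across Lightning versions.

```python
import torch
import torch.nn as nn
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset

class ToyModule(pl.LightningModule):
    """Stand-in for the MOLFORMER pre-training LightningModule."""
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(64, 64)
    def training_step(self, batch, batch_idx):
        (x,) = batch
        return nn.functional.mse_loss(self.layer(x), x)
    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters())

loader = DataLoader(TensorDataset(torch.randn(1024, 64)), batch_size=128)
trainer = pl.Trainer(
    accelerator="gpu",
    devices=8,          # GPUs per node
    num_nodes=2,        # 16 GPUs total, matching the PubChem runs above
    strategy="ddp",     # Distributed Data Parallel over the NCCL backend
    max_epochs=4,
)
trainer.fit(ToyModule(), loader)
```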
A.3 Memory Efficient Training with Adaptive Bucketing By Sequence Length
We observed that the distribution of molecule lengths in our dataset centered around molecules that were less than 45 tokens long after tokenization. Coupled with large batch sizes, this increased the likelihood that padding tokens would dominate each batch and result in large amounts of wasted computation. To address this problem we decided to break each minibatch into multiple buckets. This is done on a batch-by-batch basis, i.e., on the fly, which means no full-dataset preprocessing takes place. It should be noted, however, that statistics for the full dataset were gathered before training, and the buckets were defined by sequence-length intervals derived from that process. The first bucket contains SMILES strings of length 1 to 42, the second of length 43 to 66, the third of length 67 to 122, and the last of length 123 to 202.

Due to the length distribution of our dataset, buckets 1 and 2 are present in all training steps, while bucket 3 is present for the majority of minibatches. Molecules falling into bucket 4 appear in most minibatches but usually represent only a very small percentage of the molecules within a minibatch. Given this, we decided not to use bucket 4 in a training step until it reached a threshold of 50 molecules, preventing us from training on a bucket that consistently contained very few molecules, which we believe aided training.

Bucketing combined with gradient accumulation across buckets gave us stable training, maintained training randomization, and reduced the computational time needed compared to the traditional method of keeping GPU memory full at all times, which maximizes raw throughput without accounting for the computation wasted on padding tokens. To be concrete: without bucketing, a single epoch on PubChem alone on 1 V100 GPU would take approx. 1200 hours, while with bucketing the same epoch took only around 60 hours, a 20x speedup from adaptive bucketing. A similar concept, namely micro-batching 74, exists, but we became aware of it only after implementing our adaptive bucketing technique. We have not yet benchmarked our domain-specific bucketing implementation for molecular data against generic micro-batching 74. Using linear attention together with bucketing allowed us to reduce the number of GPUs needed, compared to quadratic attention without bucketing, from roughly 1000 to 16.
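The following sketch illustrates the on-the-fly bucketing step described above: a tokenized minibatch is partitioned by sequence length into the four fixed intervals, each sub-batch is padded only to its own bucket's maximum, and the small fourth bucket is held back until it reaches the threshold. The accumulator logic is simplified relative to the actual training loop.

```python
from torch.nn.utils.rnn import pad_sequence

# Length intervals taken from the text; the threshold of 50 applies to bucket 4.
BUCKETS = [(1, 42), (43, 66), (67, 122), (123, 202)]
held_back = []  # bucket-4 sequences carried over across minibatches

def split_into_buckets(batch):
    """batch: list of 1-D LongTensors of token ids (one per molecule)."""
    sub_batches = []
    for i, (lo, hi) in enumerate(BUCKETS):
        seqs = [s for s in batch if lo <= len(s) <= hi]
        if i == 3:  # bucket 4: defer until at least 50 molecules accumulate
            held_back.extend(seqs)
            if len(held_back) < 50:
                continue
            seqs, held_back[:] = list(held_back), []
        if seqs:
            # Pad only to this bucket's longest sequence, not the global max.
            sub_batches.append(pad_sequence(seqs, batch_first=True, padding_value=0))
    return sub_batches

# Training then runs a forward/backward pass per sub-batch and steps the
# optimizer once per original minibatch (gradient accumulation across buckets).
```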
A.4 Pre-training Scaleout
We give in Figure 2 the estimated training times of MOLFORMER as a function of the number of GPUs used in the parallelization.
B Fine-tuning MOLFORMER for Property Prediction
During the fine-tuning process, where the MOLFORMER weights are not frozen, we experimented with different hyperparameters. In our experiments, we found that batch sizes of both 64 and 128 work best for the downstream tasks. Also, among various learning rates, 3e-5 was the best fit for all measures on the QM9 dataset. We found the beta values used by the optimizer to be important; we set beta1 = 0.9 and beta2 = 0.99, and observed the model to be very sensitive to the beta2 value during fine-tuning. The discriminator used was of a fixed size of 2 layers with a hidden size of 768 for all fine-tuning experiments.
For the frozen strategy, where the embeddings from MOLFORMER are fixed, we use a fully connected model to predict the properties; a sketch of such a prediction head is given below. A hyperparameter sweep was performed for the frozen strategy using grid search, from which we randomly picked 25 different configurations for each task. The model with the lowest validation loss was picked for further analysis. The different values of the frozen-strategy hyperparameters are summarized in the table below.
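As an illustration of the frozen strategy, here is a minimal fully connected head over fixed MOLFORMER embeddings; the 768-dimensional input matches the model's hidden size, while the depth and width shown are examples drawn from the hyperparameter ranges in the table below.

```python
import torch.nn as nn

class FrozenEmbeddingHead(nn.Module):
    """Fully connected head on top of precomputed, frozen MOLFORMER embeddings."""
    def __init__(self, emb_dim=768, hidden=512, n_layers=2, n_outputs=1):
        super().__init__()
        layers, d = [], emb_dim
        for _ in range(n_layers):
            layers += [nn.Linear(d, hidden), nn.ReLU()]
            d = hidden
        layers.append(nn.Linear(d, n_outputs))
        self.net = nn.Sequential(*layers)

    def forward(self, emb):
        # emb: (batch, emb_dim) embeddings computed once with the encoder in
        # eval mode under torch.no_grad(), so no gradients reach the encoder.
        return self.net(emb)
```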
C Dataset, Vocabulary, and Property Units
All our downstream evaluations are performed on tasks from the MoleculeNet dataset 28. All the tasks mentioned in Table 2 use random splits as suggested in 28, while those in Table 1 use scaffold splits as suggested in 26. A brief description of the downstream datasets is given in Tables 2 and 3. We refer the reader to 28 for more details on the specific tasks.
We have observed that various related works have used different units for quantitative analysis on the QM9 dataset without explicitly stating them, making it difficult to compare the relative performance of different methods. We list the units of the measures used in this paper in Table 6. We also give in Tables 4 and 5 statistics of sequence length and vocabulary for the datasets considered in this work. While the vocabulary of each dataset varies, the architecture of our models is identical for all experiments contained in this paper. The vocabulary for our models is defined by the union of the vocabularies of the PubChem and ZINC datasets.
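The vocabulary-union construction can be sketched as follows; the `tokenize` function is a placeholder for the SMILES tokenizer actually used, and the special tokens listed are assumptions.

```python
def build_vocab(datasets, tokenize, specials=("<pad>", "<mask>", "<bos>", "<eos>")):
    """Union of the token sets of all pre-training datasets (here PubChem and ZINC)."""
    tokens = set()
    for smiles_iter in datasets:
        for smiles in smiles_iter:
            tokens.update(tokenize(smiles))
    # Special tokens first so their ids stay stable if the vocabulary grows.
    return {tok: i for i, tok in enumerate(list(specials) + sorted(tokens))}
```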
D Additional results on comparing MOLFORMER-XL with geometry-aware GNNs on regression benchmarks
In this section we show additional results, in Table 7, on the comparison of MOLFORMER with geometry-aware supervised (DimeNet) and self-supervised (GeomGCL, GEM) baselines on the physical chemistry regression benchmarks from MoleculeNet.
E Additional Results and Ablations on QM9 Benchmark
In this section we show additional results and ablations on the pre-training and fine-tuning of MOLFORMER on the QM9 benchmark.
Fine-tuned MOLFORMER-XL versus Baselines
We further report MOLFORMER-XL performance on all twelve property prediction tasks individually within QM9, and compare it against several previously discussed baseline models as well as four additional baselines. The additional baselines included are as follows: (i) a more expressive GNN, specifically 123-GNN 61, (ii) two neural nets that leverage 3D geometry, namely a multitask neural net encoding the Coulomb Matrix (CM) 75 and its GNN variant, the deep tensor neural net (DTNN) 76, and (iii) ChemBERTa 25. The results in Table 8 show that MOLFORMER-XL achieves comparable or better performance to that of the majority of the competitors. Specifically, MOLFORMER-XL outperforms all baselines in terms of average MAE and average standard MAE. ChemBERTa shows the highest average MAE among all. The more expressive 123-GNN performs better on most measures compared to MOLFORMER-XL; however, such powerful networks are known to be difficult to scale (see 77 for an example). As a comparison, the linear attention employed in MOLFORMER-XL ensures linear time complexity.

Table 9. Comparison of MOLFORMER-XL with two exemplary 3D GNN models, SchNet and DimeNet, on QM9 atomization energy/enthalpy (in eV) regression benchmarks.
Impact of MOLFORMER pre-training dataset on downstream tasks
We present in Table 10 and Fig. 3 ablations on the impact of the pre-training dataset on the performance of fine-tuned MOLFORMER in the downstream property prediction tasks on QM9. We see that as the pre-training dataset becomes larger, MOLFORMER achieves better performance (i.e., lower MAE). Note that MOLFORMER-XL refers to MOLFORMER pre-trained on PubChem+ZINC.
Position embeddings
The positional embedding ablation results are collected in Table 3. Different pre-training dataset sizes are also investigated, broken up into (1) MOLFORMER pre-trained on only the QM9 training set (111k molecules), referred to as MOLFORMER-QM9; (2) only PubChem (111M molecules), referred to as MOLFORMER-PubChem; and (3) PubChem+ZINC (1.1 billion+ molecules), i.e., MOLFORMER-XL. Results presented in Table 3 show that MOLFORMER with rotary embeddings and fine-tuning is behind the absolute-positional-embedding model for the smaller datasets, but then wins as the dataset size passes 1 billion molecules. We note two main differences between our linear-time relative rotary attention formulation and the original one from 27: 1) our attention remains normalized, whereas the one proposed in 27 does not; 2) the rotation in 27 is applied to the transformed query and key in the random feature space, whereas we apply it at the key and query level. In our case, this results in an approximation of a kernel acting on the position-modulated keys and queries that encode the relative position. The original formulation, on the other hand, results in an approximation of a product of kernels, one on the space of positions and one on the space of keys and queries. We believe that both the normalization and having a kernel acting on a joint embedding of keys/queries and positions give our formulation an advantage over the original one from 27. A more rigorous investigation will be carried out in future work.
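To make the distinction above concrete, the following sketch applies a rotary rotation directly at the query/key level (before any attention feature map), which is the variant adopted here. It follows the standard RoPE construction from 27 and is illustrative rather than a verbatim reproduction of the MOLFORMER code.

```python
import torch

def rotary_rotate(x, base=10000.0):
    """Apply rotary position embeddings to x of shape (batch, seq, dim).

    Each channel pair (2i, 2i+1) is rotated by angle pos * base^(-i/(dim/2)),
    so dot products between rotated queries and keys depend only on their
    relative positions.
    """
    b, n, d = x.shape
    half = d // 2
    inv_freq = base ** (-torch.arange(half, dtype=x.dtype) / half)
    angles = torch.arange(n, dtype=x.dtype)[:, None] * inv_freq[None, :]  # (n, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# Applied to queries and keys before the linear-attention feature map:
# q, k = rotary_rotate(q), rotary_rotate(k)
```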
With that said, we can see that as the pre-training corpus grows from PubChem only to the more extensive and diverse PubChem+ZINC corpus, the representational power of the model increases, as showcased by the stronger performance on the QM9 benchmark.

Impact of fine-tuning versus frozen embeddings and rotary versus absolute embeddings on downstream tasks
We show in Table 11 the impact of fine-tuning versus using frozen MOLFORMER embeddings from the pre-training phase on QM9 benchmark performance, for both rotary and absolute embeddings. We see that fine-tuning with rotary embeddings achieves the best performance.
Robustness Across Data Folds
We report performance comparisons on the individual property prediction tasks within QM9 in SI Table 10. In order to ensure the robustness of these results across data splits, we also provide the performance of MOLFORMER-XL on QM9 tasks using 5 cross-validation folds (SI Table 11).
Specifically, we report in Table 11 the mean and standard deviation of the MAE for 5 different folds of the data, split into 80% training, 10% validation, and 10% test. Most of the related work does not perform cross-validation and only reports results on a single split. We note that the standard deviations are quite low for most of the predictions, and the mean errors are in line with the main paper for all folds of all tasks, which suggests the MOLFORMER representations are robust.
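A minimal sketch of this evaluation protocol with scikit-learn is shown below; `train_and_eval` is a hypothetical helper standing in for a full fine-tuning run on one fold, and the inputs are assumed to be numpy arrays.

```python
import numpy as np
from sklearn.model_selection import KFold

def cross_validated_mae(X, y, train_and_eval, n_splits=5, seed=0):
    """Mean and std of test MAE over 5 folds. Each held-out 20% fold is split
    in half into validation and test, giving an 80/10/10 split overall."""
    maes = []
    for train_idx, hold_idx in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        val_idx, test_idx = np.array_split(hold_idx, 2)
        maes.append(train_and_eval(X[train_idx], y[train_idx],
                                   X[val_idx], y[val_idx],
                                   X[test_idx], y[test_idx]))
    return float(np.mean(maes)), float(np.std(maes))
```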
G MOLFORMER Attention Visualization and Structure Discovery
In this section we present a visual comparison of the attention representations of two molecules from the QM9 test dataset (gdb_62509 and gdb_1105) generated by the linear and full attention model variants. Both these MOLFORMER-XL models have rotary positional embeddings in place. We picked gdb_62509 and gdb_1105 based on the best cosine similarity from the medium bucket category; they are the same molecules as the ones used in Table 3. For a full quantitative comparison on this metric, refer to Table 3. The attention head weights of each layer are averaged, and all layers are presented in Figures 5 and 6. Please note that the colorbars on the subplots use different scales for different variants. Several interesting observations can be made from these representations. For both full attention and its linear counterpart, most of the attention is directed toward the closing parentheses in the higher layers (layer 9 and up). Therefore, it is reasonable to avoid those layers when identifying meaningful features with respect to structure. We also notice that intermediate layers 8 and 9 of the linear-attention-with-rotary variant capture the 3D structure of the molecules better. This reinforces the observation made in the quantitative analysis reflected in Table 3.
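The head-averaging step used for these maps can be sketched as follows; it assumes the model exposes per-layer attention tensors of shape (batch, heads, seq, seq), as HuggingFace-style transformers do when called with `output_attentions=True`.

```python
import torch

def averaged_attention_maps(attentions, atom_token_mask):
    """Average attention over heads for each layer and keep only rows/columns
    of tokens that map to constituent atoms (for visual clarity).

    attentions: list of tensors (batch, heads, seq, seq), one per layer.
    atom_token_mask: boolean tensor (seq,) marking atom tokens.
    """
    maps = []
    idx = atom_token_mask.nonzero(as_tuple=True)[0]
    for layer_att in attentions:
        avg = layer_att[0].mean(dim=0)          # average-pool the heads
        maps.append(avg[idx][:, idx].cpu())     # restrict to atom tokens
    return maps
```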
While the model variations have similar downstream task performance in our evaluations, the differences in their attention weights and in their ability to discover structural information from the SMILES representation are insightful and intriguing, specifically with regard to how the linear-attention embedding captures structural information of molecules. Also, it is worth noting that Figure 3 further elaborates this point, showing the average learned attention coefficients in an intermediate attention layer of MOLFORMER-XL with rotary positional embeddings. Attention between different pairs of atomic tokens is compared to the corresponding covalent bond connectivity and 3D distances between atom pairs (complete attention matrices for the same molecules across all layers are shown in Figures 5 and 6).
Figure and Table Captions
Figure 2. (a) Training and (b) validation losses of our linear-attention MOLFORMER with rotary (relative) and absolute position embeddings on PubChem. We see that both rotary and absolute MOLFORMER have graceful training curves. Our rotary linear-attention MOLFORMER leads to lower training and validation losses than MOLFORMER with absolute position embeddings.
Figure 3. Visualization of the learned attention map (using either full or linear attention) under rotary embedding and the corresponding molecular structure (bond connectivity and 3D distance in Angstroms) for two random molecules: 'CC1(C)C(C)(O)C1(C)O' (a) and 'CC(C)C(C)(C)O' (b). The attention map (only tokens that map to constituent atoms are shown for clarity), comprised of the average-pooled heads of an intermediate attention layer, exhibits awareness of both covalent bond connectivity and interatomic long-range spatial relationships. The linear attention variant captures (encircled in green) the medium 3D range distances better in comparison to its counterpart.
26. Wang, Y., Wang, J., Cao, Z. & Barati Farimani, A. Molecular contrastive learning of representations via graph neural networks. Nat. Mach. Intell. 4, 279-287 (2022).
27. Su, J., Lu, Y., Pan, S., Wen, B. & Liu, Y. Roformer: Enhanced transformer with rotary position embedding. arXiv preprint arXiv:2104.09864 (2021).
28. Wu, Z. et al. Moleculenet: a benchmark for molecular machine learning. Chem. Science 9, 513-530 (2018).
29. Liu, Y. et al. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019).
30. Devlin, J., Chang, M.-W., Lee, K. & Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the NAACL: HLT, Vol. 1 (2019).
31. Katharopoulos, A., Vyas, A., Pappas, N. & Fleuret, F. Transformers are rnns: Fast autoregressive transformers with linear attention. In International Conference on Machine Learning, 5156-5165 (PMLR, 2020).
32. Hu, W. et al. Strategies for pre-training graph neural networks. In International Conference on Learning Representations (2020).
33. Liu, S., Demirel, M. F. & Liang, Y. N-gram graph: Simple unsupervised representation for graphs, with applications to molecules. In NeurIPS, 8464-8476 (2019).
34. Chen, T., Kornblith, S., Norouzi, M. & Hinton, G. A simple framework for contrastive learning of visual representations. In III, H. D. & Singh, A. (eds.) Proceedings of the 37th International Conference on Machine Learning, vol. 119 of Proceedings of Machine Learning Research, 1597-1607 (PMLR, 2020).
35. Oord, A. v. d., Li, Y. & Vinyals, O. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748 (2018).
36. Liu, S. et al. Pre-training molecular graph representation with 3d geometry. In International Conference on Learning Representations (2022).
37. Gasteiger, J., Groß, J. & Günnemann, S. Directional message passing for molecular graphs. In International Conference on Learning Representations (2020).
38. Fang, X. et al. Geometry-enhanced molecular representation learning for property prediction. Nat. Mach. Intell. 4, 127-134 (2022).
39. Altae-Tran, H., Ramsundar, B., Pappu, A. S. & Pande, V. Low data drug discovery with one-shot learning. ACS Central Science 3, 283-293 (2017).
40. Xiong, Z. et al. Pushing the boundaries of molecular representation for drug discovery with the graph attention mechanism. J. Medicinal Chemistry 63, 8749-8760 (2019).
41. Schütt, K. et al. Schnet: A continuous-filter convolutional neural network for modeling quantum interactions. In Guyon, I. et al. (eds.) Advances in Neural Information Processing Systems, vol. 30 (Curran Associates, Inc., 2017).
Figure 1. Training and validation losses of our linear-attention MOLFORMER with rotary (relative) and absolute position embeddings on the PubChem and PubChem+ZINC (>1 billion data points) datasets. We see that both rotary and absolute MOLFORMER have graceful training curves. Our rotary linear-attention MOLFORMER leads to lower training and validation losses than MOLFORMER with absolute position embeddings. This observation led us to focus on MOLFORMER with rotary position embeddings.
Figure 2. Estimated training times for our linear-attention MOLFORMER with rotary embeddings on (a) PubChem and (b) PubChem+ZINC datasets, taken after 250 iterations. We see that training time decreases slightly sub-linearly as GPUs are added. Training time also scales approximately linearly as more data is added.
Figure 3. Mean absolute errors with varying training set size. Fine-tuning of MOLFORMER with rotary embeddings for prediction of various properties on QM9 molecules for different training set sizes.
Figure 4. t-SNE projection of frozen MOLFORMER embeddings (no fine-tuning) of the BBBP (left), ClinTox (middle), and discretized LUMO (right) datasets.
42. Rives, A. et al. Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences. Proc. Natl. Acad. Sci. USA 118, e2016239118, DOI: 10.1073/pnas.2016239118 (2021).
43. Vig, J. et al. Bertology meets biology: Interpreting attention in protein language models. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021 (OpenReview.net, 2021).
44. Urbina, F., Lentzos, F., Invernizzi, C. & Ekins, S. Dual use of artificial-intelligence-powered drug discovery. Nat. Mach. Intell. 4, 189-191, DOI: 10.1038/s42256-022-00465-9 (2022).
45. Choromanski, K. M. et al. Rethinking attention with performers. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021 (OpenReview.net, 2021).
46. Shaw, P., Uszkoreit, J. & Vaswani, A. Self-attention with relative position representations. In NAACL-HLT, 464-468 (Association for Computational Linguistics, New Orleans, Louisiana, 2018).
47. Raffel, C. et al. Exploring the limits of transfer learning with a unified text-to-text transformer. JMLR 21, 1-67 (2020).
48. Ke, G., He, D. & Liu, T.-Y. Rethinking positional encoding in language pre-training. In ICLR (2021).
49. Kim, S. et al. PubChem 2019 update: improved access to chemical data. Nucleic Acids Res. (2018).
50. Irwin, J. J. & Shoichet, B. K. ZINC, a free database of commercially available compounds for virtual screening. J. Chem. Inf. Model. 45, 177-182 (2005).
51. Schwaller, P. et al. Molecular transformer: A model for uncertainty-calibrated chemical reaction prediction. ACS Cent. Sci. 5, 1572-1583, DOI: 10.1021/acscentsci.9b00576 (2019).

Extended data Table 1. Comparison of MOLFORMER-XL with fine-tuned MOLFORMER models that are either of smaller size or pre-trained on smaller datasets on the BBBP, HIV, Sider, Clintox, Tox21 and BACE classification benchmarks.

Extended data Table 2. Performance comparison of fine-tuned MOLFORMER-XL with fine-tuned MOLFORMER models that are either of smaller size or pre-trained on smaller datasets on the QM9 (avg MAE), QM8 (avg MAE), ESOL (RMSE), FreeSolv (RMSE), and Lipophilicity (RMSE) regression benchmarks.

Extended data Table 3. Comparison of different MOLFORMER variants on the QM9 test set, in terms of average MAE and average standard MAE. Variants considered are MOLFORMER pre-trained using QM9 only, PubChem only, and the PubChem+ZINC dataset. The variants with and without fine-tuning on the downstream task are compared, as well as models with (✓ Rotary) and without (× Rotary) rotary embeddings. Our best candidate variant (for Table 8) is chosen based on the average MAE (mean absolute error) score; lower is better.
Extended data Table 1:

Dataset                    BBBP   HIV    BACE   SIDER  Clintox  Tox21
10% ZINC + 10% PubChem     91.5   81.3   86.6   68.9   94.6     84.5
10% ZINC + 100% PubChem    92.2   79.2   86.3   69.0   94.7     84.5
100% ZINC                  89.9   78.4   87.7   66.8   82.2     83.2
MOLFORMER-Base             90.9   77.7   82.8   64.8   61.3     43.2
MOLFORMER-XL               93.7   82.2   88.2   69.0   94.8     84.7

Extended data Table 2:

Dataset                 QM9     QM8     ESOL    FreeSolv  Lipophilicity
10% Zinc + 10% Pub      1.7754  0.0108  0.3295  0.2221    0.5472
10% Zinc + 100% Pub     1.9093  0.0102  0.2775  0.2050    0.5331
100% Zinc               1.9403  0.0124  0.3023  0.2981    0.5440
MOLFORMER-Base          2.2500  0.0111  0.2798  0.2596    0.6492
MOLFORMER-XL            1.5984  0.0102  0.2787  0.2308    0.5298
Extended data Table 3:

Pre-training Data →   QM9 Only (111 × 10^3)              PubChem Only (111 × 10^6)          PubChem+ZINC (> 1.1 × 10^9)
Measure ↓             Frozen    Fine-tuned  Fine-tuned   Frozen    Fine-tuned  Fine-tuned   Frozen    Fine-tuned  Fine-tuned
                      × Rotary  × Rotary    ✓ Rotary     × Rotary  × Rotary    ✓ Rotary     × Rotary  × Rotary    ✓ Rotary
Avg MAE               8.3808    2.4621      2.6604       8.2600    2.9680      3.3990       2.5497    1.8620      1.5894
Avg std MAE           0.2390    0.0843      0.0937       0.2447    0.0801      0.1355       0.0978    0.0611      0.0567

Correlation   ChemBERTa  MOLFORMER-XL
Fingerprint   0.48       0.64
MCS           -0.44      -0.60
Table 1. Different values of the hyperparameters for the frozen models.

Hyperparameter     Values
Learning Rate      0.001, 0.0001, 0.0005, 0.00005
Batch Size         64, 128, 256
Hidden Dimension   64, 128, 256, 512, 1024
Number of Layers   2, 3, 4
Table 3. Minimum, maximum, mean, and standard deviation of sequence length for the datasets considered in this work. The vocabulary size after tokenization is given in the last column.

Pre-trained Data        Min  Max   Mean   Std    Vocab Size
QM9                     1    22    14.76  2.02   30
Zinc                    4    152   43.08  8.70   113
10 PubChem + 10 Zinc    2    2031  42.52  10.06  1044
PubChem                 1    2211  43.24  22.21  2349
100 PubChem + 10 Zinc   2    2211  42.86  16.93  2355
PubChem + Zinc          1    2211  44.76  14.55  2362
Table 4. Description of regression datasets used for downstream evaluations.

Table 5. Top five most frequent tokens for each data source (columns: Pre-trained Data, Most Frequent Tokens).
Table 6. Units of QM9 target measures.

Table 7. Performance of fine-tuned MOLFORMER-XL and other supervised and self-supervised geometry-aware GNN baselines on the ESOL, FreeSolv, and Lipophilicity regression benchmarks. Baseline performances are taken from references 36, 38.

Table 8. MOLFORMER performance on the QM9 test set. Our best MOLFORMER variant is pre-trained on the PubChem+ZINC dataset and fine-tuned for each measure. Baseline performance values are taken from 28, 40, 62. Blue and orange indicate the best and second-best performing model, respectively. MOLFORMER trained with rotary embeddings on PubChem+ZINC achieves the best avg MAE and avg std MAE across all tasks.
Task                   DimeNet 37  GeomGCL 36  GEM 38  MOLFORMER-XL
ESOL (RMSE)            0.633       0.575       0.798   0.2787
FreeSolv (RMSE)        0.978       0.866       1.877   0.2308
Lipophilicity (RMSE)   0.614       0.541       0.660   0.5289
                Graph-Based                   Geometry-Based             SMILES-Based
Measure         A-FP     123-gnn  GC        CM       DTNN    MPNN      MOLFORMER-XL  ChemBERTa
α               0.492    0.27     1.37      0.85     0.95    0.89      0.3327        0.8510
C_v             0.252    0.0944   0.65      0.39     0.27    0.42      0.1447        0.4234
G               0.893    0.0469   3.41      2.27     2.43    2.02      0.3362        4.1295
gap             0.00528  0.0048   0.01126   0.0086   0.0112  0.0066    0.0038        0.0052
H               0.893    0.0419   3.41      2.27     2.43    2.02      0.2522        4.0853
ε_homo          0.00358  0.00337  0.00716   0.00506  0.0038  0.00541   0.0029        0.0044
ε_lumo          0.00415  0.00351  0.00921   0.00645  0.0051  0.00623   0.0027        0.0041
µ               0.451    0.476    0.583     0.519    0.244   0.358     0.3616        0.4659
R^2             26.839   22.90    35.97     46.00    17.00   28.5      17.0620       86.150
U_0             0.898    0.0427   3.41      2.27     2.43    2.05      0.3211        3.9811
U               0.893    0.111    3.41      2.27     2.43    2.00      0.2522        4.3768
ZPVE            0.00207  0.00019  0.00299   0.00207  0.0017  0.00216   0.0003        0.0023
Avg MAE         2.6355   1.9995   4.3536    4.7384   2.3504  3.1898    1.5894        8.7067
Avg std MAE     0.0854   0.0658   0.1683    0.1281   0.1008  0.1108    0.0567        0.1413
Pre-training Data →    QM9 Only                           PubChem Only                       PubChem+ZINC
Measure ↓              Frozen    Fine-tuned  Fine-tuned   Frozen    Fine-tuned  Fine-tuned   Frozen    Fine-tuned  Fine-tuned
                       × Rotary  × Rotary    ✓ Rotary     × Rotary  × Rotary    ✓ Rotary     × Rotary  × Rotary    ✓ Rotary
α                      1.6258    0.5078      0.6001       1.5470    0.5280      0.8452       0.5312    0.3713      0.3327
C_v                    1.0176    0.1589      0.1906       0.9984    0.1506      0.2701       0.2303    0.1584      0.1447
G                      3.2528    0.9985      0.7479       2.0089    0.8626      1.5920       0.3066    0.6861      0.3362
gap                    0.0187    0.0057      0.0061       0.0182    0.0050      0.0109       0.0036    0.0039      0.0038
H                      1.9221    1.1579      1.0250       2.3627    1.3342      0.7088       0.3675    0.7369      0.2522
ε_homo                 0.0115    0.0042      0.0046       0.0147    0.0038      0.0082       0.0062    0.0028      0.0029
ε_lumo                 0.0157    0.0041      0.0056       0.0148    0.0036      0.0080       0.0058    0.0025      0.0027
µ                      0.8394    0.4380      0.4630       0.8509    0.4284      0.6166       0.6463    0.3921      0.3616
R^2                    86.9461   24.0785     25.9482      87.2816   30.2904     34.0425      27.5962   18.8286     17.0620
U_0                    3.0626    1.1462      1.3168       2.0613    0.8969      1.5503       0.4500    0.4244      0.3211
U                      1.8555    1.0454      1.6158       1.9638    1.1122      1.1351       0.4480    0.7370      0.2522
ZPVE                   0.0020    0.0011      0.0012       0.0020    0.0008      0.0012       0.0004    0.0002      0.0003
Avg MAE                8.3808    2.4621      2.6604       8.260     2.968       3.3990       2.5497    1.8620      1.5894
Avg std MAE            0.2390    0.0843      0.0937       0.2447    0.0801      0.1355       0.0978    0.0611      0.0567
# Wins for fixed data  0         10          2            0         11          1            2         3           7
Table 10. Comparison of different MOLFORMER variants on the QM9 test set. Models on the left of the table are pre-trained using QM9 only, the models in the middle on PubChem only, whereas the models on the right are pre-trained on the PubChem+ZINC dataset. The variants with (✓) and without (×) rotary embeddings are compared. Our best candidate variant (for Table 8) is picked based on the average MAE score.
[Plot for Figure 3: MAE (y-axis, range 0-12) versus fraction of QM9 training data (25%, 50%, 75%, 100%), titled "Impact of Training Data", with one curve per property: homo, u298, alpha, h298, lumo, g298, u0, mu.]
Acknowledgements
We thank IBM Research for supporting this work.

Author information
Contributions: All authors conceived the project, developed the MoLFormer framework, and designed experiments. J.R., B.B., V.C., and I.P. performed model training, fine-tuning, and inference experiments. I.P. and P.D. performed attention map analyses. All authors analysed the results and wrote the paper.

Ethics declarations
Competing interests: The authors declare no competing interests.

Supplementary Information

F Insights into MOLFORMER t-SNE Visualization
We performed the following set of experiments in order to evaluate whether the MOLFORMER embeddings capture molecular properties as well as structural aspects. First, we investigate a t-SNE 78 projection of MOLFORMER-XL embeddings sampled from two classes of the BBBP dataset and two classes of the ClinTox dataset, as shown in Figure 4 (left and middle). The two classes in BBBP separate molecules that penetrate the blood-brain barrier (penetrating) from molecules that do not (non-penetrating). ClinTox is separated into a class of toxic molecules and one of non-toxic molecules. Finally, we discretize the QM9 dataset according to the LUMO energy, where one class consists of molecules with LUMO energy < 0 Hartree and the other of molecules with LUMO energy >= 0 Hartree (Figure 4, right). It can be seen in Figure 4, starting with BBBP on the very left, that even without task-specific fine-tuning, MOLFORMER-XL is able to discriminate between the two classes, suggesting that molecular property information has been captured in this universal representation. For ClinTox (middle plot in Figure 4), clustering of toxic and non-toxic molecules is apparent, but the clusters extend beyond the toxic and non-toxic classes. With ClinTox one can see that as the center of a cluster becomes more concentrated, there is less overlap between the toxic and non-toxic classes in that area of the cluster. Also, generally, if a toxic molecule is located near a cluster of non-toxic molecules, it lies on the border of the cluster. Figure 4 (right) shows a separation tendency between the high/low LUMO classes, but with an obvious amount of overlap in the non-fine-tuned embedding space.

H Assets and License
The following
Rogers, D. & Hahn, M. Extended-connectivity fingerprints. J. Chemical Information and Modeling 50, 742-754 (2010).
Rupp, M., Tkatchenko, A., Müller, K.-R. & von Lilienfeld, O. A. Fast and accurate modeling of molecular atomization energies with machine learning. Phys. Review Letters 108, 058301 (2012).
Weininger, D. Smiles, a chemical language and information system. 1. Introduction to methodology and encoding rules. J. Chemical Information and Computer Sciences 28, 31-36 (1988).
Goh, G. B., Hodas, N. O., Siegel, C. & Vishnu, A. Smiles2vec: An interpretable general-purpose deep neural network for predicting chemical properties. arXiv preprint arXiv:1712.02034 (2017).
Öztürk, H., Özgür, A. & Ozkirimli, E. Deepdta: deep drug-target binding affinity prediction. Bioinformatics 34, i821-i829 (2018).
Paul, A. et al. Chemixnet: Mixed dnn architectures for predicting chemical properties using multiple molecular representations. arXiv preprint arXiv:1811.08283 (2018).
Shin, B., Park, S., Kang, K. & Ho, J. C. Self-attention based molecule representation for predicting drug-target interaction. In Machine Learning for Healthcare Conference, 230-248 (PMLR, 2019).
Daylight Chemical Information Systems, Inc. SMARTS™, a language for describing molecular patterns (2007).
Krenn, M., Häse, F., Nigam, A., Friederich, P. & Aspuru-Guzik, A. Self-referencing embedded strings (SELFIES): A 100% robust molecular string representation. Mach. Learn. Sci. Technol. 1, 045024, DOI: 10.1088/2632-2153/aba947 (2020).
Gao, W., Fu, T., Sun, J. & Coley, C. W. Sample efficiency matters: A benchmark for practical molecular optimization. In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (2022).
Jo, J., Kwak, B., Choi, H.-S. & Yoon, S. The message passing neural networks for chemical property prediction on smiles. Methods 179, 65-72 (2020). Interpretable machine learning in bioinformatics.
Duvenaud, D. et al. Convolutional networks on graphs for learning molecular fingerprints. In Proceedings of the 28th International Conference on Neural Information Processing Systems, Volume 2, NIPS'15 (2015).
Defferrard, M., Bresson, X. & Vandergheynst, P. Convolutional neural networks on graphs with fast localized spectral filtering. Adv. Neural Information Processing Systems 29 (2016).
Kipf, T. N. & Welling, M. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations (2017).
Li, Y., Tarlow, D., Brockschmidt, M. & Zemel, R. S. Gated graph sequence neural networks. In Bengio, Y. & LeCun, Y. (eds.) 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings (2016).
Veličković, P. et al. Graph attention networks. In International Conference on Learning Representations (2018).
Hamilton, W., Ying, Z. & Leskovec, J. Inductive representation learning on large graphs. Adv. Neural Information Processing Systems 30 (2017).
Gilmer, J., Schoenholz, S. S., Riley, P. F., Vinyals, O. & Dahl, G. E. Neural message passing for quantum chemistry. In International Conference on Machine Learning, 1263-1272 (PMLR, 2017).
Schlichtkrull, M. et al. Modeling relational data with graph convolutional networks. In European Semantic Web Conference, 593-607 (Springer, 2018).
Liao, R., Zhao, Z., Urtasun, R. & Zemel, R. S. Lanczosnet: Multi-scale deep graph convolutional networks. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019 (OpenReview.net, 2019).
Chen, P., Liu, W., Hsieh, C.-Y., Chen, G. & Zhang, S. Utilizing edge features in graph neural networks via variational information maximization. arXiv preprint arXiv:1906.05488 (2019).
Kirkpatrick, P. & Ellis, C. Chemical space. Nature 432, 823-824 (2004).
Vaswani, A. et al. Attention is all you need. Adv. Neural Information Processing Systems 30 (2017).
Bommasani, R. et al. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258, DOI: 10.48550/ARXIV.2108.07258 (2021).
Chithrananda, S., Grand, G. & Ramsundar, B. Chemberta: large-scale self-supervised pretraining for molecular property prediction. arXiv preprint arXiv:2010.09885 (2020).
RDKit: Open-source cheminformatics. http://www.rdkit.org (2021). [Online; accessed 28-May-2021].
Lu, C. et al. Molecular property prediction: A multilevel quantum interactions modeling perspective. In AAAI, 1052-1060, DOI: 10.1609/aaai.v33i01.33011052 (AAAI Press, 2019).
Yang, K. et al. Analyzing learned molecular representations for property prediction. J. Chem. Inf. Model. 59, 3370-3388 (2019).
Raffel, C. et al. Exploring the limits of transfer learning with a unified text-to-text transformer. JMLR (2020).
Beltagy, I., Peters, M. E. & Cohan, A. Longformer: The long-document transformer. arXiv:2004.05150 (2020).
Kitaev, N., Kaiser, L. & Levskaya, A. Reformer: The efficient transformer. In ICLR (2020).
Wang, S., Li, B. Z., Khabsa, M., Fang, H. & Ma, H. Linformer: Self-attention with linear complexity. arXiv:2006.04768 (2020).
Bjerrum, E. J. Smiles enumeration as data augmentation for neural network modeling of molecules. arXiv:1703.07076 (2017).
Coley, C. W., Barzilay, R., Green, W. H., Jaakkola, T. S. & Jensen, K. F. Convolutional embedding of attributed molecular graphs for physical property prediction. J. Chemical Information and Modeling 57, 1757-1772 (2017).
Morris, C. et al. Weisfeiler and leman go neural: Higher-order graph neural networks. In AAAI, vol. 33, 4602-4609 (2019).
Maron, H., Ben-Hamu, H., Serviansky, H. & Lipman, Y. Provably powerful graph networks. Adv. Neural Information Processing Systems 32 (2019).
Elnaggar, A. et al. Prottrans: towards cracking the language of lifes code through self-supervised deep learning and high performance computing. IEEE Transactions on Pattern Analysis and Machine Intelligence (2021).
Xue, D. et al. X-mol: large-scale pre-training for molecular understanding and diverse molecular analysis. bioRxiv, DOI: 10.1101/2020.12.23.424259 (2020).
Wang, S., Guo, Y., Wang, Y., Sun, H. & Huang, J. Smiles-bert: large scale unsupervised pre-training for molecular property prediction. In Proceedings of the 10th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics, 429-436 (2019).
Kim, H., Lee, J., Ahn, S. & Lee, J. R. A merged molecular representation learning for molecular properties prediction with a web-based service. Sci. Reports 11, 1-9 (2021).
Irwin, R., Dimitriadis, S., He, J. & Bjerrum, E. J. Chemformer: a pre-trained transformer for computational chemistry. Mach. Learn. Sci. Technol. 3, 015022 (2022).
Rong, Y. et al. Self-supervised graph transformer on large-scale molecular data. Adv. Neural Inf. Process. Syst. 33, 12559-12571 (2020).
Zhang, Z., Liu, Q., Wang, H., Lu, C. & Lee, C.-K. Motif-based graph self-supervised learning for molecular property prediction. Adv. Neural Inf. Process. Syst. 34, 15870-15882 (2021).
Wang, Y., Magar, R., Liang, C. & Barati Farimani, A. Improving molecular contrastive learning via faulty negative mitigation and decomposed fragment contrast. J. Chem. Inf. Model. (2022).
Zhu, J. et al. Dual-view molecule pre-training. arXiv preprint arXiv:2106.10234 (2021).
Stärk, H. et al. 3d infomax improves gnns for molecular property prediction. In International Conference on Machine Learning, 20479-20502 (PMLR, 2022).
You, Y. et al. Large batch optimization for deep learning: Training bert in 76 minutes. In International Conference on Learning Representations (2020).
Falcon, W. A. et al. PyTorch Lightning. GitHub, https://github.com/PyTorchLightning/pytorch-lightning (2019).
Rupp, M., Tkatchenko, A., Müller, K.-R. & von Lilienfeld, O. A. Fast and accurate modeling of molecular atomization energies with machine learning. Phys. Rev. Lett. 108, 058301, DOI: 10.1103/PhysRevLett.108.058301 (2012).
Schütt, K. T., Arbabzadah, F., Chmiela, S., Müller, K. R. & Tkatchenko, A. Quantum-chemical insights from deep tensor neural networks. Nat. Communications 8, 1-8 (2017).
Vignac, C., Loukas, A. & Frossard, P. Building powerful and equivariant graph neural networks with structural message-passing. Adv. Neural Inf. Process. Syst. 33, 14143-14155 (2020).
van der Maaten, L. & Hinton, G. Visualizing data using t-sne. J. Mach. Learn. Res. 9, 2579-2605 (2008).
Exact approaches for the Connected Vertex Cover problem

Manuel Aprile ([email protected])
Mathematics Department, Via Trieste 63, Università degli studi di Padova, 35121 Padova, Italy

arXiv:2203.09868, 17 Feb 2023

Keywords: Connected Vertex Cover · Extended formulations · Branch and bound

Abstract. Given a graph G, the Connected Vertex Cover problem (CVC) asks to find a minimum cardinality vertex cover of G that induces a connected subgraph. In this paper we describe some approaches to solve the CVC problem exactly. First, we give compact mixed-integer extended formulations for CVC: these are the first formulations proposed for this problem, and they can be easily adapted to variations of the problem such as Tree Cover. Second, we describe a simple branch and bound algorithm for the CVC problem. Finally, we implement our algorithm and compare its performance against our best formulation: contrary to what usually happens for the classical Vertex Cover problem, our formulation outperforms the branch and bound algorithm.
Introduction
Given a graph G = (V, E), a subset of vertices C ⊆ V is a vertex cover of G if every edge of G has at least one endpoint in C. The problem of finding a vertex cover of minimum cardinality in a graph is equivalent to finding a maximum stable set (or a maximum clique in the complement graph) and is one of the best studied problems in theoretical computer science. In this paper we study one of the most popular variants of the minimum Vertex Cover (VC) problem, where we aim at finding a minimum connected vertex cover (CVC): i.e., we additionally require the subgraph G[C] induced by C to be connected. We call this the CVC problem.
The CVC problem has applications in wireless network design, where one aims at placing relay stations on the network so that they cover all transmission links (the edges of the network) and are all connected to each other.
Similarly to the VC problem, the CVC problem is NP-hard [15] and admits a polynomial-time 2-approximation algorithm [24]. On the other hand, the CVC problem is NP-hard even if the input graph is restricted to be bipartite [13]: this is surprising, as Vertex Cover is polynomially solvable for bipartite graphs, where, thanks to the famous König-Egerváry Theorem, it amounts to finding a maximum matching.
The CVC problem has received attention especially from the point of view of parameterized algorithms [16,22] and approximation algorithms [12,24,9]. An aspect that did not receive much attention is that of solving the CVC problem in practice: moreover, prior to this paper there were no mathematical programming formulations for the problem. Such formulations are usually easy to implement and are flexible to the addition of extra constraints to the problem, an advantage for real-world applications. Unlike for the CVC problem, there is a wealth of methods for solving the VC problem, the most effective being branch and bound algorithms (see [26] for a survey), and there are many linear and non-linear formulations for VC and the related maximum clique and maximum stable set problems [23,19,4].
A key feature of the CVC problem that we exploit in this paper is that its constraints can be modelled as linear constraints from two polytopes: the vertex cover polytope and the spanning tree polytope. Both are well-studied polytopes for which a large number of extended formulations is known [5,4,7,14,20,25]: those are formulations where extra variables are used, other than the variables of the original polytope, in order to limit the number of inequalities.
In this paper we aim at partially filling the gap between VC and CVC by proposing mixed-integer extended formulations for the CVC problem. Our main contribution is a mixed integer formulation for the CVC problem with a relatively small number of variables (linear in the number of edges of the input graph). The formulations we propose also lend themselves to modelling related problems as the Tree Cover problem [8] (see Section 5). As an additional contribution, we also describe a simple branch and bound algorithm for CVC, by modifying a standard algorithm for the maximum stable set problem. Finally, we perform numerical experiments to compare the various approaches. In our experiments, the proposed mixed-integer formulation solves the problem much faster than the branch and bound algorithm. This is interesting since, for the general Vertex Cover problem, combinatorial algorithms usually outperform linear formulations.
The paper is organized as follows: this introduction terminates with Section 1.1, which gives some basic terminology and notation; in Section 2 we give our formulations for CVC and prove their correctness; the branch and bound algorithm is described in Section 3; numerical experiments are given in Section 4; finally, we conclude with some further research directions in Section 5.
Preliminaries
Throughout the paper we let G = (V, E) be a connected graph. This is natural because, ignoring exceptions such as isolated vertices, only connected graphs admit connected vertex covers. A set U ⊆ V is stable if the subgraph G[U ] induced by U does not contain any edge. Clearly, a subset U ⊆ V is a vertex cover if and only if its complement V \ U is stable. Hence, solving the CVC problem amounts to finding the maximum stable set S such that the graph G \ S obtained by removing S is connected. Finally, a subgraph of G is a spanning tree of G if it is a tree and contains all vertices of G: we usually identify a spanning tree with a set of edges F ⊆ E.
For sets U ⊆ A, we denote by χ^U ∈ {0,1}^A the incidence vector of U, which satisfies χ^U_v = 1 if and only if v ∈ U. We will use incidence vectors for subsets of vertices, edges, or arcs in directed graphs. For a vector x ∈ R^A, we often write x(U) to denote ∑_{u∈U} x_u.
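To fix ideas, here is a small brute-force sketch that checks the CVC property with networkx; it runs in exponential time and serves only to make the definitions concrete, not as one of the exact approaches developed in this paper.

```python
import itertools
import networkx as nx

def is_cvc(G, C):
    """C is a connected vertex cover: it hits every edge and G[C] is connected."""
    covers = all(u in C or v in C for u, v in G.edges())
    return covers and len(C) > 0 and nx.is_connected(G.subgraph(C))

def min_cvc_bruteforce(G):
    """Smallest CVC of a connected graph G by exhaustive search."""
    nodes = list(G.nodes())
    for size in range(1, len(nodes) + 1):
        for C in itertools.combinations(nodes, size):
            if is_cvc(G, set(C)):
                return set(C)
    return None
```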
Mixed-Integer programming formulations
A compact integer formulation of the Vertex Cover problem is well known: it suffices to use a variable x_v for each node v of our graph G, and ask that x_u + x_v ≥ 1 for each edge uv of G. On the other hand, it is not trivial to come up with a formulation for CVC, and we do not know any formulation that only uses node variables. The reason behind this difficulty is that imposing connectedness in an induced subgraph is a difficult constraint to model. Notice that a graph is connected if and only if it admits a spanning tree. Hence, to model connectedness we resort to the spanning tree polytope of G, denoted by STP(G), defined as the convex hull of the incidence vectors of all the spanning trees of G. The basic idea that underlies all the formulations in this section is to add edge variables to the node variables, and to impose that these edge variables model a spanning tree of the subgraph induced by our vertex cover. We first propose the following formulation, based on the classical linear description of STP(G) given by Edmonds [11].
P_stp = { x ∈ {0,1}^V | ∃ y ∈ [0,1]^E :
    x_u + x_v ≥ 1              ∀ (u,v) ∈ E      (1)
    y(E(U)) ≤ |U| − 1          ∀ ∅ ≠ U ⊆ V      (2)
    y(E) = x(V) − 1                              (3)
    y_uv ≤ x_u,  y_uv ≤ x_v    ∀ (u,v) ∈ E }.   (4)
Lemma 1. Let G = (V, E) be a connected graph. Then C ⊆ V is a CVC if and only if (χ C , y) ∈ P stp for some y ∈ R E .
Proof. If C is a CVC, then fix any spanning tree F of G[C]. Then χ C clearly satisfies Constraints (1); moreover, setting y = χ F can be easily seen to satisfy Constraints (2), (3), (4).
On the other hand, assume that (χ C , y) ∈ P stp . Then C ⊆ V is clearly a vertex cover. Moreover, (4) implies y e = 0 for each e ∈ E \ E(C), hence the projection y ′ of y to variables E(C) is in the spanning tree polytope of G[C], due to constraints (2),(3) (notice that x(V ) − 1 = |C| − 1). The spanning tree polytope of G[C] is then non-empty, therefore G[C] is connected.
The description above has an exponential number of constraints. There are well known extended formulations of size O(n^3) for the spanning tree polytope of an n-vertex graph [25,20], and smaller extended formulations for special classes of graphs [7,14]. Therefore, we would like to turn any formulation for the spanning tree polytope into a formulation for CVC. This can be done by going through the forest polytope of G, STP↓(G), defined as the convex hull of incidence vectors of forests of G. The same proof as for Lemma 1 shows that a correct formulation for CVC can be obtained by replacing Constraints (2) in P_stp with y ∈ STP↓(G). Finally, it is well-known that one can obtain a formulation of STP↓(G) from one of STP(G), since

    STP↓(G) = { x ∈ [0,1]^E : ∃ y ∈ R^E : x ≤ y, y ∈ STP(G) }.
While this approach does reduce the size of our CVC formulation from exponential to polynomial, it still yields too many extra variables to be practical. In the next section, we address this issue.
A smaller mixed-integer formulation
We now give a smaller formulation for the CVC problem, which makes use of a mixed-integer formulation for STP(G) with a small number of additional variables. We start by giving the formulation for STP(G), which builds on natural ideas that can be found, for instance, in [21]. Rather than spanning trees in undirected graphs, we focus on arborescences in directed graphs. Given our graph G, we simply bidirect each edge obtaining the directed graph D(V, A). Now, fix a "root" vertex r ∈ V . Recall that an r-arborescence of D is a subset of arcs F ⊆ A such that, for every v ∈ V \ {r}, F contains exactly one directed path from r to v. Clearly, a description of the r-arborescences of D gives a description of the spanning trees of G by just ignoring the orientations (i.e. setting y uv = z uv + z vu for each edge uv). Moreover, since arborescences are rooted in r, we do not need arcs that point to r, and we simply delete them. Recall that δ − (v) denotes the set of arcs of A pointing to v.
Q_r = { z ∈ {0,1}^A | ∃ d ∈ R^V :
    z(δ⁻(v)) = 1                      ∀ v ∈ V \ {r}    (5)
    d_v ≥ n·(z_uv − 1) + d_u + 1      ∀ (u,v) ∈ A      (6)
    d_r = 0                                             (7)
    z(A) = |V| − 1 }.                                   (8)

Lemma 2. Let D = (V, A) be a directed graph, and r ∈ V such that δ⁻(r) = ∅. Then F ⊆ A is an r-arborescence of D if and only if (χ^F, d) ∈ Q_r for some d ∈ R^V.
Proof. First, given an r-arborescence F , set d v to the length of the (unique) path from r to v in F , for each v ∈ V . It is easy to check that all constraints are satisfied by (χ F , d).
On the other hand, let (z, d) ∈ Q_r, with z = χ^F. We first show that F, after ignoring orientations, does not contain cycles: suppose by contradiction that C ⊆ F is a cycle with vertices v_1, ..., v_k, where for each i = 1, ..., k, either v_i v_{i+1} ∈ C or v_{i+1} v_i ∈ C (where the index sum is modulo k). For any uv ∈ C, we have that d_v ≥ d_u + 1 by (6): this implies that C cannot be a directed cycle. In particular, if v is the vertex of C with d_v minimum, then there are two arcs of C pointing to v: but this is in contradiction with Constraint (5), if v ≠ r, and with δ⁻(r) = ∅ otherwise. Now, we have |F| = |V| − 1 by (8). This, the absence of cycles, and Constraint (5) guarantee that F is an r-arborescence of D.
One could turn Q_r into a formulation for the forest polytope of G and obtain a formulation for the CVC problem, as described in the previous section. However, it is not clear how to do this without adding additional variables: the issue is the choice of the root r, which does not need to be connected to the other vertices in a forest. Instead, we are able to limit the number of variables by exploiting the fact that, for any edge uv of G, at least one of u, v has to be picked in our vertex cover. Hence, we choose a "main" root vertex r, and another root r_1, adjacent to r, that we can use as a root when r is not in our vertex cover. We consider the following directed version D(V, A) of our graph G(V, E): fix r, r_1 ∈ V with rr_1 ∈ E, turn every edge vr ∈ E into a directed arc from r to v, turn every edge vr_1 ∈ E with v ≠ r into a directed arc from r_1 to v, and bidirect each other edge. Notice that, in D, δ⁻(r) = ∅ and δ⁻(r_1) = {(r, r_1)}. Now, consider the following formulation:
P_arb(r, r_1) = { x ∈ {0,1}^V | ∃ z ∈ {0,1}^A, d ∈ R^V :
    x_u + x_v ≥ 1                      ∀ (u,v) ∈ A           (9)
    z(δ⁻(v)) = x_v                     ∀ v ∈ V \ {r, r_1}    (10)
    d_v ≥ n·(z_uv − 1) + d_u + x_v     ∀ (u,v) ∈ A           (11)
    d_r = 0                                                   (12)
    z(A) = x(V) − 1                                           (13)
    z_uv ≤ x_u,  z_uv ≤ x_v            ∀ (u,v) ∈ A }.        (14)
Theorem 1. Let G = (V, E) be a connected graph, let r, r 1 ∈ V with (r, r 1 ) ∈ E and construct the directed graph D(V, A) as described above. Then C ⊆ V is a CVC if and only if (χ C , z, d) ∈ P arb (r, r 1 ) for some z, d.
Proof. First, let C ⊆ V be a CVC. We distinguish three cases.
1. r ∈ C, r_1 ∉ C. Let F be any r-arborescence of D[C], and set x = χ^C, z = χ^F, d_v equal to the distance between r and v in F for v ∈ C, and d_v = 0 for v ∉ C. Notice that 0 ≤ d_v ≤ n − 1 holds for all v ∈ V. Now, (x, z, d) can be checked to satisfy all constraints of P_arb(r, r_1): we only discuss Constraints (11). Let (u, v) ∈ A. If (u, v) ∉ F, the corresponding constraint is d_v ≥ −n + d_u + x_v, which is trivially satisfied for any u, v, as d_v is non-negative and the right-hand side is non-positive. Hence, suppose (u, v) ∈ F, so that x_v = 1. Then the constraint is d_v ≥ d_u + 1, which is satisfied at equality by our choice of d.

2. r_1 ∈ C, r ∉ C. We proceed similarly as in the previous case, choosing an r_1-arborescence F of D[C] and setting z = χ^F, d_v equal to the distance between r_1 and v in F for v ∈ C, and d_v = 0 for v ∉ C. Then (x, z, d) can be checked to satisfy all constraints exactly as before.

3. r, r_1 ∈ C. Let F be an r-arborescence of D[C] containing the arc (r, r_1) (notice that such an arborescence always exists). Set z = χ^F, and set d as in the first case. Again, one checks that all constraints are satisfied.

Now, let (χ^C, z, d) ∈ P_arb(r, r_1), with z = χ^F. In order to show that G[C] is connected, we just need to show that F does not contain any cycle. We use the same argument as in the proof of Lemma 2, which we repeat for completeness. Assume that F contains a cycle C′. C′ cannot be a directed cycle, due to Constraints (11); hence C′ contains a vertex v with two incoming arcs. Constraint (10) implies that v = r or v = r_1, but this contradicts the fact that δ⁻(r) = ∅ and δ⁻(r_1) = {(r, r_1)}.
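To make Theorem 1 concrete, here is a sketch (ours; the authors' actual implementation is available at [3]) of P_arb in Gurobi's Python API. The choice of r, r_1 as an arbitrary edge and all identifier names are illustrative assumptions; the distance variables d are given lower bound 0, which is harmless since the intended solutions set d_v to path lengths.

import gurobipy as gp
from gurobipy import GRB

def solve_cvc_parb(G):
    """Solve CVC on a connected Networkx graph G via formulation P_arb."""
    n = G.number_of_nodes()
    r, r1 = next(iter(G.edges()))            # roots on an arbitrary edge
    # Arc set A: drop arcs into r, and arcs into r1 except the one from r.
    A = [(a, b) for u, v in G.edges() for a, b in ((u, v), (v, u))
         if b != r and (b != r1 or a == r)]
    m = gp.Model("CVC")
    x = m.addVars(G.nodes(), vtype=GRB.BINARY, name="x")
    z = m.addVars(A, vtype=GRB.BINARY, name="z")
    d = m.addVars(G.nodes(), lb=0.0, name="d")
    m.addConstrs((x[u] + x[v] >= 1 for u, v in G.edges()), name="cover")     # (9)
    m.addConstrs((z.sum("*", v) == x[v]
                  for v in G.nodes() if v not in (r, r1)), name="indegree")  # (10)
    m.addConstrs((d[v] >= n * (z[u, v] - 1) + d[u] + x[v]
                  for u, v in A), name="distance")                           # (11)
    m.addConstr(d[r] == 0, name="root")                                      # (12)
    m.addConstr(z.sum() == x.sum() - 1, name="tree")                         # (13)
    m.addConstrs((z[u, v] <= x[u] for u, v in A), name="tail")               # (14)
    m.addConstrs((z[u, v] <= x[v] for u, v in A), name="head")               # (14)
    m.setObjective(x.sum(), GRB.MINIMIZE)
    m.optimize()
    return {v for v in G.nodes() if x[v].X > 0.5}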
A Branch & Bound algorithm
In this section we describe a naive branch & bound algorithm to solve the CVC problem. For simplicity we follow the standard framework of branch & bound algorithms for the maximum stable set problem, see for instance [26]: instead of looking directly for a minimum vertex cover, we look for a stable set S * of maximum size. The only difference with the classical setting is that we impose that S * is feasible, where we call feasible a stable set S such that G \ S is connected.
We now give an informal description of the algorithm, referring to Algorithm 1 for the pseudocode. To avoid recursion, a stack is used to store the nodes explored by the algorithm. Each node consists of a pair (S, U), where S is a feasible stable set and U is a set of candidate vertices that can be added to S. The idea is to explore the search space of all possible nodes while keeping a record of the best solution found so far, denoted by S*: at each step, the current node (S, U) of the stack is either branched on, or pruned if we realize that it cannot produce a stable set larger than S*. The pruning step is based on greedy coloring, as in the classical algorithm for the maximum stable set problem, exploiting the fact that any proper coloring of the complement of a graph gives an upper bound on its maximum stable set: in particular, the maximum stable set that the node can produce has size at most |S| + α(G[U]) ≤ |S| + χ(Ḡ[U]), and the latter term is estimated as the number of colors used in a greedy coloring (see Line 6). Branching is also performed as in the classical algorithm, but with a crucial difference: we select a vertex v ∈ U and create nodes (S, U \ {v}) and (S ∪ {v}, U′), where U′ ⊆ U \ {v} is obtained by removing from U all the neighbors of v and all the cut-vertices of G \ (S ∪ {v}) (a vertex v of a connected graph G is a cut-vertex if its deletion disconnects G); see Line 11. This ensures that we only consider feasible stable sets.
Algorithm 1. Pseudocode of a basic branch & bound algorithm for CVC. Following the classical framework for maximum stable set algorithms, the algorithm finds the largest stable set S* in G such that G \ S* is connected, and then outputs the corresponding vertex cover.

Input: A connected graph G = (V, E)
Output: A minimum-size CVC of G
 1: S* ← ∅
 2: C ← cut-vertices of G
 3: A ← [(∅, V \ C)]
 4: while A non-empty do
 5:     (S, U) ← pop(A)
 6:     while U non-empty and |S*| < |S| + greedy_color(Ḡ[U]) do
 7:         v ← pop(U)
 8:         append (S, U) to A
 9:         S ← S ∪ {v}
10:         C ← cut-vertices of G \ S
11:         U ← (U ∩ N̄(v)) \ C        (N̄(v): non-neighbors of v)
12:         if |S| > |S*| then
13:             S* ← S
14:         end if
15:     end while
16: end while
17: return V \ S*

We now argue that our algorithm is correct: most importantly, we need to show that removing cut-vertices as described above is enough to find the largest feasible stable set.

Theorem 2. Let G = (V, E) be a connected graph. Then Algorithm 1 on input G outputs a minimum CVC of G.

Proof. Equivalently, we will show that the set S* output by the algorithm is the maximum feasible stable set of G. We say that a node (S, U) contains a feasible stable set S′ if S ⊆ S′ ⊆ U.
First, we claim that the starting node (∅, V \ C) contains all feasible stable sets, where C are the cut-vertices of G. Indeed, if u is a cut-vertex of G and S a feasible stable set, S cannot contain u: if u ∈ S, then G \ {u} consists of at least two connected components G_1, G_2, and, without loss of generality, S contains all the vertices of G_1. But since G is connected, there is at least one edge between u and a vertex of G_1, contradicting the stability of S. Now, it suffices to show that, whenever we branch on a node (S, U) obtaining two new nodes, any feasible stable set S′ contained in (S, U) is contained in one of the new nodes. This implies that any feasible stable set is explored by the algorithm at some step, and concludes the proof.
The new nodes created are (S, U \ {v}) and (S ∪ {v}, U′), where U′ is defined in Line 11. Clearly, if v ∉ S′, then S′ is contained in the node (S, U \ {v}) and we are done. On the other hand, if v ∈ S′, we only need to show that S′ ⊆ U′. This follows since S′ cannot contain any neighbor of v, or any cut-vertex of G \ (S ∪ {v}), where the latter is proved by using the same argument as for the starting node.
We conclude the section with some improvements to Algorithm 1 that can be implemented to increase performance (see the next section for the implementation details).
- Computing a strong upper bound reduces the number of branch and bound nodes, at the price of a longer running time for each node: for bipartite graphs, instead of resorting to a coloring bound we can directly compute the size of a maximum (usually unfeasible) stable set in the current subgraph, resulting in much better bounds and a shorter total running time.
- On the other hand, for general graphs we find that it is better to spend less time on the upper bound computation: instead of recomputing a greedy coloring at each execution of Line 6, keeping the same coloring for several steps reduces the total running time.
- Russian Doll Search: to slightly restrict the number of visited nodes, we order the vertices as v_1, ..., v_n by decreasing degree and call the algorithm n times: at step i, we include vertex v_i in our starting set S and restrict the set U to vertices v_j, with j > i, that are not neighbors of v_i.
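For illustration, here is a sketch (ours) of the two Networkx subroutines on which Algorithm 1 relies, as named in the next section: greedy coloring of the complement bounds the stable set size, and articulation points give the cut-vertices to discard.

import networkx as nx

def coloring_upper_bound(G, U):
    """Number of colors of a greedy coloring of the complement of G[U]:
    an upper bound on the maximum stable set of G[U] (Line 6)."""
    if not U:
        return 0
    return 1 + max(nx.greedy_color(nx.complement(G.subgraph(U))).values())

def cut_vertices_after_removal(G, S):
    """Cut-vertices of G \\ S, which no feasible stable set may contain (Line 10)."""
    return set(nx.articulation_points(G.subgraph(set(G.nodes()) - set(S))))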
Numerical results
We now compare the performance of our formulation P_arb and our branch and bound algorithm on a benchmark of random graphs. We remark that the CVC problem is most interesting in graphs where the solution of CVC is strictly larger than the minimum vertex cover (we call such graphs interesting): if this is not the case one could just use the state of the art methods for finding the minimum vertex cover, and check that it induces a connected subgraph. This poses challenges to forming a benchmark of interesting graphs, as for instance the standard DIMACS benchmark [18] does not contain interesting graphs as far as we could check. Hence we resorted to sparse, random graphs. In particular, half of our graphs are Erdős-Rényi random graphs with density equal to 0.05; the others are bipartite random graphs, with density ranging from 0.1 to 0.5. We remark that bipartite graphs often seem to be interesting, which makes sense intuitively as each part of the bipartition forms a (possibly sub-optimal) vertex cover that is not connected: for instance, in the complete bipartite graph K_{n,n}, a minimum vertex cover has size n, while a minimum connected vertex cover has size n + 1. Moreover, as mentioned in the introduction, bipartite graphs are one of the simplest graph classes for which the VC problem is polynomial and CVC is NP-hard, which makes them good candidates for studying the differences between the two problems. The graphs are produced with the functions fast_gnp_random_graph() and bipartite.random_graph() from the Networkx package [17], and the name of the graph indicates the random seed: for instance, G_i is the random graph on 100 vertices with density 0.05 created by seed i. Some of the seeds are missing since we only consider connected graphs. The experiments are run on a processor Intel Core i5-4590 (4 cores) clocked at 3.3 GHz with 4 GB RAM. Algorithm 1 is coded in Python, version 3.7, and the Networkx functions articulation_points() and greedy_color() are used to perform Lines 10 and 6 respectively. We refer to [3] for the code for Algorithm 1 and for producing the formulation P_arb.
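The benchmark can be reproduced along the following lines (a sketch: the exact seed range and the 50 + 50 split of the bipartite vertex classes are our assumptions, since the text only specifies 100 vertices and the densities):

import networkx as nx

def make_instances():
    instances = {}
    for seed in range(26):                       # G_i, i = random seed
        G = nx.fast_gnp_random_graph(100, 0.05, seed=seed)
        if nx.is_connected(G):                   # disconnected seeds are skipped
            instances[f"G{seed}"] = G
    for p in (0.1, 0.2, 0.3, 0.4, 0.5):
        for seed in (0, 1, 4):
            G = nx.bipartite.random_graph(50, 50, p, seed=seed)
            if nx.is_connected(G):
                instances[f"G{p}({seed})"] = G
    return instances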
As for the implementation of formulation P_arb, it is also done in Python 3.7, and Gurobi 9.0.3 is used as the MIP solver. Default parameters are used, and the results are averaged over three runs to account for the performance variability of the solver. Table 1 reports the results for the random graphs of low density, and Table 2 those for the bipartite graphs. Columns VC and CVC indicate the sizes of the minimum vertex cover and connected vertex cover respectively. The columns B&B t and B&B n indicate the running time (in seconds) and the number of nodes of Algorithm 1, and similarly for P_arb t and P_arb n.
It is evident from this comparison that solving the CVC problem with our formulation P_arb is much faster than with Algorithm 1, by one up to three orders of magnitude for some of the instances. Algorithm 1 does not finish within the time limit (one hour) for one of the bipartite graphs of density 0.2. Clearly, this might be partially due to the naive implementation of Algorithm 1, which is not optimized for speed: for instance, in Line 10 one does not have to recompute all cut-vertices every time, but could restrict the computation to a single connected component of an appropriate subgraph of G. However, implementing this using the appropriate functions of Networkx actually further slows down the algorithm, as more information needs to be carried by each node. Hence, obtaining a faster version of the algorithm would require more advanced data structures and tools. But we believe this would not be enough to match the speed of P_arb: a major limitation of the algorithm is that the bound used in the pruning phase (Line 6) is the same as for the classical vertex cover problem, i.e. it does not take connectivity into account. Finding a better bound that is specific to the CVC problem is a non-trivial challenge, which we leave as an open problem. On the other hand, since Gurobi solves P_arb using a very small number of branching nodes, it would seem that the bound of the linear relaxation of P_arb is reasonably tight. This suggests the idea of taking the best of both worlds and integrating a bound based on P_arb into a combinatorial branch and bound algorithm.
Conclusion
The CVC problem brings together two of the most natural concepts in graph theory: stable sets and vertex covers on one hand, connectedness and spanning trees on the other. This paper approaches the problem from a modeling perspective, giving exact mixed-integer formulations for solving the problem, and compares them with a simple branch and bound algorithm. We believe that further work needs to be done in both directions: while we focused on modeling the connectivity requirement, better formulations could be found by using tighter formulations of the vertex cover problem; on the other hand, finding a faster branch and bound algorithm is a fascinating challenge, as it is unclear how to tailor the branching and pruning steps to the CVC problem. We conclude by mentioning some extensions of CVC that could be of interest.
The Tree Cover problem [8] is closely related to the CVC problem: given a graph with non-negative weights on the edges and numbers k, w one asks to find a connected vertex cover of size at most k whose induced subgraph admits a spanning tree of weight at most w. It is easy to see that our formulations given in Section 2 can be adapted to model the Tree Cover problem, and exploring this further is an interesting research direction.
A natural generalization of the CVC problem considers hypergraphs instead of graphs [12]. We remark that deciding whether a hypergraph contains a spanning tree is NP-hard [1], hinting that the hypergraph version of CVC might be significantly harder than the graph version. However, we believe that our formulations can be extended to the hypergraph setting, and intend to investigate further in the future.
Finally, a different direction of research would be to generalize the connectivity constraint in the CVC problem to a matroid constraint, i.e. requiring that the edges of the subgraph induced by our vertex cover are full-rank sets of a given matroid. To the best of our knowledge, problems of this kind have not been studied before. Modelling such problems with mixed-integer formulations would be a promising line of inquiry, as there are several extended formulations for special matroid polytopes [6,10,2].
Table 1. Results for random graphs of low density (0.05).

Name (seed) | |V|, |E| | VC | CVC | B&B t | B&B n | P_arb t | P_arb n
G1          | 100, 252 | 58 | 60  | 65.6  | 23138 | 0.2     | 1
G2          | 100, 247 | 55 | 56  | 6.4   | 435   | 0.15    | 1
G3          | 100, 232 | 56 | 57  | 12.7  | 1742  | 0.17    | 1
G4          | 100, 238 | 58 | 59  | 17.9  | 2296  | 0.43    | 191
G7          | 100, 257 | 56 | 59  | 21.1  | 2700  | 0.3     | 14
G9          | 100, 254 | 58 | 60  | 100.3 | 21846 | 0.18    | 1
G13         | 100, 260 | 58 | 59  | 56.2  | 18766 | 0.3     | 7
G16         | 100, 263 | 56 | 58  | 18.1  | 3620  | 0.22    | 1
G24         | 100, 234 | 58 | 58  | 11.2  | 1788  | 0.24    | 1
G25         | 100, 264 | 61 | 61  | 28.6  | 4789  | 0.54    | 158

Table 2. Results for random bipartite graphs. The density of each graph is written in its name, with the random seed in brackets.

Name (seed) | |V|, |E|  | VC | CVC | B&B t  | B&B n  | P_arb t | P_arb n
G0.1(1)     | 100, 255  | 49 | 54  | 11.18  | 6635   | 0.16    | 1
G0.1(4)     | 100, 242  | 50 | 57  | 863.4  | 818251 | 0.23    | 1
G0.2(0)     | 100, 483  | 50 | 57  | 1h+    | 2mln+  | 2.7     | 393
G0.2(1)     | 100, 497  | 50 | 56  | 1314.9 | 999252 | 2.3     | 338
G0.3(0)     | 100, 753  | 50 | 55  | 1137.1 | 723409 | 4.2     | 88
G0.3(1)     | 100, 753  | 50 | 55  | 1266.4 | 874949 | 4.3     | 166
G0.4(0)     | 100, 1007 | 50 | 54  | 354.5  | 210209 | 3       | 1
G0.4(1)     | 100, 977  | 50 | 53  | 69.6   | 39614  | 2.1     | 1
G0.5(0)     | 100, 1254 | 50 | 53  | 73.5   | 38685  | 3.9     | 1
G0.5(1)     | 100, 1231 | 50 | 53  | 50.0   | 26071  | 5.4     | 1
References

[1] Andersen, L.D., Fleischner, H.: The NP-completeness of finding A-trails in Eulerian graphs and of finding spanning trees in hypergraphs. Discrete Applied Mathematics 59(3), 203-214 (1995)
[2] Aprile, M.: Extended formulations for matroid polytopes through randomized protocols. Operations Research Letters 50(2), 145-149 (2022)
[3] Aprile, M.: Some code for solving the CVC problem (2022), https://github.com/manuel-aprile/CVC
[4] Aprile, M., Faenza, Y.: Extended formulations from communication protocols in output-efficient time. Mathematical Programming 183(1), 41-59 (2020)
[5] Aprile, M., Faenza, Y., Fiorini, S., Huynh, T., Macchia, M.: Extension complexity of stable set polytopes of bipartite graphs. In: International Workshop on Graph-Theoretic Concepts in Computer Science, pp. 75-87. Springer (2017)
[6] Aprile, M., Fiorini, S.: Regular matroids have polynomial extension complexity. Mathematics of Operations Research 47(1), 540-559 (2022)
[7] Aprile, M., Fiorini, S., Huynh, T., Joret, G., Wood, D.R.: Smaller extended formulations for spanning tree polytopes in minor-closed classes and beyond. Electronic Journal of Combinatorics 28(4), P4.47 (2021)
[8] Arkin, E.M., Halldórsson, M.M., Hassin, R.: Approximating the tree and tour covers of a graph. Information Processing Letters 47(6), 275-282 (1993)
[9] Cardinal, J., Levy, E.: Connected vertex covers in dense graphs. Theoretical Computer Science 411(26-28), 2581-2590 (2010)
[10] Conforti, M., Kaibel, V., Walter, M., Weltge, S.: Subgraph polytopes and independence polytopes of count matroids. Operations Research Letters 43(5), 457-460 (2015)
[11] Edmonds, J.: Matroids and the greedy algorithm. Mathematical Programming 1(1), 127-136 (1971)
[12] Escoffier, B., Gourvès, L., Monnot, J.: Complexity and approximation results for the connected vertex cover problem in graphs and hypergraphs. Journal of Discrete Algorithms 8(1), 36-49 (2010)
[13] Fernau, H., Manlove, D.F.: Vertex and edge covers with clustering properties: Complexity and algorithms. Journal of Discrete Algorithms 7(2), 149-167 (2009)
[14] Fiorini, S., Huynh, T., Joret, G., Pashkovich, K.: Smaller extended formulations for the spanning tree polytope of bounded-genus graphs. Discrete & Computational Geometry 57(3), 757-761 (2017)
[15] Garey, M.R., Johnson, D.S.: The rectilinear Steiner tree problem is NP-complete. SIAM Journal on Applied Mathematics 32(4), 826-834 (1977)
[16] Guo, J., Niedermeier, R., Wernicke, S.: Parameterized complexity of vertex cover variants. Theory of Computing Systems 41(3), 501-520 (2007)
[17] Hagberg, A., Swart, P., Schult, D.: Exploring network structure, dynamics, and function using NetworkX. Tech. rep., Los Alamos National Laboratory (LANL), Los Alamos, NM, United States (2008)
[18] Johnson, D.S., Trick, M.A.: Cliques, Coloring, and Satisfiability: Second DIMACS Implementation Challenge, October 11-13, 1993, vol. 26. American Mathematical Society (1996)
[19] Kleinberg, J., Goemans, M.X.: The Lovász theta function and a semidefinite programming relaxation of vertex cover. SIAM Journal on Discrete Mathematics 11(2), 196-204 (1998)
[20] Martin, R.K.: Using separation algorithms to generate mixed integer model reformulations. Operations Research Letters 10(3), 119-128 (1991)
[21] Miller, C.E., Tucker, A.W., Zemlin, R.A.: Integer programming formulation of traveling salesman problems. Journal of the ACM 7(4), 326-329 (1960)
[22] Mölle, D., Richter, S., Rossmanith, P.: Enumerate and expand: Improved algorithms for connected vertex cover and tree cover. Theory of Computing Systems 43(2), 234-253 (2008)
[23] Padberg, M.W.: On the facial structure of set packing polyhedra. Mathematical Programming 5(1), 199-215 (1973)
[24] Savage, C.: Depth-first search and the vertex cover problem. Information Processing Letters 14(5), 233-235 (1982)
[25] Wong, R.: Integer programming formulations of the traveling salesman problem. In: Proc. 1980 IEEE International Conference on Circuits and Computers, pp. 149-152 (1980)
[26] Wu, Q., Hao, J.K.: A review on algorithms for maximum clique problems. European Journal of Operational Research 242(3), 693-709 (2015)
Decoherence and nonclassicality of photon-added/subtracted multi-mode Gaussian states

Anaelle Hertz
Department of Physics, University of Toronto, Toronto, Ontario M5S 1A7, Canada

Stephan De Bièvre
Univ. Lille, CNRS, Inria, UMR 8524 - Laboratoire Paul Painlevé, F-59000 Lille, France

Photon addition and subtraction render Gaussian states non-Gaussian. We provide a quantitative analysis of the change in nonclassicality produced by these processes by analyzing the Wigner negativity and quadrature coherence scale (QCS) of the resulting states. The QCS is a recently introduced measure of nonclassicality [PRL 122, 080402 (2019), PRL 124, 090402 (2020)], that we show to undergo a relative increase under photon addition/subtraction that can be as large as 200%. This implies that the degaussification and the concomitant increase of nonclassicality come at a cost. Indeed, the QCS is proportional to the decoherence rate of the state so that the resulting states are considerably more prone to environmental decoherence. Our results are quantitative and rely on explicit and general expressions for the characteristic and Wigner functions of photon added/subtracted single-and multi-mode Gaussian states for which we provide a simple and straightforward derivation. These expressions further allow us to certify the quantum non-Gaussianity of the photon-subtracted states with positive Wigner function.
I. Introduction
Gaussian states are prominent in continuous-variable quantum information as they are relatively easy to produce experimentally and simple to study theoretically. Nevertheless, non-Gaussian states or operations are essential for performing certain quantum information tasks. They are for example needed to achieve universal photonic quantum computation [1,2]. One possible method for producing non-Gaussian states is through photon addition or subtraction from a Gaussian state. This technique has attracted interest because it allows the engineering of a variety of non-Gaussian quantum states. It has for example been shown that cat states with small amplitude can be prepared with a fidelity close to one by subtracting a photon from a vacuum squeezed state [3-5]. Over the last few years, a variety of experimental techniques have been developed to generate and study photon-added/subtracted Gaussian states [3, 6-12]. For reviews on photon addition and subtraction, we refer to [13,14].
For a state to be non-Gaussian is however not always enough for it to be interesting in the context of quantum information or quantum computing tasks. Indeed, non-Gaussian states may still be classical meaning that they may be mixtures of coherent states. Or they may be more generally mixtures of Gaussian states, in which case they are said not to be quantum non-Gaussian, or genuinely non-Gaussian. (See [14][15][16][17] and references therein for details on the latter subject.) Nonclassicality or the stronger property of quantum (or genuine) non-Gaussianity are needed for certain quantum informational tasks and a variety of techniques for their detection and measure have been developed. In this paper, we provide a quantitative analysis of the degree to which photon-added/subtracted Gaussian states are nonclassi-cal or quantum non-Gaussian.
For our analysis, we will concentrate on two distinct measures of nonclassicality/non-Gaussianity, namely their quadrature coherence scale (QCS) and their Wigner negativity, as expressed through their Wigner negative volume. The QCS is a recently introduced nonclassicality measure [18,19], the definition and main nonclassical features of which are recalled in Sec. III. The Wigner negativity, on the other hand, is a common measure of nonclassicality and has been shown to be a monotone in a resource theory of quantum non-Gaussianity [16].
Our results for single-mode states, detailed below, establish that the degaussification through photon addition/subtraction does substantially enhance the nonclassical features of the underlying Gaussian states. At low and intermediate squeezing, photon addition is more efficient in doing so, but at high enough squeezing, photon addition or subtraction are shown to be equivalent in this respect. Importantly, these results also entail that the increased nonclassicality that is generated in the process comes at a cost. Indeed, the QCS of a state is proportional to its decoherence rate [19], so that a large value of the QCS is equivalent to a short decoherence time. The photon-added/subtracted states are therefore much more sensitive to environmental decoherence than their Gaussian mother states. And the photon-added states tend to be considerably more sensitive than the photonsubtracted ones.
More precisely, we show that the Wigner negative volume of single-mode photon-added/subtracted Gaussian states reaches its maximal value when there is no noise, hence on the photon-added/subtracted squeezed vacuum states. This maximum is independent of the amount of squeezing. In the presence of noise, and at low to intermediate values of the squeezing, we show the Wigner negative volume is more sensitive to noise and hence smaller for photon-subtracted squeezed Gaussian states than for photon-added ones. This means that, at intermediate squeezing, the intuitive idea that photon-addition is more efficient than photon-subtraction in producing nonclassical features such as Wigner negativity hence quantum non-Gaussianity is indeed correct for Gaussian states. One should note however that, as we show, there is a trade-off between squeezing and Wigner negative volume: photon-added squeezed thermal states lose Wigner negative volume as the squeezing is increased. At large squeezing, the advantage of photon-addition over photon-subtraction is diminished: we establish that the Wigner negative volume is then identical for photon-added and photon-subtracted states.
Concerning the QCS of photon-added/subtracted single-mode Gaussian states, we show that the degaussification accomplished by photon addition/subtraction does typically increase the QCS, and hence the associated nonclassical features of the state, and that this increase is often substantial. As for the Wigner negative volume, it is more pronounced for photon-addition than for photon-subtraction, except at large values of the squeezing, where it is again asymptotically identical.
As a byproduct of our analysis, we show a number of structural results about photon addition/subtraction that are of independent interest and valid for arbitrary n-mode Gaussian states ρ_G. While photon addition/subtraction is guaranteed to make n-mode Gaussian states non-Gaussian, it is known, and very easy to see, that photon subtraction applied to an n-mode classical Gaussian state yields a classical non-Gaussian state. We will show that, in addition, photon subtraction applied to an n-mode nonclassical Gaussian state yields a nonclassical non-Gaussian state (Proposition 1). Note that this is not true for non-Gaussian states: for example, the one-photon Fock state, which is nonclassical, is transformed into the vacuum, which is classical. In addition, we show that, if a single-mode photon-subtracted Gaussian state $\rho_{G-} \sim a(c)\rho_G a^\dagger(c)$ (see Eq. (18)) is Wigner negative, then the underlying Gaussian state ρ_G has a QCS strictly larger than 1 (Lemma 3). This is in contrast to what happens with photon addition, which, applied to any Gaussian state, is known [20,21] to always produce a Wigner negative and hence nonclassical state. We further use a sufficient criterion for quantum non-Gaussianity in terms of the Wigner function from [15] to identify a family of photon-subtracted Gaussian states with positive Wigner function that are quantum non-Gaussian.
The paper is organized as follows. In Sec. II we give a brief review of the phase space formalism of quantum optics. In Sec. III we introduce the QCS and recall its main features as a nonclassicality witness and measure. To compute it for photon-added/subtracted states, we need their Wigner and/or characteristic functions. We show how to straightforwardly compute those for general multi-mode photon-added/subtracted states in Sec. IV and apply the result when the initial state is Gaussian. The resulting formulas are simply expressed in terms of the covariance matrix and displacement operator characterizing the initial Gaussian state: see Eq. (21) and Eq. (24). Equivalent, but less explicit formulas were obtained previously in [14,20,21], using a more complex and considerably more lengthy derivation. We first use these expressions to make a number of general qualitative and quantitative observations on the (non)classicality and Wigner negativity/positivity of photon-subtracted multi-mode Gaussian states in Sec. V. In Sec. VI-VII we then turn to a quantitative study of the Wigner negative volume and of the relative change in the QCS for singlemode photon-added/subtracted squeezed thermal states. In Sec. VIII we discuss the two-mode case through some illustrative examples, some of which have recently been prepared experimentally [10]. We conclude and discuss some open problems in Sec. IX.
II. Phase space formalism
We start by briefly introducing the symplectic formalism employed for continuous-variable states in quantum optics. More details can be found, for example, in [22,23].
A continuous-variable system is represented by n modes. To each of them are associated the annihilation/creation operators $a_i$ and $a_i^\dagger$, verifying the commutation relation $[a_i, a_i^\dagger] = 1$. We define the vector of quadratures $\hat r = (\hat x_1, \hat p_1, \hat x_2, \hat p_2, \cdots, \hat x_n, \hat p_n)$, where
$$\hat x_j = \frac{1}{\sqrt 2}\big(a_j + a_j^\dagger\big), \qquad \hat p_j = -\frac{i}{\sqrt 2}\big(a_j - a_j^\dagger\big), \qquad \forall j = 1, \cdots, n.$$
Each quantum state ρ can be described by a characteristic function
$$\chi(z) = \mathrm{Tr}\,\rho D(z) \qquad (1)$$
where
$$D(z) = \exp\Big(\sum_{j=1}^{n} \big(z_j a_j^\dagger - \bar z_j a_j\big)\Big) = e^{a^\dagger(z) - a(z)} \qquad (2)$$
is the n-mode displacement operator and where
$$a^\dagger(z) = \sum_i z_i a_i^\dagger, \qquad a(z) = \sum_i \bar z_i a_i. \qquad (3)$$
Note that, for any $z, z' \in \mathbb{C}^n$,
$$[a(z), a^\dagger(z')] = \sum_i \bar z_i z'_i = \bar z \cdot z'.$$
For later use, we recall
$$a^\dagger(c)\, D(z) = D(z)\big(a^\dagger(c) + \bar z \cdot c\big) \qquad (4)$$
$$a(c)\, D(z) = D(z)\big(a(c) + \bar c \cdot z\big). \qquad (5)$$
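As a quick numerical sanity check of Eq. (4) (ours, not part of the paper), one can verify the identity in a truncated Fock space for a single mode with c = 1; truncation makes the identity only approximate, so the comparison is restricted to matrix elements well below the cutoff.

import numpy as np
from scipy.linalg import expm

N = 40                                     # Fock-space cutoff
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # annihilation operator
ad = a.conj().T                            # creation operator

def D(z):                                  # displacement operator, Eq. (2)
    return expm(z * ad - np.conj(z) * a)

z = 0.3 + 0.2j
lhs = ad @ D(z)                            # a†(c) D(z) with c = 1
rhs = D(z) @ (ad + np.conj(z) * np.eye(N)) # D(z) (a†(c) + z̄·c)
print(np.abs(lhs - rhs)[:N // 2, :N // 2].max())   # small away from the cutoff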
The Fourier transform of the characteristic function gives the Wigner function
$$W(\alpha) = \frac{1}{\pi^{2n}} \int \chi(z)\, e^{\bar z\cdot\alpha \,-\, z\cdot\bar\alpha}\, d^{2n}z \qquad (6)$$
where $d^{2n}z = d^n\mathrm{Re}(z)\, d^n\mathrm{Im}(z)$, $\alpha = (\alpha_1 \cdots \alpha_n)^T$ and $\alpha_j = \alpha_{j1} + i\alpha_{j2} = \frac{1}{\sqrt 2}(x_j + i p_j) \in \mathbb{C}$. It is normalized so that $\int W(\alpha)\, d^{2n}\alpha = \chi(0) = 1$.
For later reference, we recall that a state ρ is said to be optically classical [24] if and only if there exists a positive function P(z) so that
$$\rho = \int P(z)\, |z\rangle\langle z|\, dz. \qquad (7)$$
Here $|z\rangle = D(z)|0\rangle$ are the coherent states, with $|0\rangle$ the vacuum state. Otherwise, the state is said to be optically nonclassical. In other words, a state is said to be optically nonclassical if it is not a mixture of coherent states. In what follows, we will drop "optically" from "optically nonclassical". The first-order moments of a state ρ constitute the displacement vector, defined as $d = \langle \hat r\rangle = \mathrm{Tr}(\hat r\rho)$, while the second moments make up the covariance matrix V, whose elements are given by
$$V_{ij} = 2\,\mathrm{Cov}[\hat r_i, \hat r_j] = \langle\{\hat r_i, \hat r_j\}\rangle - 2\langle\hat r_i\rangle\langle\hat r_j\rangle \qquad (8)$$
where {·, ·} represents the anticommutator. A Gaussian state ρ G is fully characterized by its displacement vector and covariance matrix. Its characteristic function is a Gaussian:
$$\chi_G(\xi) = e^{-\frac{1}{2}\xi^T \Omega V \Omega^T \xi \,-\, i\sqrt 2\,(\Omega d)^T \xi}, \qquad (9)$$
with $\Omega = \bigoplus_{j=1}^{n} \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$.
Here, for all 1 ≤ i ≤ n, ξ T i = (ξ i1 , ξ i2 ) ∈ R 2 and ξ T = (ξ T 1 , . . . , ξ T n ) ∈ R 2n . Also, we define
$$z_j = \xi_{j1} + i\,\xi_{j2}, \qquad (10)$$
and z = (z 1 , . . . , z n ) ∈ C n and we will write, with the usual abuse of notation: χ G (z) = χ G (ξ). The Wigner function W G (α) of a Gaussian state is also a Gaussian. See APPENDIX C for the explicit expression.
III. The quadrature coherence scale
The quadrature coherence scale C(ρ) (QCS) of a state ρ is defined as [18,19]
$$C^2(\rho) = \frac{1}{2n\mathcal P} \sum_{j=1}^{2n} \mathrm{Tr}\big([\rho, \hat r_j][\hat r_j, \rho]\big) \qquad (11)$$
where $\mathcal P = \mathrm{Tr}\rho^2$ is the purity of the state ρ. A summary of its main features is given in this Section. The expression Eq. (11) for C(ρ) does not explain why it is called the quadrature coherence scale. To see this, we consider for simplicity of notation the case where only one mode is present: the general case is obtained by taking an average over the modes. It turns out that the QCS can be rewritten as follows:
$$C^2(\rho) = \frac{1}{2\mathcal P}\left[\int (x - x')^2\, |\rho(x, x')|^2\, dx\, dx' + \int (p - p')^2\, |\rho(p, p')|^2\, dp\, dp'\right]. \qquad (12)$$
Here ρ(x, x′) (respectively ρ(p, p′)) is the operator kernel of ρ in the $\hat x$-representation (respectively $\hat p$-representation).
Since $|\rho(x, x')|^2/\mathcal P$ (respectively $|\rho(p, p')|^2/\mathcal P$) is a probability distribution, one readily sees that the first (second) term in this expression provides the scale (squared) on which the coherences, meaning the off-diagonal matrix elements ρ(x, x′) (respectively ρ(p, p′)) of the density matrix ρ, live. Roughly speaking, one can think of ρ(x, x′) and ρ(p, p′) as matrices; the square root of the first (respectively second) term in Eq. (12) provides the width of the strip parallel to its diagonal in which the $\hat x$-coherences (respectively $\hat p$-coherences) of ρ are substantial. It follows that a large C(ρ) implies that either the $\hat x$- or $\hat p$-coherences live far from the diagonal. Conversely, a small C(ρ) implies that the off-diagonal coherences of both quadratures must be small away from the diagonal. As explained in [19], a large value of the QCS manifests itself in nonclassical phenomena such as fast oscillations of the Wigner function, of the probability densities ρ(x, x) and/or ρ(p, p) for position and momentum, and of the photon number probability, which can be interpreted as interference phenomena.
In fact, as pointed out already in the introduction, C 2 (ρ) provides a measure of optical non-classicality. More precisely, C 2 (ρ) > 1 implies ρ is nonclassical and a large value of the QCS corresponds to a large nonclassicality [18]. Coherent states, on the other hand, have a QCS equal to 1; all other classical states have a QCS less than or equal to 1, which is therefore a natural reference value for the QCS. The evaluation of the QCS on large families of benchmark states in [18,19,25,26] confirms the efficiency of the QCS as an optical nonclassicality measure. For example, highly excited Fock states, cat states with large separation, highly squeezed states and strongly entangled states all have a large QCS. Some explicit examples of this type are given below in this section. In the following sections, the QCS of photon-added/subtracted Gaussian states will be studied in detail and the results will confirm this general picture.
The QCS has recently be shown to be experimentally accessible. In [27] an interferometric scheme was proposed allowing a direct measurement of the QCS using two identical copies of the state, thereby avoiding having recourse to a full state tomography. This scheme has then been carried out on a cloud quantum computer [28].
For our purposes here, a second feature of the QCS is crucial. It was proven in [19] that the QCS is directly related to the decoherence time of ρ, as follows. When coupled to a thermal bath, and provided C 2 (ρ) > 1, the half life τ P of the purity of ρ satisfies
$$\tau_{\mathcal P} \approx \frac{1}{2}\,\frac{1}{(2 n_\infty + 1)\, C^2(\rho) - 1}\; t_R,$$
where t R is the time scale on which the system converges to the thermal equilibrium with mean photon number n ∞ , which characterizes the temperature of the bath. Similarly, the half life of C 2 (ρ) itself is also inversely proportional to n ∞ C 2 (ρ). In other words, the speed at which environmental decoherence takes place is proportional to the QCS (squared).
In conclusion, whereas a large QCS does imply strong nonclassical properties of the state, as recalled above, this nonclassicality is accompanied automatically with an increased sensitivity to environmental decoherence and hence to a shorter decoherence time. For more details we refer to [19].
In what follows, we investigate how the QCS of Gaussian states is affected by photon addition or subtraction. This will inform us on the change in decoherence time of the degaussified states, compared to the original Gaussian state. We will see that, as a rule, the degaussified state has a much larger QCS, hence a much shorter decoherence time.
For our purposes, neither the expression in Eq. (11) nor the one in Eq. (12) are suitable. It is shown in [18] that the QCS for a general n-mode state can be written in terms of the Wigner or characteristic function of the state:
$$C^2(\rho) = \frac{\big\|\, |\xi|\,\chi\,\big\|_2^2}{n\, \|\chi\|_2^2} = \frac{1}{4}\, \frac{\|\nabla W\|_2^2}{n\, \|W\|_2^2}. \qquad (13)$$
Here ξ, α ∈ C^n, and ‖·‖₂ stands for the L²-norm, meaning for example $\|W\|_2^2 := \int |W(\alpha)|^2\, d^{2n}\alpha$. The expressions obtained for the Wigner and characteristic functions of photon-added/subtracted states in the next section will allow us to compute their QCS and the corresponding change in QCS.
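A sketch (ours) of how Eq. (13) can be evaluated numerically from a discretized Wigner function, tested on the vacuum $W(\alpha) = \frac{2}{\pi}e^{-2|\alpha|^2}$, for which the QCS equals 1, the coherent-state reference value:

import numpy as np

grid = np.linspace(-4, 4, 801)
A1, A2 = np.meshgrid(grid, grid, indexing="ij")
W = (2 / np.pi) * np.exp(-2 * (A1**2 + A2**2))   # vacuum Wigner function, n = 1

dW1, dW2 = np.gradient(W, grid, grid)            # derivatives along Re(α), Im(α)
print(0.25 * np.sum(dW1**2 + dW2**2) / np.sum(W**2))   # ≈ 1.0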
Let us note that, for pure states, a simple computation starting from (11) shows that
$$C^2(\rho) = \frac{1}{n}\sum_i \big[(\Delta x_i)^2 + (\Delta p_i)^2\big], \qquad (14)$$
which is the so-called total noise of ρ [29]. As a result, for the n-th Fock state $|n\rangle$, one finds
$$C^2(|n\rangle\langle n|) = 2n + 1 \qquad (15)$$
and for cat states $|\psi_\pm\rangle \propto |\alpha\rangle \pm |-\alpha\rangle$, one has $C^2(|\psi_\pm\rangle\langle\psi_\pm|) \simeq 2|\alpha|^2$.
For an n-mode Gaussian state ρ G , pure or mixed, one finds [19,26]
$$C_G^2 = C^2(\rho_G) = \frac{1}{2n}\,\mathrm{Tr}\, V^{-1}. \qquad (16)$$
For example, the squeezed thermal states, defined in Eq. (41), have $C^2_{\rm SqTh} = \frac{1-q}{1+q}\cosh r$ (see Eq. (42)). Note the growth of the QCS with n, α and the squeezing parameter r, respectively.
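For illustration, Eq. (16) is immediate to evaluate numerically; the single-mode squeezed thermal covariance matrix below is our assumed convention (the state itself is defined in Eq. (41), outside this excerpt), so the cosh argument may differ from the paper's Eq. (42).

import numpy as np

def qcs2_gaussian(V):
    """C^2 of an n-mode Gaussian state from Eq. (16)."""
    n = V.shape[0] // 2
    return np.trace(np.linalg.inv(V)) / (2 * n)

q, r = 0.2, 1.0
V = (1 + q) / (1 - q) * np.diag([np.exp(2 * r), np.exp(-2 * r)])
print(qcs2_gaussian(V))   # = (1-q)/(1+q) * cosh(2r) with this convention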
We will continue the practice of [18,19] in referring to optically nonclassical states ρ for which the QCS is less than 1 as weakly nonclassical states, the others being strongly nonclassical. In other words, we have that $C^2(\rho) \leq 1$ if and only if ρ is either classical or weakly nonclassical. The relevance of this boundary between weakly and strongly nonclassical is clear from the many benchmark states investigated previously, and will emerge again below in Sec. V and in Sec. VII.

IV. Photon-added/subtracted states

A. General states

We first define what we mean by a general photon-added n-mode state ρ+. Recall that the most general multi-mode one-photon state is of the form
$$|c\rangle = a^\dagger(c)\,|0\rangle,$$
where $a^\dagger(c)$ is given by Eq. (3), $c \in \mathbb{C}^n$ and $\sum_i |c_i|^2 = 1$. In general, a photon-added state is then defined as
$$\rho_+ = N_+\, a^\dagger(c)\,\rho\, a(c) \quad \text{with} \quad N_+ = \big[\mathrm{Tr}\, a^\dagger(c)\,\rho\, a(c)\big]^{-1}, \qquad (17)$$
where ρ is the initial or mother state to which a photon is added. Similarly, the photon-subtracted state is defined as
$$\rho_- = N_-\, a(c)\,\rho\, a^\dagger(c) \quad \text{with} \quad N_- = \big[\mathrm{Tr}\, a(c)\,\rho\, a^\dagger(c)\big]^{-1}. \qquad (18)$$
Note that $\mathrm{Tr}\, a^\dagger(c)\rho\, a(c) = \mathrm{Tr}\, a(c)\rho\, a^\dagger(c) + 1 \geq 1$, so that $0 < N_+ \leq 1$. However, $\mathrm{Tr}\, a(c)\rho\, a^\dagger(c)$ can vanish, in which case $a(c)\rho a^\dagger(c) = 0$, so that ρ₋ is not defined. We will come back to this point below, but for now we assume $N_- < +\infty$.
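A minimal single-mode sketch (ours, with c = 1) of Eqs. (17)-(18) in a truncated Fock basis; it also exhibits the case where the subtracted state is undefined:

import numpy as np

N = 30
a = np.diag(np.sqrt(np.arange(1, N)), 1)
ad = a.conj().T

def photon_add(rho):
    sigma = ad @ rho @ a                   # a†(c) ρ a(c)
    return sigma / np.trace(sigma)         # N_+ normalization

def photon_subtract(rho):
    sigma = a @ rho @ ad                   # a(c) ρ a†(c)
    tr = np.trace(sigma)
    if np.isclose(abs(tr), 0):             # a(c) ρ a†(c) = 0: ρ_- is not defined
        raise ValueError("photon subtraction undefined for this state")
    return sigma / tr

vac = np.zeros((N, N)); vac[0, 0] = 1.0    # the vacuum |0><0|
print(photon_add(vac)[1, 1].real)          # 1.0: photon addition gives |1><1|
try:
    photon_subtract(vac)                   # a|0> = 0, so ρ_- does not exist
except ValueError as e:
    print(e)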
We write χ ± for the characteristic function of ρ ± . Its expression is obtained by a short and straightforward computation and we find:
$$\chi_\pm(z) = -N_\pm \left(c\cdot\partial_z \mp \frac{c\cdot\bar z}{2}\right)\left(\bar c\cdot\partial_{\bar z} \mp \frac{\bar c\cdot z}{2}\right)\chi(z) \qquad (19)$$
where χ(z) is the characteristic function of the state ρ.
To see this, we note first that the displacement operator can be written as $D(z) = e^{a^\dagger(z)}\, e^{-a(z)}\, e^{-|z|^2/2}$ or, equivalently, as $D(z) = e^{-a(z)}\, e^{a^\dagger(z)}\, e^{|z|^2/2}$.
Consequently, one has the well known formulas
$$\partial_{z_j} D(z) = \Big(a_j^\dagger - \frac{\bar z_j}{2}\Big) D(z) = D(z)\Big(a_j^\dagger + \frac{\bar z_j}{2}\Big),$$
$$\partial_{\bar z_j} D(z) = -D(z)\Big(a_j + \frac{z_j}{2}\Big) = -\Big(a_j - \frac{z_j}{2}\Big) D(z).$$
Hence, for all $c \in \mathbb{C}^n$, a short computation shows that
$$-\left(c\cdot\partial_z - \frac{c\cdot\bar z}{2}\right)\left(\bar c\cdot\partial_{\bar z} - \frac{\bar c\cdot z}{2}\right) D(z) = a(c)\, D(z)\, a^\dagger(c).$$
This implies Eq. (19) for χ + . The proof for χ − is similar. It is clear from Eq. (19) that, when adding m photons, one needs to apply m times the operator
$$-\left(c\cdot\partial_z - \frac{c\cdot\bar z}{2}\right)\left(\bar c\cdot\partial_{\bar z} - \frac{\bar c\cdot z}{2}\right)$$
and to normalize the result.
To compute the Wigner function W±(α) of ρ±, it now suffices to compute the Fourier transform of χ±(z) [see Eq. (6)]. Details of the calculation can be found in APPENDIX A. We obtain
$$W_\pm(\alpha) = N_\pm \left(\frac{c\cdot\partial_\alpha}{2} \mp c\cdot\bar\alpha\right)\left(\frac{\bar c\cdot\partial_{\bar\alpha}}{2} \mp \bar c\cdot\alpha\right) W(\alpha). \qquad (20)$$
The one-mode version of this expression was already obtained in [30]. Clearly then, if the characteristic function χ (or Wigner function W) of ρ is known, the characteristic/Wigner function of an arbitrary photonadded/subtracted state can be straightforwardly computed. We illustrate this in the following paragraph for Gaussian states.
B. Photon-added/subtracted Gaussian states
We suppose now that ρ = ρG is Gaussian. The computation in Eq. (19) then reduces to elementary algebra, using Eq. (9). The details are given in APPENDIX B and the result is
$$\chi_{G\pm}(z) = N_\pm \left(\frac{1}{2}\, m_c^T V m_c \pm \frac{1}{2} - \beta_\pm^T m_c\, m_c^T \beta_\pm\right)\chi_G(z). \qquad (21)$$
Here the covariance matrix V is the one of the Gaussian mother state,
$$\beta_\pm = \frac{1}{2}\big(\Omega V \Omega^T \mp I\big)\, U^\dagger Z + i\,\Omega d, \qquad (22)$$
the matrix U is given by [31]
$$U = \bigoplus_{j=1}^{n} u, \qquad \text{where} \qquad u = \frac{1}{\sqrt 2}\begin{pmatrix} 1 & i \\ 1 & -i \end{pmatrix},$$
and
$$Z = (z_1, \bar z_1, \ldots, z_n, \bar z_n)^T = \sqrt 2\, U \xi, \qquad m_c = U^\dagger\, (c_1, 0, \ldots, c_n, 0)^T. \qquad (23)$$
Note that $m_c^T V m_c = 2\,\mathrm{Cov}[a^\dagger(c), a(c)]$.
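For concreteness, a sketch (ours) of the objects entering Eqs. (22)-(23); for a single mode with c = 1 it reproduces $m_c = (1, -i)^T/\sqrt 2$.

import numpy as np

def U_matrix(n):
    u = np.array([[1, 1j], [1, -1j]]) / np.sqrt(2)
    U = np.zeros((2 * n, 2 * n), dtype=complex)
    for j in range(n):
        U[2 * j:2 * j + 2, 2 * j:2 * j + 2] = u
    return U

def m_vector(c):
    n = len(c)
    v = np.zeros(2 * n, dtype=complex)
    v[0::2] = c                            # (c_1, 0, ..., c_n, 0)^T
    return U_matrix(n).conj().T @ v        # m_c = U† (c_1, 0, ..., c_n, 0)^T

print(np.round(m_vector([1.0]), 6))        # [0.707107+0j, 0-0.707107j]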
With analogous calculations (see APPENDIX C), one also finds the Wigner function of a photon-added/subtracted Gaussian state. The resulting expressions are similar, with the difference that they involve the inverse of the covariance matrix V. One finds
$$W_{G\pm}(r) = N_\pm \left(M_\pm(V, c) + \lambda_\pm^T m_c\, m_c^T \lambda_\pm\right) W_G(r), \qquad (24)$$
where
$$\lambda_\pm = \big(V^{-1} \pm I\big)\, r - V^{-1} d \;\in\; \mathbb{R}^{2n}, \qquad (25)$$
and $M_\pm(V, c) \in \mathbb{R}$ is independent of r and given by
$$M_\pm(V, c) = \mp\frac{1}{2} - \frac{1}{2}\, m_c^T V^{-1} m_c. \qquad (26)$$
Let us note that in [20,21] expressions for the characteristic and Wigner functions of photon-added/subtracted Gaussian states were derived through a rather involved computation of the truncated correlation functions of the states, which then need to be summed. The resulting expressions are however less directly formulated in terms of the covariance matrix V and displacement vector d characterizing the Gaussian mother state. Our derivation here, starting as it does from the general and straightforward expressions in Eq. (19) and (20), is elementary, and the results are simply expressed in terms of d and of the (inverse of) V. We use them now to analyze the QCS and Wigner negativity of the photon-added/subtracted Gaussian states. Let us mention that yet another approach to the computation of the Wigner function of photon-subtracted states is proposed in [14]; the resulting expressions are again less explicit than the ones proposed here.
V. (Non)Classicality and Wigner negativity of photon-subtracted Gaussian states.
To prepare our quantitative analysis of the QCS of photon-added/subtracted Gaussian states, we obtain in this section general results on the (non)classicality and Wigner negativity/positivity of photon-subtracted Gaussian states. We know that photon-addition/subtraction degaussifies any Gaussian state. The question we address is: under what conditions on the Gaussian state and on c does it become nonclassical or even Wigner negative? Note that, for photon-addition, the answer is immediate. Photon-addition transforms any Gaussian state, centered or not, classical or not, into a Wigner negative and hence nonclassical and even quantum non-Gaussian state. This follows directly from Eq. (24)- (26) and was pointed out already in [20,21]. We therefore concentrate on the photon-subtracted case.
For one-mode photon-subtracted Gaussian states, we establish a relation between Wigner negativity and the QCS. Recall that a state is said to be Wigner positive if its Wigner function is everywhere nonnegative. Otherwise it is said, somewhat abusively, to be Wigner negative.
A. (Non)Classicality of photon-subtracted Gaussian states
It is well known that photon subtraction transforms a classical state into a classical state. We recall the argument. Suppose ρ is classical and let P(z) be its P-function, which is nonnegative. Then it follows directly from Eq. (7) that $N_-|\bar c\cdot z|^2 P(z)$, which is still nonnegative, is the P-function of ρ₋. In addition, photon subtraction can make a nonclassical state classical: $a|1\rangle = |0\rangle$ is an example. In other words, while photon subtraction always preserves the classicality of states, it does not always preserve their nonclassicality.
We show here that, nevertheless, photon subtraction always transforms a Gaussian nonclassical state into a nonclassical state. In other words, photon-subtraction preserves both the classicality and the nonclassicality of Gaussian states. This is the content of Proposition 1 below. It generalizes an observation made in [32] where it is remarked that, under photon subtraction, a single-mode squeezed vacuum state remains nonclassical for all values of squeezing r > 0. Our result holds for all nonclassical Gaussian multi-mode states, centered or not.
As a preliminary step, we first identify those $c \in \mathbb{C}^n$ with $\bar c\cdot c = 1$, and those ρG, for which $a(c)\rho_G a^\dagger(c) = 0$; for such c and ρG photon subtraction therefore does not lead to a state. The result is stated in the following Lemma.
Lemma 1. Let ρG be a Gaussian state with covariance matrix V and displacement vector d, and let $c \in \mathbb{C}^n$. Then $a(c)\rho_G a^\dagger(c) = 0$ if and only if $m_c \in \mathrm{Ker}(V - I)$ and $m_c \cdot d = 0$.
When V = I, the Gaussian state is in fact a coherent state $|z\rangle$. In that case the first condition of the Lemma is satisfied for all $c \in \mathbb{C}^n$ and the second condition reads $\bar c\cdot z = 0$. In other words, one has
$$a(c)|z\rangle = 0 \iff \bar c\cdot z = 0. \qquad (27)$$
Of course, this particular case follows immediately from the well known identity
$$a(c)|z\rangle = (\bar c\cdot z)\,|z\rangle, \qquad (28)$$
which is in turn a direct consequence of Eq. (5). When there is only one mode, Eq. (27) can only be satisfied if $|z\rangle = |0\rangle$. With several modes, on the other hand, it does occur for nonzero z. Lemma 1 treats the case of a general Gaussian state, and the proof, which uses Eq. (21)-(22), is slightly more involved.
Proof. $\tilde\rho_{G-} := a(c)\rho_G a^\dagger(c) = 0$ if and only if $\tilde\chi_{G-}(z) = 0$ for all $z \in \mathbb{C}^n$, where $\tilde\chi_{G-}$ is the characteristic function of $\tilde\rho_{G-}$. From Eq. (21)-(22), it is given by
$$\tilde\chi_{G-}(z) = \left(\frac{1}{2}\, m_c^T V m_c - \frac{1}{2} - \beta_-^T m_c\, m_c^T \beta_-\right)\chi_G(z).$$
For this to vanish, the polynomial factor preceding the exponential factor χG must vanish for all $z \in \mathbb{C}^n$. Let $v \in \mathbb{R}^{2n}$ be an eigenvector of V with eigenvalue λ ≠ 1. Then define, for all µ ∈ R,
$$Z(\mu) = \mu\, U\, \Omega^T v. \qquad (29)$$
Then
$$\beta_-^T m_c = \frac{\mu}{2}(\lambda - 1)(\Omega v)^T m_c + i(\Omega d)^T m_c. \qquad (30)$$
It follows that
$$\tilde\chi_{G-}(Z(\mu)) = p(\mu)\,\chi_G(Z(\mu)), \qquad (31)$$
where p(µ) is a polynomial of degree two. This polynomial vanishes identically if and only if it has vanishing coefficients. One readily checks this is equivalent to
$$(\Omega v)^T m_c = 0, \qquad (32)$$
$$\frac{1}{2}\, m_c^T (V - I) m_c + \big|(\Omega d)^T m_c\big|^2 = 0. \qquad (33)$$
Since $\Omega^T m_c = i\, m_c$, the first of these two conditions is equivalent to $v^T m_c = 0$. Since this needs to hold for all eigenvectors of V with eigenvalue λ ≠ 1, it follows that $m_c \in \mathrm{Ker}(V - I)$. Hence the first term in Eq. (33) vanishes, and so does therefore the second one. This concludes the proof.
We are now ready to fully characterize the classical and hence the nonclassical photon-subtracted Gaussian states.
Proposition 1. Let ρG be a Gaussian state. Let $c \in \mathbb{C}^n$ and suppose $a(c)\rho_G a^\dagger(c) \neq 0$. Then:
(i) ρG− is classical/nonclassical if and only if ρG is classical/nonclassical.
(ii) ρG− is classical if and only if V − I ≥ 0.
In Proposition 1, conditions (i) and (ii) are equivalent since it is well known that the classicality of a Gaussian state is equivalent to V ≥ I [31]. Proposition 1 (i) asserts that, whereas it is true that photon subtraction cannot produce a nonclassical state from a classical one, it is also true that it does never transform a nonclassical Gaussian state into a classical one. We will show in the next section that it can in fact considerably increase the degree of nonclassicality of a given Gaussian state.
Proof. In view of the previous comment, it is sufficient to prove that if ρ G − is classical then V ≥ I. For that purpose, we use the fact that, if ρ G − is classical, then the Fourier transform of the P -function, which is known to be given by e 1 2 ξ·ξ χ G − (ξ) [33], is a bounded function. Using Eq. (9) and (21) this implies
|e 1 2 ξ·ξ χ G − (ξ)| = N − 1 2 m T c V m c − 1 2 − β T − m c m T c β − ×e − 1 2 ξ T Ω(V −I)Ω T ξ (34) is bounded. Suppose it is not true that V ≥ I. Then there exists v ∈ R 2n , v · v = 1, and 0 ≤ γ < 1 so that V v = γv. For such v, we define Z(µ) as in Eq. (29) and hence ξ(µ) = 1 √ 2 U † Z(µ) = µ 1 √ 2 Ω T v.
The exponential factor in (34) then grows without bound for large µ.
Lemma 2
Let ρ G be a Gaussian state and c ∈ C n . Suppose ρ G − is Wigner-negative. Then
m c T V −1 m c > 1.(35)
Suppose that either d = 0 or that 1 is not an eigenvalue of V . Then Eq. (35) is both necessary and sufficient for ρ G − to be Wigner negative.
Note that it follows from Lemma 2 that photonsubtracted Gaussian states are Wigner-positive if
m c T V −1 m c ≤ 1.
This straightforward condition therefore identifies a family of Wigner-positive states indexed by V and by c which is of independent interest because a complete characterization of all Wigner positive states is not known [34].
Proof. Since m c m T c is a rank one projector in C 2n , the term λ T − m c m T c λ − in Eq. (24) is nonnegative. It fol- lows then from Eq. (24)-(26) that if ρ G − is Wigner nega- tive, then M − (V, c) < 0,M − (V, c) < 0,(36)
which yields the result.
In the case where only a single mode is present (n = 1) the previous result can be sharpened and a link established between the Wigner negativity of the photonsubtracted state and the QCS of the Gaussian mother state. First, without loss of generality, one can now take c = 1 and one finds from Eq. (16) that
C 2 (ρ G ) = 1 2 TrV −1 = m T c V −1 m c ,(37)M ± (V, c) = ∓ 1 2 − 1 2 m T c V −1 m c = − 1 2 C 2 (ρ G ) ± 1 .(38)
Next, introduce an orthogonal eigenbasis e 1 , e 2 for V :
0 < v 1 ≤ v 2 , e i ∈ R 2 , V e i = v i e i .
One then has the following result.
Lemma 3
Suppose v 1 > 1. Then the one-mode photon-subtracted Gaussian state ρ G − is classical and hence Wigner positive. Suppose v 1 < 1. Then the one-mode photon-subtracted Gaussian state ρ G − is Wigner negative if and only if
C 2 (ρ G ) > 1.(39)
Suppose v 1 = 1. Then the one-mode photon-subtracted Gaussian state ρ G − is Wigner negative if and only if
C 2 (ρ G ) > 1 + (d T e 1 ) 2 .(40)
In general therefore, if ρ G − is Wigner negative, then the Gaussian mother state ρ G is strongly nonclassical, meaning C 2 (ρ G ) > 1.
We already showed in the previous subsection that photon-subtracted states are nonclassical if and only if ρ G is nonclassical. One now sees in addition that if their Wigner function has some negativity then the Gaussian mother state is strongly nonclassical. Proof. The first statement follows directly from Proposition 1. Since detV ≥ 1, the condition v 1 < 1 implies v 2 > 1. Hence Lemma 2 implies the result in this case. Now suppose v 1 = 1, so that v 2 ≥ 1. If v 2 = 1, the state ρ G is a coherent state, in which case C(ρ G ) = 1 and the condition is never satisfied; but this is compatible with the statement of the Lemma since, in view of Eq. (28), the photon-subtracted state is then the same coherent state and hence Wigner positive. It remains to treat the case where v 1 = 1 < v 2 . It follows from Eq. (24) and Eq. (25) that the Wigner function is negative in at least one point if and only if
min r M − (V, c) + 1 2 λ − 2 < 0 where λ − = V −1 − I r − V −1 d ∈ R 2 .
Since min r λ − 2 = (e T 1 d) 2 the result then follows from Eq. (38).
In the next sections we turn to a quantitative analysis of the Wigner negativity and the QCS for single-mode photon-added/subtracted squeezed thermal states. We will show that the Wigner negativity of such states is bounded above by that of the one-photon Fock state, and very sensitive to noise and squeezing. The QCS can on the other hand be very strongly enhanced by the photon-addition/subtraction process and increases with the squeezing. It is also sensitive to noise and losses.
VI. Photon-added squeezed thermal states
In this section, we quantitatively evaluate the effect produced by adding a photon to a general centered singlemode Gaussian state on the Wigner negative volume and on the QCS of the state.
We note that the nonclassical nature of such photonadded states has previously been certified theoretically and/or experimentally only for the two particular cases of photon-added thermal states (see [9,35,36]) and of photon-added squeezed vacuum states (see [20,21,37,38]) using various nonclassicality witnesses, but without providing a complete quantitative assessment, even in these particular cases.
Our analysis of the Wigner negative volume of the photon-added Gaussian states shows that it is highest for photon-added squeezed vacuum states. It is sensitive to noise and, in the presence of noise, it decreases with increased squeezing (Sec. VI A). In this sense, there isat a fixed noise level -a tradeoff between Wigner negativity and squeezing for such states. We will further see that the degaussification process of photon-adding tends to entail a considerable percentage increase in QCS (Sec. VI B). Whereas this entails a corresponding gain in nonclassicality, it also means the resulting state is considerably more sensitive to environmental decoherence than its Gaussian mother state, as explained in Section III.
A squeezed thermal (SqTh) state is defined as (41) is a thermal state of temperature 2 q and S = e is the squeezing operator with z = re iφ . The rotational invariance of the QCS implies we can restrict ourselves to the case where φ = 0. The covariance matrix of these states is
ρ SqTh = Sρ Th S † where ρ Th = (1 − q) n q n |n n|V SqTh = 1 + q 1 − q e −2r 0 0 e 2r
and their characteristic function is
χ SqTh (z) = e − 1 2 1+q 1−q (e 2r ξ 2 1 +e −2r ξ 2 2 ) ,
where we recall z = ξ 1 + iξ 2 . Their QCS, computed with Eq. (16), is then equal to
C 2 SqTh (q, r) = 1 − q 1 + q cosh(2r).(42)
Note that it increases sharply with the squeezing parameter r and decreases with q. Increased squeezing therefore reduces the decoherence time sharply. A photon-added squeezed thermal (SqTh+) state is defined as
ρ SqTh+ = N SqTh+ a † ρ SqTh a where N SqTh+ = 2 1 + 1+q 1−q cosh(2r) −1
. Its characteristic function can be computed using Eq. (21):
χ SqTh+ (z) = χ SqTh (z) 2q|z| 2 (1 − q 2 ) cosh 2r + (1 − q) 2 + q + 1 q − 1 e 2r ξ 2 1 + e −2r ξ 2 2 + 1 .(43)
A. The Wigner negativity of photon-added squeezed thermal states
To evaluate the Wigner negativity of the SqTh+ states, we evaluate, as is customary, their Wigner negative volume [39], that we shall denote by N W (ρ): it is defined as the absolute value of the integral of the Wigner function over the area where the latter is negative. We recall that the Wigner negative volume has been proven to be a (non-faithful) monotone of genuine (or quantum) non-Gaussianity [16]. The Wigner function of the SqTh+ state can be readily computed with (24) (see APPENDIX E for the details). One sees that it is negative inside an ellipse centered at the origin where it reaches its minimal value. Except when q = 0, a general analytical expression for N W,SqTh+ (q, r) is not readily obtained, but the result of a numerical computation is shown in Fig. 1a. 2 The actual temperature is given by T with q = e − ω kT ; q is also related to the mean photon number n as q = For SqV+ states, when q = 0, an analytical computation yields the following value, independently of r [39]:
N W,SqV (r) = N W,SqTh+ (0, r) = 2 √ e − 1 = 0.213. (44)
This is the maximal value attained on SqTh+ states, and it is in particular the value for the first Fock state. The Wigner negative volume N W,SqTh+ (q, r) decreases with the noise q, at a given value of the squeezing r: this is not surprising, since higher q is expected to make the state more classical. The dot-dashed purple line on Fig. 1a indicates for which value of the noise the Wigner negative volume drops down to half the value it takes on the first Fock state. This happens with a noise in the range 0.12 ≤ q ≤ 0.2 depending on the squeezing and it shows that the Wigner negative volume of the SqTh+ states is quite sensitive to noise.
Contrary to what happens when q = 0, when q = 0, the Wigner negative volume does depend on the squeezing and in fact decreases with increasing r. In this sense, at a given noise level, there is a tradeoff to be considered: one pays in Wigner negative volume to gain in squeezing. In addition, as we will see below, increased squeezing substantially reduces the decoherence time.
The Wigner negative volume saturates to a finite value at large r that decreases with q and that is readily com-puted (see APPENDIX E) for small q:
N W,SqV (r) = N W,SqV (+∞) ≥ N W,SqTh+ (q, +∞) 2 √ e − 1 (1 − q) 3 (1 + q) 3 .
B. The QCS of photon-added squeezed thermal states
The QCS of the SqTh+ states can be computed with Eq. (13). The result is explicit (it can be found in AP-PENDIX D), but is not very instructive. To second order in q, r, it reads
C 2 SqTh+ (q, r) 3 − 8q + 8q 2 + 6r 2
The expression simplifies considerably for photon-added squeezed vacuum (SqV+, q = 0) states and for photonadded thermal (Th+, r = 0) states:
C 2 SqV+ (0, r) = 3 cosh(2r) (45) C 2 Th+ (q, 0) = 6 q + 1 − 1 − 2(q + 1) q 2 + 1 .(46)
Examining Fig. 1b, one observes that the QCS of SqTh+ states increases sharply with r and decreases with q, as the QCS of their Gaussian mother states (see Eq. (42)). The QCS of the SqTh+ states tends however to be considerably higher as we will see below. While they therefore display correspondingly stronger nonclassical effects, they are also more prone to environmental decoherence. For example, the QCS (squared) of the first Fock state (which is the photon-added state of the vacuum, corresponding to q = 0 = r) equals 3 [see Eq. (15)], while that of the vacuum itself is only 1: this corresponds to a 200% increase.
We now investigate quantitatively how strongly the degaussification through photon-addition affects the QCS for general (q, r). For that purpose we will use the relative QCS change R ± (ρ) defined as
R ± (ρ) = C 2 (ρ ± ) − C 2 (ρ) C 2 (ρ) ,(47)
so that C 2 (ρ ± ) = (1 + R ± (ρ))C 2 (ρ). It provides the percentage change in QCS as a result of the photonaddition/subtraction process. It is indeed clear that some of the QCS of the photon-added/subtracted Gaussian states is inherited from the Gaussian mother state to which a photon is added or from which it is subtracted; and that part of it is due to the addition/subtraction process itself. We show in Fig. 1c the contour plot in the (q, r)plane of the relative change R SqTh+ (q, r) of the QCS obtained with the addition of a photon. From Eq. (16) and Eq. (45) one sees that for squeezed vacuum states, one has R SqTh+ (0, r) = 2, independently of the squeezing. This corresponds to a 200% increase of the QCS due to photon addition, and it is the maximal value reached, as can be seen from the figure. When q > 0, the change in QCS decreases with increasing r and with increasing q. Nevertheless, there is a large region in the parameter space (q, r) where the relative change is positive and sizable. For q < 0.1 and values of r in the range 1 ≤ r ≤ 2 (which corresponds to a squeezing factor comprised between 7 and 15 dB), it is at least 90%, for example. For q < 0.2 and the same range of r values, it is still at least 50%.
In the region to the right of the blue dot-dashed curve, the relative gain is negative. This means that photonaddition leads to a decrease in QCS and a concomitant increase in decoherence time. The latter is however less than 10% in the region represented. In addition, in this region the Wigner negative volume of the states is small, at most 25% of the maximal value reached on the photonadded squeezed vacuum states.
Finally, one may note that the level curves of R SqTh+ (q, r) have vertical asymptotes, reflecting the fact that, at large r, the change in QCS is independent of the squeezing. One finds readily, for all q and r (see AP-PENDIX D) that R SqTh+ (q, r) ≥ R SqTh+ (q, +∞) = 2 − 12q q 2 + 1 q 4 + 10q 2 + 1 .
For example, when q ≤ 0.1, it is larger than 90% for all values of r.
In view of what precedes, one observes that this asymptotic value is nearly reached when r = 2. We conclude that a considerable increase of the QCS can therefore result from the photon-addition process at experimentally accessible values of the squeezing and provided q is not too large.
In conclusion, if the Wigner negative volume is used as the figure of merit, degaussification of a Gaussian one-mode state through photon addition gives an optimal result for squeezed vacua, independently of the amount of squeezing. This means that photon addition can both produce a Wigner negativity equal to that of a one-photon Fock space and at the same time admit an arbitrary amount of squeezing. However, as our analysis shows, the higher the squeezing, the more sensitive the Wigner negative volume is to noise, which is always present. In addition, a high squeezing induces a large QCS in the SqTh+ states, meaning that the resulting states are very sensitive to environmental decoherence, much more so than their Gaussian mother states.
VII. Photon-subtracted squeezed thermal states
Like in the photon-addition case, subtracting a photon can enhance certain nonclassical features of the state, as already mentioned in [20,21,40]. We provide here a quantitative analysis of the Wigner negative volume and QCS of photon-subtracted squeezed thermal (SqTh−) states and compare the results with those of the previous section. Details of the computations, which follow along similar lines as those for the SqTh+ states, can be found in APPENDIX D and APPENDIX E. As already mentioned, photon-subtraction turns the one-photon Fock state into the vacuum whereas photonaddition turns it into the two-photon Fock state. Photonsubtraction also preserves the coherent states and generally transforms a classical state into a classical state. This suggests that photon subtraction reduces the nonclassical nature of any state to which it is applied or at least that it cannot be very efficient in enhancing it. Whereas photon-addition increases the nonclassicality efficiently. By investigating the SqTh− states we will see this is indeed correct, but only to some extent. We will distinguish three regimes: q = 0, q = 0 and r small, q = 0 and r large.
In the absence of noise (q = 0), it is well known that adding a photon to or removing a photon from a squeezed vacuum (SqV) state (with r > 0) produces in fact the exact same state. Indeed, using the relations S † (z)a † S(z) = a † cosh r − ae −iφ sinh r, S † (z)aS(z) = a cosh r − a † e iφ sinh r, we have
a † |SqV = a † S|0 = S(a † cosh r − ae −iφ sinh r)|0 ∝ S|1 , a|SqV = aS|0 = S(a cosh r − a † e iφ sinh r)|0 ∝ S|1 .
Hence, once they are normalized, the SqV+ and SqV− states are identical. They therefore have the same Wigner negative volume [see Eq. (44)] and the same QCS [see Eq. (45)], both independent of r. In the absence of noise, photon-subtraction is consequently not less efficient than photon addition in creating nonclassical features.
We now consider the case where q = 0. In that case, the photon-added and -subtracted states are distinct. We plot the Wigner negative volume of the SqTh− states in Fig. 2a and their QCS in Fig. 2b. Recall first from Proposition 1 that the line (dotted green)
r = 1 2 ln 1 + q 1 − q
separates the classical SqTh states from the nonclassical ones and also the classical SqTh− states from the nonclassical ones. So, SqTh− states are nonclassical only if sufficiently squeezed. This is in contrast with SqTh+ states, that always are Wigner negative and hence nonclassical. In addition, for the SqTh− state to be Wigner negative, the squeezing must be larger still: the point (q, r) must lie above the curve C 2 SqTh− (q, r) = 1 (red dashed), which can be proven to coincide with the curve C 2 SqTh (q, r) = 1. In the region between the (red) dashed and (green) dotted curves, one therefore finds nonclassical Wigner positive states. They are weakly nonclassical since C 2 SqTh− (q, r) < 1; note that photon-subtraction therefore transforms a weakly nonclassical Gaussian state into a weakly nonclassical photon-subtracted state. We conclude that, in the presence of noise and at low enough squeezing, the SqTh− states are either classical or else weakly nonclassical and Wigner positive. More generally, in comparing photon-addition to photon-subtraction, we find that, for all values of q and r,
N W,SqTh− (q, r) ≤ N W,SqTh+ (q, r).
We now turn to the question of the (quantum) non-Gaussianity of the photon-subtracted Gaussian states. It is guaranteed to hold whenever the squeezing is strong enough so that the state is strongly nonclassical, since then the Wigner volume of those states does not vanish. When the squeezing is too small, they are classical and hence in the convex hull of the Gaussian states. The question therefore poses itself nontrivially only for the weakly non-classical photon-subtracted Gaussian states that correspond to the points in the region between the (red) dashed and (green) dotted curves in Fig. 2, which are Wigner positive. We will use the sufficient criterium for quantum non-Gaussianity developed in [15] to address the question. It is shown in [15] that, if a state's Wigner function satisfies
W (0) ≤ 2 π e −2n(1+n)(48)
then the state is quantum non-Gaussian. Here n = Tr(ρa † a) is the mean photon number of ρ. For the SqTh− states under consideration here, we have (see AP-PENDIX C)
W G − (0) = N − M − (V )W G (0) = 1 n G M − (V ) 2 π 1 √ det V , where N − = 1
Trρ G a † a = 1 n G andn G is the mean photon number of the Gaussian mother state and where
M − (V ) = 1 2 (1 − C 2 (ρ G )).
This yields explicitly
W G (0) = 2(1 − q) 2 (1 + q − (1 − q) cosh(2r)) π(1 + q) 2 ((1 + q) cosh(2r) − (1 − q))
and
n − = Trρ SqTh− a † a (49) = W G − (α) α 2 1 + α 2 2 − 1 2 d 2 α = 1 2 3(1 + q) cosh(2r) − 4q (1+q) cosh(2r)−(1−q) 1 − q − 1 .
The points where the inequality in Eq. (48) are saturated are indicated as the (orange) dashed-dotted line in Fig. 2 (d). Above this line, and below the (red) dashed line, the states are therefore guaranteed to be quantum non-Gaussian. Finer criteria would be needed to decide if the states between the (orange) dashed-dotted line and above the (green) dotted line are quantum non-Gaussian. We may conclude that at low squeezing, photonaddition applied to Gaussian states creates Wigner negativity whereas photon-subtraction does not and, in general, that the Wigner negative volume is larger after photon-addition than after photon-subtraction. This indicates that photon-addition is more efficient in inducing nonclassical features, a picture that is confirmed by the analysis of the QCS at small and intermediate values of r that follows. As we shall see however, the relative advantage of photon-addition over photon-subtraction is strongly suppressed at large squeezing.
As in the case of photon-addition, the explicit expression for the QCS is not very instructive for general q and r (see APPENDIX D), but it simplifies for SqV− and SqTh− states to
C 2
SqV− (0, r) = 3 cosh(2r) = C 2 SqV+ (0, r),
C 2 Th− (q, 0) = 6 q + 1 − 3 − 2(1 − q) q 2 + 1 ≤ 1.
The QCS of SqTh− states is plotted in Fig. 2b. One sees that, as for SqTh+ states, C 2 SqTh− (q, r) is increasing in r and decreasing in q.
Comparing the effect of photon-addition on the QCS to the one of photon-subtraction, we find that, provided either q < 0.5 or r < 0.5,
C 2 Th− (q, r) ≤ C 2 Th+ (q, r).
The last inequality is reversed when q > 0.5 and r > 0.5 but in this region the nonclassical features of the photon-added/subtracted states are at any rate limited as can be seen from Fig. 1-2. This again indicates that photon-addition tends to enhance the nonclassical features more than photon-subtraction. For example, one finds C 2 SqTh+ (0.1, 0.5) 3.12 and Note that, at these values of q and r, the Wigner negative volume of the SqTh− state represents only 16% of the maximal possible value, which is the one of the SqV± state (N W,SqV± = 0.213). The Wigner negative volume of the SqTh+ state is still 70% of the maximal value. This is due to a general effect, namely that the loss of Wigner negativity due to the noise, at a given value of r, is larger for SqTh− states: the level lines of the Wigner negative volume are closer together (compare Fig. 1a and Fig. 2a). As a result, for a given squeezing parameter r, the Wigner negative volume of a SqTh− state drops down to half its value for the single photon state (purple dot-dashed line) at a smaller q value than for the corresponding SqTh+ state. We conclude that, whereas at q = 0, photon-addition and -subtraction applied to Gaussian states produce exactly the same result, the nonclassical properties of those states -and in particular their Wigner negative volume -are more sensitive to noise for the case of photonsubtraction than for the one of photon-addition. On the other hand, the price to pay is that the photon-added states, having a larger QCS, are more sensitive to decoherence.
We finally consider the regime where r is very large. The situation is then very different. For the Wigner negative volume, one finds
N W,SqTh+ (q, +∞) = N W,SqTh− (q, +∞) 2 √ e − 1 (1 − q) 3 (1 + q) 3 ;
it is identical for photon-added and for photonsubtracted Gaussian states.
In addition, we plotted in Fig. 2c the relative gain R SqTh− of the photon subtracted squeezed thermal state. As for photon addition, the level curves of the relative gain have vertical asymptotes meaning that at large r the gain is independent of the squeezing. This asymptotic value is again identical for photon-addition and for photon-subtraction and for the latter it now upper bounds the relative gain;
R SqTh− (q, r) ≤ R SqTh± (q, +∞) = 2 − 12q q 2 + 1 q 4 + 10q 2 + 1 ≤ R SqTh+ (q, r).
This means that at large enough r a sizable relative gain in QCS is observed when the state is not too noisy, both for photon-addition and for photon-subtraction.
In fact, by noticing that R SqTh+ (+∞, q) = R SqTh− (+∞, q), we see that photon addition or subtraction has a very similar effect on sufficiently squeezed Gaussian states. In this regime, we therefore find the following simple approximate formula for the QCS of either states:
C 2 SqTh± (r, q) 3 − 12q q 2 + 1 q 4 + 10q 2 + 1 1 − q 1 + q cosh(2r).
The error between this formula and the exact expression is less than 10% for all values of q and for r > 1.3 in the photon-added case and for r > 0.6 in the photonsubtracted case.
In fact, as shown in APPENDIX E, in the limit r → +∞, the photon-added and photon-subtracted Gaussian states coincide at all values of q.
VIII. Photon-added two-mode Gaussian states
In this section we illustrate how the general formulas for the Wigner and characteristic functions of photonadded Gaussian states shown in Sec. IV B can be used to study their nonclassical features in the case when two modes are present. We will consider states of the form
a † (c)ρ G ⊗ ρ G a(c),(50)
where ρ G is a single-mode Gaussian state. We will not give an exhaustive treatment here, but consider two particular cases. In Sec. VIII A we consider the case where ρ G is a coherent state. Such states where realized experimentally as reported in [10]. In Sec. VIII B we consider the case where ρ G is a squeezed thermal state.
A. Photon added two-mode coherent state
In [10] the delocalized single-photon addition on two input modes containing identical coherent states |α is realized experimentally. Two families of states are thus constructed and studied:
|ψeven odd = Neven odd √ 2 (a † 1 |α |α ± |α a † 2 |α )
where N even = (1 + 2|α| 2 ) −1/2 and N odd = 1.
The Wigner negative volume of these states can be computed from Eq. (24). One finds
N Weven = 2 1 + 2|α| 2 1 √ 2 0 e −r 2 −|α| 2 r(1 − 2r 2 )I 0 (2r|α|)dr, N W odd = 2 √ e − 1 = N W |1 ,
where I 0 is the modified Bessel function. The odd states have a Wigner negative volume equal to that of the single-mode one-photon Fock state, independently of α.
The situation is different for the even states. When α = 0, N Weven = N W odd but N Weven is monotonically decreasing and for α = 1.9, N Weven has dropped to 5% of its maximal value showing that the even states loose their Wigner negativity fast as a function of α.
It is straightforward to compute the QCS of these states: since they are pure, one can use Eq. (14) directly. One finds
C 2 ψeven = 1 + 1 (1 + 2|α| 2 ) 2 , C 2 ψ odd = 2.
Here also, the odd state shows an α-independent QCS, which is however lower than the one of the single-mode one-photon Fock state. The QCS of the even states decreases fast with α. It follows then that, by this criterium also, the odd states are more nonclassical than the even ones but also more prone to environmental decoherence. One concludes that the odd states have stronger nonclassicality properties than the even ones. The same conclusion can be drawn from the study of the entanglement between the two modes for those states. Indeed, in [10] it is shown to be maximal and independent of α for the odd state. More precisely, the Negativity of the Partial Transpose (NPT) of these states is
NPT even = 1 1 + |α| 2 , NPT odd = 1,
indicating the odd state is more strongly entangled than the even one. Again, since the states are pure, one can easily compute their entanglement of formation (EoF) [41], which has the same behaviour. Since the reduced density matrix is a rank two operator, the maximal possible EoF is ln 2. This is indeed the value reached for the odd states at all values of α as well as for the even states when α = 0, as is readily checked. For even states, on the other hand, it tends to its minimal value, which is 0, as α tends to infinity.
B. Photon added two-mode squeezed thermal states (2SqTh+)
We now briefly consider the case where ρ G in Eq. (50) is a squeezed thermal state. We then add one photon with the creation operator a † (c) where c = (c 1 ± 1 − c 2 1 ) T ∈ C 2 . The characteristic functions and QCS can be obtained with Eqs. (21) and (13). Once again, the formulas are explicit, but not very instructive, and we do not show them here. It turns out to be easy to evaluate the Wigner function of the photon-added state at the origin and to observe it does not depend on c. The same values are obtained whether the photon is added on the first mode, on the second one, or shared between the two modes. One therefore finds, for q = 0.2, r = 0.5
N W 2SqTh+ (0.2, 0.5) = 0.104.
which is the same value as for the SqTh+ states with the same r and q.
The computation of the QCS and hence of the nonclassicality gain of this state reveals the same phenomenon: they do not depend on c. One finds
C 2 2SqTh+ = 1.54 = 1 2 (C 2 SqTh + C 2 SqTh+ ),
a reflection of the fact that the QCS is the average of the coherence scale of the quadratures.
IX. Conclusion/Discussion
We have quantitatively analysed how photon addition/subtraction affects the nonclassical properties of Gaussian states. We concentrated on two measures of nonclassicality, the Wigner negative volume and the quadrature coherence scale (QCS). We have established that, since the QCS tends to undergo a very substantial increase in the photon-addition/subtraction process, the resulting non-Gaussian states are considerably more sensitive to environmental decoherence than the original Gaussian states. In addition, the decoherence time shortens rapidly with increased squeezing.
For single-mode fields, we have shown that, whereas at low and intermediate squeezing, photon-addition is considerably more efficient in enhancing or creating nonclassicality than photon subtraction, at high squeezing, the two procedures produce the same effects. The Wigner negative volume saturates in this regime to a noise-dependent fraction of its maximal value, which is attained on photon-added/subtracted squeezed vacuum states. In the course of our analysis, we identified what seems to be a new family of quantum non-Gaussian Wigner positive states, obtained by photon-subtraction from squeezed thermal states.
One may note that the Wigner negativity and the quadrature coherence scale are not the only telltale signs of nonclassicality in quantum optics: nonclassicality comes in many guises and can be recognized through the observation of a variety of physical or mathematical properties signaling the quantum nature of the state, the most prominent ones being non-Poissonian statistics, squeezing, Wigner negativity, interference fringes, (quantum) non-Gaussianity and entanglement. In the context of quantum optics, a large number of witnesses, measures and monotones of nonclassicality have consequently been developed [16,25,35,39,. It would be of interest to complete the present study by also testing how these other figures of merit are affected by the photon-addition/subtraction process. Note however that the analytical or even numerical computation of many of those quantities is not straightforward. For example, to compute the Wigner-Yanase skew information one needs a priori to compute the square root of the density matrix, which is not obvious for most states, including the photon-added/subtracted states considered here. Similarly, except for pure states, the quantum Fisher information, that can be used as a nonclassicality measure [64,65], is generally hard to determine. Let us note that on Gaussian states, the quantum Fisher information coincides with the QCS [19], as well as on pure states, but not in general.
As for the entanglement of multi-mode photonadded/subtracted Gaussian states, it has been investigated in [20,21]. The maximal entanglement increaseas measured by the Renyi entropy of the Wigner function -that can occur in the process has been evaluated in [66]. For pure states, upper bounds on the entanglement of formation in terms of the QCS can be inferred from the results of [26].
In experiments, losses are inevitable and any theoretical analysis needs to take them into account in its modelling of the situation. In quantum optics, this is usually done by coupling the field via a beamsplitter to a vacuum mode [3,8,12,52,67,68], or by simply mixing the state with a vacuum component [69]. In both cases, explicit expressions of the characteristic function of the lossy state are available in terms of the original one. Hence, the methods expounded here can be used to compute both the quadrature coherence scale and Wigner negativity for lossy photon-added/subtracted Gaussian states. They will both be diminished by the losses, as al-ready illustrated in [12] for the Wigner function of singlephoton-added thermal states. For the quadrature coherence scale, preliminary computations, not shown here, confirm this picture. Much of the experimental work on those states has concentrated on certifying their nonclassicality [3,8,9,12], in particular for high noise and losses, where the quantum nature of the states is strongly suppressed and this certification therefore difficult. Whereas this constitutes an obvious challenge, a perhaps more important challenge is to prepare states of the optical field that show strong nonclassical properties, such as a high value for the Wigner negative volume and/or the squeezing. They can be expected to be more likely to be useful in various quantum technology protocols, but, as we show here, the high value of the quadrature coherence scale generated by the photon-addition/subtraction makes them strongly sensitive to environmental decoherence and hence hard to prepare and maintain.
We finally point out that the method to compute the characteristic function of single-photonadded/subtracted states that we introduce here can easily be generalized to the case of multi-photon addition/subtraction and can provide a useful tool for further studies of various features of such states.
Acknowledgments: This work was supported in part by the Agence Nationale de la Recherche under grant ANR-11-LABX-0007-01 (Labex CEMPI) and by the Nord-Pas de Calais Regional Council and the European Regional Development Fund through the Contrat de ProjetsÉtat-Région (CPER). AH acknowledges financial support from the Natural Sciences and Engineering Research Council of Canada (NSERC). We thank the anonymous referees for their constructive comments that helped us change the focus of the paper. We are also grateful to G. Patera for helpful discussions. We focus on the photon-addition case. Calculations are similiar in the photon-subtraction case. The Wigner function is defined as
W + (α) = 1 π 2n χ + (z)e (z·α−z·ᾱ) d 2n z
where χ + is given by Eq. (19). Computing each term of the integral, we have :
1 π 2n (c · ∂z)(c · ∂ z )χ(z)ez ·α−z·ᾱ d 2n z = ijc i c j 1 π 2n ∂z i ∂ zj χ(z) ez ·α−z·ᾱ d 2n z = − ijc i c j 1 π 2n ∂ zj χ(z) α i ez ·α−z·ᾱ d 2n z = − ijc i c j α iᾱj 1 π 2n χ(z)ez ·α−z·ᾱ d 2n z = −|c ·ᾱ| 2 W (α),
where we used an integration by parts. Here, W (α) is the Wigner function of the initial state. The other terms in the integral are :
1 π 2n (c ·z)(c · ∂z)χ(z)ez ·α−z·ᾱ d 2n z = ij c icj 1 π 2n z i ∂z j χ(z) ez ·α−z·ᾱ d 2n z = − ij c icj 1 π 2n z i α j χ(z)ez ·α−z·ᾱ d 2n z + δ ij χ(z)ez ·α−z·ᾱ d 2n z = − ij c icj 1 π 2n α j ∂ αi χ(z)ez ·α−z·ᾱ d 2n z + δ ij W (α) = −(c · α)(c · ∂ α )W (α) − W (α),
and similarly
1 π 2n (c · z)(c · ∂ z )χ(z)ez ·α−z·ᾱ d 2n z = −(c ·ᾱ)(c · ∂ᾱ)W (α) − W (α), 1 π 2n (c ·z)(c · z)χ(z)ez ·α−z·ᾱ d 2n = −(c · ∂ α )(c · ∂ᾱ)W (α).
Putting everything together, we obtain the Wigner function of the photon added state :
W + (α) = N + − 1 2 + |c ·ᾱ| 2 − 1 2 (c · α)(c · ∂ α ) − 1 2 (c ·ᾱ)(c · ∂ᾱ) + 1 4 (c · ∂ α )(c · ∂ᾱ) W (α) = N + c · ∂ α 2 −ᾱ c · ∂ᾱ 2 − α W (α)
APPENDIX B. Characteristic function of a photon-added/subtracted Gaussian state (proof of Eq. (21))
We will compute the characteristic function of a general multi-mode photon-added/subtracted Gaussian state using Eq. (19). This involves taking derivatives of the Gaussian characteristic function (9)
χ G (ξ) = e K G (ξ) , with K G (ξ) = − 1 2 ξ T ΩV Ω T ξ − i √ 2(Ωd) T ξ and Ω = n j=1 Ω 2 , Ω 2 = 0 1 −1 0 ,
with respect to the complex variables z j = ξ j1 + iξ j2 ∈ C, z j ∈ C. For that purpose, we first recall the expression of K G in terms of these variables. We define ν = 1
√ 2 1 −i T so that ξ = 1 √ 2 (z ν +z ν).
Since Ω 2 ν = −iν and Ω 2ν = iν, we find
Ω 2 ξ = i 1 √ 2 u T Ω 2 z z , where u = ν T ν T = 1 √ 2 1 i 1 −i .
Introducing the unitary matrix U = j u, and defining A = U V U T , we find that A is the matrix of the covariances of the creation and annihilation operators:
A = à 11Ã12 · · ·Ã 1ñ A 21 . . . . . . A n1 · · ·Ã nn withà ij = 2 Cov[a i , a j ] Cov[a i , a † j ] Cov[a † i , a j ] Cov[a † i , a † j ]. = uṼ ij u T .
HereṼ ij is the two-by-two submatrix of the covariance matrix V defined bỹ
V ij = V 2i−1,2j−1 V 2i−1,2j V 2i,2j−1 V 2i,2j = 2 Cov[x i ,x j ] Cov[x i ,p j ] Cov[p i ,x j ] Cov[p i ,p j ] .
One then finds
1 2 ξ T ΩV Ω T ξ = 1 2 kl ξ T k (Ω 2Ṽ kl Ω 2 )ξ l = − 1 4 kl z kzk Ω T 2Ã kl Ω 2 z l z l and i √ 2(Ωd) T ξ = k d T k u T Ω 2 z k z k .
Using that
d T k u T = d T k (ν ν) = a k a † k ,
this leads to
K G (Z) = 1 4 Z T (Ω T AΩ)Z − ∆ T ΩZ, where Z = z 1 z 1 . . . z n z n , and ∆ = a 1 a † 1 . . . a n a † n = U d.
It is now easy to take the derivatives along z k andz k and we obtain:
c · ∂ z χ G (z) = Cov[a † (c), a † (z) − a(z)] + a † (c) χ G (z), c · ∂zχ G (z) = − Cov[a(c), a † (z) − a(z)] + a(c) χ G (z), (c · ∂z)(c · ∂ z )χ G (z) = − Cov[a † (c), a † (z) − a(z)] + a † (c) Cov[a(c), a † (z) − a(z)] + a(c) χ G (z) −Cov[a † (c), a(c)]χ G (z).
According to Eq. (19), the characteristic function of the photon added/subtracted state is given by
χ G ± (z) = N ± ± χ G (z) 2 − (c · ∂z)(c · ∂ z )χ G (z) ± c ·z 2 (c · ∂z)χ G (z) ± c · z 2 (c · ∂ z )χ G (z) − |c · z| 2 4 χ G (z) .
Hence we obtain
χ G ± (z) = N ± Cov[a † (c), a(c)] ± 1 2 − c · γ ± c · δ ± χ G (z), with (γ ± ) k = Cov[a † k , a † (z) − a(z)] ∓ 1 2z k + a † k (δ ± ) k = −Cov[a k , a † (z) − a(z)] ∓ 1 2 z k − a k .
This expression can be further simplified as follows. Note that U U † = n j=1 σ x with σ x = 0 1 1 0 and d T = a 1 a † 1 · · · a n a † n U . Using U T ΩU = −iΩ and the unitarity of U , one then finds
γ 1 δ 1 . . . γ n δ n T ± = 1 2 Ω T AΩZ ∓ U U † Z − ΩU d = U 1 2 (ΩV Ω ∓ I) U † Z + iΩd .
Recalling from Eq. (23) that, for all c ∈ C n ,
m c =Ū T c 1 0 . . . c n 0 = 1 √ 2 c 1 −ic 1 . . . c n −ic n ∈ C 2n , we have m T c U T = 0c 1 . . . 0c n , Cov[a † (c), a(c)] = 1 2 m T c V m c .
The term (c · γ ± )(c · δ ± ) can thus be written as
(c · γ ± )(c · δ ± ) = γ 1 δ 1 γ 2 δ 2 . . . γ n δ n ± c 1 0 . . . c n 0 0c 1 . . . 0c n γ 1 δ 1 γ 2 δ 2 . . . γ n δ n ± = Z TŪ 1 2 (ΩV Ω ∓ I) + id T Ω T m c m T c 1 2 (ΩV Ω ∓ I)Ū T Z + iΩd .
The characteristic function can thus be written To derive the expression in Eq. (24), we proceed similarly. Note first that, using Eq. (6), one readily computes the well known Wigner function of a Gaussian state with characteristic function χ G . It reads
χ G ± (z) = N ± 1 2 m T c V m c ± 1 2 − β T ± m c m T c β ± χ G (z) with β ± = 1 2 (ΩV Ω ∓ I) U † Z + iΩd.W G (α) = 2 n π n √ det V exp −Y T A −1 Y
where Y = (α 1 − a 1 ,ᾱ 1 − a † 1 , . . . , α n − a n ,ᾱ n − a † n ) ∈ C 2n . One then readily computes the α k and α k derivatives of W G (α). Inserting them in Eq. (20), using the definition of m c in Eq. (23) and
A −1 = U V −1 U † ,
one obtains the Wigner function of the photon-added/subtracted state:
W G ± (α) = N ± (c · µ ± )(c · η ± ) + M ± (V, c) W G (α)
where M ± (V, c)∈ R is independent of α and given by
M ± (V, c) = ∓ 1 2 − 1 2 c 1 0 . . . c n 0 A −1 0 c 1 . . . 0 c n = ∓ 1 2 − 1 2 m T c V −1 m c ,
and where the vectors µ ± , η ± ∈ C n are defined by
µ 1 η 1 µ 2 η 2 . . . µ n η n ± = U V −1 ± I U † α 1 α 1 α 2 α 2 . . . α n α n − A −1 a 1 a † 1 a 2 a † 2 . . . a n a † n = U V −1 ± I r − U V −1 d.
Here we used the fact that the vector of quadratures r ∈ R 2n and the vector of displacement d ∈ R 2 can be written as r T = α 1ᾱ1 · · · α nᾱn U and d T = a 1 a † 1 · · · a n a † n U . Using U T ΩU = −iΩ, the term (c · µ ± )(c · η ± ) can be rewritten as follows:
(c · µ ± )(c · η ± ) = µ 1 η 1 µ 2 η 2 . . . µ n η n ±
c 1 0 . . . c n 0 0c 1 . . . 0c n µ 1 η 1 µ 2 η 2 . . . µ n η n ± = r T V −1 ± I − d T V −1 U † c 1 0 . . . c n 0 0c 1 . . . 0c n U V −1 ± I r − V −1 d = r T V −1 ± I − d T V −1 m c m T c V −1 ± I r − V −1 d . Introducing λ ± = V −1 ± I r − V −1 d ∈ R 2n , this yields W G ± (r) = N ± M ± (V, c) + λ T ± m c m T c λ ± W G (r)
which is Eq. (24).
APPENDIX D. QCS of the SqTh+ and SqTh− states
With the characteristic function (43) and Eq. (13) we find the value of the QCS of the SqTh+ state : 2 (1 − q 4 ) cosh 2r + 2 (1 + q 2 ) 2 + (q 4 + 10q 2 + 1) sinh 2 2r × −8q q 2 − 1 + 3 q 4 − 4q 3 + 10q 2 − 4q + 1 cosh 3 2r + 6(q − 1) 2 1 − q 2 cosh 2 2r + 3q 4 + 8q 3 − 26q 2 + 8q + 3 cosh 2r
One then readily computes lim r→+∞ R SqTh+ (q, r) = 2 − 12q q 2 + 1 q 4 + 10q 2 + 1 .
Similarly, with the characteristic function χ SqTh− (z) = χ SqTh (z) 2q|z| 2 (1 − q 2 ) cosh 2r − (1 − q) 2 + q + 1 q − 1 e 2r ξ 2 1 + e −2r ξ 2 2 + 1 .
and Eq. (13) we find the value of the QCS of the SqTh− state :
C 2 SqTh− (q, r) = (1 − q) 1 q+1 2 √
q + 1 (4 (q 4 − 1) cosh(2r) + 3q 4 − 2q 2 + (q 4 + 10q 2 + 1) cosh(4r) + 3)
× 12(q + 1)(q − 1) 3 cosh(4r) + (21q 4 − 4q 3 − 14q 2 − 4q + 21) cosh(2r)
+3(1 − 4q + 10q 2 − 4q 3 + q 4 ) cosh(6r) + 4(q + 1)(3q 2 + 2q + 3)(q − 1) and lim r→+∞ R SqTh− (q, r) = 2 − 12q q 2 + 1 q 4 + 10q 2 + 1 = lim r→+∞ R SqTh+ (q, r).
APPENDIX E. Wigner negative volume of the SqTh+ and SqTh− states
The Wigner negative volume [39], denoted by N W (ρ) is defined as the absolute value of the integral of the Wigner function over the area where the latter is negative. The Wigner function of the SqTh+ state is computed with (24) and we obtain W SqTh+ (x, p) = 2(1 − q) 2 π ((1 + q) 2 cosh(2r) + 1 − q 2 ) exp − (1 − q) e 2r x 2 + e −2r p 2 1 + q × 1 − q 1 + q e 2r + 1 2 x 2 + 1 − q 1 + q e −2r + 1
2 p 2 − 1 − q 1 + q cosh(2r) − 1
We easily see that the Wigner function of a SqTh+ state is negative inside the ellipse 1 − q 1 + q e 2r + 1 2 x 2 + 1 − q 1 + q e −2r + 1 2 p 2 = 1 + (1 − q) cosh(2r) 1 + q .
The semi-major and semi-minor axes are given by κ x = e −r (1 + q) 2 + (1 − q 2 ) cosh(2r) 2(cosh r − q sinh r) κ p = e r (1 + q) 2 + (1 − q 2 ) cosh(2r) 2(cosh r + q sinh r) and the Wigner function reaches its minimal value at the origin:
W SqTh+ (0) = − (1 − q) 2 ((1 − q) cosh(2r) + 1 + q) π(1 + q) 2 ((1 + q) cosh(2r) + 1 − q) .
At large squeezing, the Wigner negative volume of these states saturates to a value that decreases with increasing temperature q. To see this, we note that, at large r, one has, withx = e r x,p = e −r p and µ = 1−q 1+q , that W SqTh+ (x, p) ≈ 4 π µ 3 e −µ(x 2 +p 2 ) µx 2 + µ −1p2 − 1 2 .
It then follows from a straightforward computation that N W,SqTh+ (q, +∞) := N W (ρ SqTh+ )(q, +∞) = µ 3 π 2π 0 2 − a(µ, θ) 2a(µ, θ) − a(µ, θ) −1 e − 1 2 a(µ,θ) dθ where a(µ, θ) = cos 2 θ + µ 2 sin 2 θ.
When q = 0, one has µ = 1 and a(µ, θ) = 1 and hence
N W (ρ SqTh+ )(0, +∞) = 2 √ e − 1,
as expected in view of Eq. (44). As a result, for small q one has approximately N W,SqTh+ (q, +∞) 2 √ e − 1 µ 3 .
Similarly, the Wigner function of the SqTh− state is computed with (24) and we obtain W SqTh− (x, p) = 2(1 − q) 2 π ((1 + q) 2 cosh(2r) − 1 + q 2 ) exp − (1 − q) e 2r x 2 + e −2r p 2 1 + q × 1 − 1 − q 1 + q e 2r 2 x 2 + 1 − 1 − q 1 + q e −2r 2 p 2 − 1 − q 1 + q cosh(2r) + 1
We easily see that the Wigner function of a SqTh− state is negative inside the ellipse 1 − 1 − q 1 + q e 2r 2 x 2 + 1 − 1 − q 1 + q e −2r 2 p 2 = −1 + (1 − q) cosh(2r) 1 + q .
provided that q < tanh 2 (r). Otherwise, the Wigner function is always positive. Remark that when q, r = 0, we get N − = ∞ and the Wigner function is not defined. The semi-major and semi-minor axes are given by κ x = e −r (1 − q 2 ) cosh(2r) − (1 + q) 2 2(sinh(r) − q cosh(r)) κ p = e r (1 − q 2 ) cosh(2r) − (1 + q) 2 2(sinh(r) + q cosh(r)) and the Wigner function reaches its minimal value at the origin:
W SqTh− (0) = (1 − q) 2 (−(1 − q) cosh(2r) + 1 + q) π(1 + q) 2 ((1 + q) cosh(2r) − 1 + q) .
It is readily checked that the asymptotic behaviour of W SqTh− (x, p) is, for large r, identical to that of W SqTh+ (x, p).
IV. Characteristic and Wigner functions of multi-mode photon-added/subtracted states A. General photon-added/subtracted states
χ G − can be bounded only if the polynomial prefactor p(µ) in Eq. (31) vanishes identically. This in turn is equivalent to Eq. (32)-(33). Since Eq. (32) holds for all eigenvectors of V with eigenvalue strictly less than 1, it follows that m c belongs to the nonnegative spectral subspace of V − I. Eq. (33) then implies that m c in fact belongs to the kernel of V − I. And, in addition, that d is perpendicular to m c . By the Lemma, this in turn implies that a(c)ρ G a † (c) = 0, which is a contradiction. In conclusion, V − I ≥ 0. B. Wigner positivity/negativity of photon-subtracted Gaussian states Using (24)-(26) one easily characterizes the Wigner positive/negative photon-subtracted Gaussian states as follows.
which is equivalent to Eq. (35). This proves the first statement of Lemma 2. For the second statement, note that, if d = 0, then the term λ T − m c m T c λ − vanishes when r = 0. When 1 is not an eigenvalue of V , then (V −1 − I) is invertible and then this term vanishes provided r = (I − V ) −1 d. Hence, in both these cases, the Wigner function of ρ G − is negative in at least one point of the phase space if and only if
FIG. 1 :
1Level lines of (a) the Wigner negative volume N W (ρ SqTh+ ), (b) the QCS, C 2 SqTh+ and (c) the relative gain R SqTh+ of photon-added squeezed thermal states in function of the temperature q and the squeezing r. In dashed red the line C 2 SqTh+ (q, r) = 1. In dotted orange, the level line C 2 SqTh (q, r) = 1 of the QCS of the squeezed thermal states.
FIG. 2 :
2Level lines of (a) the Wigner negative volume N W (ρ SqTh− ), (b) the QCS C 2 SqTh− , and (c) the relative gain R SqTh− of photon-subtracted squeezed thermal states in function of the temperature q and the squeezing r. Panel (d) shows a zoom of (a) where the dashed-dotted orange line indicates the values of q and r where the inequality Eq. (48) is saturated. States above this curve are quantum non-Gaussian. In dashed red the line C 2 SqTh− (q, r) = C 2 SqTh (q, r) = 1, and in dotted green, the line r = 1 2 ln( 1+q 1−q ) below which the SqTh and SqTh− states are classical. Above the dashed red line, both types of states are strongly nonclassical and the SqTh− states have Wigner negativity. In the gray region, the Wigner function is positive. The region delimited by the dotted green and dashed red lines corresponds to weakly nonclassical states.
APPENDIX A. Proof of Eq.(20)
This is Eq. (21) . APPENDIX C. Wigner function of a photon added/subtracted state (proof of Eq.(24))
With the usual abuse of notation, we write: W (α) = W (r).
Quantum computation over continuous variables. Seth Lloyd, Samuel L Braunstein, Phys. Rev. Lett. 82Seth Lloyd and Samuel L. Braunstein. Quantum com- putation over continuous variables. Phys. Rev. Lett., 82:1784-1787, Feb 1999.
Percolation thresholds for photonic quantum computing. D Pant, D Towsley, S Englund, Guha, Nature Communications. 10M Pant, D. Towsley, D. Englund, and S. Guha. Percola- tion thresholds for photonic quantum computing. Nature Communications, 10, 2019.
Quantum-to-classical transition with single-photonadded coherent states of light. Alessandro Zavatta, Silvia Viciani, Marco Bellini, Science. 3065696Alessandro Zavatta, Silvia Viciani, and Marco Bellini. Quantum-to-classical transition with single-photon- added coherent states of light. Science, 306(5696):660- 662, 2004.
Generating optical Schrödinger kittens for quantum information processing. Alexei Ourjoumtsev, Rosa Tualle-Brouri, Julien Laurat, Philippe Grangier, Science. 3125770Alexei Ourjoumtsev, Rosa Tualle-Brouri, Julien Laurat, and Philippe Grangier. Generating optical Schrödinger kittens for quantum information processing. Science, 312(5770):83-86, 2006.
Generation of a superposition of odd photon number states for quantum information networks. J S Neergaard-Nielsen, B Nielsen, C Hettich, K Mølmer, E S Polzik, Phys. Rev. Lett. 9783604J. S. Neergaard-Nielsen, B. Melholt Nielsen, C. Hettich, K. Mølmer, and E. S. Polzik. Generation of a superposi- tion of odd photon number states for quantum informa- tion networks. Phys. Rev. Lett., 97:083604, Aug 2006.
Non-gaussian statistics from individual pulses of squeezed light. Jérôme Wenger, Rosa Tualle-Brouri, Philippe Grangier, Phys. Rev. Lett. 92153601Jérôme Wenger, Rosa Tualle-Brouri, and Philippe Grangier. Non-gaussian statistics from individual pulses of squeezed light. Phys. Rev. Lett., 92:153601, Apr 2004.
Probing quantum commutation rules by addition and subtraction of single photons to/from a light field. Valentina Parigi, Alessandro Zavatta, Myungshik Kim, Marco Bellini, Science. 3175846Valentina Parigi, Alessandro Zavatta, Myungshik Kim, and Marco Bellini. Probing quantum commutation rules by addition and subtraction of single photons to/from a light field. Science, 317(5846):1890-1893, 2007.
Experimental nonclassicality of single-photonadded thermal light states. Alessandro Zavatta, Valentina Parigi, Marco Bellini, Phys. Rev. A. 7552106Alessandro Zavatta, Valentina Parigi, and Marco Bellini. Experimental nonclassicality of single-photon- added thermal light states. Phys. Rev. A, 75:052106, May 2007.
Experimental determination of a nonclassical glauber-sudarshan p function. T Kiesel, W Vogel, V Parigi, A Zavatta, M Bellini, Phys. Rev. A. 7821804T. Kiesel, W. Vogel, V. Parigi, A. Zavatta, and M. Bellini. Experimental determination of a nonclassical glauber-sudarshan p function. Phys. Rev. A, 78:021804, Aug 2008.
Entangling macroscopic light states by delocalized photon addition. Nicola Biagi, Luca S Costanzo, Marco Bellini, Alessandro Zavatta, Phys. Rev. Lett. 12433604Nicola Biagi, Luca S. Costanzo, Marco Bellini, and Alessandro Zavatta. Entangling macroscopic light states by delocalized photon addition. Phys. Rev. Lett., 124:033604, Jan 2020.
Non-gaussian quantum states of a multimode light field. Y S Ra, A Dufour, M Walschaers, M C Jacquard, T Michel, C Fabre, N Treps, Nat. Phys. 16Y. S. Ra, A. Dufour, M. Walschaers, M. C. Jacquard, T. Michel, C. Fabre, and N. Treps. Non-gaussian quan- tum states of a multimode light field. Nat. Phys., 16:144-147, 2020.
Experimental certification of nonclassicality via phase-space inequalities. Nicola Biagi, Martin Bohmann, Elizabeth Agudelo, Marco Bellini, Alessandro Zavatta, Phys. Rev. Lett. 12623605Nicola Biagi, Martin Bohmann, Elizabeth Agudelo, Marco Bellini, and Alessandro Zavatta. Experimental certification of nonclassicality via phase-space inequali- ties. Phys. Rev. Lett., 126:023605, Jan 2021.
Recent developments in photon-level operations on travelling light fields. M S Kim, Journal of Physics B: Atomic, Molecular and Optical Physics. 4113133001M S Kim. Recent developments in photon-level oper- ations on travelling light fields. Journal of Physics B: Atomic, Molecular and Optical Physics, 41(13):133001, jun 2008.
Non-gaussian quantum states and where to find them. Mattia Walschaers, PRX Quantum. 230204Mattia Walschaers. Non-gaussian quantum states and where to find them. PRX Quantum, 2:030204, Sep 2021.
Detecting quantum non-gaussianity via the wigner function. G Marco, Mattia L Genoni, Tommaso Palma, Stefano Tufarelli, M S Olivares, Matteo G A Kim, Paris, Phys. Rev. A. 8762104Marco G. Genoni, Mattia L. Palma, Tommaso Tufarelli, Stefano Olivares, M. S. Kim, and Matteo G. A. Paris. Detecting quantum non-gaussianity via the wigner func- tion. Phys. Rev. A, 87:062104, Jun 2013.
Convex resource theory of non-gaussianity. Ryuji Takagi, Quntao Zhuang, Phys. Rev. A. 9762337Ryuji Takagi and Quntao Zhuang. Convex resource the- ory of non-gaussianity. Phys. Rev. A, 97:062337, Jun 2018.
Faithful hierarchy of genuine n-photon quantum non-gaussian light. Lukáš Lachman, Ivo Straka, Josef Hloušek, Miroslav Ježek, Radim Filip, Phys. Rev. Lett. 12343601Lukáš Lachman, Ivo Straka, Josef Hloušek, Miroslav Ježek, and Radim Filip. Faithful hierarchy of genuine n-photon quantum non-gaussian light. Phys. Rev. Lett., 123:043601, Jul 2019.
Measuring nonclassicality of bosonic field quantum states via operator ordering sensitivity. Stephan De Bièvre, Dmitri B Horoshko, Giuseppe Patera, Mikhail I Kolobov, Phys. Rev. Lett. 12280402Stephan De Bièvre, Dmitri B. Horoshko, Giuseppe Pat- era, and Mikhail I. Kolobov. Measuring nonclassicality of bosonic field quantum states via operator ordering sensi- tivity. Phys. Rev. Lett., 122:080402, Feb 2019.
Quadrature coherence scale driven fast decoherence of bosonic quantum field states. Anaelle Hertz, Stephan De Bièvre, Phys. Rev. Lett. 12490402Anaelle Hertz and Stephan De Bièvre. Quadrature co- herence scale driven fast decoherence of bosonic quantum field states. Phys. Rev. Lett., 124:090402, Mar 2020.
Statistical signatures of multimode singlephoton-added and -subtracted states of light. Mattia Walschaers, Claude Fabre, Valentina Parigi, Nicolas Treps, Phys. Rev. A. 9653835Mattia Walschaers, Claude Fabre, Valentina Parigi, and Nicolas Treps. Statistical signatures of multimode single- photon-added and -subtracted states of light. Phys. Rev. A, 96:053835, Nov 2017.
Entanglement and wigner function negativity of multimode non-gaussian states. Mattia Walschaers, Claude Fabre, Valentina Parigi, Nicolas Treps, Phys. Rev. Lett. 119183601Mattia Walschaers, Claude Fabre, Valentina Parigi, and Nicolas Treps. Entanglement and wigner function neg- ativity of multimode non-gaussian states. Phys. Rev. Lett., 119:183601, Oct 2017.
Gaussian quantum information. Christian Weedbrook, Stefano Pirandola, Raúl García-Patrón, Nicolas J Cerf, Timothy C Ralph, Jeffrey H Shapiro, Seth Lloyd, Rev. Mod. Phys. 84Christian Weedbrook, Stefano Pirandola, Raúl García- Patrón, Nicolas J. Cerf, Timothy C. Ralph, Jeffrey H. Shapiro, and Seth Lloyd. Gaussian quantum information. Rev. Mod. Phys., 84:621-669, May 2012.
Exploring continuous-variable entropic uncertainty relations and separability criteria in quantum phase space. Anaelle Hertz, PhD thesisAnaelle Hertz. Exploring continuous-variable entropic uncertainty relations and separability criteria in quantum phase space. PhD thesis, 2018.
Correlation functions for coherent fields. U M Titulaer, R J Glauber, Phys. Rev. 140U. M. Titulaer and R. J. Glauber. Correlation func- tions for coherent fields. Phys. Rev., 140:B676-B682, Nov 1965.
Thermal-difference states of light: Quantum states of heralded photons. D B Horoshko, S De Bièvre, G Patera, M I Kolobov, Phys. Rev. A. 10053831D. B. Horoshko, S. De Bièvre, G. Patera, and M. I. Kolobov. Thermal-difference states of light: Quantum states of heralded photons. Phys. Rev. A, 100:053831, Nov 2019.
Relating the entanglement and optical nonclassicality of multimode states of a bosonic quantum field. Anaelle Hertz, Nicolas J Cerf, Stephan De Bièvre, Phys. Rev. A. 10232413Anaelle Hertz, Nicolas J. Cerf, and Stephan De Bièvre. Relating the entanglement and optical nonclassicality of multimode states of a bosonic quantum field. Phys. Rev. A, 102:032413, Sep 2020.
| []
|
[
"DETECTING PLANETS AROUND VERY LOW MASS STARS WITH THE RADIAL VELOCITY METHOD",
"DETECTING PLANETS AROUND VERY LOW MASS STARS WITH THE RADIAL VELOCITY METHOD"
]
| [
"A Reiners \nDraft version\n\n",
"J L Bean \nDraft version\n\n",
"K F Huber \nDraft version\n\n",
"S Dreizler \nDraft version\n\n",
"A Seifahrt \nDraft version\n\n",
"& S Czesla \nDraft version\n\n"
]
| [
"Draft version\n",
"Draft version\n",
"Draft version\n",
"Draft version\n",
"Draft version\n",
"Draft version\n"
]
| []
| The detection of planets around very low-mass stars with the radial velocity method is hampered by the fact that these stars are very faint at optical wavelengths where the most high-precision spectrometers operate. We investigate the precision that can be achieved in radial velocity measurements of low mass stars in the near infrared (nIR) Y-, J-, and H-bands, and we compare it to the precision achievable in the optical assuming comparable telescope and instrument efficiencies. For early-M stars, radial velocity measurements in the nIR offer no or only marginal advantage in comparison to optical measurements. Although they emit more flux in the nIR, the richness of spectral features in the optical outweighs the flux difference. We find that nIR measurements can be as precise as optical measurements in stars of spectral type ∼M4, and from there the nIR gains in precision towards cooler objects. We studied potential calibration strategies in the nIR, finding that a stable spectrograph with a ThAr calibration can offer enough wavelength stability for m s −1 precision. Furthermore, we simulate the wavelength-dependent influence of activity (cool spots) on radial velocity measurements from optical to nIR wavelengths. Our spot simulations reveal that the radial velocity jitter does not decrease as dramatically towards longer wavelengths as often thought. The jitter strongly depends on the details of the spots, i.e., on spot temperature and the spectral appearance of the spot. At low temperature contrast (∼ 200 K), the jitter shows a decrease towards the nIR up to a factor of ten, but it decreases substantially less for larger temperature contrasts. Forthcoming nIR spectrographs will allow the search for planets with a particular advantage in mid- and late-M stars. Activity will remain an issue, but simultaneous observations at optical and nIR wavelengths can provide strong constraints on spot properties in active stars. | 10.1088/0004-637x/710/1/432 | [
"https://arxiv.org/pdf/0909.0002v2.pdf"
]
| 18,231,354 | 0909.0002 | aaf472d370bbbbb18c765b4202d2bb28a94bff23 |
DETECTING PLANETS AROUND VERY LOW MASS STARS WITH THE RADIAL VELOCITY METHOD
A Reiners, J L Bean, K F Huber, S Dreizler, A Seifahrt, and S Czesla
Subject headings: stars: activity - stars: low-mass, brown dwarfs - stars: spots - techniques: radial velocities
1. INTRODUCTION
The search for extrasolar planets with the radial velocity technique has led to close to 400 discoveries of planets around cool stars of spectral type F-M. Fourteen years after the seminal discovery of 51 Peg b by Mayor & Queloz (1995), the radial velocity technique is still the most important technique to discover planetary systems, and radial velocity measurements are required to confirm planetary candidates found by photometric surveys including the satellite missions CoRoT and Kepler.
Most planets found around solar-type stars are approximately as massive as Jupiter, and are orbiting their parent star at around 1 AU or below. In order to find Earth-mass planets in orbit around a star, the radial velocity technique either has to achieve a precision on the order of 0.1 m s −1 , or one has to search around less massive stars, which would show a larger effect due to the gravitational influence of a companion. Therefore, low-mass M dwarfs are a natural target for the search for low-mass planets with the radial velocity technique. In addition, there seems to be no general argument against the possibility of life on planets that are in close orbit around an M dwarf (inside the habitable zone; Tarter et al. 2007). So these stars are becoming primary targets for the search for habitable planets. So far, only a dozen M dwarfs are known to harbor one or more planets (e.g., Marcy et al. 1998; Udry et al. 2007). The problem with the detection of radial velocity variations in M dwarfs is that although they make up more than 70% of the Galaxy including our nearest neighbors, they are also intrinsically so faint that the required data quality cannot usually be obtained in a reasonable amount of time, at least not in the spectral range most high-resolution spectrographs operate at. M dwarfs have effective temperatures of 4000 K or less, and they emit the bulk of their spectral energy at wavelengths redward of 1 µm. The flux emitted by an M5 dwarf at a wavelength of 600 nm is about a factor of 3.5 lower than the flux emitted at 1000 nm. Thus, infrared spectroscopy can be expected to be much more efficient in measuring radial velocities of low-mass stars.
A second limit on the achievable precision of radial velocity measurements is the presence of apparent radial velocity variations by corotating features and temporal variations of the stellar surface. Such features may influence the line profiles, and that can introduce a high noise level or be misinterpreted as radial velocity variations due to the presence of a planet. Flares on active M dwarfs might not pose a substantial problem to radial velocity measurements (Reiners 2009), but corotating spots probably do. Desort et al. (2007) modeled the effect of a cool spot for observations of sun-like stars at optical wavelengths. Their results hint at a decrease of spot-induced radial velocity signals towards longer wavelengths. Martín et al. (2006) report the decrease of a radial velocity signal induced by a starspot on the very active M9 dwarf LP 944-20 (v sin i ≈ 30 km s −1 ); the amplitude they find is 3.5 km s −1 at optical wavelengths but only an rms dispersion of 0.36 km s −1 at 1.2 µm. Thus, observations at infrared wavelength regions may substantially reduce the effect of stellar activity on radial velocity measurements, which would allow the detection of low-mass planets around active stars.
In this paper, we investigate the precision that can be reached in radial velocity measurements at infrared wavelength regions. The first goal of our work is to study the detectability of planets around low-mass stars using infrared spectrographs. We focus on the wavelength bands Y, J, and H because these are the regions where spectrographs can be built at relatively low cost. Extending the wavelength coverage to the K-band imposes much larger costs because of severe cooling requirements and large gaps in the spectral format. We chose to exclude this case from the current paper. Our second motivation is to see to what extent the radial velocity signal of active regions can be expected to vanish at infrared wavelength regions. So far, only rough estimates based on contrast arguments are available, and no detailed simulation has been performed.
The paper is organized as follows. In §2, we introduce the spectral characteristics of M dwarfs and compare model spectra used for our simulations to observations. In §3, we calculate radial velocity precisions that can be achieved at different wavelengths, and we investigate the influence of calibration methods. In §4, we simulate the effect of starspots on radial velocities in the infrared, and §5 summarizes our results.
2. NEAR INFRARED SPECTRA OF M DWARFS
M dwarfs emit the bulk of their flux at near-infrared (nIR) wavelengths between 1 and 2 µm. However, high-resolution spectrographs operating in the nIR are not as ubiquitous as their counterparts in the optical. Therefore, our knowledge about M dwarf spectra past 1 µm is far behind what is known about the visual wavelength range. Another complication is that strong absorption bands of water from the Earth's atmosphere cover large fractions of the nIR wavelength range. Only some discrete portions of the region 1-2 µm can be used for detailed spectroscopic work. For our investigation of radial velocity precision in M dwarfs, we concentrate on the three bands, Y, J, and H, between the major water absorption bands.
We show in Fig. 1 the transmission spectrum of the Earth's atmosphere together with the band identification that we use in the following. We modeled the telluric features using the FASCODE algorithm (Clough et al. 1981, 1992), a line-by-line radiative transfer code for the Earth's atmosphere, and HITRAN (Rothman et al. 2005) as a database for molecular transitions. Our model is based on a MIPAS (http://www-atm.physics.ox.ac.uk/RFM/atm/) nighttime model atmosphere with updated temperature, pressure, and water vapor profiles for the troposphere and lower stratosphere based on GDAS (http://www.arl.noaa.gov/ready/cmet.html) meteorological models for Cerro Paranal (see Seifahrt et al., submitted to A&A). Fig. 1 shows that the wavelength regions between Y, J, and H are not useful for spectroscopic analysis of stars. Furthermore, it is important to note that the bands themselves are not entirely free of telluric absorption lines. While the Y-band shows relatively little telluric contamination, the J- and H-bands have regions of significant telluric absorption that must be taken into account when radial velocities are measured at these wavelengths. In our calculations, we mask out the regions that are affected by significant telluric absorption (see §3.2).
Observed low-resolution (R ≈ 2000) infrared spectra of cool stars and brown dwarfs were presented by McLean et al. (2003) and Cushing et al. (2005). High-resolution infrared spectra of M dwarfs are still very rare; McLean et al. (2007) and Zapatero-Osorio et al. (2007) show J-band spectra of M, L, and T dwarfs taken with NIRSPEC at R ≈ 20,000. Hinkle et al. (2003) report measurements of rotational velocities from short portions of R ≈ 50,000 K-band spectra taken with the Phoenix infrared spectrograph, and Wallace & Hinkle (1996) show K-band spectra of cool stars at R ≈ 45,000. Short H-band spectra of cool stars with a focus on OH lines are presented by O'Neal et al. (2001); in particular, they show that OH is present in M dwarfs but weaker than in giants and subgiants.
The high-resolution spectrum that is probably closest to the appearance of an M dwarf spectrum, and that fully covers the wavelength range considered in this paper (Y, J, and H) is the spectrum of a sunspot. Wallace & Livingston (1992) and Wallace et al. (1998) presented spectra of a sunspot in the visual and nIR regions, and in the nIR up to 5.1 µm, respectively. However, the sunspot spectrum does not resemble the spectra of M dwarfs at high detail, and it cannot be used to investigate a range of temperatures among the M dwarfs. The sunspot spectrum is probably closest to an early-M dwarf with low gravity (Maltby et al. 1986), and we use it below only as a cross-check for the existence of absorption features predicted in our models.
To investigate the radial velocity precision that can be reached using nIR spectra of M dwarfs, we used model spectra calculated with the PHOENIX code (e.g., Hauschildt et al. 1999;Allard et al. 2001). This strategy has the advantage that we can use the full wavelength range without any restrictions imposed by the unavailability of empirical data, and that we can model stars of any temperature and surface gravity. The caveat, however, is that the model spectra may not adequately resemble real stellar spectra, in particular at the high spectral resolution we require.
We show in Fig. 2 three model spectra of cool dwarfs; spectral types are approximately M3, M6, and M9, i.e., effective temperatures of T = 3500, 2800, and 2600 K, respectively. We use models with a surface gravity of log g = 4.75 throughout this paper. The spectra reproduce the data shown by McLean et al. (2003) and Cushing et al. (2005) reasonably well when degraded to low resolution. In Fig. 3, we show short parts of the model spectra at Y-, J-, and H-bands at higher detail for the case of an M3 star (T = 3500 K). The three spectral windows in Fig. 3 all cover the same number of resolution elements assuming the spectral resolution is the same in all bands. Obviously, the Y-band is very rich in structure. There are only a few deep lines in the J-band, and the number of sharp features in the H-band is lower than in the Y-band, but higher than in the J-band.
The model shown in Fig. 3 is at the hot end of targets we are interested in. Comparison to Fig. 2 indicates that with lower temperature, the features in the Y-band become stronger. The same is true for features in the J-band, but the H-band becomes relatively featureless at late-M spectral class, which is mainly due to the disappearance of OH lines. In Fig. 3, we also show the sunspot spectrum in comparison to the M3 model spectrum. The FeH lines in the sunspot Y-band are somewhat weaker than in M dwarfs and are strongly affected by Zeeman broadening (Reiners & Basri 2006), the latter leading to significantly reduced depths for many lines in the sunspot spectrum. In the J- and the H-bands, the main features and their depths are relatively well described, so that our model spectra likely reproduce the stellar spectrum relatively well, at least at this temperature.
Another comparison of our models to an observed high-resolution spectrum of a mid-M dwarf is shown in Fig. 4, where we compare an observed spectrum of GJ 1002 (M5.5) to a model spectrum at a temperature of 3200 K. The observed spectrum was obtained by us using CRIRES (Käufl et al. 2006) at the Very Large Telescope (VLT) and reduced following standard infrared reduction procedures including bias and sky subtraction, flat-fielding, and wavelength calibration using ThAr lines. The model lines (predominantly from FeH) provide a good fit to the observed spectrum of GJ 1002. Thus, we feel confident that our model spectra reproduce the FeH band in the Y-band of M dwarfs reasonably well. More generally, all these comparisons suggest that the PHOENIX model spectra are accurate enough for simulations of radial velocities measured from the nIR spectra of M dwarfs.
3. RADIAL VELOCITY PRECISION
3.1. Calculation of radial velocity precision
The achievable precision of a radial velocity measurement from a spectrum of given quality was calculated by Connes (1985) and Butler et al. (1996). This value is a general limit for radial velocity measurements; it reflects the limited intrinsic information content of any observed spectrum. For example, if a star exhibits more lines in a certain wavelength range, this will lead to a higher precision compared to a star with fewer lines. Similarly, a set of shallow, perhaps blended, lines will put a weaker constraint on the radial velocity than a set of narrow, deep lines. In their Eq. 6, Butler et al. (1996) provide a formula for the radial velocity uncertainty as a function of intrinsic content, instrument resolution (R), and signal-to-noise ratio (S/N). The uncertainty is inversely proportional to the weighted sum of the spectrum derivative, which means that the more deep and sharp features a spectrum has, the higher the achievable precision. In the following, we first calculate the intrinsic radial velocity precision achievable in a stellar spectrum at given R and S/N. In a second step, we ask how this precision is affected by the limited precision of the wavelength calibration (§3.3).
In order to compare the potential of different wavelength bands, we take a model spectrum, assume a S/N at one given wavelength, and calculate the S/N at other wavelengths according to the spectral flux distribution.
We assume constant instrument efficiency at all wavelengths, and let the signal quality vary according to the stellar flux distribution. We also assume constant spectral resolution and sampling at different wavelength ranges. When calculating the S/N from the spectral flux distribution, it is important to use the number of photons per spectral bin instead of the energy per spectral bin, which is what the PHOENIX model spectra provide. To convert energy flux into photon flux, we need to divide by the energy carried per photon, which is hc/λ. Neglecting the constants, this means we need to multiply the spectrum by λ.
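To make this concrete, here is a minimal Python sketch of the photon-noise-limited RV uncertainty in the spirit of Butler et al. (1996) and Bouchy et al. (2001); the function, the S/N scaling, and the toy single-line spectrum are our own illustration, not the code behind Table 2.

```python
import numpy as np

C_LIGHT = 2.99792458e8  # m/s

def rv_precision(wavelength, energy_flux, snr_ref, i_ref):
    """Photon-limited radial velocity uncertainty in m/s.

    The energy flux is first converted to a photon count per bin
    (F_lambda * lambda, constants dropped), scaled so that the
    reference pixel i_ref has S/N = snr_ref, i.e. snr_ref**2 photons.
    Each pixel then contributes the optimal weight
    W_i = lambda_i^2 (dN/dlambda)_i^2 / N_i, and
    sigma_v = c / sqrt(sum_i W_i).
    """
    photon_flux = energy_flux * wavelength          # energy -> photon flux
    counts = photon_flux * snr_ref**2 / photon_flux[i_ref]
    dcounts = np.gradient(counts, wavelength)       # dN/dlambda
    W = wavelength**2 * dcounts**2 / counts
    return C_LIGHT / np.sqrt(W.sum())

# Toy example: one Gaussian absorption line on a flat continuum near 1 um.
lam = np.linspace(999.0, 1001.0, 400)                          # nm
flux = 1.0 - 0.5 * np.exp(-0.5 * ((lam - 1000.0) / 0.05) ** 2)
print("sigma_RV = %.0f m/s" % rv_precision(lam, flux, 100, 0))
```

A richer spectrum simply adds more terms to the weight sum, which is why the line-rich optical spectra of early-M dwarfs can compete with the brighter but smoother nIR bands.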
For our calculations, we assume an average S/N of 100 at a resolving power of R = 60,000 in the Y-band. To account for the spectral resolution, we apply an artificial broadening to the spectra so that the lines become wider, and we assume that the instrument collects a constant number of photons per wavelength interval, i.e., more photons are available per bin at lower spectral resolution. Note that this approach ignores the possibly higher losses from using a narrower slit or other design differences to achieve higher resolution that are likely in practice. That is, a real spectrograph would likely deliver more photons per wavelength interval when used in a lower-resolution mode. As this effect is likely a lower-order consideration than the effect of varying dispersion, and is also difficult to predict in a general sense, we do not consider it here.
3.2. Impact of telluric lines
Near-infrared wavelength regions are more severely affected by telluric contamination than the optical range. The limiting effect of telluric contamination lies in the difficulty of removing it to a level at which it does not affect the radial velocity measurement on a m s −1 level. For example, Bean et al. (2009) report a limit of 5 m s −1 that can be reached when spectral regions with telluric lines are included in the analysis. To reach higher precision, contaminated regions need to be excluded from the analysis.
In our calculations, we mask the regions affected by telluric lines and do not use them for our analysis of radial velocity precision. We chose to ignore all spectral regions that fall in the vicinity of ±30 km s −1 around a telluric absorption of 2% or more, which is approximately the span introduced by maximum barycentric velocity differences. The telluric transmission spectrum was artificially broadened to a spectral resolving power of R = 100,000 before the 2% cutoff was applied. The exact regions that fulfill this criterion also depend on the atmospheric conditions. The wavelength bands together with the fractions ignored due to contamination with telluric lines are summarized in Table 1. The telluric contamination in the V-band is rather small (2%), and it does not badly affect the Y-band either (< 20%). On the other hand, roughly half of the spectrum in the J- and H-bands (55% and 46%, respectively) is affected by telluric contamination. The effect on the theoretical information content of the stellar spectrum hence is a factor of ∼√2 in precision, which is still not an order-of-magnitude effect but can no longer be neglected. At these wavelengths, one has to decide whether the RV precision is higher after discarding the contaminated spectral range, or if one should attempt to correct for the telluric contamination.
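A minimal sketch of this rejection criterion (our own illustration; the actual analysis uses the modeled transmission spectrum broadened to R = 100,000):

```python
import numpy as np

def telluric_mask(wavelength, transmission, depth_cut=0.02, v_span=30e3):
    """Return a boolean mask that is True where the spectrum is usable.

    A pixel is rejected if any telluric feature with absorption depth
    >= depth_cut (transmission < 1 - depth_cut) lies within +-v_span
    (here 30 km/s, the maximum barycentric velocity difference) of it.
    """
    c = 2.99792458e8
    usable = np.ones_like(wavelength, dtype=bool)
    for i in np.where(transmission < 1.0 - depth_cut)[0]:
        dlam = wavelength[i] * v_span / c   # velocity span in wavelength units
        usable &= np.abs(wavelength - wavelength[i]) > dlam
    return usable
```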
We note that the same exercise for the K-band (2050-2400 nm) shows that about 80% of this wavelength range is affected by significant telluric contamination. Clearly, radial velocity work in the K-band faces very severe limitations due to telluric lines.
3.3. Wavelength calibration methods
A critical part of radial velocity measurements is the wavelength calibration. When considering such measurements in new spectral regimes, the influence of the available calibration precision must be considered in addition to the intrinsic information content of the spectra of the stars of interest. There are generally two types of wavelength standards that are being used very successfully at optical wavelengths for high-precision radial velocity work: 1) the calibration signal is injected into the spectrograph following a separate path, e.g., using a ThAr emission lamp (Pepe et al. 2002), and 2) the calibration signal is imposed on the stellar light using a gas absorption cell (e.g., I 2 ; Butler et al. 1996). At nIR wavelengths, both techniques have so far not been used at the same level of efficiency as in the optical, mainly because no instruments are yet available that provide spectral resolving power and wavelength range comparable to instruments operating at optical wavelengths. However, such spectrographs are foreseen for the future, and we estimate the precision of both techniques and their current applicability in the nIR. We cast these calculations in terms of equivalent radial velocity precision so that they may be compared directly to the estimated information content of the stellar spectra. For our purpose, we have not considered calibration using a laser comb (Steinmetz et al. 2008) or an etalon, which essentially follow the same principle as the ThAr calibration. A laser comb or etalon that covers the full desired wavelength range would largely solve the problems of inadequate wavelength coverage. Unfortunately, both are not yet available, and we restrict the discussion to the ThAr and gas cell options.
3.3.1. ThAr lamp
In the nIR, the ThAr method could in principle just be copied from the optical regime. Standard ThAr lamps produce fewer lines in the nIR, but that does not necessarily mean that the precision over large wavelength regions must be lower than at optical wavelengths. For example, Kerber et al. (2008) provide a list of ThAr lines that includes more than 2400 lines in the range 900-4500 nm. Lovis et al. (2006) found that the Ar lines produced by a ThAr lamp are unsuitable for high-precision wavelength calibration because they show high intrinsic variability on the order of tens of m s −1 between different lamps. Nevertheless, Kerber et al. (2008) show that in the wavelength range we consider here, the fraction of Ar lines in the ThAr lamp spectrum is only on the order of ∼15%, and these authors also discuss that the pressure sensitivity only appears in the high-excitation lines of Ar I. So although there are fewer lines than in the optical, the still large number of usable lines suggests that a ThAr lamp should be evaluated as a possible wavelength calibration.
To estimate the calibration precision that can be reached with a ThAr spectrum in a given wavelength interval, we count the number of lines contained in this interval in the list of Kerber et al. (2008), and we estimate the uncertainty in determining the position of every line (converted to radial velocity units) according to its intensity. We assume that we only take one exposure of the ThAr lamp, and we scale the line intensities to a given dynamic range. The range ultimately used for our calculations was selected to achieve a high number of useful lines while losing as few lines as possible due to detector saturation. We quadratically combine the uncertainties of all lines to calculate the total uncertainty of a ThAr calibration in the chosen spectral region. "Saturated" lines are not taken into account, but we note that infrared detectors offer advantages over CCDs in this regard. One advantage of infrared detectors is that saturated pixels do not bleed into neighboring pixels like in CCDs. Therefore, although a particular calibration line may saturate some pixels, the problem would be localized to the pixels the line falls on, and the signals recorded for lines falling on neighboring pixels would be unaffected. In practice then, some lines may be allowed to saturate and thus be ignored during the wavelength calibration if a higher overall signal would result in a net gain of useful lines. A second issue is that individual pixels in infrared detectors can be read out at different rates. Therefore, there is the possibility of an increased dynamic range for a given exposure level. However, it is unclear what influence the differences in response and noise properties of pixels read out at different rates would have. So for this work we take the conservative approach that lines which would apparently saturate given our selected dynamic range cannot be used for the wavelength calibration.
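The quadratic combination of the per-line uncertainties amounts to inverse-variance weighting of the calibration zero-point; a brief sketch (the per-line uncertainties themselves, estimated from line intensity, are inputs here):

```python
import numpy as np

def combined_calibration_precision(line_sigmas):
    """Combine independent per-line position uncertainties (m/s).

    For independent lines, 1/sigma_tot^2 = sum_i 1/sigma_i^2, so many
    moderately precise lines yield a precise wavelength zero-point.
    """
    s = np.asarray(line_sigmas, dtype=float)
    return 1.0 / np.sqrt(np.sum(1.0 / s**2))

# Example: 300 unsaturated lines, each located to 150 m/s -> ~8.7 m/s.
print(combined_calibration_precision(np.full(300, 150.0)))
```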
Our estimation of the utility of a ThAr lamp for wavelength calibration in order to obtain high-precision radial velocities is based on the assumption that the calibration is only needed to track minor changes in the instrument response. This is true for an isolated and stabilized instrument with a nearly constant pupil illumination (e.g., HARPS; Mayor et al. 2003). The main reason is that the light from a ThAr lamp will not pass directly through the instrument in the exact same way and/or at the exact same time as the light from a star. Therefore, the utility of ThAr as a calibration for radial velocity measurements will be reduced from that discussed here for instruments that experience significant temporal variations.
3.3.2. Gas absorption cell
The gas absorption technique requires a gas that provides a large number of sharp spectral lines. Currently, no gas has been identified that produces lines in the full nIR wavelength range at a density comparable to the I 2 spectrum in the optical (I 2 only provides lines in the optical wavelength range), although there have been some investigations into gases suitable for small windows in this region. D'Amato et al. (2008) report on a gas absorption cell using the hydrogen halides HCl, HBr, and HI, which have absorption lines between 1 and 2.4 µm, but these gases only produce very few lines, so that a calibration of the wavelength solution and the instrumental profile can only be done over a small fraction of the spectrum. Mahadevan & Ge (2009) discuss various options for nIR gas cells and conclude that the gases H 13 C 14 N, 12 C 2 H 2 , 12 CO, and 13 CO together could provide useful calibration in the H-band. Another gas that provides some utility in the nIR is ammonia (NH 3 ), which exhibits a dense forest of spectral lines in the K-band. We are currently using an ammonia cell for a radial velocity planet search with CRIRES at the VLT. More details about the cell and these radial velocity measurements are contained in another paper (Bean et al. 2009).
For an estimate of the calibration precision that could potentially be achieved over a broad wavelength range using a gas cell, we assume that a gas or combination of gases might be found with absorption lines similar to ammonia, but throughout the entire nIR region. We calculate the radial velocity precision from a section of an ammonia cell spectrum with various S/N and R just as for the stellar spectra (i.e., using Eq. 6 in Butler et al. 1996). The basis for this calculation is a 50 nm section of a spectrum of our CRIRES ammonia cell (18 cm length, filled with 50 mb of ammonia) measured with an FTS at extremely high resolution (R ∼ 700,000). Convolved to a resolving power of R = 100,000 and S/N = 100, the calculated precision is 9 m s −1 . We note that this value would change if a longer cell or higher gas pressure were used, but this change would be relatively small for conceivable cells. To extrapolate the precision estimate to arbitrary wavelength regions, we scale the calculated value by the corresponding size of the regions. For example, the uncertainty from a region of 100 nm would be a factor of √2 less than that calculated for the 50 nm region.
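The bandwidth scaling can be written as σ(B) = σ_ref · sqrt(B_ref/B); a hypothetical extrapolation from the measured 50 nm section:

```python
def scale_precision(sigma_ref, bw_ref, bw_new):
    """Scale an RV uncertainty with bandwidth, assuming uniform
    information density: sigma ~ 1/sqrt(bandwidth)."""
    return sigma_ref * (bw_ref / bw_new) ** 0.5

# 9 m/s from 50 nm of ammonia spectrum -> ~4 m/s from a 250 nm region.
print(scale_precision(9.0, 50.0, 250.0))
```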
We emphasize that our estimates on the performance of a gas cell in the nIR are purely hypothetical. Currently, in the wavelength range under consideration, we know of no real gas that shows as dense a forest of lines in the Y -, J-, and H-bands as ammonia does in the K-band.
3.4. Radial velocity precision in low-mass stars
The precision that can be achieved using (model) spectra of stars at 3500 K (M3), 2800 K (M6), and 2400 K (M9) for different wavelength regions is summarized in Table 2 and shown in Fig. 5. For each case, we first calculated the intrinsic precision over the wavelength bin under consideration. As explained in §3, we assume S/N of 100 at 1 µm at R = 60,000 and scale the signal quality according to the spectral flux distribution and spectral resolution. The differences between radial velocity precisions at different wavelength bands are dominated by the differences in S/N and in the appearance of spectral features in these bands (see Figs. 2 and 3). A secondary effect is the length of the spectral range, which differs between the bands, but it is not always the band with the largest coverage that yields the highest precision. We show the situation for three different values of R = 100,000, 80,000, and 60,000.
The S/N given in Table 2 varies according to the number of photons per pixel, which decreases at higher spectral resolution because the wavelength bins per pixel become smaller. The S/N is always comparable between the three nIR bands, but the optical wavelength range provides a S/N that is about a factor of two smaller for the M3 star, a factor of five smaller for the M6 star, and a factor of ten smaller for the M9 star.
In addition to the intrinsic precision, we show the precision achievable if an imperfect wavelength calibration is considered. The additional uncertainty due to the ThAr or gas cell calibration (see §3.3) leads to somewhat weaker limits on what can be achieved in a real observation. We show no ThAr or gas cell values for the V-band because here the wavelength calibration is not the critical factor for the situations investigated in this paper.
The question Fig. 5 tries to answer is: what is the highest attainable precision of a radial velocity measurement if a given star is observed at different wavelength regions and spectral resolutions, under the assumption of the same exposure time, telescope size, and instrument throughput for all setups?
For an early-M star (M3), the highest precision is still reached in the V-band, although the Y-band does not perform much worse. For the given choice of parameters, the highest obtainable precision in the V-band at R = 100,000 is roughly 2.5 m s −1 , and in the Y-band it is ∼3.8 m s −1 . The J- and H-bands are worse, with only ∼16 m s −1 and 8 m s −1 precision, respectively, at the highest resolution. In general, in the absence of rotation, higher precision is obtained for higher spectral resolving power (as an approximation, precision scales linearly with S/N but quadratically with R; if a constant number of photons is assumed, S/N scales down with √R, and as a result the precision scales approximately with R^(3/2)). We discuss the limits to the precision in rotating stars in §3.5. A remarkable feature of our precision calculations for T = 3500 K is that although the flux level in the visual wavelength range is much lower than the flux around 1 µm and redder, the radial velocity precision is not worse. This is because the optical spectrum of an early-M dwarf is extremely rich in narrow features. At nIR wavelengths, the number of features is much lower, so that the attainable precision is lower, too. The same explanation holds for the comparison between the nIR Y-, J-, and H-bands. The low precision obtainable in the J-band is due to the lack of sharp and deep spectral features in that wavelength range (compare Figs. 2 and 3).
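Restating the resolution-scaling argument from the parenthetical remark above in symbols (our notation):

```latex
\sigma_v \;\propto\; \frac{1}{(S/N)\,R^{2}},
\qquad
S/N \propto R^{-1/2} \ \text{at fixed photon count}
\quad\Longrightarrow\quad
\sigma_v \;\propto\; R^{-3/2}.
```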
At lower temperature, T = 2800 K (M6), the overall precision (at the same S/N, which in general means after longer exposures) has gained in the nIR bands in comparison to the optical, because now the S/N is much higher in the nIR regions. The Y- and J-band precisions improve considerably in comparison to the M3 star, which is also due to the appearance of FeH bands (see Cushing et al. 2005). The H-band precision at 2800 K is comparable to that of the 3500 K spectrum. Now, the V-band performs worse than the Y-band, but it still yields a remarkably high precision: although the flux level in the V-band is about an order of magnitude below the flux at nIR wavelengths, the richness in sharp features can almost compensate for this. The V-band precision is only about 30% worse than the Y-band precision, and it still yields much better precision than the J- and the H-bands.
Finally, for the M9 star at T = 2400 K, all three nIR bands outperform the V-band because of the high flux ratio between the nIR and the optical range. Still, the Y-band provides the highest precision, about a factor of two better than the J- and the H-bands.
We consider now the effects of limited precision in the wavelength calibration using the ThAr lamp or a gas cell (shown in Fig. 5 as open rhombs and crosses, respectively). The ThAr calibration apparently can provide a very reliable calibration that introduces only a few percent loss in precision. Of course, in a real spectrograph, effects like wavelength stability over time additionally limit the precision that can be reached (Lovis & Pepe 2007). This effect has not been taken into account. Nevertheless, our calculations show that enough suitable ThAr lines are available and that a wavelength solution at nIR wavelengths (Y- to H-bands) based on ThAr lines is a reliable calibration that can in principle be expected to work almost as successfully as in the optical wavelength range. In contrast to that, the calibration using the virtual gas cell as a reference yields a much worse result, in particular at short wavelengths like the Y-band. In order to make the gas-cell calibration provide the same accuracy as a calibration using a ThAr lamp (in a stabilized spectrograph), a gas is needed that provides more and deeper lines than NH 3 provides in the K-band. So far, all known gases provide many fewer lines, so that the currently achievable precision turns out to be significantly below what can be achieved with ThAr. We note that in order to make the gas cell calibration work, the spectrum must have a minimum S/N allowing a reliable reconstruction of the instrument profile. This means a typical minimum S/N of ∼100 is required for the gas cell method. Thus, using a gas cell for the wavelength calibration, low-S/N spectra as in the M6 and M9 cases in the V-band considered above could not be used.
3.5. The influence of rotation
3.5.1. Distribution of rotation in M dwarfs
Higher-resolution spectra only offer an advantage for radial velocity measurements if the stars exhibit sharp lines (see also Bouchy et al. 2001). Field mid- and late-M dwarfs can be very rapid rotators with broad and shallow spectral lines. For example, Reiners & Basri (2008) show that rapid rotation is more frequent in cooler M dwarfs. We have collected measurements of rotational velocities from Delfosse et al. (1998); Mohanty & Basri (2003); Reiners & Basri (2008), and Reiners & Basri (submitted to ApJ), and we show the cumulative distributions of v sin i for early-, mid-, and late-M stars in Fig. 6. All early-M dwarfs (M1-M3) in the samples are rotating slower than v sin i = 7 km s −1 , which means that early-M dwarfs are very good targets for high-precision radial velocity surveys. Among the mid-M dwarfs (M4-M6), about 20% of the stars are rotating faster than v sin i = 10 km s −1 , and this fraction goes up to about 50% in the late-M dwarfs (M7-M9).
3.5.2. Rotational broadening and radial velocity precision
We show in Fig. 7 the Y -band precision of radial velocity measurements that can be achieved in a 3000 K star (M5) as a function of projected rotational velocity at different spectral resolutions (R = 20,000, 60,000, 80,000, and 100,000). All assumptions about S/N and R are as explained above.
As expected, at high rotation velocities (v sin i > 30 km s −1 ), the precisions achieved with spectrographs operating at different R are hardly distinguishable. Only if the line width of the rotating star is narrower than the instrumental profile does higher resolution yield higher precision. However, for R > 60,000, the difference in precision is relatively small even at very slow rotation. At a rotation velocity of v sin i = 10 km s −1 , the precision is roughly a factor of 3 lower than the precision in a star with v sin i = 1 km s −1 , and v sin i = 6 km s −1 brings down the precision by about a factor of 2 compared to v sin i = 1 km s −1 .
4. RADIAL VELOCITY VARIATIONS FROM STARSPOTS
In this Section, we investigate the influence of starspots on the measurement of radial velocities. A similar study was performed by Desort et al. (2007), but for sun-like stars in the optical wavelength regime. We extend this study to cooler stars and nIR wavelengths.
4.1. Spot properties
Radial velocity variations can be mimicked by surface features corotating with the star. We know from the Sun that magnetic activity can cause spots that are significantly cooler than the photosphere, and much larger spots are found on stars more active than the Sun (in general, rapidly rotating stars are more active). However, the temperature difference between starspots and the corresponding "quiet" photosphere remains a rather poorly known parameter, especially for very active stars.
The coolest temperatures in large sunspots can be ∼2000 K lower than the temperature in the surrounding photosphere (Solanki 2003). Observed as a star, however, such differences would still be difficult to detect because sunspots cover only a very small fraction of the surface (< 1%). O'Neal et al. (2001, 2004) reported spot temperatures up to 1500 K below photospheric temperatures in active G- and K-type stars. These spots cover between 10 and 50% of the stellar surface. Strassmeier & Rice (1998) reported spot temperatures in an active K dwarf that are ∼800 K below the effective photospheric temperature, based on Doppler imaging.
From the available observations, no general prediction on the occurrence and properties of spots on stellar surfaces is possible. In particular, no observational information is available on spot distributions on M dwarfs, which have atmospheres significantly different from those of sun-like stars, and which may also have different magnetic field topologies (Donati et al. 2008; Reiners & Basri 2009). Therefore, we investigate a set of different photosphere and spot temperature combinations in the following.
Before we investigate apparent radial velocity shifts using a detailed model (§4.2), we consider a "toy model" of an ideal spectral line composed of signal from two regions: the "quiet" surface of a rotating star, and a corotating, cool spot. To demonstrate the influence of the temperature contrast and of differing spectral features in a spot, we generate the line profiles sampled over a complete rotational period and compute the apparent radial velocity shift for each rotational phase by fitting them with a Gaussian. An example radial velocity curve is given in Fig. 8. We report on the amplitude K of the radial velocity curve occurring during a full rotation.
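The measurement step of this toy model can be sketched as follows; the way the distortion is built (a dark spot at +1.5 km s −1 filling in 2% of the absorption) and all numbers are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_line(v, depth, v0, width):
    # Normalized absorption line profile; continuum level is 1.
    return 1.0 - depth * np.exp(-0.5 * ((v - v0) / width) ** 2)

def apparent_rv(v_grid, profile):
    """Fit a Gaussian to a (possibly distorted) line profile and return
    the fitted centroid, i.e. the apparent radial velocity in km/s."""
    p0 = [1.0 - profile.min(), v_grid[np.argmin(profile)], 2.0]
    popt, _ = curve_fit(gaussian_line, v_grid, profile, p0=p0)
    return popt[1]

# A dark spot at +1.5 km/s (its position depends on rotational phase)
# removes part of the local flux, filling in the absorption there.
v = np.linspace(-10.0, 10.0, 401)            # velocity grid in km/s
quiet = gaussian_line(v, 0.6, 0.0, 2.0)      # undistorted photospheric line
bump = 0.02 * 0.6 * np.exp(-0.5 * ((v - 1.5) / 2.0) ** 2)
distorted = quiet + bump
print("apparent shift: %.1f m/s" % (apparent_rv(v, distorted) * 1e3))
```

Repeating this fit for every rotational phase traces out the spurious radial velocity curve whose amplitude K is reported below.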
4.1.1. Contrast
One crucial parameter for the influence of a starspot on a spectral line, and thus the star's measured radial velocity, is the flux contrast between the quiet surface and the spot at the wavelength of interest. In general, one expects the radial velocity amplitude due to a spot to be smaller with lower contrast and vice versa. We illustrate the effect of a cool spot on the line profile and the measured radial velocity shift in Fig. 9, where the situation for a spot covering 2 % of the visible surface is shown for two different temperatures (contrasts).
To investigate the influence of contrast over a wide range of parameters, we used the toy model described above. We assumed a blackbody surface brightness according to a surface temperature T 0 , and we subtracted the amount of flux that originates in a spot covering 2% of the stellar surface. We then added the flux originating in the spot at a temperature T 1 . This spot has the same line profile as the photosphere, which means that we assume a constant line profile for the whole star. For our example, we chose a rotational velocity of v sin i = 2 km s −1 .
In Fig. 10, we show the wavelength-dependent flux ratio between the quiet photosphere and the spot (the contrast, upper panel), and the resulting apparent radial velocity shift from the toy model calculations (lower panel). We show cases for three photosphere temperatures T 0 (5700 K, 3700 K, and 2800 K); for each T 0 we show a case with a small temperature difference, T 0 − T 1 = 200 K, and one with a larger temperature difference of (T 0 − T 1 )/T 0 = 0.35.
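Since the toy model assumes blackbody surface brightness, the wavelength-dependent photosphere-to-spot flux ratio follows directly from the Planck law; a small sketch (our own, with rounded constants):

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # SI constants, rounded

def planck(lam_m, T):
    """Blackbody spectral radiance B_lambda(T)."""
    return (2 * H * C**2 / lam_m**5) / np.expm1(H * C / (lam_m * KB * T))

def contrast(lam_nm, t_phot, t_spot):
    """Flux ratio photosphere/spot at a given wavelength."""
    lam = lam_nm * 1e-9
    return planck(lam, t_phot) / planck(lam, t_spot)

for lam_nm in (500, 1000, 1800):
    print(lam_nm, "nm:",
          round(contrast(lam_nm, 3700, 3500), 2),   # 200 K difference
          round(contrast(lam_nm, 3700, 2400), 1))   # large contrast
```

The Wien-like exponential makes the ratio fall towards longer wavelengths, which is the driver of the wavelength dependence discussed next.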
For small temperature differences (left panel of Fig. 10), the flux ratio between photosphere and spot decreases by approximately a factor of two in the range 500 -1800 nm, while the radial velocity (RV) signal decreases by roughly a factor between two and three. The RV signal is higher for the lowest T 0 because the relative difference between T 1 and T 0 is much larger than in the case of T 0 = 5700 K. The cases of large temperature contrast (right panel in Fig. 10) produce flux ratios > 100 in the coolest star at 500 nm. At 1800 nm the flux ratio decreases to a value around 5, i.e., a factor of 20 lower than at 500 nm. This is much larger than for the cases with small temperature contrast, where the flux ratio only decreases by a factor of two or less. On the other hand, the RV signal does not change as dramatically as the flux ratio. For large temperature contrasts, the absolute values of the RV signal are larger than in the case of low contrast, but the slope of RV with wavelength is much shallower; it is well below a factor of two in all three modeled stars. The explanation for this is that the large contrast in flux ratio implies that the spot does not contribute a significant amount to the full spectrum of the star at any of these wavelengths. Therefore, a relatively small change of the flux ratio with wavelength has no substantial effect on the RV signal. If, on the other hand, the flux ratio is on the order of two or less, a small decrease in the ratio with wavelength means that the larger contribution from the spot can substantially change the RV signal. Thus, a significant wavelength dependence for an RV signal induced by a cool spot can only be expected if the temperature difference between the quiet photosphere and the spot is not too large.
4.1.2. Line profile in the spot
Line profile deviations that cause radial velocity variations do not solely depend on the temperature contrast but also on the dependence of spectral features on temperature; a different effective temperature generally corresponds to a different spectrum and does not just introduce a scaling factor in the emitted flux. For example, a spectral line variation, and hence a radial velocity shift, can also appear at zero temperature contrast if the spectral line depths differ between the spot and the photosphere (as, for example, in hot stars with abundance spots; Piskunov & Rice 1993).
In Fig. 11, we consider a similar situation as in Fig. 9. Here, the temperature contrast between spot and photosphere is held constant but we show three cases in which the depths of the spectral line originating in the spot are different. The three spot profiles are chosen so that the line depths are 0.5, 1.0, and 1.5 times the line depth of the photospheric line. Fig. 11 illustrates that if spectral features are weaker in the spot than in the surrounding photosphere, the radial velocity shift (bottom panel) is larger than in the case of identical line strengths. If, on the other hand, the spectral features become stronger, as for example in some molecular absorption bands, the spot signature weakens and the radial velocity distortion is smaller. In our example of a stronger spot feature, this effect entirely cancels out the effect of the temperature contrast, so that the radial velocity signal of the spot is zero although a cool spot is present.
4.2. Spot simulations
After considering the general effects of starspots in the last Section, we now discuss results of a more sophisticated simulation. Here, we calculate a full spectrum by integrating over the surface of a spotted star using spectra from a model-atmosphere code. The resulting radial velocity shift is estimated by cross-correlation against a spectrum unaffected by spots.
4.2.1. Line profile integration
Our model spectra for a spotted star were calculated using a discrete surface with 500 × 500 elements in longitude and latitude, arranged so that each element covers the same fraction of the surface. All surface elements are characterized by a 'local' temperature; unspotted areas are assigned the photospheric temperature, and spotted areas contribute spectra pertaining to atmospheres with lower temperatures. The associated spectra f(λ, T) were generated with PHOENIX for all temperatures used. Depending on the rotational phase p, the visibility A i of each surface element i is calculated, considering projection effects. We determine the radial velocity shift v rad,i for each surface element due to the stellar rotation. The resulting model spectrum f p (λ) for the spotted star is
f_p(\lambda) = \frac{\sum_{i=1}^{N} f(\lambda, T_i, v_{\mathrm{rad},i})\, A_i}{\sum_{i=1}^{N} A_i} \qquad (1)
where N is the total number of elements. In all of our calculations the model star has an inclination of i = 90°, and for simplicity we assume a linear limb darkening coefficient ε = 0 (no limb darkening). Using no limb darkening slightly overestimates the radial velocity signal but captures the qualitative behaviour that we are interested in. Stellar spots are considered to be circular and located at 0° longitude and +30° latitude.
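A compact sketch of the disk integration in Eq. (1); the coarse grid, the spherical-cap spot, and the interpolation-based Doppler shift are simplifications of the 500 × 500-element scheme described above (illustrative, not the production code):

```python
import numpy as np

def spotted_spectrum(lam, f_phot, f_spot, phase, vsini,
                     n_lon=100, n_lat=50,
                     spot_lat=np.deg2rad(30.0), spot_radius=np.deg2rad(10.0)):
    """Disk-integrated spectrum of a rotating, spotted star (Eq. 1).

    lam            : wavelength grid (increasing)
    f_phot, f_spot : local spectra for photosphere / spot temperature
    phase          : rotational phase in [0, 1)
    vsini          : projected rotation velocity in m/s
    Inclination i = 90 deg, no limb darkening, as in the text; the spot
    sits at longitude 0 in the co-rotating frame.
    """
    c = 2.99792458e8
    lon = np.linspace(0.0, 2.0 * np.pi, n_lon, endpoint=False)[:, None]
    lat = np.linspace(-np.pi / 2, np.pi / 2, n_lat)[None, :]
    lon_obs = lon + 2.0 * np.pi * phase            # rotate star under observer
    mu = np.cos(lat) * np.cos(lon_obs)             # foreshortening factor
    area = np.cos(lat) * np.clip(mu, 0.0, None)    # visibility weight A_i
    v_rad = vsini * np.sin(lon_obs) * np.cos(lat)  # local radial velocity
    # Angular distance from the spot center (fixed on the stellar surface).
    dist = np.arccos(np.clip(np.sin(lat) * np.sin(spot_lat)
                             + np.cos(lat) * np.cos(spot_lat) * np.cos(lon),
                             -1.0, 1.0))
    in_spot = dist < spot_radius
    total = np.zeros_like(lam)
    for i in range(n_lon):
        for j in range(n_lat):
            if area[i, j] == 0.0:
                continue
            src = f_spot if in_spot[i, j] else f_phot
            # Doppler-shift the local spectrum onto the common grid.
            total += area[i, j] * np.interp(lam, lam * (1 + v_rad[i, j] / c), src)
    return total / area.sum()
```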
We calculated the RV signal introduced by a temperature spot on a rotating star. We chose the same star/spot temperature pairs as in the contrast calculations in the foregoing Section, but we used detailed atmosphere models and PHOENIX spectra to calculate the RV signal over the wavelength range 500-1800 nm. That means our calculations include the effects of both the contrast and the differences in spectral features between atmospheres of different temperatures.
4.2.2. Results from spot simulations
The results of our calculations are shown in Fig. 12. As in Fig. 10, we show six different stars; the temperature combinations are identical. The spot size is 1% of the projected surface. We compute RV amplitudes for 50 nm wide sections. For each model star, we show four cases for rotational velocities of v sin i = 2, 5, 10, and 30 km s −1 .
In general, the trends seen in the detailed model are consistent with our results from the toy model of a single line taking into account only the flux ratio between photosphere and spot. As long as the flux ratio is relatively small (200 K, left panel in Fig. 12), the RV amplitude strongly depends on wavelength due to the wavelength dependency of the contrast. The strongest gradient in RV amplitude occurs between 500 and 800 nm; the RV signal decreases by almost a factor of 10 in the lowest temperature model where the flux gradient is particularly steep over this wavelength range. The decrease occurs at somewhat longer wavelengths in the cooler model stars than in the hotter models. In the model stars with a high flux ratio between photosphere and spot (right panel in Fig. 12), the variation in RV amplitude with wavelength is very small. The very coolest model shows a few regions of very low RV amplitude, but the general decline is not substantial.
The absolute RV signal in a Sun-like star with T₀ = 5700 K at 500 nm comes out to be ∼ 40 m s⁻¹ for a spot temperature of T₁ = 3700 K on a star rotating at v sin i = 5 km s⁻¹. This is consistent with the result of Desort et al. (2007), who reported a peak-to-peak amplitude of ∼ 100 m s⁻¹, i.e., an "amplitude" of 50 m s⁻¹,⁹ in a similar star. Over the wavelength range they investigated (roughly 500-600 nm), Desort et al. found that the RV amplitude decreases by about 10 %. We have not calculated the RV amplitude over bins as small as the bins in their work, and our calculation has only two bins in this narrow wavelength range. However, we find that the decrease Desort et al. (2007) reported between 500 nm and 600 nm is not continued towards longer wavelengths. In our calculations, the RV amplitude does not decrease by more than ∼ 20 % between 500 and 1800 nm. Similar results apply for higher rotation velocities and lower photosphere temperatures.
The models with lower flux ratio, T 0 −T 1 = 200 K, show more of an effect with wavelength, although the absolute RV signal is of course smaller. The RV amplitude in a star with T 0 = 3700 K, a spot temperature of T 1 = 3500 K, and v sin i = 5 km s −1 is slightly above 10 m s −1 at 500 nm and ∼ 4 m s −1 at 1000 nm. Above 1000 nm, no further decrease in RV amplitude is observed. The behavior, again, is similar in other stars with low flux ratios and with different rotation velocities.
For the cool models with large temperature differences, the individual RV amplitudes show relatively large scatter between individual wavelength bins. We attribute this to the presence of absorption features in some areas, while other areas are relatively free of absorption features. The temperature dependence of the depth of an absorption feature is important for the behavior of the spot signature in the spectral line. An example of this effect can be observed around 1100 nm, where the spectrum is dominated by absorption of molecular FeH that becomes stronger with lower temperature.
Comparison to LP 944-20
We can compare our simulations to the optical and nIR radial velocity measurements of LP 944-20 reported by Martín et al. (2006). At optical wavelengths, they found a periodical radial velocity variation of K = 3.5 km s⁻¹, while in the nIR they could not find any periodical variation and report an rms of 0.36 km s⁻¹. The approximate effective temperature of an M9 dwarf like LP 944-20 is around 2400 K; we compare it to our coolest model, which has a temperature of T₀ = 2800 K. The radial velocity amplitude of 3.5 km s⁻¹ at visual wavelengths is much higher than the results of our simulations, but this can probably be accounted for by the different size of the surface inhomogeneities (only 1 % in the simulations)¹⁰.
The observations of Martín et al. (2006) suggest a ratio between optical and nIR radial velocity variations larger than a factor of 10. The largest ratio in our set of simulations in fact is on that order; the radial velocity jitter in our model with T 0 = 2800 K and T 1 = 2600 K diminishes by about a factor of ten between 600 nm and 1200 nm. Extrapolating from the results of the hotter models, in which the ratio between visual and nIR jitter is smaller, we cannot exclude that this ratio may become even larger in cooler stars. Our model with larger temperature contrast (T 0 = 2800 K, T 1 = 1800 K) produces a smaller ratio between visual and nIR jitter. Thus, our simulations are not in contradiction to the results reported by Martín et al. (2006). A ratio of ten or more in jitter between optical and nIR measurements seems possible if the temperature contrast is fairly small (100-200 K).
We note that no radial velocity variations were actually detected in LP 944-20 in the nIR, which means that at the time of the nIR observations, it simply could have experienced a phase of lower activity and reduced spot coverage. In order to confirm the effects of a wavelength-dependent contrast on radial velocity measurements, observations carried out simultaneously at visual and nIR wavelengths are required. Martín et al. (2006) propose that weather effects like variable cloud coverage may be the source of the radial velocity jitter in the visual and of the wavelength-dependent contrast. This would probably mean very small temperature contrast but a strong effect in wavelength regions with spectral features that are particularly sensitive to dust. Our simulations do not predict the wavelength dependence of pure dust clouds, but at this point we see no particular reason why the jitter from purely dust-related clouds should be much stronger in the visual than in the nIR. To model this, a separate simulation would be needed taking into account the effects of dust on the spectra of ultracool dwarfs.
SUMMARY AND CONCLUSIONS
We have investigated the possibility of measuring radial velocity variations of low-mass stars at nIR wavelengths (Y, J, H). The spectral flux distribution of red stars favors long wavelengths because higher S/N can be achieved in comparison to optical wavelengths. On the other hand, the spectral information content of the spectra (presence of sharp and strong spectral features) is lower at longer wavelengths, and the efficiency of calibration methods is not well established.
For early-M dwarfs, nIR radial velocities do not offer any advantage in terms of photon-limited precision. Indeed, the fact that measurement methods in the optical are much more advanced than those in the nIR means that there is no real motivation for nIR radial velocities from this perspective. On the other hand, around spectral type M4-M5 the achievable precision becomes higher in the nIR; Y-band observations can be expected to achieve a radial velocity precision higher than observations at optical wavelengths. For late-M dwarfs, the Y-band outperforms the V-band precision by about a factor of 4-5. Observations in the J- and H-bands are a factor of 2-5 worse than the Y-band across the M-star spectral types. They are only superior to V-band observations in very-late-M dwarfs.
Our investigation into the effects of activity on radial velocity measurements showed that a crucial parameter for the wavelength dependence of jitter is the temperature contrast between the spot and the photosphere. If the spot temperature is only a few hundred Kelvin below the photospheric temperature, the induced radial velocity signal is on the order of several m s⁻¹ in the optical and becomes weaker towards longer wavelengths. Note that the absolute size of this effect depends on the size of the spot (1 % in our simulation) and will grow with larger spot size. High temperature contrast, on the other hand, causes a much larger radial velocity signal that only weakly depends on wavelength. For example, in M stars with spots only ∼ 200 K cooler than the photosphere, the jitter at nIR wavelengths is roughly a factor of ten lower than at optical wavelengths, but the reduction is less than a factor of two if the temperature contrast is 1000 K or higher. Unfortunately, not much is known about spot temperatures, particularly in low-mass stars, but our results show that simultaneous observations at optical and nIR wavelengths can provide useful constraints on the spot temperatures of active stars.
Another important factor in the effect of active regions on radial velocity measurements is the difference between the spectral features appearing in the photosphere and in the spots. Conventional estimates usually assume that both are comparable, but given the perhaps relatively large temperature contrasts and the strong temperature dependence of the molecular features, this may not be the case. Thus, large differences in the radial velocity signal between different spectral regions can occur if spectral features of different temperature sensitivity appear.
The radial velocity signal may not vanish as expected at nIR wavelengths, and it seems unlikely that strong radial velocity signals observed at optical wavelengths can vanish in the nIR, particularly in very active stars in which relatively large temperature contrast is expected. The advantage of a nIR spectrograph over an optical spectrograph becomes obvious in the late-M dwarfs. Our results point towards a spectrograph covering the wavelength range 500-1150 nm; this captures the region where the RV precision is highest at all M spectral classes, and where the wavelength dependence of jitter shows the largest gradient, allowing one to distinguish between orbital motion and spots. Such a spectrograph should be designed to be very stable and could use a ThAr lamp for calibration. In the future, other calibration strategies might become available (e.g. a laser frequency comb, Steinmetz et al. 2008), but the ThAr method can in principle provide the sensitivity required to detect Earth-mass planets around low-mass stars.
We thank Peter H. Hauschildt for promptly providing PHOENIX model spectra. A.R. acknowledges research funding from the DFG as an Emmy Noether fellow; A.R. and A.S. received support from the DFG under RE 1664/4-1. J.B. has received research funding from the European Commission's Seventh Framework Programme as an International Incoming Fellow (PIFF-GA-2009-234866).

Fig. 1.-Telluric absorption spectrum with the standard windows in the Y-, J-, and H-bands indicated.

Fig. 2.-M star model spectra for three different effective temperatures: 3500 K (M3, upper panel), 2800 K (M6, middle panel), and 2400 K (M9, lower panel). The black regions show the photometric windows Y, J, and H. Gray regions are the telluric gaps separating these windows.

Fig. 3.-M star model spectrum for 3500 K (M3, black line) and a sunspot spectrum (grey line) in the Y-, J-, and H-bands.

Fig. 4.-CRIRES spectrum of GJ 1002, M5.5 (red), and a model spectrum for T = 3200 K and log g = 4.75 (black).

Fig. 5.-Radial velocity precision for 3500 K (M3, top panel), 2800 K (M6, middle), and 2400 K (M9, bottom). The situation is shown for three different spectral resolutions (red: R = 100,000; green: R = 80,000; blue: R = 60,000); S/N is scaled according to the spectral resolution and assuming constant instrument efficiency (see Table 2). Horizontal lines show the spectral coverage used for the calculation. Filled circles show the best achievable precision assuming perfect wavelength calibration, i.e., the intrinsic stellar information content. Open rhombs and crosses show the situation for wavelength calibration using ThAr lines and a hypothetical gas cell, respectively. In V, only the ideal case is shown because the wavelength calibration is not the limiting factor for the situations shown here.

Fig. 6.-Distribution of rotational velocities among field M dwarfs. Cumulative plot showing the fraction of early-M (M1-M3), mid-M (M4-M6), and late-M (M7-M9) stars rotating faster than a given value of v sin i. Data are from Delfosse et al. (1998); Mohanty & Basri (2003); Reiners & Basri (2008) and Reiners & Basri (submitted to ApJ). A lower detection limit of v sin i = 3 km s⁻¹ is assumed for all data.

Fig. 7.-Radial velocity precision as a function of rotational velocity for four different resolving powers. The assumed model is a 3000 K star and the precision was calculated in the Y-band.

Fig. 9.-Illustration of the apparent radial velocity shift due to a single spot for two contrast values. From left to right, three different phases are shown; the top panel illustrates the location of the spot at each phase. The center panel shows line profiles (solid lines) and residuals (dashed lines; residual between the profile of the quiet photosphere and the profile of the spotted star) for a star with a photospheric temperature of T = 3700 K rotating at v sin i = 5 km s⁻¹. Black and red lines indicate different spot temperatures (black: T_spot = 0 K; red: T_spot = 3500 K). The lower panel shows the measured radial velocity shift.

Fig. 10.-Apparent radial velocity shift by a single spot for different temperature contrasts calculated with our "toy model". Upper panel: flux ratio due to a cool spot with temperature T₁ on a surface at temperature T₀; the contrast follows the ratio of the black-body distributions of the flux. Lower panel: radial velocity signal induced by the spot during a rotation of the star at v sin i = 2 km s⁻¹. The left panel shows the situation of small contrast (T₀ − T₁ = 200 K); in the right panel the contrast is larger ((T₀ − T₁)/T₀ = 0.35).

Fig. 11.-Illustration of the apparent radial velocity shift by a single spot for different spot line intensities (toy model). The upper panel of the line profile plots gives the line profile contribution from the background photosphere. The middle panel gives the three different line profile contributions from a cool spot. The lower panel shows the sum of the two line profile contributions, i.e., the final spectrum of the spotted star for three cases of different spot line depths. The flux scale is normalized so that the total spectrum has a continuum value of 1; note the background continuum is at a value of 0.95 and the spot continuum at 0.05. The black line shows the case where the spot and background photosphere lines are identical. The bottom plot shows the resulting radial velocities for the different line profile combinations.

Fig. 12.-Simulations of RV amplitude as a function of wavelength for different temperature combinations and different rotation velocities. Individual points indicate the RV amplitude at one wavelength chunk (blue crosses: v sin i = 2 km s⁻¹; green circles: v sin i = 5 km s⁻¹; orange stars: v sin i = 10 km s⁻¹; red plusses: v sin i = 30 km s⁻¹). Temperatures of the photosphere (T₀) and the spot (T₁) are shown in the panels. The spot has a size of 1 % of the projected surface. Individual points are connected by a polynomial fit to guide the eye.
TABLE 1
Wavelength coverage of the spectral windows used in this work and the fraction of the wavelength range affected by telluric contamination.

Band            V         Y          J           H
λ-range [nm]    505-595   980-1110   1200-1330   1510-1735
telluric loss   2 %       19 %       55 %        46 %
TABLE 2
Wavelength-dependent S/N and radial velocity precision that can be achieved from data of this quality. The upper part of the table shows the results for an M3 star, the middle for an M6, and the lower part for an M9 star.

                       S/N                 RV precision [m s⁻¹]
Resolution     V    Y    J    H        V     Y     J     H
Spectral Type M3
60000         50  100  101   95       3.6   5.7  22.9  10.0
80000         43   86   87   82       2.9   4.4  18.1   8.4
100000        39   77   78   74       2.5   3.8  15.5   7.6
Spectral Type M6
60000         20  100  114  107       4.7   3.8  11.2   9.7
80000         18   86   99   93       3.7   3.0   8.8   7.8
100000        16   77   88   83       3.2   2.6   7.5   6.9
Spectral Type M9
60000         12  100  134  128       8.0   2.2   4.6   4.0
80000         10   86  116  111       6.2   1.7   3.5   3.5
100000         9   77  104   99       5.3   1.5   2.9   3.3
⁹ Desort et al. (2007) report the peak-to-peak amplitude, which is twice the value we are using.
¹⁰ Using our "toy model" we estimate that a spot covering ∼ 10 % of the surface can generate a velocity amplitude of a few km s⁻¹.
REFERENCES
Allard, F., Hauschildt, P. H., Alexander, D. R., Tamanai, A., & Schweitzer, A., 2001, ApJ, 556, 357
D'Amato, F., et al., 2008, Proc. SPIE, 7014, 70143V
Bean, J., Seifahrt, A., Hartmann, H., Nilsson, H., Wiedemann, G., Reiners, A., Dreizler, S., & Henry, T., submitted to ApJ, arXiv:0911.3148
Bouchy, F., Pepe, F., & Queloz, D., 2001, A&A, 374, 733
Butler, R. P., Marcy, G. W., Williams, E., McCarthy, C., Dosanjh, P., & Vogt, S. S., 1996, PASP, 108, 500
Connes, P., 1985, Ap&SS, 110, 211
Clough, S. A., Kneizys, F. X., Rothman, L. S., & Gallery, W. O., 1981, Proc. SPIE, 277, 152
Clough, S. A., Iacono, M. J., & Moncet, J.-L., 1992, J. Geophys. Res., 97, 15761
Cushing, M. C., Rayner, J. T., & Vacca, W. D., 2005, ApJ, 623, 1115
Delfosse, X., Forveille, T., Perrier, C., & Mayor, M., 1998, A&A, 331, 581
Desort, M., Lagrange, A.-M., Galland, F., Udry, S., & Mayor, M., 2007, A&A, 473, 983
Donati, J.-F., Morin, J., Petit, P., et al., 2008, MNRAS, 390, 545
Hauschildt, P. H., Allard, F., & Baron, E., 1999, ApJ, 512, 377
Hinkle, K. H., Wallace, L., Valenti, J., & Tsuji, T., 2003, IAU Symp., 215, 213
Käufl, H. U., et al., 2006, Msngr, 126, 32
Kerber, F., Nave, G., & Sansonetti, C. J., 2008, ApJS, 178, 374
Lovis, C., Pepe, F., Bouchy, F., et al., 2006, Proc. SPIE, 6269, 62690P
Lovis, C., & Pepe, F., 2007, A&A, 468, 1115
Mahadevan, S., & Ge, J., 2009, ApJ, 692, 1590
Maltby, P., Avrett, E. H., Carlsson, M., Kjeldseth-Moe, O., Kurucz, R. L., & Loeser, R., 1986, ApJ, 306, 284
Marcy, G. W., Butler, R. P., Vogt, S. S., Fischer, D., & Lissauer, J. J., 1998, ApJ, 505, L147
Martín, E. L., Guenther, E., Zapatero Osorio, M. R., Bouy, H., & Wainscoat, R., 2006, ApJ, 644, L75
Mayor, M., & Queloz, D., 1995, Nature, 378, 355
Mayor, M., et al., 2003, Msngr, 124, 20
McLean, I. S., McGovern, M. R., Burgasser, A. J., Kirkpatrick, J. D., Prato, L., & Kim, S. S., 2003, ApJ, 596, 561
McLean, I. S., Prato, L., McGovern, M. R., Burgasser, A. J., Kirkpatrick, J. D., Rice, E. L., & Kim, S. S., 2007, ApJ, 658, 1217
Mohanty, S., & Basri, G., 2003, ApJ, 583, 451
O'Neal, D., Neff, J. E., Saar, S. H., & Mines, J. K., 2001, AJ, 122, 1954
O'Neal, D., Neff, J. E., Saar, S. H., & Cuntz, M., 2004, AJ, 128, 1802
Pepe, F., Mayor, M., Galland, F., Naef, D., Queloz, D., Santos, N. C., Udry, S., & Burnet, M., 2002, A&A, 388, 632
Piskunov, N. E., & Rice, J. B., 1993, PASP, 105, 1415
Reiners, A., 2009, A&A, 498, 853
Reiners, A., & Basri, G., 2006, ApJ, 644, 497
Reiners, A., & Basri, G., 2008, ApJ, 684, 1390
Reiners, A., & Basri, G., 2009, A&A, 496, 787
Reiners, A., & Basri, G., submitted to ApJ
Rothman, L. S., et al., 2005, Journal of Quantitative Spectroscopy and Radiative Transfer, 96, 139
Seifahrt, A., Käufl, H. U., Bean, J., Richter, M., & Siebenmorgen, R., 2010, submitted to A&A
Solanki, S. K., 2003, A&AR, 11, 153
Steinmetz, T., Wilken, T., Araujo-Hauck, C., Holzwarth, R., Hänsch, T. W., Pasquini, L., Manescau, A., et al., 2008, Science, 321, 1335
Strassmeier, K. G., & Rice, J. B., 1998, A&A, 330, 685
Tarter, J. C., et al., 2007, Astrobiology, 7, 30
Udry, S., Bonfils, X., Delfosse, X., et al., 2007, A&A, 469, L43
Wallace, L., & Livingston, W., 1992, N.S.O. Technical Report #92-001
Wallace, L., & Hinkle, K. H., 1996, ApJS, 107, 312
Wallace, L., Livingston, W., Bernath, P. F., & Ram, R. S., 1998, N.S.O. Technical Report #1998-002
Zapatero-Osorio, M. R., Martín, E. L., Béjar, V. J., Bouy, H., Deshpande, R., & Wainscoat, R. J., 2007, ApJ, 666, 1205
Fig. 8.-Example of the apparent radial velocity shift induced by a single cool spot as a function of rotational phase. Our assumed definition of the radial velocity amplitude, K, is indicated.
Hyperbolic Attention Networks
Caglar Gulcehre, Misha Denil, Mateusz Malinowski, Ali Razavi, Razvan Pascanu, Karl Moritz Hermann, Peter Battaglia, Victor Bapst, David Raposo, Adam Santoro, Nando de Freitas
Abstract
We introduce hyperbolic attention networks to endow neural networks with enough capacity to match the complexity of data with hierarchical and power-law structure. A few recent approaches have successfully demonstrated the benefits of imposing hyperbolic geometry on the parameters of shallow networks. We extend this line of work by imposing hyperbolic geometry on the activations of neural networks. This allows us to exploit hyperbolic geometry to reason about embeddings produced by deep networks. We achieve this by re-expressing the ubiquitous mechanism of soft attention in terms of operations defined for hyperboloid and Klein models. Our method shows improvements in terms of generalization on neural machine translation, learning on graphs and visual question answering tasks while keeping the neural representations compact.
Introduction
The focus of this work is to endow neural network representations with suitable geometry to capture fundamental properties of data, including hierarchy and clustering behaviour. These properties emerge in many real-world scenarios that approximately follow power-law distributions [27,9]. This includes a wide variety of natural phenomena in physics [22], biology [25], and even human-made structures such as metabolic-mass relationships [5], social networks [20,30], and frequencies of words [32,31,37].
Complex networks [20], which connect distinguishable heterogeneous sets of elements represented as nodes, provide us with an intuitive way of understanding these structures. They will also serve as our starting point for introducing hyperbolic geometry, which is by itself difficult to visualize. Nodes in complex networks are referred to as heterogeneous, in the sense that they can be divided into sub-nodes which are themselves distinguishable from each other. The scale-free structure of natural data manifests itself as a power law distribution on the node degrees of the complex network that describes it.
Complex networks can be approximated with tree-like structures, such as taxonomies and dendrograms. Let us begin by recalling a simple property of n-ary trees that will help us understand hyperbolic space and why Euclidean geometry is perhaps inadequate to model relational data.
In an n-ary tree, the number of nodes at distance r from the root and the number of nodes at distance no more than r from the root both grow as n^r. Just like trees, the volume of hyperbolic space also expands exponentially. For example, in a two-dimensional hyperbolic space with curvature −ζ², ζ > 0, the length and area of the disc of radius r grow as 2π sinh(ζr) and 2π(cosh(ζr) − 1), respectively, both of which grow exponentially in r [20, 19].
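As a quick numeric illustration of this contrast (our own example, not from any of the cited references), the snippet below compares the circumference of a disc of radius r in the hyperbolic plane of curvature −1 with its Euclidean counterpart:

```python
# Circumference of a disc of radius r: hyperbolic (curvature -1, zeta = 1)
# versus Euclidean; the hyperbolic value grows like exp(r), mirroring the
# n**r growth of nodes in an n-ary tree.
import numpy as np

for r in (1.0, 2.0, 4.0, 8.0):
    print(f"r={r:4.1f}  hyperbolic={2 * np.pi * np.sinh(r):12.1f}"
          f"  euclidean={2 * np.pi * r:6.1f}")
```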
The growth of volume in hyperbolic space should be contrasted with Euclidean space, where the corresponding quantities expand only polynomially: length as 2πr and area as πr² in the two-dimensional example. Figure 1 is an attempt at visualizing this. For hierarchical data with scale-free structure, the polynomially expanding Euclidean space cannot capture the exponential complexity in the data (and so the images overlap). On the other hand, the exponentially expanding hyperbolic space is able to match the complexity of the data (here we have used the visualization trick of the famous artist Escher of making the images progressively smaller toward the boundary to illustrate the exponential expansion).

Figure 1: An intuitive depiction of how images might be embedded in 2D. The location of the embeddings reflects the similarity between each image and that of a pug. Since the number of instances within a given semantic distance from the central object grows exponentially, Euclidean space is not able to compactly represent such structure (left). In hyperbolic space (right) the volume grows exponentially, allowing for sufficient room to embed the images. For visualization, we have shrunk the images in this Euclidean diagram, a trick also used by Escher.
The intimate connection between hyperbolic space and scale-free networks (where node degree follows a power law) is made more precise in Krioukov et al. [20]. In particular, it is shown there that heterogeneous topology implies hyperbolic geometry, and conversely hyperbolic geometry yields heterogeneous topology.
Moreover, Sarkar [35] describes a construction that embeds trees in two-dimensional hyperbolic space with arbitrarily low distortion, which is not possible in Euclidean space of any dimension [23]. Following this exciting line of research, recently the machine learning community has gained interest in learning non-Euclidean embeddings directly from data [28,8,33,29,38,6].
Fuelled by the desire to increase the capacity of neural networks so as to match the complexity of data, we propose hyperbolic attention networks. As opposed to previous approaches, which impose hyperbolic geometry on the parameters of shallow networks [28, 8], we impose hyperbolic geometry on their activations. This allows us to exploit hyperbolic geometry to reason about embeddings produced by deep networks. We achieve this by introducing efficient hyperbolic operations to express the popular, ubiquitous mechanism of attention [2, 11, 41, 46]. Our method shows improvements in terms of generalization on neural machine translation [41], learning on graphs, and visual question answering [1, 24, 15] tasks while keeping the representations compact.
Models of hyperbolic space
Hyperbolic space cannot be isometrically embedded into Euclidean space [20]. There are, however, several ways to endow subsets of Euclidean space with a hyperbolic metric, each leading to a different model of hyperbolic space, such as the well-known Poincaré ball model [14].

The different models of hyperbolic space are all equivalent, but they define different coordinate systems, which offer different affordances for computation. In this paper, we primarily make use of the hyperboloid, whose status as the only commonly used unbounded model makes it a convenient target for projecting into hyperbolic space. We also make use of the Klein model, because it admits an efficient expression for the hyperbolic aggregation operation we define in Section 4.2.
We briefly review the definitions of the hyperboloid and Klein models and the relationship between them, in just enough detail to support the presentation in the remainder of the paper. A more thorough treatment can be found in Iversen [14]. The geometric relationship between the two models is diagrammed in Figure 5 of the supplementary material.
Hyperboloid model: This model of n-dimensional hyperbolic space is a manifold in the (n+1)-dimensional Minkowski space, which is R^{n+1} endowed with the indefinite Minkowski bilinear form

$$\langle q, k \rangle_M = \sum_{i=1}^{n} q_i k_i - q_{n+1} k_{n+1}.$$

With this definition in hand, the hyperboloid model consists of the set

$$H^n = \{ x \in \mathbb{R}^{n+1} \mid \langle x, x \rangle_M = -1,\; x_{n+1} > 0 \}$$

endowed with the distance metric $d_H(q, k) = \operatorname{arccosh}(-\langle q, k \rangle_M)$.
Klein model: This model of hyperbolic space is the subset of R^n given by $K^n = \{ x \in \mathbb{R}^n \mid \|x\| < 1 \}$, and a point in the Klein model can be obtained from the corresponding point in the hyperboloid model by the projection

$$\pi_{H \to K}(x)_i = \frac{x_i}{x_{n+1}},$$

with its inverse given by

$$\pi_{K \to H}(x) = \frac{1}{\sqrt{1 - \|x\|^2}}\,(x, 1).$$

Distance computations in the Klein model can be inherited from the hyperboloid, in the sense that $d_K(q, k) = d_H(\pi_{K \to H}(q), \pi_{K \to H}(k))$.
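The following numpy sketch (our own restatement of the definitions above; the sample points are arbitrary) collects the operations used in the rest of the paper: the Minkowski form, the hyperboloid distance, and the projections between the hyperboloid and Klein models:

```python
# Hyperboloid/Klein toolbox: Minkowski bilinear form, hyperboloid distance,
# and the projections pi_{H->K} and pi_{K->H} defined above.
import numpy as np

def minkowski_dot(q, k):
    # <q, k>_M = sum_{i<=n} q_i k_i - q_{n+1} k_{n+1}
    return np.sum(q[..., :-1] * k[..., :-1], axis=-1) - q[..., -1] * k[..., -1]

def dist_hyperboloid(q, k):
    # d_H(q, k) = arccosh(-<q, k>_M); clip guards against rounding below 1
    return np.arccosh(np.clip(-minkowski_dot(q, k), 1.0, None))

def hyperboloid_to_klein(x):
    return x[..., :-1] / x[..., -1:]

def klein_to_hyperboloid(x):
    sq_norm = np.sum(x * x, axis=-1, keepdims=True)
    x_aug = np.concatenate([x, np.ones_like(sq_norm)], axis=-1)
    return x_aug / np.sqrt(1.0 - sq_norm)

q = klein_to_hyperboloid(np.array([0.1, 0.2]))   # two points given in the
k = klein_to_hyperboloid(np.array([-0.5, 0.3]))  # 2D Klein disc
print(dist_hyperboloid(q, k))                    # d_K inherited from H^2
print(hyperboloid_to_klein(q))                   # back to [0.1, 0.2]
```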
Attention as a building block for relational reasoning
Modelling the interactions or relations between entities in a graph with neural networks has shown promising results in visual question answering [34], modelling physical dynamics [3], and reasoning over graphs [21, 43, 16, 18]. Graph neural networks [21, 3] incorporate message passing as part of the architecture in order to capture the intrinsic relations between entities. Graph convolution networks [7, 17, 10] use convolutions to efficiently learn a continuous-space representation for a graph of interest.
Many of these relational reasoning models can be expressed in terms of an attentive read operation. In the following we give a general description of the attentive read, and then discuss its specific instantiations in two relational reasoning models from the literature.
Attentive read
First introduced for translation in Bahdanau et al. [2], attention has seen widespread use in deep learning, not only for applications in NLP but also for image processing [46], imitation learning [11], and memory [13]. The core computation is the attentive read operation, which has the following form:
$$r(q_i, \{k_j\}_j) = \frac{1}{Z} \sum_j \alpha(q_i, k_j)\, v_{ij}. \tag{1}$$
Here q i is a vector called the query and the k j are keys for the memory locations being read from. The pairwise function α(·, ·) computes a scalar matching score between a query and a key, and the vector v ij is a value to be read from location j by query i. Z > 0 is a normalization factor for the full sum. Both v ij and Z are free to depend on arbitrary information, but we leave any dependencies here implicit.
It will be useful in the discussion to break this operation down into two parts. The first is the matching, which computes attention weights α ij = α(q i , k j ) and the second is the aggregation, which takes a weighted average of the values using these weights,
$$m_i(\{\alpha_{ij}\}_j, \{v_{ij}\}_j) = \frac{1}{Z} \sum_j \alpha_{ij}\, v_{ij}.$$
Instantiating a particular attentive read operation involves specifying both α(·, ·) and v ij along with the normalization constant Z.
If one performs an attentive read for each element of a set, the resulting operation corresponds in a natural way to message passing on a graph, where each node i aggregates messages {v_ij}_j from its neighbours along edges of weight α(q_i, k_j)/Z.
We can express many (although not all) message passing neural network architectures [12] using the attentive read operation of Equation 1 as a primitive. In the following sections we do this for two architectures and then discuss how we can replace both the matching and aggregation steps with versions that leverage hyperbolic geometry.
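As a reference point for the hyperbolic version defined later, here is a direct numpy transcription of the attentive read of Eq. (1); the matching function and values are supplied by the caller, and the exponentiated dot-product matcher in the usage example is just one possible choice:

```python
# The attentive read of Eq. (1), split into its matching and aggregation
# steps; alpha and the values v_ij are whatever the caller provides.
import numpy as np

def attentive_read(q_i, keys, values, alpha, Z=None):
    """keys: [m, d] array; values: [m, d_v] array of v_ij for this query."""
    weights = np.array([alpha(q_i, k_j) for k_j in keys])   # matching
    Z = weights.sum() if Z is None else Z                   # normalization
    return (weights[:, None] * values).sum(axis=0) / Z      # aggregation

rng = np.random.default_rng(0)
q, K, V = rng.normal(size=4), rng.normal(size=(6, 4)), rng.normal(size=(6, 8))
out = attentive_read(q, K, V, alpha=lambda a, b: np.exp(a @ b))
print(out.shape)  # (8,)
```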
Relation networks
Relation Networks (RNs) [34] are a neural network architecture designed for reasoning about the relationships between objects. An RN operates on a set of objects O by applying a shared operator to each pair of objects (o_i, o_j) ∈ O × O. The pairs can be augmented by global information, and the result of each relational operation is passed through a further global transformation.
Using the notation of the previous section, we can write the RN as
$$RN(O, c) = f\Big(\sum_i r(o_i, \{o_j\}_j)\Big),$$

where α(o_i, o_j) = 1, v_{ij} = g(o_i, o_j, c), and Z = 1. Here f is the global transformation, g is the local transformation, and c is the global context, as described in Santoro et al. [34]. We augment the basic RN to allow α(o_i, o_j) ∈ [0, 1] to be a general learnable function.
Interpreting the RN as learned message passing on a graph over objects, the attention weights take on the semantics of edge weights, where α_ij can be thought of as the probability of the (directed) edge o_j → o_i appearing in the underlying reasoning graph.
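A compact sketch of this attention-augmented RN readout follows; the tiny g, f, and the random inputs are stand-ins for the learned MLPs and real object representations:

```python
# Relation Network as a sum of attentive reads with alpha in [0, 1] and
# Z = 1; g, f, and the inputs below are toy stand-ins for learned modules.
import numpy as np

def relation_network(objects, context, g, f, alpha=lambda oi, oj: 1.0):
    pooled = sum(alpha(o_i, o_j) * g(o_i, o_j, context)
                 for o_i in objects for o_j in objects)     # Z = 1
    return f(pooled)

rng = np.random.default_rng(0)
W = rng.normal(size=(6, 4))                                 # toy g weights
g = lambda oi, oj, c: np.tanh(np.concatenate([oi, oj, c]) @ W)
objects = [rng.normal(size=2) for _ in range(3)]
print(relation_network(objects, rng.normal(size=2), g, f=lambda x: x))
```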
Scaled dot-product attention
In the Transformer model of Vaswani et al. [41] the authors define an all-to-all message passing operation on a set of vectors which they call scaled dot-product attention. In the language of Section 3.1 the scaled dot-product attention operation performs several attentive reads in parallel, one for each element of the input set.
Vaswani et al. [41] write scaled dot-product attention as
$$R = \operatorname{softmax}\!\left(\frac{QK^{T}}{\sqrt{d}}\right) V,$$
where Q, K and V are referred to as the queries, keys, and values respectively, and d is the shared dimensionality of the queries and keys. Using lowercase letters to denote rows of the corresponding matrices, we can write each row of R as the result of an attentive read with
$$\alpha(q_i, k_j) = \exp\!\Big(\tfrac{1}{\sqrt{d}}\,\langle q_i, k_j \rangle\Big), \qquad v_{ij} = v_j, \qquad Z = \sum_j \alpha(q_i, k_j).$$
We experiment with both softmax and sigmoid operations for computing the attention weights in our hyperbolic models. The motivation for considering sigmoid attention weights is that in some applications (e.g. visual question answering) it makes sense for the attention weights over different entities to not compete with each other.
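In code, the operation reads as follows (a numpy sketch of the formula above, without masking or multi-head machinery):

```python
# Scaled dot-product attention: R = softmax(Q K^T / sqrt(d)) V, i.e. one
# attentive read per query row with softmax normalization Z.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    alpha = np.exp(scores)
    alpha /= alpha.sum(axis=-1, keepdims=True)     # Z = sum_j alpha_ij
    return alpha @ V

rng = np.random.default_rng(0)
Q, K = rng.normal(size=(5, 8)), rng.normal(size=(7, 8))
V = rng.normal(size=(7, 16))
print(scaled_dot_product_attention(Q, K, V).shape)  # (5, 16)
```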
The path forward
At this point, we have discussed how many natural hierarchies posses scale-free structures, and also how the geometry of hyperbolic space naturally encodes this structure. We have considered two models of hyperbolic space (hyperboloid and Klein), and have seen how points in the two models correspond.
We described the attentive read operation, and how it can be broken into two steps which we call matching and aggregation. We then showed how the relational operations from Relation Networks and the Transformer could be expressed using the attentive read as a primitive.
The following section is devoted to explaining how we can redefine the attentive read as an operation on points in hyperbolic space. This will allow us to create hyperbolic versions of Relation Networks and the Transformer by replacing their attentive read with our hyperbolic version.
Hyperbolic attention networks
In this section we show how to redefine the attentive read operation of Section 3.1 as an operation on points in hyperbolic space. The key to doing this is to define new matching and aggregation functions that operate on hyperbolic points and take advantage of the metric structure of the manifold they live on. However, in order to apply these operations inside a network, we first need a way to interpret network activations as points in hyperbolic space.
We describe how to map an arbitrary point in R^n onto the hyperboloid, where we can interpret the result as a point in hyperbolic space. The choice of mapping is important since we must ensure that the rapid scaling behavior of hyperbolic space is maintained. Armed with an appropriate mapping, we proceed to describe the hyperbolic matching and aggregation operations that operate on these points.
Hyperbolic network activations
Mapping neural network activations into hyperbolic space requires care, since network activations might live anywhere in R^n, but hyperbolic structure can only be imposed on special subsets of Euclidean space [20]. This means we need a way to map activations into an appropriate manifold. We choose to map into the hyperboloid, which is convenient since it is the only unbounded model of hyperbolic space in common use.
Pseudo-polar coordinates: In polar coordinates, we express an n-dimensional point as a scalar radius and n − 1 angles. Pseudo-polar coordinates consist of a radius r, as in ordinary polar coordinates, and an n-dimensional vector d representing the direction of the point from the origin. In the following discussion we assume that the coordinates are normalized, i.e. that ‖d‖ = 1.
If (d, r) ∈ R^{n+1} are the activations of a layer in the network, we map them onto the hyperboloid in R^{n+1} using π((d, r)) = (sinh(r) d, cosh(r)), which increases the scale by an exponential factor.
It is easily verified that the resulting point lies in the hyperboloid, and to verify that we maintain the appropriate scaling properties we compute the distance between a point and the origin using this projection:
$$d_H(0, (d, r)) = \operatorname{arccosh}\big({-\langle \pi(0), \pi((d, r)) \rangle_M}\big) = \operatorname{arccosh}\Big(\tfrac{1}{2}\big(\cosh(2r) + 1\big)\Big) \sim r,$$
which shows that this projection preserves exponential growth in volume for a linear increase in r. Without the exponential scaling factor, the effective distance of π((d, r)) from the origin grows logarithmically in hyperbolic space.¹
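In practice the projection is a one-liner; the sketch below (with an arbitrary convention that the last activation is the radius) also checks the hyperboloid constraint:

```python
# Pseudo-polar projection pi((d, r)) = (sinh(r) d, cosh(r)); the last
# activation is taken as the radius r, the rest as the direction d.
import numpy as np

def project_pseudo_polar(activation):
    d, r = activation[:-1], activation[-1]
    d = d / np.linalg.norm(d)                    # normalize so ||d|| = 1
    return np.concatenate([np.sinh(r) * d, [np.cosh(r)]])

x = project_pseudo_polar(np.array([3.0, -4.0, 2.0]))
# the image satisfies <x, x>_M = -1, i.e. it lies on the hyperboloid
print(np.sum(x[:-1] ** 2) - x[-1] ** 2)          # ~ -1.0
```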
Hyperbolic attention
In this section, we show how to build an attentive read operation that operates on points in hyperbolic space. We consider how to exploit hyperbolic geometry in both the matching and the aggregation steps of the attentive read operation separately.
Hyperbolic matching: The most natural way to exploit hyperbolic geometry for matching pairs of points is to use the hyperbolic distance between them. Given a query q i and a key k j both lying in hyperbolic space we take,
$$\alpha(q_i, k_j) = f\big({-\beta\, d_H(q_i, k_j) - c}\big), \tag{2}$$
where d H (·, ·) is the hyperbolic distance, and β and c are parameters that can be set manually or learned along with the rest of the network. Having the bias parameter c is useful because distances are non-negative. We take the function f (·) to be either exp (·), in which case we set the normalization appropriately to obtain a softmax, or sigmoid(·).
Hyperbolic aggregation: The path to extending the weighted midpoint to hyperbolic space is much less obvious, but fortunately such an extension already exists as the Einstein midpoint. The Einstein midpoint is straightforward to compute by adjusting the aggregation weights appropriately (see Ungar [39, Definition 4.21]):

$$m_i(\{\alpha_{ij}\}_j, \{v_{ij}\}_j) = \sum_j \left(\frac{\alpha_{ij}\,\gamma(v_{ij})}{\sum_l \alpha_{il}\,\gamma(v_{il})}\right) v_{ij}, \tag{3}$$

where the γ(v_ij) are the Lorentz factors, given by $\gamma(v_{ij}) = 1/\sqrt{1 - \|v_{ij}\|^2}$. The norm in the denominator of the Lorentz factor is the Euclidean norm of the Klein coordinates of the point v_ij, and the correctness of Equation 3 also relies on the points v_ij being represented by their Klein coordinates. Fortunately, the various models of hyperbolic space in common use are all isomorphic, so we can work in an arbitrary hyperbolic model and simply project to and from the Klein model to execute midpoint computations, as we discuss in the following section.
The reason for using the Einstein midpoint for hyperbolic aggregation is that it obeys many of the properties that we expect from a weighted average in Euclidean space. In particular, it is invariant to translating the v_ij's by a fixed distance in a common direction, and also to rotations of the constellation of points about the origin. The derivation of this operation is quite involved, and beyond the scope of this paper. We point the interested reader to Ungar [39, 40] for a full exposition.
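Putting the pieces together, the following sketch combines the distance-based matching of Eq. (2) with the Einstein-midpoint aggregation of Eq. (3); β, c, and the sigmoid matcher follow the text, while the helper functions restate the model definitions from Section 2 (values here are assumed to be supplied in Klein coordinates with norm below one):

```python
# Hyperbolic attentive read: sigmoid(-beta * d_H - c) matching on the
# hyperboloid (Eq. 2), Einstein-midpoint aggregation in Klein coordinates
# (Eq. 3); values_klein must have Euclidean norm < 1.
import numpy as np

def dist_hyperboloid(q, k):
    # d_H = arccosh(-<q, k>_M), Minkowski form as defined in Section 2
    mdot = np.sum(q[:-1] * k[:-1]) - q[-1] * k[-1]
    return np.arccosh(np.clip(-mdot, 1.0, None))

def lorentz_factor(v_klein):
    # gamma(v) = 1 / sqrt(1 - ||v||^2), with v in Klein coordinates
    return 1.0 / np.sqrt(1.0 - np.sum(v_klein ** 2, axis=-1))

def hyperbolic_attentive_read(q, keys, values_klein, beta=1.0, c=0.0):
    """q: [n+1] hyperboloid point; keys: [m, n+1]; values_klein: [m, n]."""
    d = np.array([dist_hyperboloid(q, k) for k in keys])
    alpha = 1.0 / (1.0 + np.exp(beta * d + c))          # sigmoid matching
    w = alpha * lorentz_factor(values_klein)            # Einstein weights
    return (w[:, None] * values_klein).sum(axis=0) / w.sum()  # Eq. (3)
```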
Experiments
We evaluate our models on synthetic and real-world tasks. Experiments where the underlying graph structure is explicitly known clearly show the benefits of using hyperbolic geometry as an inductive bias. At the same time, we show that real-world tasks with implicit graph structure, such as a diagnostic visual question answering task [15] and neural machine translation, equally benefit from relying on hyperbolic geometry.
We provide experiments with feedforward networks, the Transformer [41] and Relation Networks [34] endowed with hyperbolic attention.
Our results show the effectiveness of our approach on diverse tasks and architectures. The benefit of our approach is particularly prominent in relatively small models, which supports our hypothesis that hyperbolic geometry induces compact representations and is therefore better able to represent complex functions in limited space.
Modeling scale-free graphs
We use the algorithm of von Looz et al. [45] to efficiently generate large scale-free graphs, and define two predictive tasks that test our model's ability to represent different aspects of the structure of these networks. For both tasks in this section, we train Recursive Transformer (RT) models, using hyperbolic and Euclidean attention. A Recursive Transformer is identical to the original transformer, except that the weights of each self-attention layer are tied across depth. We use models with 3 recursive self-attention layers, each of which has 4 heads with 4 units each for each of q, k, and v. This model has similarities to Graph Attention Networks [42,18].
Link prediction (LP): Link prediction is a classical graph problem, where the task is to predict whether an edge exists between two nodes in the graph. We experimented with graphs of 1000 and 1200 nodes and report the results in Figure 3 (middle). On both graph sizes, the hyperbolic RT performs better than the Euclidean RT given the same amount of capacity.
Shortest path length prediction (SPLP): In this task, the goal is to predict the length of the shortest path between a pair of nodes in the graph. We treat this as a classification problem with a maximum path length of 25, which is naturally an unbalanced classification problem. We use rejection sampling during training to ensure the network is trained on an approximately uniform distribution of path lengths. At test time we sample paths uniformly at random, so the length distribution follows that of the underlying graphs. We report the results in Figure 3 (left). In Figure 3 (right), we visualize the distribution of the scale of the learned activations (r in the projection of Section 4.1) when training on graphs of size 100 and 400. We observe that our model tends to use larger scales for the larger graphs. As a baseline, we compare to the optimal constant predictor, which always predicts the most common expected path length. This baseline does quite well since the path length distribution on the test set is quite skewed.
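The text does not spell out the sampler; one simple way to realize it (an assumption on our part, including the path-length oracle and the running counts) is:

```python
# Toy rejection sampler that flattens the distribution of shortest-path
# lengths seen during training; path_length is an assumed oracle.
import numpy as np

rng = np.random.default_rng(0)
counts = np.ones(26)  # running counts of accepted lengths 1..25

def sample_training_pair(n_nodes, path_length, max_len=25):
    while True:
        u, v = rng.choice(n_nodes, size=2, replace=False)
        L = path_length(u, v)
        if L is None or L > max_len:
            continue
        # accept rare lengths more often, keeping accepted lengths uniform
        if rng.random() < counts.min() / counts[L]:
            counts[L] += 1
            return u, v, L
```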
For both tasks, we generate training data online. Each example is a new graph in which we query the connectivity of a randomly chosen pair of nodes. To make training easier, we use a curriculum, whereby we start training on smaller graphs and gradually increase the number of vertices towards the final number. More details on the dataset generation procedure and the curriculum scheme are found in the supplementary material.

Figure 4: Left: Comparison of our models with low capacity on the Sort-of-CLEVR dataset. "EA" refers to the model that uses hyperbolic attention weights with Euclidean aggregation. Right: Performance of the Relation Network extended by an attention mechanism in either Euclidean or hyperbolic space on the CLEVR dataset.
Sort-of-CLEVR
Since we expect hyperbolic attention to be particularly well suited to relational modelling, we investigate our models on the relational variant of the Sort-of-CLEVR dataset [34]. This dataset consists of simple visual scenes, allowing us to focus solely on the relational aspect of the problem. Our models extend Relation Networks (RNs) with the attention mechanism in hyperbolic space (with Euclidean or Einstein-midpoint aggregation), but otherwise we follow the standard set-up [34]. Our best method yields an accuracy of 99.2 %, which significantly exceeds the accuracy of the original RN (96 %).
However, we are more interested in evaluating models in the low-capacity regime. Indeed, as Figure 4 (left) shows, the attention mechanism computed in hyperbolic space improves by around 20 percentage points over the standard RN, where all the models use only two units in the relational MLP.
CLEVR
We train our Relation Network with various attention mechanisms on the CLEVR dataset [15]. CLEVR is a synthetic visual question answering dataset consisting of 3D-rendered objects like spheres, cubes, or cylinders of various sizes, materials, or colors. In contrast to other visual question answering datasets [1, 24, 47], the focus of CLEVR is on relational reasoning.
In our experiments, we closely follow the procedure established in [34], both in terms of the model architecture, capacity, and the choice of hyperparameters, and differ only in the attention mechanism (Euclidean or hyperbolic attention) or the use of sigmoid activations.
Results are shown in Figure 4 (Right). For each model, we vary the capacity of the relational part of the network and report the resulting test accuracy. We find that hyperbolic attention with sigmoid consistently outperforms other models.
Our RN with hyperbolic attention and sigmoid achieves 95.7% accuracy on the test set at the same capacity level as RN, whereas the latter reportedly achieves 95.5% accuracy [34].
Neural machine translation
The Transformer [41] is a recently introduced state-of-the-art model for neural machine translation. It relies heavily on attention as its core operation. As described in Section 3.3, we have extended the Transformer² by replacing its scaled dot-product attention operation with its hyperbolic counterpart. We evaluate all the models on the WMT14 En-De dataset [4].
Conclusion
We have presented a novel way to impose the inductive biases from hyperbolic geometry on the activations of deep neural networks. Our proposed hyperbolic attention operation makes use of hyperbolic geometry in both the computation of the attention weights, and in the aggregation operation over values. We implemented our proposed hyperbolic attention mechanism in both Relation Networks and the Transformer and showed that we achieve improved performance on a diverse set of tasks. We have shown improved performance on link prediction and shortest path length prediction in scale free graphs, on two visual question answering datasets, and finally on English to German machine translation. The gains are particularly prominent in relatively small models, which confirms our hypothesis that hyperbolic geometry induces more compact representations.
A.2 Scale-free graph generation
We use the algorithm described by von Looz et al. [45]. In our experiments, we set α to 0.95 and edge_radius_R_factor to 0.35.
A.3 Scale-free graph curriculum
A curriculum was an essential part of our training on the scale-free graph tasks. On the LP and SPLP tasks, we use a curriculum where we extract connected components from the graph by cutting the disk on which the graphs are generated into slices, starting from a 30-degree angle and gradually increasing the size of the slice during training according to the number of lessons involved in the curriculum. This process is also visualized in Figure 6.
A.4 Travelling salesman problem (TSP)
We train an off-policy DQN-like agent [26] with the HRT. The graphs for the TSP are generated following the procedure introduced in [44]. On this task, as an ablation, we compare the hyperbolic networks with and without implicit (pseudo-polar) coordinates; the results are provided in Figure 8. Overall, we found that the hyperbolic transformer networks perform better when using the implicit polar coordinates.
A.5 Hyperbolic Recursive Transformer
As shown in Figure 9, the hyperbolic RT is an extension of the Transformer that ties the parameters of the self-attention layers. The self-attention layer gets the representations of the nodes of the graph coming from the encoder, and the decoder decodes the representation from the recursive self-attention layers for the prediction.

Figure 7: An illustration of how trees can be represented in hyperbolic (left) and Euclidean geometry (right) in a cone. In hyperbolic space, as the tree grows, the angles between the edges (θ) can be preserved from one level to the next. In Euclidean space, since the number of nodes in the tree grows faster than the rate at which the volume grows, angles may not be preserved (θ to α). Lines in the left diagram are straight in hyperbolic space, but appear curved in this Euclidean diagram.

Figure 9: A depiction of a hyperbolic recursive transformer on graph-structured data. The model takes the nodes of the graph as input; the encoder maps those inputs and provides them to the recursive hyperbolic self-attention block.
Figure 2: The computational graph for the self-attention mechanism of the hyperbolic Transformer. We show the different operations in the blocks; their interactions are represented by the arrows.

Figure 3: Left: Performance of the Recursive Transformer models on the Shortest Path Length Prediction task on graphs of various sizes. The black dashed line indicates chance performance. Center: Results on the Link Prediction task. Right: The histogram of the radii for models trained on graphs with 100 and 400 nodes.

Figure 5: Relationships between different representations of points used in the paper. Left: The relationship between pseudo-polar coordinates in R^n and the hyperboloid in R^{n+1}. Right: Projections relating the hyperboloid, Klein, and Poincaré models of hyperbolic space.

Figure 8: Comparison of a hyperbolic recursive transformer with and without pseudo-polar (spherical) coordinates on the travelling salesman problem.
We train several versions of the Transformer model with hyperbolic attention. They use different coordinate systems (Weierstrass or polar) or different attention functions (softmax or sigmoid). We consider two model sizes, referred to here as tiny and base. The tiny model has two layers of encoders and decoders, each with 128 units and 4 attention heads. The base model has 6 layers of encoders and decoders, each with 512 units and 8 attention heads. All hyperparameter configurations for the Euclidean versions of these models are available in the Tensor2tensor repository.

Results are shown in Table 1. We observe improvements over the Euclidean model by using hyperbolic attention, in particular when coupled with the sigmoid activation function for the attention weights. The improvements are more significant when the model capacity is restricted. In addition, our best model (with sigmoid activation function and without pseudo-polar coordinates) using the big architecture from Tensor2tensor achieves a 28.45 BLEU score, whereas Vaswani et al. [41] report 28.4 BLEU with the original version of this model.³

WMT 2014 En-De BLEU Scores

Model                                        Tiny   Base
Transformer (Vaswani et al. [41])            -      27.3
Transformer (Shaw et al. [36])               -      26.5
Transformer (Latest)                         17.3   27.1
Hyperbolic Transformer (+Sigmoid)            17.3   27.4
Hyperbolic Transformer (+Softmax, +Polar)    17.5   27.0
Hyperbolic Transformer (+Sigmoid, +Polar)    18.0   27.5

Table 1: Results for the WMT14 English to German translation task. Results are computed following the procedure in Vaswani et al. [41]. Citations indicate results taken from the literature. Latest is the result of training a new model using an unmodified version of the same code where we added hyperbolic attention (we have observed that the exact performance of the Transformer on this task varies as the Tensor2tensor codebase evolves).
¹ Alternatively we can treat d as a vector in the tangent space of H^n at the origin defining a geodesic and compute distances between points using the law of cosines; this leads to similar scaling properties.
² We use a publicly available version: https://github.com/tensorflow/tensor2tensor
³ We achieve a 28.3 BLEU score using the big Transformer with the publicly available framework.
Acknowledgements

We would like to thank Neil Rabinowitz, Chris Dyer, and Serkan Cabi for constructive comments on earlier versions of this draft. We thank Yannis Assael for helping us with the styles of the plots in this draft. We would like to thank Thomas Paine for the discussions.
References

[1] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. VQA: Visual question answering. In Proceedings of the IEEE International Conference on Computer Vision, pages 2425-2433, 2015.
[2] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations, 2014.
[3] Peter Battaglia, Razvan Pascanu, Matthew Lai, Danilo Jimenez Rezende, et al. Interaction networks for learning about objects, relations and physics. In Advances in Neural Information Processing Systems, pages 4502-4510, 2016.
[4] Ondrej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, Radu Soricut, Lucia Specia, and Aleš Tamchyna. Findings of the 2014 workshop on statistical machine translation. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 12-58, Baltimore, Maryland, USA, June 2014. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/W/W14/W14-3302.
[5] Gunnar A. Borg. Psychophysical bases of perceived exertion. Med Sci Sports Exerc, 14(5):377-381, 1982.
[6] Michael M. Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, and Pierre Vandergheynst. Geometric deep learning: going beyond Euclidean data. IEEE Signal Processing Magazine, 34(4):18-42, 2017.
[7] Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. arXiv preprint arXiv:1312.6203, 2013.
[8] Benjamin Paul Chamberlain, James Clough, and Marc Peter Deisenroth. Neural embeddings of graphs in hyperbolic space. arXiv preprint arXiv:1705.10359, 2017.
[9] Aaron Clauset, Cosma Rohilla Shalizi, and Mark E. J. Newman. Power-law distributions in empirical data. SIAM Review, 51(4):661-703, 2009.
[10] Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems, pages 3844-3852, 2016.
[11] Yan Duan, Marcin Andrychowicz, Bradly Stadie, OpenAI Jonathan Ho, Jonas Schneider, Ilya Sutskever, Pieter Abbeel, and Wojciech Zaremba. One-shot imitation learning. In Advances in Neural Information Processing Systems, pages 1087-1098, 2017.
[12] Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. arXiv preprint arXiv:1704.01212, 2017.
[13] Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471, 2016.
[14] Birger Iversen. Hyperbolic Geometry, volume 25. Cambridge University Press, 1992.
[15] Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C. Lawrence Zitnick, and Ross Girshick. CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pages 1988-1997. IEEE, 2017.
[16] Thomas Kipf, Ethan Fetaya, Kuan-Chieh Wang, Max Welling, and Richard Zemel. Neural relational inference for interacting systems. arXiv preprint arXiv:1802.04687, 2018.
[17] Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
Wwm Kool, M Welling, arXiv:1803.08475Attention solves your tsp. arXiv preprintWWM Kool and M Welling. Attention solves your tsp. arXiv preprint arXiv:1803.08475, 2018.
Curvature and temperature of complex networks. Dmitri Krioukov, Fragkiskos Papadopoulos, Amin Vahdat, Marián Boguná, Physical Review E. 80335101Dmitri Krioukov, Fragkiskos Papadopoulos, Amin Vahdat, and Marián Boguná. Curvature and temperature of complex networks. Physical Review E, 80(3):035101, 2009.
Hyperbolic geometry of complex networks. Dmitri Krioukov, Fragkiskos Papadopoulos, Maksim Kitsak, Amin Vahdat, Marián Boguná, Physical Review E. 82336106Dmitri Krioukov, Fragkiskos Papadopoulos, Maksim Kitsak, Amin Vahdat, and Marián Boguná. Hyperbolic geometry of complex networks. Physical Review E, 82(3):036106, 2010.
Gated graph sequence neural networks. Yujia Li, Daniel Tarlow, Marc Brockschmidt, Richard Zemel, arXiv:1511.05493arXiv preprintYujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. arXiv preprint arXiv:1511.05493, 2015.
Critical behavior in physics and probabilistic formal languages. W Henry, Max Lin, Tegmark, Entropy. 197299Henry W Lin and Max Tegmark. Critical behavior in physics and probabilistic formal languages. Entropy, 19(7):299, 2017.
Low distortion euclidean embeddings of trees. Nathan Linial, Avner Magen, Michael E Saks, Israel Journal of Mathematics. 1061Nathan Linial, Avner Magen, and Michael E Saks. Low distortion euclidean embeddings of trees. Israel Journal of Mathematics, 106(1):339-348, 1998.
A multi-world approach to question answering about real-world scenes based on uncertain input. Mateusz Malinowski, Mario Fritz, Advances in neural information processing systems. Mateusz Malinowski and Mario Fritz. A multi-world approach to question answering about real-world scenes based on uncertain input. In Advances in neural information processing systems, pages 1682-1690, 2014.
Rebuilding community ecology from functional traits. J Brian, Brian J Mcgill, Evan Enquist, Mark Weiher, Westoby, Trends in ecology & evolution. 214Brian J McGill, Brian J Enquist, Evan Weiher, and Mark Westoby. Rebuilding community ecology from functional traits. Trends in ecology & evolution, 21(4):178-185, 2006.
Human-level control through deep reinforcement learning. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, G Marc, Alex Bellemare, Martin Graves, Andreas K Riedmiller, Georg Fidjeland, Ostrovski, Nature. 5187540529Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015.
Power laws, pareto distributions and zipf's law. Contemporary physics. E J Mark, Newman, 46Mark EJ Newman. Power laws, pareto distributions and zipf's law. Contemporary physics, 46(5):323-351, 2005.
Poincaré embeddings for learning hierarchical representations. Maximillian Nickel, Douwe Kiela, Advances in Neural Information Processing Systems. Maximillian Nickel and Douwe Kiela. Poincaré embeddings for learning hierarchical representations. In Advances in Neural Information Processing Systems, pages 6341-6350, 2017.
Hyperbolic self-organizing maps for semantic navigation. Jorg Ontrup, Helge Ritter, Advances in neural information processing systems. Jorg Ontrup and Helge Ritter. Hyperbolic self-organizing maps for semantic navigation. In Advances in neural information processing systems, pages 1417-1424, 2002.
Greedy forwarding in dynamic scale-free networks embedded in hyperbolic metric spaces. Fragkiskos Papadopoulos, Dmitri Krioukov, Marián Boguñá, Amin Vahdat, INFOCOM, 2010 Proceedings IEEE. IEEEFragkiskos Papadopoulos, Dmitri Krioukov, Marián Boguñá, and Amin Vahdat. Greedy forwarding in dynamic scale-free networks embedded in hyperbolic metric spaces. In INFOCOM, 2010 Proceedings IEEE, pages 1-9. IEEE, 2010.
Zipf's word frequency law in natural language: A critical review and future directions. T Steven, Piantadosi, Psychonomic bulletin & review. 215Steven T Piantadosi. Zipf's word frequency law in natural language: A critical review and future directions. Psychonomic bulletin & review, 21(5):1112-1130, 2014.
Applications and explanations of zipf's law. M W David, Powers, Proceedings of the joint conferences on new methods in language processing and computational natural language learning. the joint conferences on new methods in language processing and computational natural language learningAssociation for Computational LinguisticsDavid MW Powers. Applications and explanations of zipf's law. In Proceedings of the joint conferences on new methods in language processing and computational natural language learning, pages 151-160. Association for Computational Linguistics, 1998.
Self-organizing maps on non-euclidean spaces. Helge Ritter, Kohonen maps. ElsevierHelge Ritter. Self-organizing maps on non-euclidean spaces. In Kohonen maps, pages 97-109. Elsevier, 1999.
A simple neural network module for relational reasoning. Adam Santoro, David Raposo, G David, Mateusz Barrett, Razvan Malinowski, Peter Pascanu, Tim Battaglia, Lillicrap, Advances in neural information processing systems. Adam Santoro, David Raposo, David G Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Tim Lillicrap. A simple neural network module for relational reasoning. In Advances in neural information processing systems, pages 4974-4983, 2017.
Low distortion delaunay embedding of trees in hyperbolic plane. Rik Sarkar, International Symposium on Graph Drawing. SpringerRik Sarkar. Low distortion delaunay embedding of trees in hyperbolic plane. In International Symposium on Graph Drawing, pages 355-366. Springer, 2011.
Peter Shaw, Jakob Uszkoreit, Ashish Vaswani, arXiv:1803.02155Self-attention with relative position representations. arXiv preprintPeter Shaw, Jakob Uszkoreit, and Ashish Vaswani. Self-attention with relative position representations. arXiv preprint arXiv:1803.02155, 2018.
Do neural nets learn statistical laws behind natural language?. Shuntaro Takahashi, Kumiko Tanaka-Ishii, PloS one. 1212189326Shuntaro Takahashi and Kumiko Tanaka-Ishii. Do neural nets learn statistical laws behind natural language? PloS one, 12(12):e0189326, 2017.
Hyperbolic representation learning for fast and efficient neural question answering. Yi Tay, Anh Luu, Siu Cheung Tuan, Hui, Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining. the Eleventh ACM International Conference on Web Search and Data MiningACMYi Tay, Luu Anh Tuan, and Siu Cheung Hui. Hyperbolic representation learning for fast and efficient neural question answering. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pages 583-591. ACM, 2018.
Analytic hyperbolic geometry: Mathematical foundations and applications. Abraham Albert Ungar, World ScientificAbraham Albert Ungar. Analytic hyperbolic geometry: Mathematical foundations and applications. World Scientific, 2005.
A gyrovector space approach to hyperbolic geometry. Abraham Albert Ungar, Synthesis Lectures on Mathematics and Statistics. 11Abraham Albert Ungar. A gyrovector space approach to hyperbolic geometry. Synthesis Lectures on Mathematics and Statistics, 1(1):1-194, 2008.
Attention is all you need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, Illia Polosukhin, Advances in Neural Information Processing Systems. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000-6010, 2017.
Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, Yoshua Bengio, arXiv:1710.10903Graph attention networks. arXiv preprintPetar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017.
Order-embeddings of images and language. Ivan Vendrov, Ryan Kiros, Sanja Fidler, Raquel Urtasun, ICLR. Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. Order-embeddings of images and language. In ICLR, 2016.
Pointer networks. Oriol Vinyals, Meire Fortunato, Navdeep Jaitly, Advances in Neural Information Processing Systems. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In Advances in Neural Information Processing Systems, pages 2692-2700, 2015.
Generating random hyperbolic graphs in subquadratic time. Henning Moritz Von Looz, Roman Meyerhenke, Prutkin, International Symposium on Algorithms and Computation. SpringerMoritz von Looz, Henning Meyerhenke, and Roman Prutkin. Generating random hyperbolic graphs in subquadratic time. In International Symposium on Algorithms and Computation, pages 467-478. Springer, 2015.
Xiaolong Wang, Ross Girshick, arXiv:1711.07971Abhinav Gupta, and Kaiming He. Non-local neural networks. arXiv preprintXiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. arXiv preprint arXiv:1711.07971, 2017.
Visual7w: Grounded question answering in images. Yuke Zhu, Oliver Groth, Michael Bernstein, Li Fei-Fei, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionYuke Zhu, Oliver Groth, Michael Bernstein, and Li Fei-Fei. Visual7w: Grounded question answering in images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4995-5004, 2016.
| [
"https://github.com/tensorflow/tensor2tensor"
]
|
[
"Homological mirror symmetry for singularities",
"Homological mirror symmetry for singularities"
]
| [
"Wolfgang Ebeling "
]
| []
| [
"Mathematics Subject Classification. Primary 14B05"
]
| We give a survey on results related to the Berglund-Hübsch duality of invertible polynomials and the homological mirror symmetry conjecture for singularities. | 10.4171/171-1/5 | [
"https://arxiv.org/pdf/1601.06027v1.pdf"
]
| 119,579,077 | 1601.06027 | b59cbccf81c248a1d68bbcdf82157e56b98df175 |
Homological mirror symmetry for singularities
2010
Wolfgang Ebeling
Homological mirror symmetry for singularities
Mathematics Subject Classification. Primary 14B05
14332010Homological mirror symmetrysingularitiesstrange dualityinvertible poly- nomialsderived categoriesweighted projective linesCoxeter-Dynkin diagramsgroup actionorbifold E-functionBurnside ringunimodalbimodal
We give a survey on results related to the Berglund-Hübsch duality of invertible polynomials and the homological mirror symmetry conjecture for singularities.
Introduction
V. I. Arnold observed a strange duality between the 14 exceptional unimodal singularities. When physicists came up with the idea of mirror symmetry, it was found that Arnold's strange duality can be considered as part of the mirror symmetry of K3 surfaces. In his 1994 talk at the International Congress of Mathematicians, M. Kontsevich [57] proposed an interpretation of the mirror phenomenon in mathematical terms which is now commonly known as the homological mirror symmetry conjecture. It was originally formulated for two mirror symmetric Calabi-Yau manifolds X and X ′ and states that there is an equivalence between the derived category of coherent sheaves on X and the derived Fukaya category of X ′ and vice versa.
Kontsevich also suggested that homological mirror symmetry can be extended to a more general setting by considering Landau-Ginzburg models. Many aspects of Landau-Ginzburg models are related to singularity theory. One of the early constructions of mirror symmetric manifolds was the construction of P. Berglund and T. Hübsch [5]. They considered a polynomial f of a special form, a so called invertible one, and its Berglund-Hübsch transpose f : see Sect. 3. These polynomials can be considered as potentials of Landau-Ginzburg models. This construction can also be generalized to an orbifold setting. One can formulate different versions of the homological mirror symmetry conjecture for Berglund-Hübsch pairs. It turned out that Arnold's strange duality is also part of this duality and features of Arnold's strange duality appeared as features of homological mirror symmetry. We review results related to these conjectures.
We briefly outline the contents of this survey. We start by discussing Arnold's strange duality. In Sect. 3, we review the notion of an invertible polynomial and the Berglund-Hübsch construction. In Sect. 4, we state the homological mirror symmetry conjectures for invertible polynomials. In Sect. 5, we give a survey on the evidence for these conjectures. More precisely, we give a generalization of Arnold's strange duality. In Sect. 6, we show that the mirror symmetry for Berglund-Hübsch dual pairs also holds on the level of suitably defined Hodge numbers. For this purpose we discuss the notion of an orbifold E-function of a polynomial with an isolated singularity at the origin and we consider these functions for dual pairs. Another feature of Arnold's strange duality was discovered by K. Saito and is known as Saito duality. We discuss how this duality generalizes to the Berglund-Hübsch duality. In Sect. 8, we compile the more detailed information one has about specific classes of singularities, like the simple, unimodal and bimodal singularities. Finally, we derive in Sect. 9 the extension of Arnold's strange duality involving complete intersection singularities [38] from the Berglund-Hübsch construction.
Arnold's strange duality
According to Arnold's classification of singularities [1], there are 14 exceptional unimodal singularities. Setting the modulus equal to zero, they can be given by equations f (x, y, z) = 0 where the polynomial f is given in Table 1. We use the name of Arnold for the corresponding singularity.
We associate Dolgachev and Gabrielov numbers to these singularities as follows.
Consider the quotient stack
C f := f −1 (0)\{0} /C * .
This is a Deligne-Mumford stack and can be regarded as a smooth projective line P 1 with three isotropic points of orders α 1 , α 2 , α 3 . The numbers (α 1 , α 2 , α 3 ) are called the Dolgachev numbers of f [11,12].
The manifold V f := f −1 (1) is called the Milnor fibre of f . Since f has an isolated singularity at the origin, the only interesting homology group is H 2 (V f , Z). We denote by , the intersection form on H 2 (V f , Z) and by H = (H 2 (V f , Z), , ) the Milnor lattice. A. M. Gabrielov [43] has shown that there exists a weakly distinguished basis of vanishing cycles of H with a Coxeter-Dynkin diagram of the form of Fig. 1. The author [16] (see also [19]) has shown that one can even find a distinguished basis (δ 1 , δ 1 1 , . . . δ 1 γ1−1 , δ 2 1 , . . . , δ 2 γ2−1 , δ 3 1 , . . . , δ 3 γ3−1 , δ 2 , δ 3 )
with this Coxeter-Dynkin diagram. (For the notions of a distinguished and weakly distinguished basis of vanishing cycles see e.g. [23]). The numbers γ 1 , γ 2 , γ 3 are called the Gabrielov numbers of the singularity. Here each vertex represents a sphere of self-intersection number −2, two vertices connected by a single solid edge have intersection number 1, two vertices connected by a double broken line have intersection number −2 and vertices which are not connected have intersection number 0. Arnold [1] has now observed: There exists an involution X → X ∨ (indicated in Table 1) on the set of the 14 exceptional unimodal singularities, such that the Figure 1. The graph Sγ 1 ,γ 2 ,γ 3
• δ3 • ✤ ✤ ✤ ✤ ✤ ✤ ❂ ❂ ❂ ❂ ❂ ❂ ❂ ❂ ✏ ✏ ✏ ✏ ✏ ✏ ✏ ✏ ✏ ✏ ✏ ✏ ✏ ✏ δ2 • δ 2 1 · · · • ✁ ✁ ✁ ✁ ✁ ✁ ✁ ✁ δ 2 γ 2 −1 • ✁ ✁ ✁ ✁ ✁ ✁ ✁ ✁ δ1 • δ 3 γ 3 −1 · · · • δ 3 1 • ⑤ ⑤ ⑤ ⑤ ⑤ ⑤ ⑤ ⑤ δ 1 γ 1 −1 · · · ⑤ ⑤ ⑤ ⑤ ⑤ ⑤ ⑤ ⑤ • δ 1 1
Dolgachev numbers of X are the Gabrielov numbers of X ∨ and the Gabrielov numbers of X are the Dolgachev numbers of X ∨ . This is called Arnold's strange duality.
Consider f as a function f : (C 3 , 0) → (C, 0). A characteristic homeomorphism of the Milnor fibration of f induces an automorphism c :
H 2 (V f , Z) → H 2 (V f , Z)
called the (classical) monodromy operator of f . It is the Coxeter element corresponding to a distinguished basis {δ 1 , . . . , δ µ } of vanishing cycles of f . By this we mean the following: Each vanishing cycle δ i defines a reflection
s δi : H 2 (V f , Z) → H 2 (V f , Z) x → s δi (x) := x − 2 x,δi δi,δi δ i Then c = s δ1 • s δ2 • · · · • s δµ .
It is a well known theorem (see e.g. [6]) that the eigenvalues of c are roots of unity. This means that the characteristic polynomial φ(λ) = det(λI − c) of c is a monic polynomial the roots of which are roots of unity. Moreover, since f is weighted homogeneous, the operator c has finite order h. Such a polynomial can be written in the form
φ(λ) = m|h (λ m − 1) χm for χ m ∈ Z.
K. Saito [65,66] defines a dual polynomial φ ∨ (λ) to φ(λ):
φ ∨ (λ) = k|h (λ k − 1) −χ h/k .
He obtains the following result. (Saito). If φ(λ) is the characteristic polynomial of the monodromy of an exceptional unimodal singularity X, then φ ∨ (λ) is the corresponding polynomial of the dual singularity X ∨ .
The author and C.T.C. Wall [38] discovered an extension of Arnold's strange duality embracing on one hand series of bimodal singularities and on the other, isolated complete intersection singularities (ICIS) in C 4 . The duals of the complete intersection singularities are not themselves singularities, but are virtual (k = −1) cases of series (e.g. W 1,k : k ≥ 0) of bimodal singularities. They associated to these Dolgachev and Gabrielov numbers and showed that all the features of Arnold's strange duality continue to hold. Moreover, in [20] the author showed that also Saito's duality holds for this duality. We come back to this extension in Sect. 9.
Invertible polynomials
We recall some general definitions about invertible polynomials.
Let f (x 1 , . . . , x n ) be a weighted homogeneous polynomial, namely, a polynomial with the property that there are positive integers w 1 , . . . , w n and d such that f (λ w1 x 1 , . . . , λ wn x n ) = λ d f (x 1 , . . . , x n ) for λ ∈ C * . We call (w 1 , . . . , w n ; d) a system of weights. (1) the number of variables (= n) coincides with the number of monomials in the polynomial f (x 1 , . . . x n ), namely,
f (x 1 , . . . , x n ) = n i=1 a i n j=1
x Eij j for some coefficients a i ∈ C * and non-negative integers E ij for i, j = 1, . . . , n,
(2) the system of weights (w 1 , . . . , w n ; d) of f is uniquely determined by the polynomial f (x 1 , . . . , x n ) up to a constant factor gcd(w 1 , . . . , w n ; d), namely, the matrix E := (E ij ) is invertible over Q.
An invertible polynomial is called non-degenerate, if it has an isolated singularity at the origin.
Without loss of generality we shall assume that det E > 0. An invertible polynomial has a canonical system of weights W f = (w 1 , . . . , w n ; d) given by the unique solution of the equation
E w 1 . . . w n = det(E) 1 . . . 1 , d := det(E).
This system of weights is in general non-reduced, i.e. in general G h := {(λ 1 , . . . , λ n )) ∈ (C * ) n | h(λ 1 x 1 , . . . , λ n x n ) = h(x 1 , . . . , x n )} .
c f := gcd(w 1 , . . . , w n , d) > 1.Definition 3.3. Let f (x 1 , . . . , x n ) = n i=1 a i n j=1 x
Eij j be an invertible polynomial. Consider the free abelian group ⊕ n i=1 Z x i ⊕ Z f generated by the symbols x i for the variables x i for i = 1, . . . , n and the symbol f for the polynomial f . The maximal grading L f of the invertible polynomial f is the abelian group defined by the quotient
L f := n i=1 Z x i ⊕ Z f /I f ,
where I f is the subgroup generated by the elements where CL f denotes the group ring of L f . Equivalently,
f − n j=1 E ij x j , i = 1, . . . , n.G f = (λ 1 , . . . , λ n ) ∈ (C * ) n n j=1 λ E1j j = · · · = n j=1 λ Enj j .
We have
G f = (λ 1 , . . . , λ n ) ∈ G f n j=1 λ E1j j = · · · = n j=1 λ Enj j = 1 .
Let f (x 1 , . . . , x n ) be an invertible polynomial and W f = (w 1 , . . . , w n ; d) be the canonical system of weights associated to f . Set
q i := w i d , i = 1, . . . , n.
Note that G f always contains the exponential grading operator
g 0 := (e[q 1 ], . . . , e[q n ]),
where we use the notation e[−] := exp(2π √ −1 · −). Let G 0 be the subgroup of G f generated by g 0 . One has (cf. [34]) Definition 3.6 (Berglund, Hübsch). Following [5], the Berglund-Hübsch transpose of f (x 1 , . . . , x n ) of f is defined by
[G f : G 0 ] = c f .f (x 1 , . . . , x n ) = n i=1 a i n j=1 x Eji j .
Definition 3.7 (Berglund, Henningson). By [4], for a subgroup G ⊂ G f its dual group G is defined by G := Hom(G f /G, C * ).
One has the following easy facts: [4]. By [58] (see also [27,
• G f = {e} • H ⊂ G ⇒ G ⊂ H • H = H Note that Hom(G f /G, C * ) is isomorphic to G f , seeLemma 1]), we have G 0 = SL n (Z) ∩ G f . Moreover, by [34, Proposition 3.1], we have | G 0 | = c f .
For a subgroup G ⊂ G f , let G be the subgroup of G f defined by the following commutative diagram of short exact sequences
{1} G G G G G _ G G G _ C * G G {1} {1} G G G f G G G f G G C * G G {1} .
Homological mirror symmetry
There are several versions of the homological mirror symmetry conjecture for singularities. Let f (x, y, z) be a polynomial which has an isolated singularity at the origin. A distinguished basis of vanishing cycles in the Milnor fiber of f can be categorified to an A ∞ -category Fuk → (f ) called the directed Fukaya category. Any two distinguished bases of vanishing cycles are connected by a sequence of Gabrielov transformations [42]. The set of objects of Fuk → (f ) is a distinguished basis of (Lagrangian) vanishing cycles and the spaces of morphisms are Lagrangian intersection Floer complexes. It can be shown that Gabrielov transformations correspond to mutations of the category ( [68], see also e.g. [53]). Since different choices of distinguished bases are related by mutations, the derived category D b Fuk → (f ) is independent of this choice and is therefore, as a triangulated category, an invariant of the polynomial f . Note that the triangulated category D b Fuk → (f ) has a full exceptional collection.
On the other hand, let f (x, y, z) be a weighted homogeneous polynomial. Then one can consider as an analogue of the bounded derived category of coherent sheaves on a smooth proper algebraic variety the following triangulated category. Denote by S the polynomial ring C[x, y, z]. Let R f := S/(f ) be the coordinate ring and L f the maximal grading of f . D. Orlov [63] considered the triangulated category of a maximally-graded singularity D [7]) defined as the quotient of the bounded derived category of the category of finitely generated L f -graded R f -modules by the full triangulated subcategory corresponding to finitely generated projective L f -graded R f -modules. It Figure 2. The quiver Tp,q,r is equivalent to the stable category of L f -graded maximal Cohen-Macaulay modules over R f and to the stable homotopy category HMF [67]).
L f Sg (R f ) (introduced before by R.- O. Buchweitz• ✤ ✤ ✤ ✤ ✤ ✤ ❂ ❂ ❂ ❂ ❂ ❂ ❂ ❂ r r ✏ ✏ ✏ ✏ ✏ ✏ ✏ ✏ ✏ ✏ ✏ ✏ ✏ ✏ δ2 • G G δ 2 1 · · · G G • o o d d ✁ ✁ ✁ ✁ ✁ ✁ ✁ ✁ δ 2 q−1 • Ð Ð ✁ ✁ ✁ ✁ ✁ ✁ ✁ ✁ G G δ1 • o o δ 3 r−1 · · · o o • δ 3 1 • b b ⑤ ⑤ ⑤ ⑤ ⑤ ⑤ ⑤ ⑤ δ 1 p−1 · · · b b ⑤ ⑤ ⑤ ⑤ ⑤ ⑤ ⑤ ⑤ • δ 1 1L f S (f ) of L f -graded matrix factorizations of f (see also
Moreover, one can consider the quotient stack
C f = f −1 (0)\{0} G f as in Sect.
2. This is a smooth projective line P 1 with at most three isotropic points of orders p, q, r [33, Theorem 3]. It corresponds to a weighted projective line P 1 p,q,r with weights p, q, r [44]. Let T p,q,r be the quiver of Fig. 2 where the double dashed line corresponds to two relations as follows. Let β 1 , β 2 and β 3 be the path from δ 1 to δ 2 via δ 1 p−1 , δ 2 q−1 and δ 3 r−1 respectively. Then we consider the relations β 2 + β 3 = 0 and β 1 = β 3 . They generate an ideal I in the path algebra C T p,q,r of the quiver. We consider the category mod-C T p,q,r /I of finitely generated right modules over the factor algebra C T p,q,r /I and its bounded derived category D b (mod-C T p,q,r /I).
Let D b coh(P 1 p,q,r ) be the bounded derived category of the category of coherent sheaves on P 1 p,q,r . W. Geigle and H. Lenzing ([44], for the special form of the quiver see also [60, 3.9]) proved the following theorem: Lenzing). There exists a triangulated equivalence
Theorem 4.1 (Geigle,D b coh(P 1 p,q,r ) ≃ D b (mod-C T p,q,r /I).
One has the following L f -graded generalization of Orlov's semi-orthogonal decomposition theorem [63, Theorem 2.5] (see also [79]):
Theorem 4.2 (Orlov). (1) If a f < 0, one has the semi-orthogonal decomposi- tion D b coh(P 1 p,q,r ) ≃ D L f Sg (R f ), A(0), . . . , A(−a f − 1) , where A(i) := O P 1 p,q,r ( l) deg( l)=i . (2) If a f = 0, D b coh(P 1 p,q,r ) ≃ D L f Sg (R f ). (3) If a f > 0, one has the semi-orthogonal decomposition D L f Sg (R f ) ≃ D b coh(P 1 p,q,r ), K(0), . . . , K(a f − 1) , where K(i) := (R f /m f )( l) deg( l)=i and m f is the maximal ideal in R f .
On the other hand, consider a polynomial
x p + y q + z r + axyz, for some a ∈ C, a = 0.
This is called a polynomial of type T p,q,r . For a triple (a, b, c) of positive integers we define ∆(a, b, c) := abc − bc − ac − ab.
If ∆(p, q, r) > 0 then this polynomial has a cusp singularity at the origin. If ∆(p, q, r) = 0 and a is general, then this polynomial has a simple elliptic singularity at the origin. If ∆(p, q, r) < 0 then there are other singularities outside the origin and we consider this polynomial as a global polynomial. A distinguished basis of vanishing cycles of such a polynomial (in the case ∆(p, q, r) < 0 taking the other singularities into account as well) is given by
(δ 1 , δ 1 1 , . . . δ 1 p−1 , δ 2 1 , . . . , δ 2 q−1 , δ 3 1 , . . . , δ 3 r−1 , δ 2 )
with a Coxeter-Dynkin diagram corresponding to the undirected graph T p,q,r underlying the quiver in Fig. 2.
It is known that the Berglund-Hübsch duality for some polynomials with nice properties gives the systematic construction of mirror pairs of Calabi-Yau manifolds and induces the homological mirror symmetry. Therefore, we may expect that the homological mirror symmetry can also be categorified to the following: [80,79]). Let f (x, y, z) be an invertible polynomial.
Conjecture 4.3 (Takahashi
(1) There should exist a triangulated equivalence
D L f Sg (R f ) ≃ D b Fuk → ( f ).(1)
(2) There should exist a triangulated equivalence
D b coh(P 1 p,q,r ) ≃ D b Fuk → (T p,q,r ).(2)
(3) These triangulated equivalences should be compatible in the following sense: There should exist a diagram
D L f Sg (R f ) ∼ G G y y D b Fuk → ( f ) y y D b coh(P 1 p,q,r ) ∼ G G D b Fuk → (T p,q,r ) where D L f
Sg (R f ) and D b coh(P 1 p,q,r ) are related by Theorem 4.2 and D b Fuk → ( f ) and D b Fuk → (T p,q,r ) should also be related by semi-orthogonal decomposition.
A proof of the first part of this conjecture for the simple (ADE) singularities can be derived from a theorem of H. Kajiura [71] (see also [70]). Moreover, it was proved by K. Ueda [83] for simple elliptic singularities. The first part of this conjecture can also be stated for invertible polynomials in any number of variables. In this form, it was proved by M. Futaki and K.Ueda for Brieskorn-Pham singularities [40] and for singularities of type D [41]. In all these cases, the polynomial f is self-dual, i.e. f = f . The second part of this conjecture was proved for the case r = 1 by P. Seidel [69], D. van Straten [78] and D. Auroux, L. Katzarkov and D. Orlov [3], for the case r = 2 by A. Takahashi [81] and in general by A. Keating [51]. Now consider an invertible polynomial f (x, y, z) and a finite group G of diagonal symmetries of f . We assume that G contains the group G 0 generated by the exponential grading operator g 0 . The orbifold curve
C (f,G) := f −1 (0)\{0} G is mirror dual to the following data: A function F : U → C, defined on a suitably chosen submanifold U of C 3 , given by F (x, y, z) = x γ ′ 1 + y γ ′ 2 + z γ ′ 3 − xyz.
The group G leaves F invariant and we can consider a crepant resolution Y → U/ G given by the G-Hilbert scheme and the proper transform
X ⊂ Y of X = F −1 (0)/ G ⊂ U/ G (cf. [72]).
Let HMF G S (f ) be the stable homotopy category of G-graded matrix factorizations of f . Let D b CohC (f,G) be the derived category of the category of coherent sheaves on C (f,G) .
We arrive at the following generalization of Conjecture 4.3 (cf. [34]): Takahashi). There should exist triangulated equivalences
Conjecture 4.4 (E.,HMF G S (f ) ∼ G G y y D b Fuk → ( f )// G y y D b CohC (f,G) ∼ G G D b Fuk → (F )// G where the two lines are related by semi-orthogonal decompositions, F (x, y, z) = x γ ′ 1 + y γ ′ 2 + z γ ′ 3 − xyz is right equivalent to f (x, y, z) − xyz,
and −// G means the smallest triangulated category containing the orbit category −/ G (cf. [2,8] for orbit categories; see also [52]).
Strange duality
We now give some evidence for the conjectures stated in the last section.
Let f (x 1 , . . . , x n ) be an invertible polynomial and G ⊂ G f a subgroup of the maximal group of symmetries. We shall investigate the correspondence
(f, G) ←→ ( f , G).
First let n = 3, f (x, y, z) be a non-degenerate invertible polynomial such that f (x, y, z) is non-degenerate as well and let G = G f . Then the correspondence
(f, G f ) ←→ ( f , {e})
was considered in [33]. We defined Dolgachev numbers for a pair (f, G f ) and Gabrielov numbers for a pair (f, {e}) as follows.
The quotient stack
C (f,G f ) := f −1 (0)\{0} G f
is a smooth projective line P 1 with at most three isotropic points of orders α 1 , α 2 , α 3 (see Sect. 4).
Definition 5.1. The numbers (α 1 , α 2 , α 3 ) are called the Dolgachev numbers of the pair (f, G f ) and the tuple is denoted by
A (f,G f ) .
On the other hand, consider the deformation F (x, y, z) := f (x, y, z) − xyz of f . By [33,Theorem 10], if ∆(γ 1 , γ 2 , γ 3 ) > 0 there exists a holomorphic coordinate change so that this polynomial becomes a polynomial of type T γ1,γ2,γ3 (for the definition see Sect. 4). In the cases ∆(γ 1 , γ 2 , γ 3 ) = 0 and ∆(γ 1 , γ 2 , γ 3 ) < 0 there is also a relation to a polynomial of type T γ1,γ2,γ3 , see [33,Theorem 10]. By [33,Theorem 13] we have the following theorem: Takahashi). Let f (x, y, z) be a non-degenerate invertible polynomial such that f (x, y, z) is non-degenerate as well. Then we have
Theorem 5.3 (E.,A (f,G f ) = Γ ( f ,{e}) , A ( f ,G f ) = Γ (f,{e}) .
Namely, the Dolgachev numbers
A (f,G f ) for the pair (f, G f ) coincide with the Gabrielov numbers Γ ( f ,{e}) for the pair ( f , {e}) and the Dolgachev numbers A ( f ,G f ) for the pair ( f , G f ) coincide with the Gabrielov numbers Γ (f,{e}) for the pair (f, {e}).
The 14 exceptional unimodal singularities can be given by non-degenerate invertible polynomials f (x, y, z) with G f = G 0 . These are the polynomials indicated in Table 1. The Dolgachev and Gabrielov numbers coincide with the corresponding numbers indicated in this table. Therefore we obtain Arnold's strange duality as a special case of this theorem. We come back to this duality in Sect. 8.
More generally, let f (x, y, z) be a non-degenerate invertible polynomial such that f (x, y, z) is non-degenerate as well, but now consider a subgroup G with [34] we defined Dolgachev numbers for the pair (f, G) with G 0 ⊂ G and Gabrielov numbers for a pair (f, G) with G ⊂ SL n (Z) as follows.
G 0 ⊂ G ⊂ G f . Then {e} ⊂ G ⊂ SL n (Z) ∩ G f . In
The quotient stack
C (f,G) := f −1 (0)\{0} G
can be regarded as a smooth projective curve of genus g (f,G) with a finite number of isotropic points.
f, G f ) as follows. Let A (f,G f ) = (α ′ 1 , α ′ 2 , α ′ 3 ) be the Dolgachev numbers of the pair (f, G f ).
For positive integers u and v, by u * v we denote v copies of the integer u. Takahashi). Let H i ⊂ G f be the minimal subgroup containing G and the isotropy group of the point p i , i = 1, 2, 3. Then we have the following formula for the Dolgachev numbers α 1 , . . . , α r of the pair (f, G):
Theorem 5.5 (E.,(α 1 , . . . , α r ) = α ′ i |H i /G| * |G f /H i |, i = 1, 2, 3 ,
where we omit numbers which are equal to one on the right-hand side.
We define the stringy Euler number of the orbifold curve C (f,G) by
e st (C (f,G) ) := 2 − 2g (f,G) + r i=1 (α i − 1). Now consider a pair (f, G) with G ⊂ SL 3 (Z). Definition 5.6. Let Γ (f,{e}) = (γ ′ 1 , γ ′ 2 , γ ′ 3 )
be the Gabrielov numbers of the pair (f, {e}) and let K i ⊂ G be the maximal subgroup of G fixing the coordinate x i , i = 1, 2, 3. Then the Gabrielov numbers of the pair (f, G) are the numbers γ 1 , . . . , γ s defined by
(γ 1 , . . . , γ s ) = γ ′ i |G/K i | * |K i |, i = 1, 2, 3 ,
where we omit numbers which are equal to one on the right-hand side. We denote this tuple of numbers by Γ (f,G) .
In [36], we gave a geometric definition of these numbers as lengths of arms of a certain Coxeter-Dynkin diagram: Let U be a suitably chosen submanifold of C 3 . We consider a crepant resolution Y → U/G and the preimage Z of the image of the Milnor fibre of the cusp singularity T γ ′ 1 ,γ ′ 2 ,γ ′ 3 under the natural projection U → U/G. Using the McKay correspondence, we constructed a basis of the relative homology group H 3 (Y, Z; Q) with a Coxeter-Dynkin diagram where one can read off the Gabrielov numbers.
Let G be a finite group acting linearly on C n . For an element g ∈ G, its age [47] is defined by age (g) := n i=1 α i , where in a certain basis in C n one has g = diag (e[α 1 ], . . . , e[α n ]) with 0 ≤ α i < 1. Now let G ⊂ SL n (Z). Then the age of an element g ∈ G is an integer. Define j G := {g ∈ G | age(g) = 1, g only fixes the origin}.
Let F be a polynomial of type T γ ′ 1 ,γ ′ 2 ,γ ′ 3 with the Gabrielov numbers (γ ′ 1 , γ ′ 2 , γ ′ 3 ) for the pair (f, G), Define the G-equivariant Milnor number of F by µ (F,G) := 2 − 2j G + s i=1 (γ ′ i − 1).G 0 ⊂ G ⊂ G f . Then we have g (f,G) = j G , A (f,G) = Γ ( f, G) , e st (C (f,G) ) = µ (F, G) , where F is a polynomial of type T γ ′ 1 ,γ ′ 2 ,γ ′ 3 with the Gabrielov numbers (γ ′ 1 , γ ′ 2 , γ ′ 3 ) for the pair ( f , {e}).
Orbifold E-functions
We now show that the mirror symmetry for Berglund-Hübsch dual pairs also holds on the level of suitably defined Hodge numbers. Therefore we discuss the notion of an orbifold E-function for a polynomial with an isolated singularity at the origin.
Let f (x 1 , . . . , x n ) be a polynomial with f (0) = 0 and with an isolated singularity at 0. We regard the polynomial f as a holomorphic map f : V → C where V is a suitably chosen neighbourhood of 0 ∈ C n so that the fibration f has good technical properties.
Consider the Milnor fibre V f := {x ∈ V | f (x) = 1} of the fibration f : X → C. J. H. M. Steenbrink [75] constructed a canonical mixed Hodge structure on the vanishing cohomology H n−1 (V f , C) with an automorphism c given by the Milnor monodromy.
We can naturally associate a bi-graded vector space to a mixed Hodge structure with an automorphism. Consider the Jordan decomposition c = c ss · c unip of c where c ss and c unip denote the semi-simple part and unipotent part respectively. For λ ∈ C, let
H n−1 (V f , C) λ := Ker(c ss − λ · id : H n−1 (V f , C) −→ H n−1 (V f , C)).(3)
Denote by F • the Hodge filtration of the mixed Hodge structure. (2) If p + q = n and p ∈ Z, then
H p,q f := Gr p F • H n−1 (V f , C) 1 .
(3) If p + q = n and p / ∈ Z, then
H p,q f := Gr [p] F • H n−1 (V f , C) e 2π √ −1p ,
where [p] is the largest integer less than p.
Let G be a subgroup of the maximal group G f of diagonal symmetries of f . For g ∈ G, we denote by Fix g := {x ∈ C n | g · x = x} the fixed locus of g, by n g := dim Fix g its dimension and by f g := f | Fix g the restriction of f to the fixed locus of g. Note that the function f g has an isolated singularity at the origin [35,Proposition 5].
We shall use the fact that H f g admits a natural G-action by restricting the G-action on C n to Fix g (which is well-defined since G acts diagonally on C n ).
To the pair (f, G) we can associate the following Q × Q-graded super vector space: (H f g ) G (−age(g), −age(g)),
H f,G,1 := g∈G; ng ≡1 (mod 2) (H f g ) G (−age(g), −age(g)),
where (H f g ) G denotes the G-invariant subspace of H f g . is
E(f, G)(t,t) = p,q∈Q dim C (H f,G,0 ) p,q − dim C (H f,G,1 ) p,q · t p− n 2t q− n 2 .(6)
In general, we may have both (H f,G,0 ) p,q = 0 and (H f,G,1 ) p,q = 0 for some p, q ∈ Q (see [29]). However we have the following proposition (see [29,Proposition 3]): Let f (x 1 , . . . , x n ) be a non-degenerate invertible polynomial and G a subgroup of G f . Assume G ⊂ SL(n; C) or G ⊃ G 0 . If (H f,G,i ) p,q = 0, then (H f,G,i+1 ) p,q = 0 for all p, q ∈ Q and i ∈ Z/2Z. Definition 6.5. Let f (x 1 , . . . , x n ) be a non-degenerate invertible polynomial and G a subgroup of G f . Assume G ⊂ SL(n; C) or G ⊃ G 0 . The Hodge numbers for the pair (f, G) are
Proposition 6.4.h p,q (f, G) := dim C (H f,G,0 ) p,q + dim C (H f,G,1 ) p,q , p, q ∈ Q.
Proposition 6.6. Let f (x 1 , . . . , x n ) be a non-degenerate invertible polynomial and G a subgroup of G f . The E-function is given by
E(f, G)(t,t) := p,q∈Q (−1) p+q h p,q (f, G) · t p− n 2t q− n 2 , if G ⊂ SL n (C), p,q∈Q (−1) −p+q h p,q (f, G) · t p− n 2t q− n 2 , if G 0 ⊂ G.E(f, G)(t,t) := p,q∈Q (−1) p+q h p,q (f, G) · t p− n 2t q− n 2 .
The E-function of the pair (f, G) is the generating function of the exponents of the pair (f, G). An exponent of the pair (f, G) is a rational number q with h p,q (f, G) = 0. The set of exponents of the pair (f, G) is the multi-set of exponents
{q * h p,q (f, G) | p, q ∈ Q, h p,q (f, G) = 0} ,
where by u * v we denote v copies of the rational number u. It is almost clear that the mean of the set of exponents of (f, G) is n/2, namely, we have p,q∈Q
(−1) −p+q q − n 2 h p,q (f, G) = 0.
It is natural to ask what is the variance of the set of exponents of (f, G) defined by
Var (f,G) := p,q∈Q (−1) −p+q q − n 2 2 h p,q (f, G).
In [35] we have proved: We have (see [34,Theorem 5.12]):
Theorem 6.10 (E., Takahashi). Let F (x 1 , x 2 , x 3 ) = x γ ′ 1 1 + x γ ′ 2 2 + x γ ′ 3 3 − x 1 x 2 x 3 and G be a subgroup of G F . Then φ(F, G)(t) = (t − 1) 2−2jG s i=1 t γi − 1 t − 1 ,
where γ 1 , . . . , γ s are the Gabrielov numbers defined in Sect. 5.
The characteristic polynomial φ(f, G)(t) agrees with the reduced orbifold zeta function ζ orb f,G (t) defined in [27]. Its degree is the reduced orbifold Euler characteristic χ(V f , G) of (V f , G) (see below).
From physical reasons [4], one expects that there is the following relation between the orbifold E-functions of dual pairs. This was proved in [29]. Using Proposition 6.6, we can derive from this theorem the mirror symmetry of dual pairs on the level of Hodge numbers: Corollary 6.12. Let f (x 1 , . . . , x n ) be a non-degenerate invertible polynomial and G a subgroup of G f ∩ SL(n; C). Then for all p, q ∈ Q, one has h p,q (f, G) = h n−p,q ( f , G).
As another corollary, we get the main result of [27]:
Corollary 6.13. One has ζ orb f , G (t) = ζ orb f,G (t) (−1) n .
From this we derive the main result of [26]:
Corollary 6.14. One has
χ(V f , G) = (−1) n χ(V f , G) .
Note that the latter two results were even proven without the assumption of non-degeneracy.
We also obtain as a corollary from Theorem 6.8 and Theorem 6.11:
Saito duality
In this section we consider a generalization of Saito's duality (Theorem 2.1) to the Berglund-Hübsch duality. For this we recall the notion of the Burnside ring of a finite group (see [55]). Let G be a finite group. A G-set is a set with an action of the group G. Let V be a "good" topological space, say, a union of cells in a finite CWcomplex or a quasi-projective complex or real analytic variety. Let G be a finite group acting on V . For x ∈ V , denote by G x the isotropy subgroup of x. The equivariant Euler characteristic was defined in [84,82].
Definition 7.2. The equivariant Euler characteristic of the G-space V is defined by χ G (V ) := [H]∈Conjsub G χ(V ([H]) /G)[G/H] ∈ B(G),
where Conjsub G denotes the set of conjugacy classes of subgroups of G and χ(Z) denotes the usual Euler characteristic of the topological space Z.
There is also the notion of an orbifold Euler characteristic ( [9,10], see also [46] and the references therein): Definition 7.3. The orbifold Euler characteristic of the G-space V is defined by
χ orb (V, G) = 1 |G| (g,h):gh=hg χ(X g,h )
where g, h is the subgroup of G generated by g and h.
There is a map r orb
r orb G (χ G (V )) = χ orb (V, G). Definition 7.4. • The reduced equivariant Euler characteristic of the G-space V is χ G (V ) := χ G (V ) − 1.
• The reduced orbifold Euler characteristic of the G-space V is
χ orb (V, G) = χ(V, G) − |G|.
We have r orb G (χ G (V )) = χ orb (V, G). For a group G let G * := Hom(G, C * ) be its group of characters. In [25] it was proved:
Theorem 7.6 (E., Gusein-Zade). Let f (x 1 , . . . , x n ) be an invertible polynomial. Then one has
χ G f (V f ) = (−1) n D G f (χ G f (V f ))
In the special case when the groups of diagonal symmetries of the dual polynomials are cyclic and are generated by the monodromy transformations, this yields the original Saito duality (Theorem 2.1). Moreover, it is shown in [25] that the relation between "geometric roots" of the zeta functions for some Berglund-Hübsch dual invertible polynomials described in [24] is a special case of Theorem 7.6. One can also derive Corollary 6.14 from this theorem.
In order to derive Corollary 6.13 from a similar result, we considered in [28] an enhancement of the Burnside ring:
Definition 7.7. A finite enhanced G-set is a triple (X, h, α), where: 1) X is a finite G-set;
2) h is a one-to-one G-equivariant map X → X;
3) α associates to each point x ∈ X a one-dimensional (complex) representation α x of the isotropy subgroup G x = {a ∈ G : ax = a} of the point x so that:
(a) for a ∈ G one has α ax (b) = α x (a −1 ba), where b ∈ G ax = aG x a −1 ; (b) α h(x) (b) = α x (b).
Definition 7.8. The enhanced Burnside ring B(G) is the Grothendieck group of finite enhanced G-sets.
Let V be a complex manifold with a complex analytic action of a finite group G. For a point x ∈ V we consider the one-dimensional representation α x : G x → C * defined by α x (g) = e[age(g)]. Let ϕ : V → V be a G-equivariant map with α ϕ(x) = α x for all x ∈ V . In [28] we defined an enhanced Euler characteristic The factor group G/H is canonically isomorphic to H * = Hom( H, C * ) and the group of characters H * = Hom(H, C * ) is canonically isomorphic to G * / H. In this way, the element h ∈ G/H defines a 1-dimensional representation h : H → C * and the representation α : H → C * defines an element α ∈ G * / H. Definition 7.9. The enhanced equivariant Saito duality between B 1 (G) and B 1 (G * ) is the map D G :
χ G (V, ϕ) ∈ B(G)B 1 (G) → B 1 (G * ) [G/H, h, α] → [G * / H, α, h]
In [28] we proved:
χ G f (V f , h f ) = (−1) n D G f ( χ G f (V f , h f ))
It is shown in [28] that one can derive Corollary 6.13 from this theorem.
Examples
We first consider Arnold's classification of singularities [1]. We first have the simple singularities which are also called the ADE singularities. They are given by invertible polynomials f with a f < 0. These polynomials together with the corresponding Dolgachev and Gabrielov numbers are given in [33, Table 5]. In Table 2 we indicate for each ADE singularity a non-degenerate The corresponding polynomials f are self-dual and the Dolgachev and Gabrielov numbers of f coincide. Moreover, the surface singularity given by f = 0 has a minimal resolution with an exceptional configuration E whose dual graph is given by Fig. 3. Here E and E i 1 , . . . , E i αi−1 for i = 1, 2, 3 are smooth rational curves of self-intersection number −2 and α 1 , α 2 , α 3 are the Dolgachev numbers of f . In Figure 3. The dual graph of E this case this graph coincides with the Coxeter-Dynkin diagram corresponding to a distinguished basis of vanishing cycles of this singularity. The graph is a classical Coxeter-Dynkin diagram of a root system of type A µ , D µ , E 6 , E 7 or E 8 . This is the reason why these singularities are called the ADE singularities.
Name f (x, y, z) α 1 , α 2 , α 3 γ 1 , γ 2 , γ 3 A k xy + y k z + zx, k ≥ 1 k, 1, 1 1, 1, k D 2k+1 x 2 + xy k + yz 2 , k ≥ 2 2, 2, 2k − 1 2, 2, 2k − 1 D 2k+2 x 2 + y 2 z + yz k+1 , k ≥ 1 2, 2, 2k 2, 2, 2k E 6 x 3 + y 2 + yz 2 3, 2, 3 3, 2, 3 E 7 x 2 + y 3 + yz 3 2, 3, 4 2, 3, 4 E 8 x 2 + y 3 + z 5 2, 3, 5 2, 3, 5• E 2 1 · · · • E 2 α 2 −1 • ✁ ✁ ✁ ✁ ✁ ✁ ✁ ✁ E • E 3 α 3 −1 · · · • E 3 1 • ⑤ ⑤ ⑤ ⑤ ⑤ ⑤ ⑤ ⑤ E 1 α 1 −1 · · · ⑤ ⑤ ⑤ ⑤ ⑤ ⑤ ⑤ ⑤ • E 1 1
These singularities have many characterizations. The ADE singularities are just the quotients of C 2 by a finite subgroup Γ ⊂ SL(2; C). Let ρ 0 , ρ 1 , . . . , ρ ℓ be the equivalence classes of irreducible finite dimen-sional complex representations of Γ where ρ 0 is the class of the trivial representation. J. McKay [62] has observed that if ρ : Γ → SL(2; C) is the given 2-dimensional representation of Γ then the (ℓ + 1) × (ℓ + 1)-matrix B = (b ij ), defined by decomposing the tensor products ρ j ⊗ ρ = i b ij ρ i into irreducible components, satisfies B = 2I − C where C is the affine Cartan matrix of the corresponding root system. The Coxeter-Dynkin diagram corresponding to C is just the extended Coxeter-Dynkin diagram of the corresponding root system. This is obtained by joining an additional vertex (corresponding to the trivial representation ρ 0 ) to the vertices E and E 3 1 in the case A µ ((α 1 , α 2 , α 3 ) = (1, 1, µ)), to E 3 2 in the case D µ , E 1 1 in the case E 6 , E 2 1 in the case E 7 and E 3 1 in the case E 8 . It is shown in [19] that the corresponding diagram can be transformed to a diagram of type T α1,α2,α3 by Gabrielov transformations (see Fig. 2).
G. Gonzalez-Sprinberg and J.-L. Verdier [45] and independently H. Knörrer [54] gave a geometric interpretation of the McKay correspondence by identifying the Grothendieck group of the category of coherent sheaves on the minimal resolution with the representation ring of Γ. M. Kapranov and E. Vasserot [50] extended these results to the derived category of coherent sheaves on the minimal resolution, not just the Grothendieck group.
Let A f be the coordinate ring of the weighted homogeneous polynomial f . It is graded according to the weights of the variables. Let P f (t) be the Poincaré series of the graded algebra A f . Let c f,− and c f,0 be the Coxeter element of the root system and the affine root system associated to the singularity f and φ f,− (t) and φ f,0 (t) respectively the corresponding characteristic polynomial. The author has proved [21]: Theorem 8.1. For a simple singularity not of type A 2k we have
P f (t) = φ f,− (t) φ f,0 (t) .
R. Stekolshchik has generalized this theorem to the root systems with non simply laced Coxeter-Dynkin diagrams [76,77].
The minimal resolution of an ADE singularity can be compactified to a rational surface S f containing the exceptional configuration E. The author and D. Ploog [30] have given the following geometric realization of the graph T α1,α2,α3 .
Let Coh(S f ) be the abelian category of coherent sheaves on S f and K(S f ) its Grothendieck K-group. There is a natural bilinear pairing on K(S f ) given by the Euler form χ(A, B) = i (−1) i dim Ext i S f (A, B) for two coherent sheaves A and B on S f . Let N (S f ) be the numerical K-group which is obtained from K(S f ) by dividing out the radical of the Euler form. Denote by Coh E (S f ) the abelian subcategory of Coh(S f ) consisting of sheaves whose support is contained in E and let K E (S f ) be its K-group. The curves E and E i 1 , . . . , E i αi−1 for i = 1, 2, 3 correspond to spherical objects in the category
Coh E (S f ). (Recall that a coherent sheaf F on S f is called spherical if Ext l S f (F, F ) = C, l = 0 or l = 2 0 else and F ⊗ ω S f ∼ = F.)
It follows from [30, Lemma 1] that the Euler pairing between the classes
[O E (−1)], [O E 1 1 (−1)], . . . , [O E 3 α 3 −1 (−1)], [O E ](7)
in N (S f ) is encoded by the graph T α1,α2,α3 (see Fig. 2) with the length of arms being equal to the Dolgachev numbers α 1 , α 2 , α 3 of f . Using this description, the author and Ploog have given another proof of Theorem 8.1 [30].
In [48], H. Kajiura, K. Saito and A. Takahashi proved the existence of a full strongly exceptional sequence in D The unimodal singularities are the simple elliptic singularities E 6 , E 7 and E 8 given by polynomials of type T 3,3,3 , T 2,4,4 and T 2,3,6 respectively (where a f = 0), the singularities of type T p,q,r with ∆(p, q, r) > 0 and the 14 exceptional unimodal singularities (with a f = 1). Invertible polynomials for the simple elliptic singularities are given in [33, Table 6]. The singularities of type T p,q,r with ∆(p, q, r) > 0 are not weighted homogeneous. Now we come back to Arnold's strange duality. Let f (x, y, z) be one of the invertible polynomials of Table 1. Then a f = 1 and a Coxeter-Dynkin diagram of f is given by the graph S γ1,γ2,γ3 which is an extension of the graph T γ1,γ2,γ3 by one vertex in accordance with Conjecture 4.3.
Here γ 1 , γ 2 , γ 3 are the Gabrielov numbers of f . H. Pinkham [64] and I. V. Dolgachev and V. V. Nikulin [15] have shown that the Milnor fibre V f can be compactified in a weighted projective space P(x, y, z, w) so that after minimal normal crossing resolution of singularities one obtains a K3surface S f . In this way, Arnold's strange duality can be considered as a special case of the mirror symmetry of K3-surfaces (see [13]). One can use this K3-surface to find a categorical realization of a Coxeter-Dynkin diagram of the dual singularity, namely of the graph S α1,α2,α3 , where α 1 , α 2 , α 3 are the Dolgachev numbers of f . This was obtained by the author and Ploog [30].
Namely, the K3-surface S f carries an exceptional configuration E at ∞ whose dual graph is given by Fig. 3. Here E and E i 1 , . . . , E i αi−1 for i = 1, 2, 3 are again smooth rational curves of self-intersection number −2 and α 1 , α 2 , α 3 are the Dolgachev numbers of f . The same construction as above can be applied to the K3 surface S f . Moreover, the structure sheaf O S f of the K3-surface S f is also spherical. We consider in this case the classes
[O E (−1)], [O E 1 1 (−1)], . . . , [O E 3 α 3 −1 (−1)], [O E ], [O S f ](8)
in N (S f ). The Euler pairing between these classes is encoded by the graph S α1,α2,α3 (see Fig. 1) with the lengths of arms being equal to the Dolgachev numbers α 1 , α 2 , α 3 of f . Let c f,+ be the Coxeter element corresponding to this graph and φ f,+ (t) be its characteristic polynomial. The author has shown [22]:
Theorem 8.2. P f (t) = φ f,+ (t) φ f,0 (t) .
More generally, this result was proved for so-called Fuchsian singularities. These are normal surface singularities with a good C * -action which are related to Fuchsian groups of the first kind. The hypersurface singularities among the Fuchsian singularities are just given by invertible polynomials f with a f = 1. In this case, the orbifold curve C (f,G0) is of the form H/Γ where H is the upper half plane and Γ is a Fuchsian group of the first kind. The genus g (f,G0) is called the genus of the Fuchsian singularity. There are 31 such singularities [74,85]. There are 22 such singularities with genus g (f,G0) = 0. They include the 14 exceptional unimodal singularities, the 6 heads of the bimodal series (see below) and two more.
A possible generalization of the McKay correspondence for Fuchsian groups of genus 0 has been discussed by I. Dolgachev [14]. H. Lenzing and J. A. de la Peña [60] derived Theorem 8.2 for Fuchsian singularities of genus 0 from the representation theory of certain algebras related with weighted projective lines. In [30,31] the author and Ploog derive this result for smoothable Fuchsian singularities of any genus from a generalization of the categorical approach indicated above.
In [30], the Coxeter elements were described as autoequivalences of triangulated categories as follows. The autoequivalences c f,0 and c f,+ correspond to the Coxeter elements of the graphs T α1,α2,α3 and S α1,α2,α3 respectively.
In [49], H. Kajiura, K. Saito and A. Takahashi proved the existence of a full strongly exceptional sequence in D L f Sg (R f ) for a weighted homogeneous polynomial f of with ε f = −a f = −1 and genus g (f,G0) = 0. This includes the case of the 14 exceptional unimodal singularities. Lenzing and de la Peña [60] proved that the category D L f Sg (R f ) in this case is equivalent to the derived category of finitely generated modules over the extended canonical algebra associated with the weighted projective line C (f,G f ) . The relation between the categories D L f Sg (R f ) and D f,+ for the 14 exceptional unimodal singularities was studied by M. Kobayashi, M. Mase and K. Ueda in [56].
We now turn to the bimodal singularities. They are also classified by Arnold [1]. They fall into the following 8 infinite series of singularities (for k ∈ Z):

J_{3,k}, Z_{1,k}, Q_{2,k}, W_{1,k}, S_{1,k}, U_{1,k}, k ≥ 0, and W♯_{1,k}, S♯_{1,k}, k ≥ 1,

and the 14 exceptional singularities

E_18, E_19, E_20, Z_17, Z_18, Z_19, Q_16, Q_17, Q_18, W_17, W_18, S_16, S_17, U_16.

One can find weighted homogeneous polynomials for the classes for k = 0 in the series and for the 14 exceptional singularities. In each of these classes, one can find a non-degenerate invertible polynomial f. These polynomials, their Berglund-Hübsch transposes and their Dolgachev numbers A_{(f,G_f)} = (α_1, α_2, α_3) and Gabrielov numbers Γ_{(f,{e})} = (γ_1, γ_2, γ_3) are indicated in Table 3. Note that the dual singularities are only bimodal for the self-dual cases; in the other cases other singularities are involved.

Name | α_1, α_2, α_3 | f | f̃ | γ_1, γ_2, γ_3 | Dual
Z_17 | 3, 3, 7 | x^4z + xy^3 + z^2 | x^4y + y^3 + xz^2 | 2, 4, 10 | Q_{2,0}
Z_18 | 2, 4, 10 | x^6y + xy^3 + z^2 | x^6y + xy^3 + z^2 | 2, 4, 10 | Z_18
Z_19 | 2, 3, 16 | x^9 + xy^3 + z^2 | x^9y + y^3 + z^2 | 2, 4, 9 | E_25
Q_16 | 3, 3, 9 | x^4z + y^3 + xz^2 | x^4z + y^3 + xz^2 | 3, 3, 9 | Q_16
Q_17 | 2, 4, 13 | x^5y + y^3 + xz^2 | x^5z + xy^3 + z^2 | 3, 3, 9 | Z_{2,0}
Q_18 | 2, 3, 21 | x^8 + y^3 + xz^2 | x^8z + y^3 + z^2 | 3, 3, 8 | E_30
W_17 | 3, 5, 5 | x^5z + yz^2 + y^2 | x^5 + xy^2 + yz^2 | 2, 6, 8 | S_{1,0}
W_18 | 2, 7, 7 | x^7 + y^2 + yz^2 | x^7 + y^2z + z^2 | 2, 7, 7 | W_18
S_16 | 3, 5, 7 | x^4y + xz^2 + y^2z | x^4y + xz^2 + y^2z | 3, 5, 7 | S_16
S_17 | 2, 7, 10 | x^6 + xy^2 + yz^2 | x^6y + y^2z + z^2 | 3, 6, 6 | X_{2,0}
U_16 | 5, 5, 5 | x^5 + y^2z + yz^2 | x^5 + y^2z + yz^2 | 5, 5, 5 | U_16

Table 3. Strange duality of the bimodal singularities.
The Gorenstein parameter a_f takes the value 1 for the classes for k = 0 in the series, and the values 2, 3 and 5 for the exceptional bimodal singularities. Coxeter-Dynkin diagrams with respect to distinguished bases of vanishing cycles for the bimodal singularities have been computed in [17]. Each of these diagrams is an extension of the corresponding graph T_{γ_1,γ_2,γ_3} by a_f vertices, in accordance with Conjecture 4.3. The author and Ploog [32] have given a geometric construction of these Coxeter-Dynkin diagrams by a procedure as above, using compactifications of the Milnor fibres of the polynomials f. (Note that there are some mistakes in [32, Table 4]; they are corrected in arXiv:1102.5024.) M. Mase and K. Ueda [61] have shown that this result can also be derived from Conjecture 4.3, and they used our construction to relate the Berglund-Hübsch duality in this case to Batyrev's polar duality for the associated toric K3 surfaces.
We also mention some other examples given in [34]. Consider the invertible polynomial f(x, y, z) = x^2 + xy^3 + yz^5. The canonical system of weights is W_f = (15, 5, 5; 30), so c_f = 5, and the reduced system of weights is W_f^red = (3, 1, 1; 6). This is again a singularity with a_f = 1, but the genus of C_{(f,G_0)} is equal to two and there are no isotropic points. The Dolgachev numbers of the pair (f, G_f^fin) are (5, 5, 5) and G_0^T is generated by the element (e[1/5], e[3/5], e[1/5]). This group has two elements of age 1. The singularity f^T(x, y, z) − xyz is right equivalent to the cusp singularity x^5 + y^5 + z^5 − xyz. We recover the example of Seidel [72]. Similarly, let f(x, y, z) = x^3y + y^3z + z^3x. Then again a_f = 1, the genus of C_{(f,G_0)} is equal to three and there are no isotropic points. The group G_0^T is generated by the element (e[1/7], e[2/7], e[4/7]) and f^T(x, y, z) − xyz is right equivalent to the cusp singularity x^7 + y^7 + z^7 − xyz.
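The weight computations quoted here can be checked mechanically. The following sketch (not code from the paper) recomputes the systems of weights for f(x, y, z) = x^2 + xy^3 + yz^5 from its exponent matrix with sympy, taking the canonical degree to be d = det E — an assumption of this sketch — and then derives the reduced system and the Gorenstein parameter.

# Sketch: canonical and reduced systems of weights of an invertible polynomial
# from its exponent matrix, illustrated on f = x^2 + x*y^3 + y*z^5.
from sympy import Matrix, igcd

E = Matrix([[2, 0, 0],    # x^2
            [1, 3, 0],    # x * y^3
            [0, 1, 5]])   # y * z^5

d = E.det()                           # canonical degree: det E = 30 (assumed here)
w = d * E.solve(Matrix([1, 1, 1]))    # weights solve E * (w/d) = (1, 1, 1)^T
print(tuple(w), d)                    # (15, 5, 5) 30  ->  W_f = (15, 5, 5; 30)

c = igcd(*[int(x) for x in w], int(d))  # common factor c_f = 5
w_red = [int(x) // c for x in w]
d_red = int(d) // c
print(w_red, d_red)                   # [3, 1, 1] 6  ->  reduced system (3, 1, 1; 6)
print(d_red - sum(w_red))             # Gorenstein parameter a_f = 1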
More generally, let g be an integer with g ≥ 2 and consider the invertible polynomial f(x, y, z) = x^{2g+1} + y^{2g+1} + z^{2g+1} together with the group G generated by (e[1/(2g+1)], e[1/(2g+1)], e[(2g−1)/(2g+1)]). Then the genus of the curve C_{(f,G)} is equal to g and we recover the examples of A. Efimov [39].
Complete intersection singularities as mirrors
We shall now derive the extension of Arnold's strange duality discovered by the author and C. T. C. Wall from mirror symmetry and the Berglund-Hübsch transposition of invertible polynomials. This is the content of the paper [37].

The invertible polynomials f(x, y, z) given in Table 3 for the singularities J_{3,0}, Z_{1,0}, Q_{2,0}, W_{1,0}, S_{1,0} and U_{1,0}, which are the heads of the bimodal series, satisfy [G_f : G_0] = 2. There corresponds an action of G̃_0 = Z/2Z on the transpose polynomial f̃. We choose the coordinates such that this action is given by (x, y, z) ↦ (−x, −y, z). The Dolgachev numbers of the pairs (f̃, G̃_0) are given in Table 4 (see also [34, Table 4]). Moreover, there is a one-parameter family F (depending on a complex parameter a) of weighted homogeneous polynomials defining these singularities; it is also indicated in Table 4. It is natural from the mirror symmetry viewpoint to expect that adding one monomial to an invertible polynomial is dual to having another C*-action on the dual polynomial. This will be elaborated in the sequel.
Name | F | A_{(f̃,G̃_0)}
J_{3,0} | x^3 + xy^6 + z^2 + ax^2y^3, a ≠ ±2 | 2, 2, 2, 3
Z_{1,0} | x^5y + xy^3 + z^2 + ax^3y^2, a ≠ ±2 | 2, 2, 2, 4
Q_{2,0} | x^3 + xy^4 + yz^2 + ax^2y^2, a ≠ ±2 | 2, 2, 2, 5
W_{1,0} | x^6 + y^2 + yz^2 + ax^3y, a ≠ ±2 | 2, 2, 3, 3
S_{1,0} | x^5 + xy^2 + yz^2 + ax^3y, a ≠ ±2 | 2, 2, 3, 4
U_{1,0} | x^3 + xy^2 + yz^3 + ax^2y, a ≠ ±2 | 2, 3, 3, 3

Table 4. Heads (k = 0) of the bimodal series.
The non-degenerate invertible polynomials f(x, y, z) with [G_f : G_0] = 2 are classified in [37, Proposition 5]. There are 5 possible types, each depending on parameters p_1, p_2, p_3 or p_1, q_2, q_3 subject to certain conditions. Let L_0 be the quotient of L_f corresponding to the subgroup G_0 of G_f (cf. Sect. 3). We consider 4 × 3-matrices E = (E_ij), i = 1, . . . , 4, j = 1, 2, 3, such that

Z⟨x⟩ ⊕ Z⟨y⟩ ⊕ Z⟨z⟩ ⊕ Z⟨f⟩ / (E_i1 x + E_i2 y + E_i3 z = f, i = 1, . . . , 4) ≅ L_0

and C_{(F,G̃_0)} := [(F^{−1}(0) \ {0}) / G̃_0], where F := ∑_{i=1}^{4} a_i x^{E_i1} y^{E_i2} z^{E_i3}, is a smooth projective line with 4 isotropic points whose orders are α_1, α_2, α_3, α_4, where A_{(f̃,G̃_0)} = (α_1, α_2, α_3, α_4) are the Dolgachev numbers of the pair (f̃, G̃_0) defined above, for general a_1, a_2, a_3, a_4. These matrices are classified in [37, Proposition 2].
We associate to these matrices a pair of polynomials as follows. We observe that the kernel of the matrix E^T is either generated by the vector (1, 1, 0, −2)^T or by the vector (1, 1, −1, −1)^T. Let R := C[x, y, z, w]. In the first case, there exists a Z-graded structure on R given by the C*-action λ ∗ (x, y, z, w) = (λx, λy, z, λ^{−2}w) for λ ∈ C*. In the second case, there exists a Z-graded structure on R given by the C*-action λ ∗ (x, y, z, w) = (λx, λy, λ^{−1}z, λ^{−1}w) for λ ∈ C*.
Let R = ⊕_{i∈Z} R_i be the decomposition of R according to one of these Z-gradings. Let E^T be the transposed matrix. We associate to it the polynomial

f̃(x, y, z, w) := x^{E_11} y^{E_21} z^{E_31} w^{E_41} + x^{E_12} y^{E_22} z^{E_32} w^{E_42} + x^{E_13} y^{E_23} z^{E_33} w^{E_43}.

In the first case, we have f̃ ∈ R_0 = C[x^2w, y^2w, z, xyw]. Let X := x^2w, Y := y^2w, Z := z, W := xyw. In these new coordinates, we obtain a pair of polynomials

f̃_1(X, Y, Z, W) = XY − W^2, f̃_2(X, Y, Z, W) = f̃(X, Y, Z, W).

In the second case, we have f̃ ∈ R_0 = C[xw, yz, xz, yw]. Let X := xw, Y := yz, Z := xz, W := yw. In these new coordinates, we obtain a pair of polynomials

f̃_1(X, Y, Z, W) = XY − ZW, f̃_2(X, Y, Z, W) = f̃(X, Y, Z, W).

Now we choose for each of the matrices E special values a_1, a_2, a_3, a_4 such that the corresponding polynomial F has a non-isolated singularity. We denote this polynomial by f. In two cases, we already have additional conditions on the parameters. In the remaining cases, we consider conditions on the parameters p_1, p_2, p_3 (p_1, q_2, q_3) such that the polynomial f(x, y, z) is of the form

f(x, y, z) = u(x, y, z) + v(x, y, z)(x − y^e)^2 or f(x, y, z) = u(x, y, z) + v(x, y, z)(y − x^e)^2

for some monomials u(x, y, z) and v(x, y, z) and some integer e ≥ 2. We consider the cusp singularity f(x, y, z) − xyz and perform the coordinate change x ↦ x + y^e or y ↦ y + x^e respectively. Then f(x, y, z) − xyz is transformed to h(x, y, z) − xyz. Some of the new polynomials h have 4 monomials and others only 3. We restrict our consideration to the cases where the polynomial h has 4 monomials. The singularities defined by the polynomials h(x, y, z) will be called virtual singularities. We summarize our duality in Table 5.

Table 5. Duality between virtual singularities and complete intersection singularities.

One can associate Dolgachev and Gabrielov numbers to the virtual singularities and the pairs (f̃_1, f̃_2) in an analogous way as above, see [37, Sect. 4]. Here the Gabrielov numbers of the virtual singularities and the Dolgachev numbers of the pairs (f̃_1, f̃_2) are triples, but the Dolgachev numbers of the virtual singularities and the Gabrielov numbers of the pairs (f̃_1, f̃_2) are quadruples of numbers, which are divided into two pairs in a natural way. One obtains the following theorem [37, Theorem 4]. There is also an extension of Saito's duality to this duality, see [37, Corollary 6].

Now we consider again the cases with small Gorenstein parameter a_f. For the case a_f < 0 see [37, Table 8]. There are no non-degenerate invertible polynomials
with [G_f : G_0] = 2 and a_f = 0. Next consider the case a_f = 1. It turns out that the virtual singularities in this case are exactly the virtual singularities corresponding to the bimodal series. Namely, by setting k = −1 in Arnold's equations one obtains polynomials which are similar to our polynomials h for certain types and parameters p_1, p_2, p_3 or p_1, q_2, q_3 respectively. These types and parameters are listed in Table 6, together with the corresponding Dolgachev and Gabrielov numbers of h and the corresponding dual pairs (f̃_1, f̃_2) defining an isolated complete intersection singularity (ICIS).
Table 6. Strange duality between virtual bimodal singularities and ICIS (columns: Name, Type, Dol(h), Gab(h), Dol(f̃_1, f̃_2), Gab(f̃_1, f̃_2), Dual).

Let h(x, y, z) = 0 be the equation of one of the virtual bimodal singularities. It turns out that h has, besides the origin, an additional critical point, which is of type A_1. One can find a Coxeter-Dynkin diagram with respect to a distinguished basis of vanishing cycles corresponding to all the critical points of the form S_{γ_1,γ_2,γ_3}, where γ_1, γ_2, γ_3 are the Gabrielov numbers of h.
To a graph of type S_{γ_1,γ_2,γ_3}, there corresponds an extended canonical algebra in the sense of [60]. The 14 cases of the exceptional unimodal singularities (see Table 1) and the 8 cases of Table 6 correspond to those extended canonical algebras where the number t of weights is equal to 3 and the Coxeter element is semi-simple and has only roots of unity as eigenvalues (cf. [18, Theorem 3.4.3 and Table 3.4.2] and [60, Table 2]).
Definition 3.1. A weighted homogeneous polynomial f(x_1, . . . , x_n) is called invertible if the following conditions are satisfied: the number of variables coincides with the number of monomials, i.e. f(x_1, . . . , x_n) = ∑_{i=1}^{n} a_i ∏_{j=1}^{n} x_j^{E_ij} for coefficients a_i ∈ C* and non-negative integers E_ij, and the matrix E = (E_ij) is invertible over Q.

Definition 3.2. Let h(x_1, . . . , x_n) be any polynomial. Let G_h be the (finite) group of diagonal symmetries of h, i.e. G_h := {(λ_1, . . . , λ_n) ∈ (C*)^n : h(λ_1 x_1, . . . , λ_n x_n) = h(x_1, . . . , x_n)}.

Definition 3.4. Let f(x_1, . . . , x_n) be an invertible polynomial and L_f be the maximal grading of f. The maximal abelian symmetry group G_f of f is the abelian group defined by G_f := Spec(C L_f).

Definition 3.5. Let f(x_1, . . . , x_n) be a weighted homogeneous polynomial with reduced system of weights W = (w_1, . . . , w_n; d). The integer a_f := d − ∑_{i=1}^{n} w_i is called the Gorenstein parameter of f. It is also usual to denote by ε_f := −a_f the Gorenstein parameter of f, see e.g. [66].

Definition 5.2. The numbers (γ_1, γ_2, γ_3) are called the Gabrielov numbers of the pair (f, {e}) and the tuple is denoted by Γ_{(f,{e})}.

Definition 5.4. The orders α_1, . . . , α_r of the isotropy groups of these points are called the Dolgachev numbers of the pair (f, G) and denoted by A_{(f,G)}. By [34, Theorem 4.6], the Dolgachev numbers of the pair (f, G) can be computed from the Dolgachev numbers of the pair (f, G_f).

Definition 6.1. Define the Q × Q-graded vector space H_f := ⊕_{p,q∈Q} H_f^{p,q} as follows: (1) if p + q ≠ n, then H_f^{p,q} := 0; …

Definition 6.2. Define the Q × Q-graded super vector space H_{f,G} := H_{f,G,0} ⊕ H_{f,G,1} as H_{f,G,0} := ⊕_{g∈G; n_g ≡ 0 (mod 2)} …

Definition 6.3 ([29]). The E-function for the pair (f, G) …

Therefore, in the case that f is a non-degenerate invertible polynomial and G ⊂ SL_n(C), the definition of the E-function for the pair (f, G) agrees with [34, Definition 5.7]:

Definition 6.7. Let f(x_1, . . . , x_n) be a polynomial with an isolated singularity at the origin invariant under a group G ⊂ SL_n(C). The E-function of the pair (f, G) is defined by …

Theorem 6.8 (E., Takahashi). Let f(x_1, . . . , x_n) be a non-degenerate weighted homogeneous polynomial invariant under a group G ⊂ SL_n(C). Then one has Var_{(f,G)} = …

Definition 6.9. Let f(x_1, . . . , x_n) be a polynomial with an isolated singularity at the origin invariant under a group G ⊂ SL_n(C). The characteristic polynomial of the pair (f, G) is φ(f, G)(t) := ∏_{q∈Q} (t − e[q])^{h^{p,q}(f,G)}.

Theorem 6.11 (E., Gusein-Zade, Takahashi). Let f(x_1, . . . , x_n) be a non-degenerate invertible polynomial and G a subgroup of G_f. Then E(f, G)(t, t̄) = (−1)^n E(f̃, G̃)(t^{−1}, t̄).

Corollary 6.15. Let f(x_1, . . . , x_n) be a non-degenerate invertible polynomial and G a subgroup of G_f containing G_0. Then one has Var_{(f,G)} = (1/12) ĉ · χ(f, G), where ĉ := n − 2 ∑_{i=1}^{n} q_i and χ(f, G) := E(f, G)(1, 1).

A G-set is irreducible if the action of G on it is transitive. Isomorphism classes of irreducible G-sets are in one-to-one correspondence with conjugacy classes of subgroups of G: to the conjugacy class containing a subgroup H ⊂ G one associates the isomorphism class [G/H] of the G-set G/H.

Definition 7.1. The Burnside ring B(G) of G is the Grothendieck ring of finite G-sets, i.e. it is the (abelian) group generated by the isomorphism classes of finite G-sets modulo the relation [A ⊔ B] = [A] + [B] for finite G-sets A and B. The multiplication in B(G) is defined by the cartesian product. As an abelian group, B(G) is freely generated by the isomorphism classes of irreducible G-sets. The element 1 in the ring B(G) is represented by the G-set consisting of one point (with the trivial G-action).

For a subgroup H ⊂ G let V^H be the set of all fixed points of H. Denote by V^(H) the set of points of V with isotropy group H. Finally, let V^([H]) := ⋃_{K∈[H]} V^(K).

… χ_G : B(G) → Z defined by sending a class [G/H] to the number χ^orb([G/H], G). If G is abelian, then χ^orb([G/H], G) = |H|.

Definition 7.5. Let G be abelian. The equivariant Saito duality between B(G) and B(G*) is the map D_G : B(G) → B(G*), [G/H] ↦ [G*/H̃].

… and a reduced enhanced Euler characteristic χ̄_G(V, ϕ) of the pair (V, ϕ). Now let G again be abelian. Let B_1(G) be the subgroup of B(G) generated by the isomorphism classes of finite enhanced sets (X, h, α) such that h(x) ∈ Gx for all x ∈ X. As an abelian group it is freely generated by the classes [G/H, h, α], where (1) h : G/H → G/H can be identified with an element h̄ ∈ G/H, and (2) α is a 1-dimensional representation of H.

Theorem 7.10 (E., Gusein-Zade). Let f(x_1, . . . , x_n) be an invertible polynomial and let h_f : V_f → V_f and h_f̃ : V_f̃ → V_f̃ be the monodromy transformations of f and f̃ respectively. Then one has …

… (R_f) for a polynomial f of ADE type. D. Kussin, H. Lenzing and H. Meltzer [59] discuss relations of these categories with weighted projective lines. … by H. Kajiura, K. Saito and A. Takahashi [48, Theorem 3.1] and results of P. Seidel [69, Proposition 3.4] …

… invertible polynomial f(x, y, z) with the correct Dolgachev and Gabrielov numbers.

Table 2. The ADE singularities.
References

[1] V. I. Arnold, Critical points of smooth functions and their normal forms. Usp. Math. Nauk. 30:5 (1975), 3-65 (Engl. translation in Russ. Math. Surv. 30:5 (1975), 1-75).
[2] H. Asashiba, A generalization of Gabriel's Galois covering functors and derived equivalences. J. Algebra 334 (2011), 109-149.
[3] D. Auroux, L. Katzarkov and D. Orlov, Mirror symmetry for weighted projective planes and their noncommutative deformations. Ann. of Math. (2) 167 (2008), 867-943.
[4] P. Berglund and M. Henningson, Landau-Ginzburg orbifolds, mirror symmetry and the elliptic genus. Nuclear Physics B 433 (1995), 311-332.
[5] P. Berglund and T. Hübsch, A generalized construction of mirror manifolds. Nuclear Physics B 393 (1993), 377-391.
[6] E. Brieskorn, Die Monodromie der isolierten Singularitäten von Hyperflächen. Manuscripta math. 2 (1970), 103-161.
[7] R. O. Buchweitz, Maximal Cohen-Macaulay modules and Tate cohomology over Gorenstein rings. Unpublished manuscript, Hannover, 1987, DOI 10.1.1.469.9816.
[8] C. Cibils and E. Marcos, Skew category, Galois covering and smash product of a k-category. Proc. Amer. Math. Soc. 134 (2006), 39-50.
[9] L. Dixon, J. A. Harvey, C. Vafa and E. Witten, Strings on orbifolds. Nuclear Physics B 261 (1985), 678-686.
[10] L. Dixon, J. A. Harvey, C. Vafa and E. Witten, Strings on orbifolds II. Nuclear Physics B 274 (1986), 285-314.
[11] I. V. Dolgachev, Quotient-conical singularities on complex surfaces. Funkcional. Anal. i Prilozen. 8:2 (1974), 75-76 (Engl. translation in Funct. Anal. Appl. 8 (1974), 160-161).
[12] I. V. Dolgachev, Automorphic forms and weighted homogeneous singularities. Funkcional. Anal. i Prilozen. 9:2 (1975), 67-68 (Engl. translation in Funct. Anal. Appl. 9 (1975), 149-151).
[13] I. V. Dolgachev, Mirror symmetry for lattice polarized K3 surfaces. J. Math. Sci. 81 (1996), 2599-2630.
[14] I. V. Dolgachev, McKay's correspondence for cocompact discrete subgroups of SU(1,1). In: Groups and symmetries, CRM Proc. Lecture Notes, 47, Amer. Math. Soc., Providence, RI, 2009, 111-133.
[15] I. V. Dolgachev and V. V. Nikulin, Exceptional singularities of V. I. Arnold and K3 surfaces. In: Proc. USSR Topological Conference in Minsk, 1977.
[16] W. Ebeling, Quadratische Formen und Monodromiegruppen von Singularitäten. Math. Ann. 255 (1981), 463-498.
[17] W. Ebeling, Milnor lattices and geometric bases of some special singularities. In: Noeuds, tresses et singularités (Ed. C. Weber), Monographie Enseign. Math. 31, Genève, 1983, 129-146 and Enseign. Math. (2) 29 (1983), 263-280.
[18] W. Ebeling, The Monodromy Groups of Isolated Singularities of Complete Intersections. Lect. Notes in Math., Vol. 1293, Springer-Verlag, Berlin etc., 1987.
[19] W. Ebeling, On Coxeter-Dynkin diagrams of hypersurface singularities. J. Math. Sciences 82 (1996), 3657-3664.
[20] W. Ebeling, Strange duality, mirror symmetry, and the Leech lattice. In: Singularity theory (Liverpool, 1996), London Math. Soc. Lecture Note Ser. 263, Cambridge Univ. Press, Cambridge, 1999, 55-77.
[21] W. Ebeling, Poincaré series and monodromy of a two-dimensional quasihomogeneous hypersurface singularity. Manuscripta math. 107 (2002), 271-282.
[22] W. Ebeling, The Poincaré series of some special quasihomogeneous surface singularities. Publ. Res. Inst. Math. Sci. 39 (2003), 393-413.
[23] W. Ebeling, Functions of several complex variables and their singularities. Graduate Studies in Math. Vol. 83, American Mathematical Society, Providence R. I., 2007.
[24] W. Ebeling and S. M. Gusein-Zade, Monodromy of dual invertible polynomials. Mosc. Math. J. 11 (2011), 463-472.
[25] W. Ebeling and S. M. Gusein-Zade, Saito duality between Burnside rings for invertible polynomials. Bull. Lond. Math. Soc. 44 (2012), 814-822.
[26] W. Ebeling and S. M. Gusein-Zade, Orbifold Euler characteristics for dual invertible polynomials. Mosc. Math. J. 12 (2012), 49-54.
[27] W. Ebeling and S. M. Gusein-Zade, Orbifold zeta functions for dual invertible polynomials. Preprint, arXiv:1407.0154, to appear in Proc. Edinb. Math. Soc. (2).
[28] W. Ebeling and S. M. Gusein-Zade, Enhanced equivariant Saito duality. Preprint, arXiv:1506.05604.
[29] W. Ebeling, S. M. Gusein-Zade and A. Takahashi, Orbifold E-functions of dual invertible polynomials. Preprint, arXiv:1509.04101.
[30] W. Ebeling and D. Ploog, McKay correspondence for the Poincaré series of Kleinian and Fuchsian singularities. Math. Ann. 347 (2010), 689-702.
[31] W. Ebeling and D. Ploog, Poincaré series and Coxeter functors for Fuchsian singularities. Adv. Math. 225 (2010), 1387-1398.
[32] W. Ebeling and D. Ploog, A geometric construction of Coxeter-Dynkin diagrams of bimodal singularities. Manuscripta Math. 140 (2013), 195-212.
[33] W. Ebeling and A. Takahashi, Strange duality of weighted homogeneous polynomials. Compos. Math. 147 (2011), 1413-1433.
[34] W. Ebeling and A. Takahashi, Mirror symmetry between orbifold curves and cusp singularities with group action. Int. Math. Res. Not. IMRN 2013, 2240-2270.
[35] W. Ebeling and A. Takahashi, Variance of the exponents of orbifold Landau-Ginzburg models. Math. Res. Lett. 20 (2013), 51-65.
[36] W. Ebeling and A. Takahashi, A geometric definition of Gabrielov numbers. Rev. Mat. Complut. 27 (2014), 447-460.
[37] W. Ebeling and A. Takahashi, Strange duality between hypersurface and complete intersection singularities. Preprint, arXiv:1508.02226.
[38] W. Ebeling and C. T. C. Wall, Kodaira singularities and an extension of Arnold's strange duality. Compositio Math. 56 (1985), 3-77.
[39] A. I. Efimov, Homological mirror symmetry for curves of higher genus. Adv. Math. 230 (2012), 493-530.
[40] M. Futaki and K. Ueda, Homological mirror symmetry for Brieskorn-Pham singularities. Selecta Math. (N.S.) 17 (2011), 435-452.
[41] M. Futaki and K. Ueda, Homological mirror symmetry for singularities of type D. Math. Z. 273 (2013), 633-652.
[42] A. M. Gabrielov, Intersection matrices for certain singularities. Funkcional. Anal. i Prilozen. 7 (1973), 18-32 (Engl. translation in Funct. Anal. Appl. 7 (1973), 182-193).
[43] A. M. Gabrielov, Dynkin diagrams of unimodal singularities. Funkcional. Anal. i Prilozen. 8:3 (1974), 1-6 (Engl. translation in Funct. Anal. Appl. 8 (1974), 192-196).
[44] W. Geigle and H. Lenzing, A class of weighted projective curves arising in representation theory of finite-dimensional algebras. In: Singularities, representation of algebras, and vector bundles (Lambrecht, 1985), Lecture Notes in Math. Vol. 1273, Springer, Berlin, 1987, 265-297.
[45] G. Gonzalez-Sprinberg and J.-L. Verdier, Construction géométrique de la correspondance de McKay. Ann. Sci. École Norm. Sup. (4) 16 (1984), 409-449.
[46] F. Hirzebruch and Th. Höfer, On the Euler number of an orbifold. Math. Ann. 286 (1990), 255-260.
[47] Y. Ito and M. Reid, The McKay correspondence for finite subgroups of SL(3, C). In: Higher-dimensional complex varieties (Trento, 1994), de Gruyter, Berlin 1996, 221-240.
[48] H. Kajiura, K. Saito and A. Takahashi, Matrix factorisations and representations of quivers II: type ADE case. Adv. in Math. 211 (2007), 327-362.
[49] H. Kajiura, K. Saito and A. Takahashi, Triangulated categories of matrix factorizations for regular systems of weights with ε = −1. Adv. Math. 220 (2009), 1602-1654.
[50] M. Kapranov and E. Vasserot, Kleinian singularities, derived categories and Hall algebras. Math. Ann. 316 (2000), 565-576.
[51] A. Keating, Lagrangian tori in four-dimensional Milnor fibres. Preprint, arXiv:1405.0744.
[52] B. Keller, On triangulated orbit categories. Documenta Math. 10 (2005), 551-581.
[53] B. Keller and D. Yang, Derived equivalences from mutations of quivers with potential. Adv. Math. 226 (2011), 2118-2168.
[54] H. Knörrer, Group representations and the resolution of rational double points. In: Finite groups — Coming of Age, Proceedings, Montreal 1982 (J. McKay, ed.), Contemporary Mathematics, Vol. 45, Am. Math. Soc., Providence 1985, 175-222.
[55] D. Knutson, λ-rings and the representation theory of the symmetric group. Lecture Notes in Mathematics, Vol. 308, Springer-Verlag, Berlin-New York, 1973.
[56] M. Kobayashi, M. Mase and K. Ueda, A note on exceptional unimodal singularities and K3 surfaces. Int. Math. Res. Not. IMRN 2013, 1665-1690.
[57] M. Kontsevich, Homological algebra of mirror symmetry. In: Proceedings of the ICM (Zürich, 1994), Birkhäuser, Basel, 1995, 120-139.
[58] M. Krawitz, FJRW-Rings and Landau-Ginzburg mirror symmetry. Preprint, arXiv:0906.0796.
[59] D. Kussin, H. Lenzing and H. Meltzer, Triangle singularities, ADE-chains, and weighted projective lines. Adv. Math. 237 (2013), 194-251.
[60] H. Lenzing and J. A. de la Peña, Extended canonical algebras and Fuchsian singularities. Math. Z. 268 (2011), 143-167.
[61] M. Mase and K. Ueda, A note on bimodal singularities and mirror symmetry. Manuscripta Math. 146 (2015), 153-177.
[62] J. McKay, Graphs, singularities, and finite groups. Proc. Symp. Pure Math. Vol. 37 (1980), 183-186.
[63] D. Orlov, Derived categories of coherent sheaves and triangulated categories of singularities. In: Algebra, arithmetic, and geometry: in honor of Yu. I. Manin, Vol. II, Progr. Math. 270, Birkhäuser Boston, Inc., Boston, MA, 2009, 503-531.
[64] H. Pinkham, Singularités exceptionelles, la dualité étrange d'Arnold et les surfaces K-3. C. R. Acad. Sci. Paris Sér. A-B 284 (1977), 615-618.
[65] K. Saito, Duality for regular systems of weights: a précis. In: Topological Field Theory, Primitive Forms and Related Topics (M. Kashiwara, A. Matsuo, K. Saito, I. Satake, eds.), Progress in Math., Vol. 160, Birkhäuser, Boston Basel Berlin, 1998, 379-426.
[66] K. Saito, Duality for regular systems of weights. Asian J. Math. 2 (1998), 983-1047.
[67] K. Saito, Towards a categorical construction of Lie algebras. In: Algebraic geometry in East Asia — Hanoi 2005, Adv. Stud. Pure Math., 50, Math. Soc. Japan, Tokyo, 2008, 101-175.
[68] P. Seidel, Vanishing cycles and mutation. In: European Congress of Mathematics, Vol. II (Barcelona, 2000), Progr. Math., Vol. 202, Birkhäuser, Basel, 2001, 65-85.
[69] P. Seidel, More about vanishing cycles and mutation. In: Symplectic geometry and mirror symmetry (Seoul, 2000), World Sci. Publ., River Edge, NJ, 2001, 429-465.
[70] P. Seidel, Fukaya categories and Picard-Lefschetz theory. Zürich Lectures in Advanced Math., EMS, Zürich, 2008.
[71] P. Seidel, Suspending Lefschetz fibrations, with an application to local mirror symmetry. Comm. Math. Phys. 297 (2010), 515-528.
[72] P. Seidel, Homological mirror symmetry for the genus two curve. J. Algebraic Geom. 20 (2011), 727-769.
[73] P. Seidel and R. Thomas, Braid group actions on derived categories of coherent sheaves. Duke Math. J. 108 (2001), 37-108.
[74] I. G. Ščerbak, Algebras of automorphic forms with three generators. (Russian) Funkcional. Anal. i Priložen. 12:2 (1978), 93-94 (Engl. translation in Funct. Anal. Appl. 12 (1978), 156-158).
[75] J. H. M. Steenbrink, Mixed Hodge structure on the vanishing cohomology. In: Real and complex singularities, Proc. Ninth Nordic Summer School (Oslo, 1976), Sijthoff and Noordhoff, Alphen aan den Rijn, 1977, 525-563.
[76] R. Stekolshchik, Kostant's generating functions, Ebeling's theorem and McKay's observation relating the Poincaré series. Preprint, arXiv:math/0608500.
[77] R. Stekolshchik, Notes on Coxeter transformations and the McKay correspondence. Springer Monographs in Mathematics, Springer-Verlag, Berlin, 2008.
[78] D. van Straten, Mirror symmetry for P^1-orbifolds. Unpublished paper based on talks given in Trieste, Marienthal and Göteborg in September 2002.
[79] A. Takahashi, HMS for isolated hypersurface singularities. Talk at the "Workshop on Homological Mirror Symmetry and Related Topics", January 19-24, 2009, University of Miami; PDF file available from http://www-math.mit.edu/~auroux/frg/miami09-notes/ .
[80] A. Takahashi, Weighted projective lines associated to regular systems of weights of dual type. Adv. Stud. Pure Math. 59 (2010), 371-388.
[81] A. Takahashi, Mirror symmetry between orbifold projective lines and cusp singularities. In: Singularities in geometry and topology 2011, Advanced Studies of Pure Mathematics 66, 2015, 257-282.
[82] T. tom Dieck, Transformation groups and representation theory. Lecture Notes in Mathematics, 766, Springer, Berlin, 1979.
[83] K. Ueda, Homological mirror symmetry and simple elliptic singularities. Preprint, arXiv:math.AG/0604361.
[84] J.-L. Verdier, Caractéristique d'Euler-Poincaré. Bull. Soc. Math. France 101 (1973), 441-445.
[85] P. Wagreich, Algebras of automorphic forms with few generators. Trans. Amer. Math. Soc. 262 (1980), 367-389.
Wolfgang Ebeling, Institut für Algebraische Geometrie, Leibniz Universität Hannover, Postfach 6009, D-30060 Hannover, Germany. E-mail: [email protected]
ON A CONJECTURE BY BELFIORE AND SOLÉ ON SOME LATTICES

Anne-Maria Ernvall-Hytönen

21 Apr 2011, arXiv:1104.3739v2 [cs.IT]
Abstract. The point of this note is to prove that the secrecy function attains its maximum at y = 1 on all known extremal even unimodular lattices. This is a special case of a conjecture by Belfiore and Solé. Further, we will give a very simple method to verify or disprove the conjecture on any given unimodular lattice.

The funding from the Academy of Finland (grant number 138337) is gratefully acknowledged. The author would also like to thank Professor Pär Kurlberg and Dr. Camilla Hollanti for useful discussions, and Professors Patrick Solé and Jean-Claude Belfiore for important comments.
Introduction
Belfiore and Oggier defined in [2] the secrecy gain

max_{y ∈ R, y > 0} Θ_{Z^n}(yi) / Θ_Λ(yi), where Θ_Λ(z) = ∑_{x∈Λ} e^{πi ||x||^2 z},

as a new lattice invariant to measure how much confusion the eavesdropper will experience while the lattice Λ is used in Gaussian wiretap coding. The function Ξ_Λ(y) = Θ_{Z^n}(yi) / Θ_Λ(yi) is called the secrecy function. Belfiore and Solé then conjectured in [3] that the secrecy function attains its maximum at y = 1, which would then be the value of the secrecy gain. The secrecy gain was further studied by Oggier, Solé and Belfiore in [6]. The main point of this note is to prove the following theorem:

Theorem 1. The secrecy function attains its maximum at y = 1 on all known even unimodular extremal lattices.
The method used here applies to any given even unimodular lattice (and also to some lattices that are unimodular but not even). This will be discussed in its own section.
Preliminaries and Lemmas
For an excellent source on theta functions, see e.g. Chapter 10 in Stein's and Shakarchi's book [7]. Define the theta function

Θ(z | τ) = ∑_{n=−∞}^{∞} e^{πin²τ} e^{2πinz}.

It has the product representation

Θ(z | τ) = ∏_{n=1}^{∞} (1 − q^{2n})(1 + q^{2n−1} e^{2πiz})(1 + q^{2n−1} e^{−2πiz}),

where q = e^{πiτ}. The functions ϑ_2, ϑ_3 and ϑ_4 can be written with the help of Θ(z | τ) in the following way:

ϑ_2(τ) = e^{πiτ/4} Θ(τ/2 | τ), ϑ_3(τ) = Θ(0 | τ), ϑ_4(τ) = Θ(1/2 | τ).

Using the product representation of Θ(z | τ), this reads

ϑ_2(τ) = e^{πiτ/4} ∏_{n=1}^{∞} (1 − q^{2n})(1 + q^{2n})(1 + q^{2n−2}),
ϑ_3(τ) = ∏_{n=1}^{∞} (1 − q^{2n})(1 + q^{2n−1})²,
ϑ_4(τ) = ∏_{n=1}^{∞} (1 − q^{2n})(1 − q^{2n−1})².
A lattice is called unimodular if its determinant is ±1 and the norms are integral, i.e., ||x||² ∈ Z for all vectors x in the lattice. Further, it is called even if ||x||² is always even; otherwise it is called odd. A lattice can be even unimodular only if its dimension is divisible by 8. Odd unimodular lattices are subject to no such restriction.
Let us now have a brief look at the even unimodular lattices. Write the dimension as n = 24m + 8k. Then the theta series of the lattice can be written as a polynomial in the Eisenstein series E_4 and the discriminant function Δ:

Θ = E_4^{3m+k} + ∑_{j=1}^{m} b_j E_4^{3(m−j)+k} Δ^j.

Since E_4 = ½(ϑ_2^8 + ϑ_3^8 + ϑ_4^8) and Δ = (1/256) ϑ_2^8 ϑ_3^8 ϑ_4^8, the theta function of an even unimodular lattice can easily be written as a polynomial in these basic theta functions. Furthermore, by Jacobi's identity ϑ_3^4 = ϑ_2^4 + ϑ_4^4, we have E_4/ϑ_3^8 = 1 − ϑ_2^4ϑ_4^4/ϑ_3^8 and Δ/ϑ_3^{24} = (ϑ_2^4ϑ_4^4/ϑ_3^8)²/256, so the secrecy function can be written as a simple rational function of ϑ_2^4ϑ_4^4/ϑ_3^8:

(1) Θ_{Z^n}/Θ_Λ = ϑ_3^n ( E_4^{3m+k} + ∑_{j=1}^{m} b_j E_4^{3(m−j)+k} Δ^j )^{−1} = ( (1 − ϑ_2^4ϑ_4^4/ϑ_3^8)^{3m+k} + ∑_{j=1}^{m} (b_j/256^j) (1 − ϑ_2^4ϑ_4^4/ϑ_3^8)^{3(m−j)+k} (ϑ_2^4ϑ_4^4/ϑ_3^8)^{2j} )^{−1}.
Hence, finding the maximum of the secrecy function is equivalent to finding the minimum of the denominator of the previous expression over the range of ϑ_2^4ϑ_4^4/ϑ_3^8.
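The algebraic reduction behind (1) can be confirmed symbolically; the sketch below (not from the paper) encodes only Jacobi's identity ϑ_3^4 = ϑ_2^4 + ϑ_4^4, treating a = ϑ_2^4 and b = ϑ_4^4 as free positive symbols.

# Sketch: check that E4 and Delta reduce to polynomials in
# z = theta2^4 * theta4^4 / theta3^8, assuming only Jacobi's identity.
from sympy import symbols, simplify, Rational

a, b = symbols('a b', positive=True)    # a = theta2^4, b = theta4^4
t3_4 = a + b                            # Jacobi: theta3^4 = theta2^4 + theta4^4
z = a * b / t3_4**2

E4 = Rational(1, 2) * (a**2 + t3_4**2 + b**2)     # (theta2^8 + theta3^8 + theta4^8)/2
Delta = Rational(1, 256) * a**2 * b**2 * t3_4**2  # theta2^8 theta3^8 theta4^8 / 256

print(simplify(E4 / t3_4**2 - (1 - z)))           # 0: E4/theta3^8 = 1 - z
print(simplify(Delta / t3_4**6 - z**2 / 256))     # 0: Delta/theta3^24 = z^2/256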
Definition 2. An even unimodular lattice is called extremal if the norm of the shortest vector on the lattice is 2m + 2.
It is worth noticing that the definition of extremal has changed. Earlier (see e.g. [4]), extremal meant that the shortest vector was of norm n/8 + 1. With the earlier definition the highest-dimensional self-dual extremal lattice is in dimension 24 (see [4]), while with the current definition there is a self-dual extremal lattice in dimension 80 (for a construction, see [1]).
Let us now turn to general unimodular lattices, in particular odd ones. Write n = 8µ + ν, where n is the dimension of the lattice. Just as before, the theta function of any unimodular lattice (regardless of whether it is even or odd) can be written as a polynomial (see e.g. (3) in [4]):

Θ_Λ = ∑_{r=0}^{µ} a_r ϑ_3^{n−8r} Δ_8^r, where Δ_8 = (1/16) ϑ_2^4 ϑ_4^4.

Hence

(2) Θ_{Z^n}/Θ_Λ = ( ∑_{r=0}^{µ} (a_r/16^r) (ϑ_2^4ϑ_4^4/ϑ_3^8)^r )^{−1}.
Again, to determine the maximum of the function, it suffices to consider the denominator polynomial over the range of ϑ_2^4ϑ_4^4/ϑ_3^8. The following lemma is easy and follows from the basic properties of the theta functions; the proof is included for completeness.
Lemma 3. Let y > 0. The function

f(y) = ϑ_4^4(yi) ϑ_2^4(yi) / ϑ_3^8(yi)

has the symmetry f(y) = f(1/y).

Proof. The formulas (21) in [5] give the following:

ϑ_2(i/y) = ϑ_2(−1/(yi)) = √y ϑ_4(yi),
ϑ_3(i/y) = ϑ_3(−1/(yi)) = √y ϑ_3(yi),
ϑ_4(i/y) = ϑ_4(−1/(yi)) = √y ϑ_2(yi).

Now

f(y) = ϑ_2^4(yi) ϑ_4^4(yi) / ϑ_3^8(yi) = [y^{−2} ϑ_4^4(i/y)] [y^{−2} ϑ_2^4(i/y)] / [y^{−4} ϑ_3^8(i/y)] = f(1/y),

which was to be proved.
We may now formulate a lemma that is crucial in the proof of the main theorem.

Lemma 4. Let y > 0. The function ϑ_4^4(yi) ϑ_2^4(yi) / ϑ_3^8(yi) attains its maximum when y = 1. This maximum is 1/4.
Proof. To shorten the notation, write g = e^{−πy}. Notice that when y increases, g decreases and vice versa. Using the product representations for the functions ϑ_2(yi), ϑ_3(yi) and ϑ_4(yi), we obtain

ϑ_2(yi) ϑ_4(yi) / ϑ_3(yi)² = g^{1/4} ∏_{n=1}^{∞} (1 + g^{2n})(1 + g^{2n−2}) · ∏_{n=1}^{∞} (1 − g^{2n−1})² · ∏_{n=1}^{∞} (1 + g^{2n−1})^{−4},

since the factors ∏(1 − g^{2n}) cancel. Now

∏_{n=1}^{∞} (1 + g^{2n−2}) = 2 ∏_{n=1}^{∞} (1 + g^{2n})

and

∏_{n=1}^{∞} (1 + g^{2n−1})^{−4} = ∏_{n=1}^{∞} (1 + (−g)^n)^4 = ∏_{n=1}^{∞} (1 + g^{2n})^4 ∏_{n=1}^{∞} (1 − g^{2n−1})^4.

Combining all these pieces together, we obtain

ϑ_2(yi) ϑ_4(yi) / ϑ_3(yi)² = 2 g^{1/4} ∏_{n=1}^{∞} (1 + g^{2n})^6 ∏_{n=1}^{∞} (1 − g^{2n−1})^6 = 2 ( g^{1/24} ∏_{n=1}^{∞} (1 + (−g)^n) )^6.

Since the factor 2 is just a constant, it suffices to consider the function g^{1/24} ∏ (1 + (−g)^n). To find the maximum, let us first differentiate:

d/dg ( g^{1/24} ∏_{n=1}^{∞} (1 + (−g)^n) ) = g^{1/24} ∏_{n=1}^{∞} (1 + (−g)^n) · ( 1/(24g) + ∑_{n=1}^{∞} n(−1)^n g^{n−1} / (1 + (−g)^n) ).

Since g^{1/24} ∏ (1 + (−g)^n) is always positive, it suffices to analyze the factor 1/(24g) + ∑ n(−1)^n g^{n−1}/(1 + (−g)^n) to find the maxima. We wish to prove that the derivative has only one zero: if it has only one zero, then this zero has to be located at y = 1, because the original function has the symmetry y ↦ 1/y, and therefore a zero at y results in a zero at 1/y, which has to be a separate zero unless y = 1. To show that the derivative has only one zero, let us consider the second derivative, or actually the derivative of the factor above. Now

d/dg ( 1/(24g) + ∑_{n=1}^{∞} n(−1)^n g^{n−1}/(1 + (−g)^n) ) = −1/(24g²) + ∑_{n=1}^{∞} ( n(n−1)(−1)^n g^{n−2}/(1 + (−g)^n) − n² g^{2(n−1)}/(1 + (−g)^n)² ).

We wish to show that this is negative when g ∈ (0, 1). Let us first look at the term −1/(24g²) and the terms in the sum corresponding to the values n = 1 and n = 2. Their sum is

−1/(24g²) − 1/(1 − g)² + (2 − 2g²)/(1 + g²)² = (−73g^6 + 98g^5 − 51g^4 − 92g^3 + 21g^2 + 2g − 1) / (24g² (1 − g)² (1 + g²)²).

The denominator is positive when g ∈ (0, 1), and the numerator has two real roots, which are both negative (approximately g_1 ≈ −0.719566 and g_2 ≈ −0.196021). For positive values of g, the numerator is always negative. In particular, the numerator is negative when g ∈ (0, 1).

Let us now consider the terms with n > 2 and show that their sum is negative. Since the original function has the symmetry y ↦ 1/y, and we are only considering real values of the theta series, we may limit ourselves to the interval y ∈ [1, ∞), which means that g ∈ (0, e^{−π}]. Let us show that the sum of two consecutive terms, of which the first corresponds to an odd value of n and the second to the even value n + 1, is negative. The sum is

−n(n−1) g^{n−2}/(1 − g^n) − n² g^{2(n−1)}/(1 − g^n)² + (n+1)n g^{n−1}/(1 + g^{n+1}) − (n+1)² g^{2n}/(1 + g^{n+1})².

Let us estimate this and take out a common factor:

< g^{n−2} n ( −(n−1)/(1 − g^n) − n g^n/(1 − g^n)² + (n+1)g/(1 + g^{n+1}) − (n+1) g^{n+2}/(1 + g^{n+1})² )
= g^{n−2} n ( −(n − 1 + g^n)/(1 − g^n)² + (n+1)g/(1 + g^{n+1})² )
< g^{n−2} n ( −(n−1) − g^n + (n+1)g ) / (1 + g^{n+1})² < 0,

when (n − 1) + g^n > (n + 1)g. Since (n − 1) + g^n > n − 1 and (n + 1)g ≤ (n + 1)e^{−π} < (n + 1)/10 < n − 1 when n ≥ 2, this proves that the first derivative has only one zero. This zero is at y = 1. Since the second derivative is negative, this point is actually the maximum of the function. The maximum value is

ϑ_2^4(i) ϑ_4^4(i) / ϑ_3^8(i) = 1/4.
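The lemma can also be cross-checked numerically; the sketch below (illustrative only, not part of the proof) uses mpmath's Jacobi theta functions, where jtheta(n, 0, q) with nome q = e^{−πy} equals ϑ_n(yi).

# Sketch: numerical check of Lemma 4 with mpmath's theta functions.
from mpmath import mp, jtheta, exp, pi

mp.dps = 30

def f(y):
    q = exp(-pi * y)   # nome for tau = y*i
    return (jtheta(2, 0, q)**4 * jtheta(4, 0, q)**4) / jtheta(3, 0, q)**8

print(f(1))            # 0.25, the maximum claimed in the lemma
print(f(2), f(0.5))    # equal values, illustrating f(y) = f(1/y)
print(f(1) - f(1.01))  # positive: moving away from y = 1 decreases f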
Proof of the main theorem

Let us first treat the lattice E_8 as a warm-up case. We wish to show:

Theorem 5.

(3) Ξ_{E_8}(y) ≤ Ξ_{E_8}(1).

Proof. Notice that

Ξ_{E_8}(y) = ( (ϑ_2(yi)^8 + ϑ_3(yi)^8 + ϑ_4(yi)^8) / (2 ϑ_3(yi)^8) )^{−1}
= ( 1/2 + ( (ϑ_2^4(yi) + ϑ_4^4(yi))² − 2 ϑ_2^4(yi) ϑ_4^4(yi) ) / (2 ϑ_3^8(yi)) )^{−1}
= ( 1 − ϑ_2^4(yi) ϑ_4^4(yi) / ϑ_3^8(yi) )^{−1},

where the last equality uses Jacobi's identity ϑ_2^4 + ϑ_4^4 = ϑ_3^4. Therefore, to show that (3) holds, it suffices to show that

ϑ_2(yi)^4 ϑ_4(yi)^4 / ϑ_3(yi)^8 ≤ ϑ_2(i)^4 ϑ_4(i)^4 / ϑ_3(i)^8,

which is equivalent to showing that

ϑ_2(yi) ϑ_4(yi) / ϑ_3(yi)² ≤ ϑ_2(i) ϑ_4(i) / ϑ_3(i)²,

which we have already done in Lemma 4.
Let us now concentrate on the other cases. Again, write z = ϑ_2^4(yi) ϑ_4^4(yi) / ϑ_3^8(yi). The secrecy functions of all known extremal even unimodular lattices (these are known only in dimensions 8–80) can be written via (1) as reciprocals of polynomials in z. It suffices to show that the first derivatives of the denominators are negative, because then each denominator is decreasing, so the secrecy function is increasing and attains its maximum at z = 1/4. In each of the dimensions 16, 24, 32, 40, 48, 56 and 64 the derivative of the denominator is negative on the whole interval in question. In dimension 72 the derivative has real zeros z_0 ≈ 0.3002 and z_1 ≈ 0.5222, and is negative when z < z_0, which proves this case. It remains to consider dimension 80, where the derivative has real roots z_0 ≈ 0.2889, z_1 ≈ 0.4491 and z_2 ≈ 0.8620, and is negative when z < z_1. This proves the theorem.
Method for any given unimodular lattice

Let Λ be a unimodular lattice. Then its secrecy function can be written as the reciprocal of a polynomial P(z) in z = ϑ_2^4(yi) ϑ_4^4(yi) / ϑ_3^8(yi), as shown in (1) and (2). Now, according to Lemma 4, 0 ≤ z ≤ 1/4 (the lower bound does not follow from the lemma but from the fact that z is a square of a real number). Therefore, it suffices to consider the polynomial P(z) on the interval [0, 1/4]. The conjecture is true if and only if the polynomial attains its smallest value on the interval at 1/4. Investigating the behaviour of a given polynomial to show whether one point is its minimum on a short interval is a very straightforward operation.
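A minimal sketch of such a check (the E_8 denominator P(z) = 1 − z from (1) serves as a hypothetical input; the coefficients of any other unimodular lattice could be substituted):

# Sketch: verify that the denominator polynomial attains its minimum
# on [0, 1/4] at z = 1/4, which is equivalent to the conjecture.
from sympy import symbols, Rational, Interval
from sympy.calculus.util import minimum

z = symbols('z')
P = 1 - z                                  # denominator polynomial for E8 via (1)

dom = Interval(0, Rational(1, 4))
m = minimum(P, z, dom)                     # global minimum of P on [0, 1/4]
print(m, m == P.subs(z, Rational(1, 4)))   # 3/4 True -> conjecture holds for E8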
References

[1] Christine Bachoc and Gabriele Nebe, Extremal lattices of minimum 8 related to the Mathieu group M22. J. Reine Angew. Math. 494 (1998), 155-171. Dedicated to Martin Kneser on the occasion of his 70th birthday.
[2] J.-C. Belfiore and F. E. Oggier, Secrecy gain: A wiretap lattice code design. In: ISITA, 2010, 174-178.
[3] J.-C. Belfiore and P. Solé, Unimodular lattices for the Gaussian wiretap channel. CoRR, abs/1007.0449, 2010.
[4] J. H. Conway, A. M. Odlyzko and N. J. A. Sloane, Extremal self-dual lattices exist only in dimensions 1 to 8, 12, 14, 15, 23, and 24. Mathematika 25(1) (1978), 36-43.
[5] J. H. Conway and N. J. A. Sloane, Sphere packings, lattices and groups. Grundlehren der Mathematischen Wissenschaften 290, Springer-Verlag, New York, 1988. With contributions by E. Bannai, J. Leech, S. P. Norton, A. M. Odlyzko, R. A. Parker, L. Queen and B. B. Venkov.
[6] F. Oggier, P. Solé and J.-C. Belfiore, Lattice codes for the wiretap Gaussian channel: Construction and analysis.
[7] Elias M. Stein and Rami Shakarchi, Complex analysis. Princeton Lectures in Analysis II, Princeton University Press, Princeton, NJ, 2003.

Department of Mathematics, University of Helsinki
Sensitivity of discrete symmetry tests in the positronium system with the J-PET detector

Aleksander Gajos

Faculty of Physics, Astronomy and Applied Computer Science, Jagiellonian University, S. Łojasiewicza 11, 30-348 Kraków, Poland

Version June 2, 2020, submitted to Symmetry (arXiv:2006.01012, doi:10.3390/sym12081268)

Keywords: discrete symmetry; CPT; positronium
Abstract. Study of certain angular correlations in the three-photon annihilations of the triplet state of positronium, the electron-positron bound state, may be used as a probe of potential CP- and CPT-violating effects in the leptonic sector. We present the perspectives of CP and CPT tests using this process recorded with a novel detection system for photons in the positron annihilation energy range, the Jagiellonian PET (J-PET). We demonstrate the capability of this system to register three-photon annihilations with an unprecedented range of kinematical configurations and to measure the CPT-odd correlation between positronium spin and annihilation plane orientation with a precision improved by at least an order of magnitude with respect to present results. We also discuss the means to control and reduce detector asymmetries in order to allow J-PET to set the first measurement of the correlation between positronium spin and the momentum of the most energetic annihilation photon, which has never been studied to date.
Introduction
The notion of searching for violations of fundamental discrete symmetries in purely leptonic systems and in electromagnetic interactions, although not new, has remained outside the mainstream of symmetry tests in physics for a few decades after the first violation was discovered. While the pioneering discoveries naturally directed attention towards weak interactions [1,2], other violation mechanisms should not be ruled out precipitately.
A viable purely leptonic system for tests of discrete symmetries is constituted by positronium exotic atoms. As a bound state of an electron and a positron, positronium is the lightest matter-antimatter system and at the same time an eigenstate of the C and P operations, making it an ideal candidate for searches for symmetry-violating effects [3]. This potential was recognized already in 1967 by Mills and Berko, who performed a search for annihilations of the C-even positronium singlet state into a C-odd three-photon final state, concluded with a null result [4].
The field of discrete symmetry studies in the lepton sector saw little activity until Bernreuther et al. pointed out that violations of CP and CPT could be manifested by certain non-vanishing angular correlations in the decays of positronium atoms, if the corresponding operators constructed from observables available in the decay are odd under a given symmetry transformation [5]. Several implementations of tests based on such angular correlations followed, with the best measurements to date yielding results consistent with conservation of both CP and CPT with a precision at the level of 10^{−3} [6,7]. Notably, the authors of the most recent CPT test, performed using the Gammasphere array of germanium detectors, also extended the searches for C violation by looking for higher-order C-prohibited annihilations of positronium [8]. Other prohibited positronium decays were studied to test lowest-order QED calculations [9,10].
Since the discovery of neutrino oscillations, searches for leptonic CP violation have strongly concentrated on neutrino physics [11]. Although other tests were attempted, e.g. by searching for the electric dipole moment of the τ lepton [12], it is indeed the long-baseline neutrino oscillation experiments which first showed hints of CP violation at the 3σ level [13-15].
The interest in positronium as a potential probe of CPT violation has been recently revived by the postulation of possible effects of Lorentz invariance violation observable with positronium in the framework of the Standard Model Extension (SME). SME is a general realistic effective field theory of Lorentz violation, which extends systems' Lagrangians to include all effectively possible Lorentz-violating terms. The inherent relation of Lorentz and CPT invariance allows for defining searches for violations of the latter in terms of measurements of SME parameters. A number of possible experiments based on hyperfine spectroscopy of positronium have been postulated using both minimal SME [16] and non-minimal SME scenarios [17].
The J-PET collaboration strives to explore an experimental programme complementary to the SME-motivated spectroscopic studies. Exploiting the potential of a novel powerful detector of photons in the positron annihilation energy range, we aim at extending the measurements of angular correlations in the decays of the positronium triplet state, which are sensitive to effects of violation of fundamental symmetries [3]. In this work, we present the scope of experimental CP and CPT tests available with the J-PET detector, based on large-acceptance exclusive detection of ortho-positronium annihilations and an unconventional scheme of positronium spin orientation estimation on a single-event basis.
The J-PET detector
J-PET was conceived as the first Positron Emission Tomography (PET) scanner based on plastic scintillators [18-21]. While actively exploited in medical imaging research towards constructing a cost-effective whole-body PET scanner [20,22,23] and devising new imaging modalities such as spatially-resolved determination of properties of positronium atoms produced during a PET scan [24-26], J-PET also constitutes a robust detector of photons in the sub-MeV range, well suited for studies of phenomena such as positronium annihilation and entanglement of photons in the field of fundamental research [27-29].
The core of the detector is constituted by 192 photon detection modules sparsely arranged in three concentric layers along the longitudinal axis of the detector, as presented in the left panel of Figure 1. Each module consists of an EJ-230 plastic scintillator strip of 50 cm length and 7×19 mm 2 cross-section, both ends of which are optically coupled to Hamamatsu R9800 photomultiplier tubes.
Interactions of photons in the plastic scintillators are recorded through their Compton scattering resulting in an energy deposition depending on the scattering angle and emission of scintillation light recorded by the two photomultipliers. Time of interaction and its position along the strip are determined using time difference between light recording at the two ends of the strip. Lack of registration of the full energy peak in J-PET detection modules is compensated by an excellent interaction time resolution at the level of 100 ps [18] resulting from fast front-end electronics [30] and short decay times of plastic scintillators. Additionally, the latter allows for pileup-free measurements with high positron source activities of 10 MBq and more.
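A minimal sketch of this two-ended readout reconstruction may look as follows; the effective light propagation speed in the strip is an assumed, illustrative value, not the actual J-PET calibration constant.

```python
V_EFF = 12.6         # cm/ns, ASSUMED effective light propagation speed in the strip
STRIP_LENGTH = 50.0  # cm, EJ-230 strip length quoted in the text

def hit_from_two_ends(t_a: float, t_b: float):
    """Reconstruct (z, t) of a gamma interaction in a strip from the light
    arrival times t_a, t_b (ns) at the photomultipliers on the two ends,
    with PMT "a" placed at z = +STRIP_LENGTH/2 and PMT "b" at the opposite end."""
    z = 0.5 * V_EFF * (t_b - t_a)                       # position along the strip
    t = 0.5 * (t_a + t_b) - 0.5 * STRIP_LENGTH / V_EFF  # interaction time
    return z, t
```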
Electric signals from the photomultipliers are sampled in the voltage domain at four configurable voltage thresholds by scalable front-end electronics developed for J-PET coupled with a data acquisition (DAQ) system based on FPGA chips and the TRB3 platform [30][31][32]. The DAQ of J-PET is reconfigurable and opens the possibilities of real-time data reconstruction directly on the FPGA systems [33].
Energy deposited by a γ quantum interacting in a scintillator strip is measured through the total charge of the electric signals from the attached photomultipliers, estimated by the signals' time-over-threshold sampled at the four predefined voltages. Thanks to the ability of J-PET to record Compton scattering angles in multiple scattering events, deposited energy corresponding unanimously to a given angle is related to the recorded time over threshold, allowing for calibration of the deposited energy measurement [34,35].
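The Compton kinematics underlying this calibration can be illustrated with a short sketch: for a known scattering angle, the deposited energy follows from the standard Compton formula (a textbook relation, not J-PET code).

```python
import math

M_E_C2 = 511.0  # keV, electron rest mass energy

def deposited_energy(e_gamma: float, theta: float) -> float:
    """Energy (keV) deposited by a photon of energy e_gamma (keV) that
    Compton-scatters at angle theta (rad): E_dep = E - E'."""
    e_scattered = e_gamma / (1.0 + (e_gamma / M_E_C2) * (1.0 - math.cos(theta)))
    return e_gamma - e_scattered

# Example: a 511 keV annihilation photon scattered backwards (theta = pi)
# deposits the Compton-edge energy of about 341 keV.
print(deposited_energy(511.0, math.pi))
```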
The scope of applications of the detector, ranging from medical imaging development to fundamental studies of positronium annihilations (including modes not observed to date such as o-Ps→ 4γ [3]), requires the DAQ system to impose a minimal bias on the spectrum of recorded events [36]. This is achieved by recording data in a trigger-less mode [32,37] followed by filtering and reconstruction with a dedicated analysis software framework [38-40]. While an unusual choice due to the large resulting data volume, only a complete foregoing of the trigger allows for full control over systematic effects in searches for small effects such as rare decays and symmetry violations, which can easily be mimicked by a non-uniform response of the detector and DAQ elements, e.g. a trigger bias imposed on the data.
Currently, J-PET is being extended with an additional layer of detection modules organized in a dense layout placed within the current setup as displayed schematically in the right panel of Figure 1. These modules are intended to enhance the angular acceptance of the detector as well as to provide an improved time resolution thanks to an entirely digital readout using matrices of silicon photomultipliers [41]. The impact of this extension of the detection setup on the discrete symmetry tests' sensitivity is discussed in Section 4.
Methods of searching for discrete symmetry violations with ortho-positronium in J-PET
The ability to record photons in an energy range corresponding to electron-positron annihilations as well as below it makes J-PET a suitable device for studying decays of the lightest purely leptonic bound system, the positronium exotic atom. Positronium, the bound state of electron and positron, may be formed as a singlet or triplet ground state, referred to as para-positronium (p-Ps) and ortho-positronium (o-Ps) respectively. Being an antisymmetric eigenstate of charge conjugation, the latter may only annihilate into an odd number of photons due to the conservation of the C symmetry, tested to the level of 10 −6 for positronium [8,9]. In practice, ortho-positronium predominantly annihilates into a three-photon final state with the next allowed final state (5γ) suppressed by a factor of α 2 .
While the positronium physics appears to be well described by electromagnetic interactions where CP violation is not expected, any observation of CP noninvariance in this system would be an indication of new physics. Motivation for such searches is further encouraged by the recent neutrino oscillation measurements hinting at leptonic CP violation at the 3σ level [13-15], for which no confirmation has been provided by charged lepton systems to date. As pointed out by Bernreuther and Nachtmann [5], the three-photon annihilations of the triplet state of positronium may provide insight into CP and even CPT-violating effects through certain angular correlations between o-Ps spin and momenta of annihilation photons.

Table 1. Angular correlation operators constructed with observables of ortho-positronium annihilations into three photons: positronium spin S and momenta of the annihilation photons ordered by their magnitude: | k 1 | > | k 2 | > | k 3 |. Each of these operators is either even (+) or odd (−) with respect to the basic symmetry transformations and their combinations as marked in the table.

no.  operator                           C    P    T    CP   CPT
1    S · k 1                            +    −    +    −    −
2    S · ( k 1 × k 2 )                  +    +    −    +    −
3    ( S · k 1 )( S · ( k 1 × k 2 ))    +    −    −    −    +

Table 1 presents three angular correlations measurable in the ortho-positronium three-photon annihilations. The correlations are represented as operators whose properties under the C, P and T transformations and their combinations follow from the respective behaviour of the positronium spin ( S) and the momentum vectors of the final state photons ( k i for i = 1, 2, 3, where the photons are labeled according to descending energy, i.e. | k 1 | > | k 2 | > | k 3 |) under these operations. In case of operators which are antisymmetric under a given transformation (marked with "−" in the table), the expectation value of the operator must vanish if the respective transformation constitutes a good symmetry. Consequently, observation of a non-zero expectation value of such an operator would be an indication of violation of a given discrete symmetry [5,36]. The notion of testing discrete symmetries in the annihilations of ortho-positronium is therefore based on experimental determination of the expectation values of the angular correlation operators listed in Table 1. Notably, only one experiment conducted to date attempted to probe a continuous distribution of such expectation values [7], whereas all previous measurements were constrained to determination of an up-down asymmetry of the operators, a special case with significantly limited sensitivity [6,42,43].
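For illustration, the sketch below evaluates the three correlations of Table 1 for a single hypothetical event, given an estimated spin direction and three photon momentum vectors; all inputs are placeholders.

```python
import numpy as np

def correlation_operators(spin, photons):
    """Compute the three angular correlations of Table 1 for one event.
    spin: estimated o-Ps spin direction (3-vector);
    photons: three photon momentum 3-vectors (hypothetical event data)."""
    # order the photons by descending momentum magnitude: |k1| > |k2| > |k3|
    k1, k2, k3 = sorted(photons, key=np.linalg.norm, reverse=True)
    op1 = np.dot(spin, k1)                # S . k1
    op2 = np.dot(spin, np.cross(k1, k2))  # S . (k1 x k2)
    op3 = op1 * op2                       # (S . k1)(S . (k1 x k2))
    return op1, op2, op3
```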
The J-PET experiment aims at precise measurements of two out of the three presented angular correlations, probing their full geometrically-allowed domains for the first time.
Estimation of positronium spin
An essential component of the angular correlations in positronium decays considered in this work is the knowledge of the positronium spin quantization axis. Former measurements either used a polarized positronium beam [43], an external magnetic field [6,42] or relied on the intrinsic linear polarization of positrons emitted in β + decay [7]. The two former approaches exclusively allow for producing a degree of tensor polarization in the positronium sample, indispensable for conducting a test of the CP symmetry with operator no. 3 from Table 1. However, setups required to convey the beam to the annihilation recording device and magnets providing a sufficient B field effectively prevent recording of the annihilation photons with a large angular acceptance.
Therefore, J-PET builds on the o-Ps polarization control scheme proposed in the best measurement of the S · ( k 1 × k 2 ) operator to date [7], in which longitudinally-polarized positrons from a point-like β + source of 68 Ge or 22 Na are allowed to form positronia only in a limited volume which defines a range of allowed e + spin quantization axes. As the positron polarization statistically translates to the formed ortho-positronium in 2/3 of cases, this allows for obtaining an estimate of the o-Ps spin direction with a finite uncertainty determined by the β + emission average energy and the applied geometry of the positronium formation medium. In the original implementation of this idea, this uncertainty accounted for a reduction of statistical polarization by 0.686, resulting in P≈ 0.4 in the whole experiment [7]. On the other hand, it evaded the need for an acceptance-limiting hardware setup, which allowed for the first measurement of a true distribution of an angular correlation in o-Ps annihilation, although limited in resolution by the coarse detector granularity.
In the measurements with J-PET proposed in this work, we extend the idea of estimating ortho-positronium spin without externally-induced polarization. While it limits the accessible symmetry violating operators to positions 1. and 2. from Table 1 as measurement of the correlation ( S · k 1 )( S · ( k 1 × k 2 )) is only possible in case of a tensor-polarized positronium sample [44], this allows J-PET to observe an unprecedented spectrum of angular configurations of o-Ps decays and thus the full spectra of correlation operators 1. and 2.
To this end, we use positrons emitted from a point-like β + source which are characterized by linear polarization along their direction of emission to a degree of P e + = υ/c where υ is the positron velocity and c is the speed of light. The positrons are allowed to thermalize in a layer of porous medium enhancing positronium formation, which is spatially separated from the β + source by the volume of vacuum chamber ensuring free propagation of the positrons.
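The polarization degree P e + = υ/c follows from standard relativistic kinematics; a small sketch (the example kinetic energy is an assumed, illustrative value):

```python
import math

M_E_C2 = 0.511  # MeV, electron rest mass energy

def beta_from_kinetic_energy(t_mev: float) -> float:
    """P = v/c of a positron with kinetic energy t_mev (MeV), from
    gamma = 1 + T/(m c^2) and beta = sqrt(1 - 1/gamma^2)."""
    gamma = 1.0 + t_mev / M_E_C2
    return math.sqrt(1.0 - 1.0 / gamma**2)

# Example with an ASSUMED kinetic energy of 0.216 MeV (roughly the mean
# beta+ energy of 22Na; the exact value depends on the source):
print(beta_from_kinetic_energy(0.216))  # ~0.71
```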
In contrast to the previous measurement [7], we do not assume the positronium production region to be point-like but use the information on the locations and times of the three photons' interactions in the detector to reconstruct the o-Ps→ 3γ annihilation point with a trilateration-based approach [45]. In consequence, we can estimate the direction of e + spin separately for each event, thus reducing the related decrease in statistical o-Ps polarization from 0.686 to about 0.98 determined by the spatial resolution of the o-Ps→ 3γ reconstruction.
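A minimal sketch of the resulting per-event spin estimate, assuming a reconstructed annihilation point and a known source position (both hypothetical inputs):

```python
import numpy as np

def spin_axis_estimate(annihilation_point, source_position=(0.0, 0.0, 0.0)):
    """Per-event estimate of the o-Ps spin quantization axis: the unit vector
    pointing from the beta+ source to the reconstructed o-Ps -> 3 gamma
    annihilation point (the positron, and statistically the formed o-Ps,
    is polarized along its emission direction)."""
    d = np.asarray(annihilation_point, dtype=float) - np.asarray(source_position, dtype=float)
    return d / np.linalg.norm(d)
```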
In the currently performed measurements, J-PET implements the aforementioned spin estimation scheme with a cylindrical positronium production chamber mounted axially in the center and extending for the whole length of the detector. A 10 MBq β + source of 22 Na is installed in the center of the chamber, while its walls are coated with 3 mm of R60G porous silica, allowing practically all positrons reaching the chamber walls to thermalize and interact within this layer [46]. The chamber walls are made of 3 mm polycarbonate so as to minimize absorption and scattering of annihilation photons. The chamber mounted inside the J-PET detector is presented in the left panel of Figure 1. The right panel of the figure illustrates a future enhancement of the chamber geometry, i.e. replacement of the cylinder with a spherical vacuum chamber (with R=10 cm) which allows for a more efficient utilization of positrons from the β + source for positronium formation, increases o-Ps→ 3γ registration efficiency for extreme values of certain correlations and reduces spurious asymmetries as demonstrated in the next Sections.
The correlation between o-Ps spin and annihilation plane
The 2 nd operator from Table 1 is sensitive to potential violations of CPT invariance and has been previously studied in two experiments, with the most precise result for the CPT-violation parameter C CPT of (2.6 ± 3.1) × 10 −3 [7,43]. In fact, a similarly-defined triple correlation has been studied in a search for T violation in Z 0 decays, using the Z 0 spin and the momenta of the most energetic produced jets [47]. As can be seen from Table 1, the S · k 1 correlation is also odd under the CPT transformation. The choice of the ostensibly more complicated operator in the previous measurements was motivated by the fact that S · ( k 1 × k 2 ) contains a simple correlation between the o-Ps spin and the positronium annihilation plane spanned by the momentum vectors of the emitted photons. The definition using the two most energetic photons' momenta is merely an experimentally-useful convention and does not introduce a significant correlation between detection efficiency and photon energy, as is the case for the S · k 1 operator, as argued later on in this work. In order to avoid any dependence on the selected photon energies, it is convenient to normalize this operator to the magnitude of the cross product, leading to the following definition:
O CPT = Ŝ · ( k 1 × k 2 )/| k 1 × k 2 |,    (1)
which expresses the pure angular correlation between o-Ps spin and its decay plane. The best measurement to date was simultaneously the first measurement going beyond the up-down asymmetry of the operator and determining its continuous distribution. However, due to the geometry of the Gammasphere detector used therein and the positronium production setup, the measurement was only sensitive to the values of this operator in the range of about (−0.4, 0.4) out of the allowed region of [-1,1].
Due to the high granularity of its detection modules in the transverse plane and the continuous interaction position measurement in the longitudinal coordinate, in conjunction with the spin estimation scheme which does not impose a distinguished positronium spin quantization axis with respect to the detector, the J-PET setup is able to record a substantially broader range of kinematic configurations of o-Ps→ 3γ events.
To demonstrate the sensitivity of J-PET to the distribution of the O CPT operator, a toy Monte Carlo simulation of ortho-positronium annihilations in the experimental setup described above was prepared, featuring allowed angular and energy distributions of photons from o-Ps→ 3γ annihilations expected from QED [48], the geometry of the positronium production setup as well as geometric arrangement of the detection modules. Compton interactions of the annihilation photons were simulated according to the Klein-Nishina formula and a photon registration threshold was set on the simulated deposition of energy by the scattered photon in a scintillator strip.
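Two of the simulation ingredients named above can be sketched as follows: accept-reject sampling of the Compton scattering angle from the Klein-Nishina angular distribution, and the registration threshold on the deposited energy. This is a simplified stand-in under the stated assumptions, not the code of the actual toy MC.

```python
import random

M_E_C2 = 511.0  # keV

def sample_compton_cos_theta(e_gamma: float) -> float:
    """Sample cos(theta) of Compton scattering from the Klein-Nishina
    angular distribution for a photon of energy e_gamma (keV)."""
    eps = e_gamma / M_E_C2
    def kn(cos_t):  # Klein-Nishina shape, with r = E'/E
        r = 1.0 / (1.0 + eps * (1.0 - cos_t))
        return r * r * (r + 1.0 / r - (1.0 - cos_t * cos_t))
    kn_max = kn(1.0)  # the distribution is maximal at forward scattering
    while True:
        cos_t = random.uniform(-1.0, 1.0)
        if random.uniform(0.0, kn_max) < kn(cos_t):
            return cos_t

def registered(e_gamma: float, threshold_kev: float) -> bool:
    """One Compton interaction followed by the energy deposition threshold."""
    cos_t = sample_compton_cos_theta(e_gamma)
    e_dep = e_gamma * (1.0 - 1.0 / (1.0 + (e_gamma / M_E_C2) * (1.0 - cos_t)))
    return e_dep > threshold_kev
```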
The distribution of the O CPT operator, i.e. the cosine of the angle between the normal to the annihilation plane and the positronium spin direction, was simulated either to be uniform, as expected in absence of CPT-violating effects [5], or with an assumed level of asymmetry quantified by a C CPT coefficient. Following the approach used in Ref. [7], the simplest asymmetric form of the distribution as a function of cos θ was used, where the total probability distribution contains a term linear in cos θ whose contribution is given by C CPT ∈ [0; 1]. A simulation of 10 13 positrons emitted from a β + source without CPT-violating effects in ortho-positronium annihilations results in the distribution of the O CPT operator presented as the hatched blue histogram in Fig. 2. The red histogram in the same figure corresponds to the distribution obtained if maximal violation of the symmetry (C CPT = 1) is assumed. While the existing measurements exclude values of C CPT beyond the 10 −3 level, the exaggeration in Fig. 2 was used to visualize the effects detectable through determination of the distribution of this angular correlation. It is visible that while the detection efficiency peaks for values corresponding to the decay plane normal being close to perpendicular to the spin quantization axis, it does not drop to zero close to the extreme values of the correlation, in contrast to the previous measurement [7].
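The asymmetric form used here, p(cos θ) ∝ 1 + C CPT cos θ, can be sampled e.g. with simple accept-reject; a minimal sketch (not the generator used for the presented results):

```python
import random

def sample_cos_theta(c_cpt: float) -> float:
    """Sample cos(theta) from p(x) = (1 + C*x)/2 on [-1, 1], the simplest
    asymmetric distribution with a linear term of relative weight C in [0, 1]."""
    while True:
        x = random.uniform(-1.0, 1.0)
        if random.uniform(0.0, 1.0 + abs(c_cpt)) < 1.0 + c_cpt * x:
            return x
```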
The factors determining the detection efficiency of o-Ps→ 3γ events in J-PET comprise (i) the probability of interaction of an annihilation photon in a plastic scintillator strip, which on average amounts to about 20%, (ii) the geometrical acceptance resulting from the sparse arrangement of detection modules and their length; modules cover about 0.21 of the solid angle around the center of the detector, (iii) the energy deposition threshold above which a Compton-scattered photon is registered by a detection module. Furthermore, the total efficiency of observing ortho-positronium annihilations as a function of the β + source activity also involves (iv) the fraction of positrons forming o-Ps in the region of the annihilation chamber where the three emitted photons can be recorded simultaneously. Fig. 3 (left) presents the total o-Ps→ 3γ registration efficiency as a function of the angular correlation defined in Eq. 1, obtained in a toy MC simulation of 10 13 positrons from a 22 Na source in the setup described in Sec. 3.1. The efficiency is presented for two geometries of the annihilation chamber: cylindrical (presently used) and spherical (in preparation). In each case, three values of the energy deposition threshold for a single annihilation photon were considered: 40 keV, 100 keV and 140 keV. Results of the simulation show that lowering this photon registration threshold is vital for the total efficiency, and each increase of the threshold by about 50 keV results in a reduction of the 3γ registration efficiency by an order of magnitude.
Presently, the detection threshold of the J-PET photomultiplier tubes and signal sampling electronics achievable without entering the noise level is estimated to be about 80 keV.
The MC-based evaluation of detection efficiency also demonstrates the enhancement expected with the spherically-shaped positronium production vacuum chamber instead of the currently used cylindrical one. The drop of efficiency close to the extreme values of O CPT visible in Fig. 2 will be substantially reduced with the new source geometry, resulting in an efficiency more flat across the whole operator spectrum.
A subtle asymmetry in the distribution of a given angular correlation X may be detected by evaluation of the following figure of merit, accounting for the asymmetry between event counts N in subsequent intervals of positive and negative values of a given angular correlation operator O X :
A(|O X |) = [N(|O X |) − N(−|O X |)] / [N(|O X |) + N(−|O X |)].    (2)
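A minimal sketch of how Eq. 2 could be evaluated from a sample of measured operator values (the binning is an illustrative choice):

```python
import numpy as np

def asymmetry(values, n_bins=10):
    """A(|O_X|) of Eq. 2 from operator values in [-1, 1]: compare counts
    in mirrored bins of positive and negative operator values."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    v = np.asarray(values, dtype=float)
    n_pos, _ = np.histogram(v[v >= 0], bins=edges)
    n_neg, _ = np.histogram(-v[v < 0], bins=edges)
    total = np.maximum(n_pos + n_neg, 1)  # guard against empty bins
    return (n_pos - n_neg) / total
```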
Subsequently, a comparison of the A(|O CPT |) distribution with the one obtained for the MC-simulated distribution assuming maximal violation (C CPT = 1) would allow for extraction of the CPT symmetry violation coefficient in a similar manner as done e.g. in Ref. [7]. For such a procedure, a good understanding of the detector efficiency as a function of the value of the measured operator is crucial in order to avoid artificial asymmetries arising from efficiency nonuniformities due to the setup geometry. In J-PET, thanks to the large granularity of detection modules and the continuous measurement of interaction positions along them, the expected shape of such efficiency is smooth, as demonstrated in the left panel of Fig. 3. While this is already an improvement with respect to the previous measurement of the O CPT operator, where the coarse granularity of the detectors constituting the Gammasphere array caused strong periodic fluctuations of efficiency [7], the impact of detector geometry on the measured asymmetry requires a careful treatment nonetheless. Figure 3 (right) shows examples of asymmetries of the O CPT operator defined as in Eq. 2, using the two positronium production chamber geometries considered in this work, for the cases of no asymmetry assumed in the MC simulations and of CPT violation at a level of 10% (C CPT = 0.1), exaggerated for better visibility. These results were obtained with a simulation of 10 13 positrons from a 22 Na source. It is visible that the S · ( k 1 × k 2 ) angular correlation is not sensitive to the geometry of the positronium annihilation region. Not only are the asymmetries detected using the cylindrical and spherical chambers in good agreement, but also the A(O CPT ) distribution obtained in absence of simulated CPT violation does not reveal signs of a false asymmetry in any of the cases.
These simulations confirm the robustness of the S · ( k 1 × k 2 ) angular correlation as an observable for discrete symmetry tests. While potentially sensitive to genuine effects of CPT violation, its definition allows many geometrical effects related to the measurement setup to cancel out. For this reason, this correlation has been favoured over the ostensibly simpler operator S · k 1 in past measurements, even though each of the previous experiments focusing on S · ( k 1 × k 2 ) was in principle capable of studying the S · k 1 correlation as well. Later on, we discuss the experimental differences between these two correlations.
The correlation between o-Ps spin and most energetic photon
As discussed in the previous Section, out of the two angular correlations sensitive to discrete symmetries' violation in absence of ortho-positronium tensor polarization (operators 1. and 2. in Table 1), the operator S · ( k 1 × k 2 ) has already been studied in several experiments. On the contrary, the 1 st operator, which is a simple projection of the most energetic photon momentum onto the direction of spin of the decaying ortho-positronium atom, has never been measured to date despite its sensitivity to both CP- and CPT-violating effects.
The reason lies in its simpler construction which makes its distribution prone to spurious effects and thus experimentally more challenging. While its usage as an observable of a CP and CPT test requires strict control of the impact of the measurement setup geometry on the observed asymmetry, here we argue that smooth efficiency curves offered by the J-PET detector in conjunction with detailed MC simulations may allow for the first measurement of the S · k 1 operator. Similarly as in Eq. 1, it is convenient to introduce normalization of the photon momentum into the definition of the angular correlation operator:
O CPT = S · k 1 /| k 1 |.    (3)
Figure 4 (left) presents the efficiency of J-PET for o-Ps→ 3γ events with a given value of O CPT, evaluated with the toy MC simulation in a similar manner as described in Section 3.2. A comparison with Fig. 3 (left) immediately reveals the challenge posed by the usage of this operator. In this case, the efficiency curves contain a modulation which is not symmetric as a function of O CPT. Moreover, this effect is magnified with an increasing value of the energy deposition threshold for γ detection. This energy dependence originates from the choice of the most energetic photon, which introduces a correlation with the kinematical configuration of a given o-Ps→ 3γ decay. This phenomenon is absent in case of S · ( k 1 × k 2 ) because the geometrical entity used therein (the orientation of the decay plane) is agnostic of the kinematics of a particular annihilation event and thus also of the energy-based choice of photons.
Successful use of the O CPT operator as a probe of CP and CPT violation therefore requires two conditions: (i) maintaining the energy deposition threshold as low as possible, and (ii) reducing the spurious asymmetries originating from the asymmetric and energy-dependent efficiency to a low and well-understood level.
The latter can be achieved by manipulation of the geometry of the positronium production medium. As displayed in the right panel of Fig. 4, although with both simulated setups a significant asymmetry of S · k 1 appears even in the case of no CPT violation assumed in the simulations (where a possible violation is introduced in a similar manner as described in Section 3.2), usage of the spherical vacuum chamber results in a simpler dependence of the false asymmetry on the value of O CPT, which is easier to parameterize. Additionally, two independent measurements with different chambers would allow for discrimination between the setup-specific false asymmetry and a possible genuine effect, as well as for extraction of the latter. It is important to stress that while the results presented in this work are based on a toy MC simulation, the actual experiments will be augmented with simulations of the full setup based on the Geant4 package, which are currently being commissioned.
Perspectives for J-PET sensitivity to the CPT violation effects
The J-PET setup featuring the cylindrical annihilation chamber is already in operation. If a conservative photon detection threshold of 100 keV is assumed, it can be estimated from the efficiency curve presented in the left panel of Fig. 3 that with a 10 MBq positron source J-PET can record about 8.5 × 10 4 o-Ps→ 3γ annihilations per day of measurement. The achievable sensitivity for the CPT violation parameter C CPT must include the analyzing power of the employed setup, which is dominated by the statistical polarization corresponding to the estimated o-Ps spin and with a 22 Na source amounts to about 0.4. Taking this factor into account, a statistical sensitivity at the unprecedented level of 10 −4 can be achieved with about three months of measurement.
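As a rough cross-check of this estimate, one may assume that the statistical uncertainty of an asymmetry parameter scales as 1/(P√N); this is a naive back-of-the-envelope scaling that reproduces the order of magnitude only, since the actual analysis fits the full operator distribution.

```python
import math

rate_per_day = 8.5e4    # recorded o-Ps -> 3 gamma events per day (100 keV threshold)
days = 90               # about three months of measurement
analyzing_power = 0.4   # effective o-Ps polarization with a 22Na source

n_events = rate_per_day * days
sigma_c = 1.0 / (analyzing_power * math.sqrt(n_events))
print(f"N = {n_events:.2e}, sigma(C_CPT) ~ {sigma_c:.1e}")  # approaching the 1e-4 level
```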
Although J-PET is well suited for extended periods of continuous measurement, further improvements of the efficiency of both positronium production and 3γ events detection are necessary in order to allow for reaching the sensitivity of CPT symmetry tests discussed in this work beyond the level of 10 −4 .
The two upgrades already being commissioned comprise the spherical vacuum chamber for positronium production and spin estimation, and a new layer of detection modules with a fully digital readout. The spherical chamber, in addition to the advantages discussed in Sections 3.2 and 3.3, is expected to increase the fraction of positrons from the β + source mounted at its centre that can be utilized for positronium formation by a factor of about 1.5 with respect to the cylindrical geometry. This is because the most sensitive region of the J-PET detector spans the central region along its Z axis, corresponding to |z| < 8 cm [36]. Outside of this volume, the registration probability for a 3γ event drops rapidly; therefore, in case of the cylindrical chamber, only positrons emitted from the β + source into a solid angle of about 2.2π have a chance to form positronium whose annihilation can be recorded. Therefore, only 55% of emitted positrons may produce recordable positronium. With the spherical geometry, over 80% of isotropically-emitted positrons will reach the porous medium in the most sensitive region of the detector.
The second upgrade is constituted by insertion of a new system of 312 plastic scintillator strips arranged in 24 densely-packed modules as the innermost layer of the detector as visualized in the right panel of Fig. 1. The new fully digital readout system based on silicon photomultipliers is expected to improve time resolution of γ interaction recording, crucial for the trilaterative reconstruction of o-Ps annihilations and thus for the event-by-event spin estimation resolution [45]. Moreover, presence of the additional detection layer will increase single photon registration probability by a factor of about 3, leading to a 27x enhancement of the total o-Ps→ 3γ recording efficiency.
The aforementioned upgrades account for an improvement of the statistics collectable in a unit time of measurement by about 40 with respect to the present setup. Therefore, with the future upgraded setup, J-PET is expected to reach the sensitivity to C CPT at the level of 10 −5 .
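A quick consistency check of this factor and of the projected sensitivity gain, using only the numbers quoted above (illustrative arithmetic only):

```python
import math

positron_gain = 1.5          # spherical vs. cylindrical chamber
three_gamma_gain = 3.0 ** 3  # 3x single-photon efficiency -> 27x for 3 photons
stat_gain = positron_gain * three_gamma_gain  # ~40

# With a statistical uncertainty scaling like 1/sqrt(N), a ~40-fold
# statistics gain improves the sensitivity by sqrt(40) ~ 6.4, i.e. from
# the 1e-4 level toward the 1e-5 level quoted in the text.
print(stat_gain, math.sqrt(stat_gain))
```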
It is worth mentioning that the set of discrete symmetry tests possible with J-PET presented in this work is not exhaustive. A second class of angular correlation operators may be defined using the momenta of annihilation photons and their electromagnetic polarization rather than the positronium spin direction [3]. Notably, none of these correlations has been measured to date due to the incapability of the previous positronium experiments to measure photon polarization. Due to its detection principle based on Compton interaction, J-PET is the first detector able to provide a measurement of such angular correlations involving photon electromagnetic polarization [3,28], and experiments towards this end are already ongoing [49,50].
Conclusions
The J-PET detector is capable of performing tests of the CP and CPT symmetries by determination of the distributions of angular correlations between the ortho-positronium spin and annihilation photons in the o-Ps→ 3γ process. Preliminary MC simulations demonstrate that the current experimental setup may reach a sensitivity of 10 −4 for the CPT violation parameter in the measurement using the S · ( k 1 × k 2 ) operator, as well as set the first measurement of the S · k 1 operator, thanks to the smooth response of the detector as a function of the angular correlations and good control over spurious asymmetries. Future upgrades of the detector and the positronium formation chamber are expected to provide an about 40-fold increase of statistics in the same measurement time, allowing the discrete symmetry tests with J-PET to reach a sensitivity of 10 −5 .
Figure 1. Left: View of the J-PET detector with a cylindrical vacuum chamber for positronium production and annihilation mounted in its centre. Right: Schematic view of two future extensions of the experimental setup: (i) the current three layers of sparsely-arranged scintillator strips (blue) will be complemented by a layer of 24 modules containing 13 densely-packed scintillator strips each (red); (ii) the cylindrical annihilation chamber will be replaced by a spherical one (gray).
Figure 2. Distributions of the O CPT operator resulting from toy MC simulations of 10 13 positron interactions in J-PET with the cylindrical positronium production chamber in case of no CPT violation assumed in the simulation (hatched blue histogram) and extreme violation (hollow red histogram).
Figure 3. Left: Total efficiency of registration of o-Ps→ 3γ events in J-PET as a function of the S · ( k 1 × k 2 ) angular correlation obtained in a MC simulation. The curves present efficiencies in case of two different geometries of the positronium production chamber: cylindrical and spherical, as well as for three values of the energy deposition threshold for single γ detection. Right: Asymmetry of the S · ( k 1 × k 2 ) distribution for the two chamber geometries in cases of no CPT violation and an exaggerated violation at the 10% level assumed in the simulations.
Figure 4. Left: Total efficiency of registration of o-Ps→ 3γ events in J-PET as a function of the S · k 1 angular correlation obtained in a MC simulation. The curves present efficiencies in case of two different geometries of the positronium production chamber: cylindrical and spherical, as well as for three values of the energy deposition threshold for single γ detection. Right: Asymmetry of the S · k 1 distribution for the two chamber geometries in cases of no CPT violation and an exaggerated violation at the 10% level assumed in the simulations.
Funding: This research was funded by The Polish National Center for Research and Development through grant INNOTECH-K1/IN1/64/159174/NCBR/12, the Foundation for Polish Science through the MPD and TEAM POIR.04.04.00-00-4204/17 programmes, the National Science Centre of Poland through grants no. 2016/21/B/ST2/01222 and 2017/25/N/NZ1/00861, and the Ministry for Science and Higher Education through grants no. 6673/IA/SP/2016, 7150/E-338/SPUB/2017/1, 7150/E-338/M/2017 and 7150/E-338/M/2018.

Conflicts of Interest: The authors declare no conflict of interest.

Sample Availability: Monte Carlo simulations underlying the presented studies as well as data of ortho-positronium annihilations recorded with J-PET can be made available upon request to the authors. The software used for data analysis is available at http://github.com/JPETTomography/j-pet-framework.git.

© 2020 by the author. Submitted to Symmetry for possible open access publication under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
References

1. Wu, C.S.; Ambler, E.; Hayward, R.W.; Hoppes, D.D.; Hudson, R.P. Experimental Test of Parity Conservation in Beta Decay. Phys. Rev. 1957, 105, 1413-1415. doi:10.1103/PhysRev.105.1413.
2. Christenson, J.H.; Cronin, J.W.; Fitch, V.L.; Turlay, R. Evidence for the 2π Decay of the K₂⁰ Meson. Phys. Rev. Lett. 1964, 13, 138-140. doi:10.1103/PhysRevLett.13.138.
3. Moskal, P.; Alfs, D.; Bednarski, T.; Białas, P.; Czerwiński, E.; Curceanu, C.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; et al. Potential of the J-PET detector for studies of discrete symmetries in decays of positronium atom - a purely leptonic system. Acta Phys. Polon. B 2016, 47, 509. doi:10.5506/APhysPolB.47.509.
4. Mills, A.P.; Berko, S. Search for C Nonconservation in Electron-Positron Annihilation. Phys. Rev. Lett. 1967, 18, 420-425. doi:10.1103/PhysRevLett.18.420.
5. Bernreuther, W.; Low, U.; Ma, J.P.; Nachtmann, O. How to Test CP, T and CPT Invariance in the Three Photon Decay of Polarized s Wave Triplet Positronium. Z. Phys. C 1988, 41, 143. doi:10.1007/BF01412589.
6. Yamazaki, T.; Namba, T.; Asai, S.; Kobayashi, T. Search for CP Violation in Positronium Decay. Phys. Rev. Lett. 2010, 104, 083401. doi:10.1103/PhysRevLett.104.083401.
7. Vetter, P.A.; Freedman, S.J. Search for CPT-Odd Decays of Positronium. Phys. Rev. Lett. 2003, 91, 263401. doi:10.1103/PhysRevLett.91.263401.
8. Vetter, P.A.; Freedman, S.J. Branching-ratio measurements of multiphoton decays of positronium. Phys. Rev. A 2002, 66, 052505. doi:10.1103/PhysRevA.66.052505.
9. Matsumoto, T.; Chiba, M.; Hamatsu, R.; Hirose, T.; Yang, J.; Yu, J. Measurement of five-photon decay in orthopositronium. Phys. Rev. A 1996, 54, 1947-1951. doi:10.1103/PhysRevA.54.1947.
10. von Busch, H.; Thirolf, P.; Ender, C.; Habs, D.; Köck, F.; Schulze, T.; Schwalm, D. Measurement of the decay e⁺e⁻ → 4γ at rest. Phys. Lett. B 1994, 325, 300-307. doi:10.1016/0370-2693(94)90015-9.
11. Branco, G.C.; González Felipe, R.; Joaquim, F.R. Leptonic CP violation. Rev. Mod. Phys. 2012, 84, 515-565. doi:10.1103/RevModPhys.84.515.
12. Inami, K.; Abe, K.; Abe, K.; Abe, R.; Abe, T.; Adachi, I.; Aihara, H.; Akatsu, M.; Asano, Y.; Aso, T.; et al. Search for the electric dipole moment of the τ lepton. Phys. Lett. B 2003, 551, 16-26. doi:10.1016/S0370-2693(02)02984-2.
13. Abe, K.; et al. Search for CP Violation in Neutrino and Antineutrino Oscillations by the T2K Experiment with 2.2 × 10 21 Protons on Target. Phys. Rev. Lett. 2018, 121, 171802. doi:10.1103/PhysRevLett.121.171802.
14. Acero, M.A.; et al. New constraints on oscillation parameters from ν e appearance and ν µ disappearance in the NOvA experiment. Phys. Rev. D 2018, 98, 032012. doi:10.1103/PhysRevD.98.032012.
15. Abe, K.; et al. Constraint on the matter-antimatter symmetry-violating phase in neutrino oscillations. Nature 2020, 580, 339-344. doi:10.1038/s41586-020-2177-0.
16. Kostelecký, V.A.; Vargas, A.J. Lorentz and CPT tests with hydrogen, antihydrogen, and related systems. Phys. Rev. D 2015, 92, 056002. doi:10.1103/PhysRevD.92.056002.
17. Vargas, A.J. Overview of the Phenomenology of Lorentz and CPT Violation in Atomic Systems. Symmetry 2019, 11, 1433. doi:10.3390/sym11121433.
18. Moskal, P.; et al. Test of a single module of the J-PET scanner based on plastic scintillators. Nucl. Instrum. Meth. A 2014, 764, 317-321. doi:10.1016/j.nima.2014.07.052.
19. Moskal, P.; et al. A novel method for the line-of-response and time-of-flight reconstruction in TOF-PET detectors based on a library of synchronized model signals. Nucl. Instrum. Meth. A 2015, 775, 54-62. doi:10.1016/j.nima.2014.12.005.
20. Niedźwiecki, S.; et al. J-PET: a new technology for the whole-body PET imaging. Acta Phys. Polon. B 2017, 48, 1567. doi:10.5506/APhysPolB.48.1567.
21. Kowalski, P.; et al. Estimating the NEMA characteristics of the J-PET tomograph using the GATE package. Phys. Med. Biol. 2018, 63, 165008. doi:10.1088/1361-6560/aad29b.
22. Moskal, P. Towards total-body modular PET for positronium and quantum entanglement imaging. 2018 IEEE Nuclear Science Symposium and Medical Imaging Conference Proceedings (NSS/MIC), 2018, pp. 1-4.
23. Moskal, P.; Stępień, E.Ł. Prospects and clinical perspectives of total-body PET imaging using plastic scintillators. PET Clinics 2020. In print.
24. Moskal, P.; et al. Feasibility study of the positronium imaging with the J-PET tomograph. Phys. Med. Biol. 2019, 64, 055017. doi:10.1088/1361-6560/aafe20.
25. Moskal, P.; Jasińska, B.; Stępień, E.Ł.; Bass, S.D. Positronium in medicine and biology. Nature Reviews Physics 2019, 1, 527-529. doi:10.1038/s42254-019-0078-7.
26. Moskal, P.; et al. Performance assessment of the 2γ positronium imaging with the total-body PET scanners. EJNMMI Physics 2020. In print.
27. Hiesmayr, B.; Moskal, P. Genuine Multipartite Entanglement in the 3-Photon Decay of Positronium. Sci. Rep. 2017, 7, 15349. doi:10.1038/s41598-017-15356-y.
28. Moskal, P.; et al. Feasibility studies of the polarization of photons beyond the optical wavelength regime with the J-PET detector. Eur. Phys. J. C 2018, 78, 970. doi:10.1140/epjc/s10052-018-6461-1.
29. Hiesmayr, B.C.; Moskal, P. Witnessing Entanglement In Compton Scattering Processes Via Mutually Unbiased Bases. Sci. Rep. 2019, 9, 8166. doi:10.1038/s41598-019-44570-z.
30. Pałka, M.; Strzempek, P.; Korcyl, G.; Bednarski, T.; Niedźwiecki, S.; Białas, P.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; et al. Multichannel FPGA based MVT system for high precision time (20 ps RMS) and charge measurement. JINST 2017, 12, P08001. doi:10.1088/1748-0221/12/08/P08001.
31. Pałka, M.; Bednarski, T.; Białas, P.; Czerwiński, E.; Kapłon, L.; Kochanowski, A.; Korcyl, G.; Kowal, J.; Kowalski, P.; Kozik, T.; et al. A novel method based solely on FPGA units enabling measurement of time and charge of analog signals in Positron Emission Tomography. Bio-Algorithms Med-Syst. 2014, 10, 41-45. doi:10.1515/bams-2013-0104.
32. Korcyl, G.; et al. Sampling FEE and Trigger-less DAQ for the J-PET Scanner. Acta Phys. Polon. B 2016, 47, 491. doi:10.5506/APhysPolB.47.491.
33. Korcyl, G.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Flak, B.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.C.; et al. Evaluation of Single-Chip, Real-Time Tomographic Data Processing on FPGA SoC Devices. IEEE Transactions on Medical Imaging 2018, 37, 2526-2535. doi:10.1109/TMI.2018.2837741.
34. Sharma, S. Time Over Threshold as a measure of energy response of plastic scintillators used in the J-PET detector. EPJ Web Conf. 2019, 199, 05014. doi:10.1051/epjconf/201919905014.
35. Sharma, S.; Chhokar, J.; Curceanu, C.; Czerwinski, E.; Dadgar, M.; Dulski, K.; Gajewski, J.; Gajos, A.; Gorgol, M.; Gupta-Sharma, N.; et al. Estimating relationship between the Time Over Threshold and energy loss by photons in plastic scintillators used in the J-PET scanner. EJNMMI Physics. Accepted for publication.
36. Gajos, A.; et al. Feasibility Study of the Time Reversal Symmetry Tests in Decay of Metastable Positronium Atoms with the J-PET Detector. Adv. High Energy Phys. 2018, 2018, 8271280. doi:10.1155/2018/8271280.
37. Korcyl, G.; Moskal, P.; Bednarski, T.; Białas, P.; Czerwiński, E.; Kapłon, L.; Kochanowski, A.; Kowal, J.; Kowalski, P.; Kozik, T.; et al. A novel method based solely on FPGA units enabling measurement of time and charge of analog signals in Positron Emission Tomography. Bio-Algorithms Med-Syst. 2014, 10, 37-40. doi:10.1515/bams-2013-0115.
38. Krzemień, W.; et al. Analysis framework for the J-PET scanner. Acta Phys. Polon. A 2015, 127, 1491-1494. doi:10.12693/APhysPolA.127.1491.
39. Krzemień, W.; et al. Overview of the software architecture and data flow for the J-PET tomography device. Acta Phys. Polon. B 2016, 47, 561. doi:10.5506/APhysPolB.47.561.
40. Krzemien, W.; Gajos, A.; Kacprzak, K.; Rakoczy, K.; Korcyl, G. J-PET Framework: Software platform for PET tomography data reconstruction and analysis. SoftwareX 2020, 11, 100487. doi:10.1016/j.softx.2020.100487.
41. Moskal, P.; Rundel, O.; Alfs, D.; Bednarski, T.; Białas, P.; Czerwiński, E.; Gajos, A.; Giergiel, K.; Gorgol, M.; Jasińska, B.; et al. Time resolution of the plastic scintillator strips with matrix photomultiplier readout for J-PET tomograph. Phys. Med. Biol. 2016, 61, 2025. doi:10.1088/0031-9155/61/5/2025.
42. Skalsey, M.; Van House, J. First test of CP invariance in the decay of positronium. Phys. Rev. Lett. 1991, 67, 1993-1996. doi:10.1103/PhysRevLett.67.1993.
43. Arbic, B.K.; Hatamian, S.; Skalsey, M.; Van House, J.; Zheng, W. Angular Correlation Test of CPT in Polarized Positronium. Phys. Rev. A 1988, 37, 3189-3194. doi:10.1103/PhysRevA.37.3189.
44. Mohammed, M.; et al. A method to produce linearly polarized positrons and positronium atoms with the J-PET detector. Acta Phys. Polon. A 2017, 132, 1486. doi:10.12693/APhysPolA.132.1486.
45. Gajos, A.; Kamińska, D.; Czerwiński, E.; Alfs, D.; Bednarski, T.; Białas, P.; Głowacz, B.; Gorgol, M.; Jasińska, B.; Kapłon, L.; et al. Trilateration-based reconstruction of ortho-positronium decays into three photons with the J-PET detector. Nucl. Instrum. Meth. A 2016, 819, 54-59. doi:10.1016/j.nima.2016.02.069.
46. Gajos, A. Studies of Ortho-Positronium Decays into Three Photons with the J-PET Detector. Acta Phys. Polon. A 2020, 137, 126. doi:10.12693/APhysPolA.137.126.
47. Abe, K.; Abt, I.; Ahn, C.J.; Akagi, T.; Allen, N.J.; Ash, W.W.; Aston, D.; Baird, K.G.; Baltay, C.; Band, H.R.; et al. First Measurement of the T-Odd Correlation between the Z 0 Spin and the Three-Jet Plane Orientation in Polarized Z 0 Decays into Three Jets. Phys. Rev. Lett. 1995, 75, 4173-4177. doi:10.1103/PhysRevLett.75.4173.
48. Berestetskii, V.B.; Lifshitz, E.M.; Pitaevskii, L.P. Relativistic Quantum Theory, 1st ed.; Pergamon Press: Oxford, New York, 1971.
49. Raj, J.; et al. A feasibility study of the time reversal violation test based on polarization of annihilation photons from the decay of ortho-Positronium with the J-PET detector. Hyperfine Interact. 2018, 239, 56. doi:10.1007/s10751-018-1527-x.
50. Raj, J.; Kisielewska, D.; Czerwiński, E. Studies of Ortho-Positronium Decays into Three Photons with the J-PET Detector. Acta Phys. Polon. A 2020, 137, 137. doi:10.12693/APhysPolA.137.137.
"Institute of Science and Technology\nCell Biology and Biophysics Unit European Molecular Biology Laboratory\nCell Biology and Biophysics Unit European Molecular Biology Laboratory\nCell Biology and Biophysics Unit European Molecular Biology Laboratory Skolkovo Director's Research European Molecular Biology Laboratory Director's Research European Molecular Biology Laboratory European Molecular Biology Organization\nCell Biology and Biophysics Unit European Molecular Biology Laboratory\nCell Biology and Biophysics Unit European Molecular Biology Laboratory\n",
"Institute of Science and Technology\nCell Biology and Biophysics Unit European Molecular Biology Laboratory\nCell Biology and Biophysics Unit European Molecular Biology Laboratory\nCell Biology and Biophysics Unit European Molecular Biology Laboratory Skolkovo Director's Research European Molecular Biology Laboratory Director's Research European Molecular Biology Laboratory European Molecular Biology Organization\nCell Biology and Biophysics Unit European Molecular Biology Laboratory\nCell Biology and Biophysics Unit European Molecular Biology Laboratory\n"
]
| []
| Instance segmentation is a fundamental computer vision problem which remains challenging despite impressive recent advances due to deep learning-based methods. Given sufficient training data, fully supervised methods can yield excellent performance, but annotation of groundtruth remains a major bottleneck, especially for biomedical applications where it has to be performed by domain experts. The amount of labels required can be drastically reduced by using rules derived from prior knowledge to guide the segmentation. However, these rules are in general not differentiable and thus cannot be used with existing methods. Here, we revoke this requirement by using stateless actor critic reinforcement learning, which enables non-differentiable rewards. We formulate the instance segmentation problem as graph partitioning and the actor critic predicts the edge weights driven by the rewards, which are based on the conformity of segmented instances to high-level priors on object shape, position or size. The experiments on toy and real data demonstrate that a good set of priors is sufficient to reach excellent performance without any direct object-level supervision. | null | [
"https://export.arxiv.org/pdf/2107.02600v2.pdf"
]
| 235,742,987 | 2107.02600 | 34caa61486b20e03e3a9f9013e2fb781c60b15b3 |
REINFORCEMENT LEARNING FOR INSTANCE SEGMENTATION WITH HIGH-LEVEL PRIORS

Paul Hilt, Maedeh Zarvandi, Edgar Kaziakhmedov, Sourabh Bhide, Maria Leptin, Constantin Pape, Anna Kreshuk ([email protected])

Institute of Science and Technology; Cell Biology and Biophysics Unit, European Molecular Biology Laboratory; Skolkovo; Director's Research, European Molecular Biology Laboratory; European Molecular Biology Organization

ABSTRACT
Instance segmentation is a fundamental computer vision problem which remains challenging despite impressive recent advances due to deep learning-based methods. Given sufficient training data, fully supervised methods can yield excellent performance, but annotation of groundtruth remains a major bottleneck, especially for biomedical applications where it has to be performed by domain experts. The amount of labels required can be drastically reduced by using rules derived from prior knowledge to guide the segmentation. However, these rules are in general not differentiable and thus cannot be used with existing methods. Here, we revoke this requirement by using stateless actor critic reinforcement learning, which enables non-differentiable rewards. We formulate the instance segmentation problem as graph partitioning and the actor critic predicts the edge weights driven by the rewards, which are based on the conformity of segmented instances to high-level priors on object shape, position or size. The experiments on toy and real data demonstrate that a good set of priors is sufficient to reach excellent performance without any direct object-level supervision.
INTRODUCTION
Instance segmentation is the task of segmenting all objects in an image and assigning each of them a different id. It is the necessary first step to analyze individual objects in a scene and is thus of paramount importance in many computer vision applications. Over the recent years, fully supervised instance segmentation methods have made tremendous progress both in natural image applications and in scientific imaging, achieving excellent segmentations for very difficult tasks Lee et al. (2017); Chen et al. (2021).
A large corpus of training images is hard to avoid when the segmentation method needs to take into account the full variability of the natural world. However, in many practical segmentation tasks the appearance of the objects can be expected to conform to certain rules that are known a priori. Examples include surveillance, industrial quality control and especially medical and biological imaging applications where full exploitation of such prior knowledge is particularly important as the training data is sparse and difficult to acquire: pixelwise annotation of the necessary instance-level groundtruth for a microscopy experiment can take weeks or even months of expert time. The use of shape priors has a strong history in this domain Osher & Paragios (2007); Delgado-Gonzalo et al. (2014), but the most powerful learned shape models still require groundtruth Oktay et al. (2018) and generic shapes are hard to combine with the CNN losses and other, non-shape, priors. For many high-level priors it has already been demonstrated that integration of the prior directly into the CNN loss can lead to superior segmentations while significantly reducing the necessary amounts of training data Kervadec et al. (2019). However, the requirement of formulating the prior as a differentiable function poses a severe limitation on the kinds of high-level knowledge that can be exploited with such an approach. Our contribution addresses this limitation and establishes a framework in which a rich set of non-differentiable rules and expectations can be used to steer the network training.
To circumvent the requirement of a differentiable loss function, we turn to the reinforcement learning paradigm, where the rewards can be computed from a non-differentiable cost function. We base our framework on a stateless actor-critic setup Pfau & Vinyals (2016), providing one of the first practical applications of this important theoretical construct. In more detail, we solve the instance segmentation problem as agglomeration of image superpixels, with the agent predicting the weights of the edges in the superpixel region adjacency graph. Based on the predicted weights, the segmentation is obtained through (non-differentiable) graph partitioning. The segmented objects are evaluated by the critic, which learns to approximate the rewards based on object- and image-level reasoning (see Fig. 1).
The main contributions of this work can be summarized as follows: (i) we formulate instance segmentation as an RL problem based on a stateless actor-critic setup, encapsulating the non-differentiable step of instance extraction into the environment and thus achieving end-to-end learning; (ii) we do not use annotated images for supervision and instead exploit prior knowledge on instance appearance and morphology by tying the rewards to the conformity of the predicted objects to pre-defined rules and learning to approximate the (non-differentiable) reward function with the critic; (iii) we introduce a strategy for spatial decomposition of rewards based on fixed-size subgraphs to enable localized supervision from combinations of object- and image-level rules; (iv) we demonstrate the feasibility of our approach on synthetic and real images and show an application to two important segmentation tasks in biology. In all experiments, our framework delivers excellent segmentations with no supervision other than high-level rules.
RELATED WORK
Reinforcement learning has so far not found significant adoption in the segmentation domain. The closest to our work are two methods in which RL has been introduced to learn a sequence of segmentation decision steps as a Markov Decision Process. In the actor critic framework of Araslanov et al. (2019), the actor recurrently predicts one instance mask at a time based on the gradient provided by the critic. The training needs fully segmented images as supervision and the overall system, including an LSTM sub-network between the encoder and the decoder, is fairly complex. In Jain et al. (2011), the individual decision steps correspond to merges of clusters while their sequence defines a hierarchical agglomeration process on a superpixel graph. The reward function is based on Rand index and thus not differentiable, but the overall framework requires full (super)pixelwise supervision for training.
Reward decomposition was introduced for multi-agent RL by Sunehag et al. (2017), where a global reward is decomposed into a per-agent reward. Bagnell & Ng (2006) prove that a stateless RL setup with decomposed rewards requires far fewer training samples than an RL setup with a global reward. In Xu et al. (2019), reward decomposition is applied both temporally and spatially for zero-shot inference on unseen environments by training on locally selected samples to learn the underlying physics of the environment.
The restriction to differentiable losses is present in all application domains of deep learning. Common ways to address it are based on a soft relaxation of the loss that can be differentiated. The relaxation can be designed specifically for the loss, for example, Area-under-Curve Eban et al. (2017) for classification or Jaccard Index Berman et al. (2018) for semantic segmentation. These approaches are not directly applicable to our use case as we aim to use a variety of object- and image-level priors, which should be combined without handcrafting an approximate loss for each case. More generally, but still for a concrete task loss, Direct Loss Minimization has been proposed in Song et al. (2016). For semi-supervised learning of a classification or ranking task, Discriminative Adversarial Networks have been proposed as a means to learn an approximation to the loss dos Santos et al. (2017). Most generally, Grabocka et al. (2019) propose to train a surrogate neural network which will serve as a smooth approximation of the true loss. In our setup, the critic can informally be viewed as a surrogate network as it learns to approximate the priors through the rewards by Q-learning.

Figure 1: Interaction of the agent with the environment: (a) the state, which is composed of the image and superpixels; (b) the agent, which consists of the actor and critic networks as well as the feature extractor that computes the node input features; (c) given the state, the agent performs the actions by predicting edge weights on the graph; (d) the environment, which includes the image, superpixels, graph and graph partitioning based on the weights predicted through agent actions; (e) rewards are obtained by evaluating the segmentation arising from the graph partitioning, based on pre-defined and data-dependent rules. The rewards are given back to the agent where they are used for training.
Incorporation of rules and priors is particularly important in biomedical imaging applications, where such knowledge can be exploited to augment or even substitute scarce groundtruth annotations. For example, the shape prior is explicitly encoded in popular nuclear Schmidt et al. (2018) and cellular Stringer et al. (2021) segmentation algorithms based on spatial embedding learning. Learned non-linear representations of the shape are used in Oktay et al. (2018), while in Hu et al. (2019) the loss for object boundary prediction is made topology-aware. Domain-specific priors can also be exploited in post-processing by graph partitioning. Interestingly, the energy minimization procedure underlying the graph partitioning can also be incorporated into the learning procedure.
METHODS
The task of instance segmentation can be formalized as transforming an image x into a labeling y that maps each pixel to a label value. An instance corresponds to the maximal set of pixels with the same label value. Typically, the instance segmentation problem is solved via supervised learning, i.e. using a training set with groundtruth labels ŷ. Note that y is invariant under the permutation of label values, which makes it difficult to formulate instance segmentation in a fully differentiable manner. Most approaches first predict a "soft" representation with a CNN, e.g. affinities Lee et al. (2017), and then apply a non-differentiable step, e.g. clustering Comaniciu & Meer (2002) or partitioning Andres et al. (2012), to obtain the instance segmentation. Alternatively, proposal-based methods predict a bounding-box per instance and then predict the instance mask for each bounding-box He et al. (2017). Furthermore, the common evaluation metrics for instance segmentation Meilă (2003); Rand (1971) are also not differentiable.
Our main motivation to explore RL for the instance segmentation task is to circumvent the restriction to differentiable losses and -regardless of the loss -to make the whole pipeline end-to-end even in presence of non-differentiable steps that transform pixelwise CNN predictions into instances.
We formulate the instance segmentation problem using a region adjacency graph G = (V, E), where the nodes V correspond to superpixels (clusters of pixels) and the edges E connect nodes that belong to spatially adjacent superpixels. Given edge weights W, the instance segmentation is obtained by partitioning the graph, here using an approximate multicut solver Kernighan & Lin (1970). Together, the image data, superpixels, graph and the graph partitioning make up the environment E of our RL setup. Based on the state s of E, the agent A predicts actions a. Here, the actions are interpreted as edge weights W and used to partition the graph. The reward r is then computed based on this partitioning. Our agent A is a stateless actor-critic Haarnoja et al. (2018), represented by two graph neural networks (GNN) Gilmer et al. (2017). The actor predicts the actions a based on the graph and its node features F. The node (superpixel) features are computed by pooling together the corresponding pixel features based on the raw image data.
We compute the node features F with a UNet Ronneberger et al. (2015) that takes the image as input and outputs a feature vector per pixel. These features are spatially averaged over the superpixels to obtain F. The feature extractor UNet is part of the agent A and is thus trained end-to-end with the actor and critic networks (Fig. 1). In low data regimes it is also possible to use a pre-trained and fixed feature extractor or to combine the learned features with hand-crafted ones.
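For illustration, the pooling of pixel features into node features can be written in a few lines of PyTorch. This is a minimal sketch, not the paper's implementation; `pool_superpixel_features` and its shapes are our own illustrative choices.

```python
# Sketch: average pixel-wise UNet features over superpixels to obtain node
# features F. `feats` has shape (C, H, W); `sp` is an integer superpixel label
# image of shape (H, W). Names and shapes are illustrative assumptions.
import torch

def pool_superpixel_features(feats: torch.Tensor, sp: torch.Tensor) -> torch.Tensor:
    C = feats.shape[0]
    n_sp = int(sp.max()) + 1
    flat = feats.reshape(C, -1)                    # (C, H*W)
    idx = sp.reshape(-1)                           # (H*W,) superpixel id per pixel
    sums = torch.zeros(n_sp, C).index_add_(0, idx, flat.t())
    counts = torch.zeros(n_sp).index_add_(0, idx, torch.ones_like(idx, dtype=torch.float))
    return sums / counts.unsqueeze(1)              # (n_sp, C) node features F

feats = torch.randn(16, 64, 64)
sp = torch.randint(0, 10, (64, 64))
print(pool_superpixel_features(feats, sp).shape)   # torch.Size([10, 16])
```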
Crucially, the reinforcement setup enables us to use both a non-differentiable instance segmentation step and reward function, by encapsulation of the "pixels to instances" step in the environment and learning a policy based on the rewards with the stateless actor critic.
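To make this "pixels to instances" step concrete, the following sketch replaces the approximate multicut solver with a much simpler stand-in: thresholding the predicted edge weights and taking connected components. The function name and threshold are illustrative assumptions, not the paper's solver.

```python
# A minimal stand-in for the environment's partitioning step. The paper uses an
# approximate multicut solver (Kernighan & Lin); here we merely threshold the
# predicted edge weights and take connected components, which illustrates the
# non-differentiable "actions -> instances" mapping.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def partition_graph(n_nodes, edges, weights, threshold=0.5):
    """Merge superpixels connected by edges with weight below `threshold`."""
    keep = weights < threshold                      # low weight = "merge" edge
    rows = np.array([i for (i, j) in edges])[keep]
    cols = np.array([j for (i, j) in edges])[keep]
    adj = coo_matrix((np.ones(keep.sum()), (rows, cols)), shape=(n_nodes, n_nodes))
    _, labels = connected_components(adj, directed=False)
    return labels                                   # node -> instance id

edges = [(0, 1), (1, 2), (2, 3)]
weights = np.array([0.1, 0.9, 0.2])                 # split between nodes 1 and 2
print(partition_graph(4, edges, weights))           # e.g. [0 0 1 1]
```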
STATELESS REINFORCEMENT LEARNING SETUP
Unlike most RL settings Sutton & Barto (2018), our approach does not require an explicitly time-dependent state: the actions returned by the agent correspond to the real-valued edge weights in [0, 1], which are used to compute the graph partitioning. Any state can be reached by a single step from the initial state and there exists no time dependency in the state transition. Unlike Jain et al. (2011), we predict all edge values at once, which allows us to avoid the iterative strategy of Araslanov et al. (2019) and deliver and evaluate a complete segmentation in every step. Hence, we implement a stateless actor-critic formulation.
Stateless RL was introduced in Pfau & Vinyals (2016) to study the connection between generative adversarial networks and actor critics; our method is one of the first practical applications of this concept. Here, the agent consists of an actor, which predicts the actions a, and a critic, which predicts the action value Q (expected future discounted reward) given the actions. The stateless approach simplifies the action value: it estimates the reward for a single step instead of the expected sum of discounted future rewards for many steps. We have explored a multi-step setup as well, but found that it yields inferior results for our application; details can be found in App. A.8. Furthermore, we compute sub-graph rewards instead of relying on a single global reward in order to provide a more localized reward signal (see Section 3.2 for details).
The actor corresponds to a single GNN, which predicts the mean and variance of a Normal distribution for each edge. The actions a are determined by sampling from this distribution and applying a sigmoid to the result to obtain continuous edge weights in the value range [0, 1]. The GNN takes the state s = (G, F ) as input arguments and its graph convolution for the i th node is defined as in Gilmer et al. (2017):
$$f_i' = \gamma_\pi\left(f_i,\ \frac{1}{|N(i)|}\sum_{j\in N(i)} \phi_\pi(f_i, f_j)\right) \quad (1)$$
where $\gamma_\pi$ as well as $\phi_\pi$ are MLPs, $(\cdot, \cdot)$ is the concatenation of vectors and $N(i)$ is the set of neighbors of node $i$. The gradient of the loss for the actor is given by:
$$\nabla_\theta L_{actor} = \nabla_\theta\, \frac{1}{|SG|} \sum_{sg \in SG} \left( \alpha \sum_{\hat a \in sg} \log\big(\pi_\theta(\hat a \mid s)\big) - Q_{sg}(s, a) \right) \quad (2)$$
This loss gradient is derived following Haarnoja et al. (2018). We adapt it to the sub-graph reward structure by calculating the joint action probability of the policy π θ over each sub-graph sg in the set of all sub-graphs SG. Using this loss to optimize the policy parameters θ minimizes the Kullback-Leibler divergence between the Gibbs distribution of action values for each sub-graph Q sg (s, a) and the policy with respect to the parameters θ of the policy. α is a trainable temperature parameter which is optimized following the method introduced by Haarnoja et al. (2018).
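A minimal PyTorch sketch of the message passing in Eq. (1): each node is updated from its own features and the mean of φ(f_i, f_j) over its neighbors, with γ and φ as small MLPs. Layer sizes and the module name are illustrative assumptions; the paper's GNNs use deeper MLPs (see App. A.9).

```python
# Sketch of the graph convolution in Eq. (1). Edges are given as a (E, 2) long
# tensor in both directions; messages are mean-aggregated over neighbors.
import torch
import torch.nn as nn

class MeanMessageConv(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        self.gamma = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

    def forward(self, f, edges):
        src, dst = edges[:, 0], edges[:, 1]
        msgs = self.phi(torch.cat([f[dst], f[src]], dim=1))        # phi(f_i, f_j)
        agg = torch.zeros_like(f).index_add_(0, dst, msgs)         # sum over N(i)
        deg = torch.zeros(f.shape[0]).index_add_(
            0, dst, torch.ones(edges.shape[0])).clamp(min=1)
        return self.gamma(torch.cat([f, agg / deg.unsqueeze(1)], dim=1))

f = torch.randn(5, 8)
edges = torch.tensor([[0, 1], [1, 0], [1, 2], [2, 1]])
print(MeanMessageConv(8)(f, edges).shape)                          # (5, 8)
```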
The critic predicts the action value Q sg for each sub-graph sg ∈ SG. It consists of a GNN Q sg (s, a) that takes the state s = (G, F ) as well as the actions a predicted by the actor as input and predicts a feature vector for each edge. The graph convolution from Equation 1 is slightly modified:
$$f_i' = \gamma_Q\left(f_i,\ \frac{1}{|N(i)|}\sum_{j\in N(i)} \phi_Q\big(f_i, f_j, a_{(i,j)}\big)\right) \quad (3)$$

where again $\gamma_Q$ and $\phi_Q$ are MLPs. Based on these edge features, $Q_{sg}$ is predicted for each sub-graph via an MLP. Here, we use a set of sub-graph sizes (typically 6, 12, 32, 128) to generate a supervision signal for different neighborhood scales. A given MLP is only valid for a fixed graph size, so we employ a different MLP for each size. The loss for the critic is given by:
$$L_{critic} = \frac{1}{|SG|} \sum_{sg \in SG} \frac{1}{2}\left(Q^\delta_{sg}(s, a) - r\right)^2 \quad (4)$$
Minimizing this loss with respect to the action value function's parameters δ minimizes the difference between the expected reward and action values Q δ sg (s, a).
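The per-sub-graph losses of Eqs. (2) and (4) can be sketched as follows, assuming per-edge log-probabilities of the sampled actions, the critic's sub-graph action values and the observed sub-graph rewards are given; all names and the toy tensors are illustrative.

```python
# Sketch of the sub-graph losses, Eqs. (2) and (4): per sub-graph, the actor
# trades off the temperature-scaled joint log-probability of its edge actions
# against Q, while the critic regresses Q onto the observed sub-graph reward.
# In practice q_sg comes from the critic and the two losses use separate
# optimizers; here everything is collapsed into one toy computation.
import torch

def losses(logp, q_sg, rewards, subgraphs, alpha=0.1):
    # logp: (E,) log-probs of sampled edge actions; q_sg, rewards: (|SG|,)
    joint_logp = torch.stack([logp[sg].sum() for sg in subgraphs])
    actor = (alpha * joint_logp - q_sg).mean()          # Eq. (2)
    critic = 0.5 * ((q_sg - rewards) ** 2).mean()       # Eq. (4)
    return actor, critic

logp = torch.randn(6, requires_grad=True)
q_sg = torch.randn(2, requires_grad=True)
subgraphs = [torch.tensor([0, 1, 2]), torch.tensor([3, 4, 5])]
actor, critic = losses(logp, q_sg, torch.tensor([0.4, 0.8]), subgraphs)
(actor + critic).backward()
```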
LOCALIZED SUB-GRAPH REWARDS
In most RL applications a global scalar reward is provided per state transition. In our application of graph-based instance segmentation, it is instead desirable to introduce several more localized rewards in order to learn from a reward for the specific action, rather than a global scalar. Here, reward decomposition is natural because we evaluate the segmentation quality per object and can use the object scores to provide a localized reward. In order to formalize this idea, we have designed our actor critic (Section 3.1) to learn from sub-graph rewards.
A good set of sub-graphs should fulfill the following requirements: each sub-graph should be connected so that the input to the MLP that computes the action value for the sub-graphs is correlated. The size of the sub-graphs should be adjustable and all sub-graphs should be extracted with the exact same size to be valid inputs for the MLP. The union of all sub-graphs should cover the complete graph so that each edge contributes to at least one action value Q_sg. The sub-graphs should overlap to provide a smooth sum of action values. We have designed Alg. 1 to extract a set of sub-graphs according to these requirements. Fig. 2 shows an example sub-graph decomposition.
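The following sketch grows random connected sub-graphs of a fixed edge count until the whole edge set is covered, in the spirit of these requirements; it is an illustrative approximation of Alg. 1 (described further in App. A.7), not the exact algorithm.

```python
# Sketch: extract connected sub-graphs of a fixed edge count covering all edges.
import random
import networkx as nx

def extract_subgraphs(g: nx.Graph, size: int, seed=0):
    rng = random.Random(seed)
    uncovered = set(frozenset(e) for e in g.edges)
    subgraphs = []
    while uncovered:
        e0 = next(iter(uncovered))               # start from an uncovered edge
        sg, frontier = {e0}, set(e0)
        while len(sg) < size:                    # grow by edges at the frontier
            cands = [frozenset((u, v)) for u in frontier for v in g[u]
                     if frozenset((u, v)) not in sg]
            if not cands:
                break
            e = rng.choice(cands)
            sg.add(e)
            frontier |= set(e)
        subgraphs.append(sg)
        uncovered -= sg                          # guarantees progress via e0
    return subgraphs

g = nx.grid_2d_graph(4, 4)
print(len(extract_subgraphs(g, size=3)))
```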
While some of the rewards used in our experiments can be directly defined for sub-graphs, most are instead defined per object (see App. A.2 for details on reward design). We use the following general procedure to map object-level rewards to sub-graphs: first assign to each superpixel the reward of its corresponding object. The reward per edge is determined by the maximum value of its two incident superpixels' rewards. The edge rewards are averaged to obtain the reward per sub-graph.
By taking the maximum we assign the higher score to edges whose incident superpixels belong to different objects, because they probably correspond to a correct split. Note that the uncertainty in the assignment of low rewards can lead to a noisy reward signal, but the averaging of the edge rewards over the sub-graphs and the overlaps between the sub-graphs smooth the rewards. We have also explored a different actor critic setup that can use object level rewards directly, with no sub-graph extraction and mapping. However, this approach yields inferior results, see App. A.3 for details.
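The object-to-sub-graph mapping is straightforward to write down; the following numpy sketch assumes the node-to-object assignment from the current partitioning is given (all names illustrative).

```python
# Sketch of mapping object-level rewards to sub-graph rewards (Section 3.2):
# each superpixel inherits the reward of its object, each edge takes the max of
# its two incident superpixels, and each sub-graph averages its edge rewards.
import numpy as np

def subgraph_rewards(node_to_obj, obj_rewards, edges, subgraphs):
    node_r = np.array([obj_rewards[node_to_obj[n]] for n in range(len(node_to_obj))])
    edge_r = np.array([max(node_r[i], node_r[j]) for (i, j) in edges])
    return [edge_r[list(sg)].mean() for sg in subgraphs]

node_to_obj = [0, 0, 1, 1]               # 4 superpixels in 2 objects
obj_rewards = {0: 0.2, 1: 0.9}
edges = [(0, 1), (1, 2), (2, 3)]         # edge (1, 2) is a "split" edge
subgraphs = [[0, 1], [1, 2]]             # sub-graphs as lists of edge indices
print(subgraph_rewards(node_to_obj, obj_rewards, edges, subgraphs))
```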
EXPERIMENTS
We evaluate our approach on three instance segmentation problems: one synthetic and two real. For a proof-of-principle, we construct a synthetic dataset with circular shapes on structured ground, showing how our framework can exploit simple geometric priors. Next, we apply the method to a popular microscopy benchmark dataset for nucleus segmentation Caicedo et al. (2019). Finally, we consider a challenging biological segmentation problem with boundary-labeled cells. Here, we evaluate both learning restricted to prior rules and mixed supervision combining rule-based and direct rewards computed from groundtruth annotations. The problem setup, network architectures and hyperparameters are reported in detail in App. A.9.

Figure 2: The graph is subdivided into subgraphs, each sub-graph is highlighted by a different color. All sub-graphs have the same number of edges (here 3). Overall, we use a variety of sizes covering different notions of locality.
SYNTHETIC DATA: CIRCLES ON STRUCTURED GROUND
We create synthetic images of circles on a structured background and segment this data using only simple geometric rules. Superpixels were generated with the mutex watershed Wolf et al. (2020) applied to the Gaussian gradient of the image. Here, we demonstrate that the actor critic can be trained without any direct object-level supervision and apply a simplified setup with a fixed pixel feature extractor, pre-trained through self-supervision (see App. A.1).
The object-level reward is based on the Circle Hough Transform (CHT) Hassanein et al. (2015). It is combined with an estimate for the total number of objects in the image as an additional global reward. The global reward gives useful gradients during early training stages: when too few potential objects are found in the prediction, a low reward can be given to the tentative background object. If too many potential objects are found, a low reward can be given to all the foreground objects with a low CHT value. The surface created by the per-object and global reward is shown in Fig. 3. The exact reward computation can be found in App. A.11. Fig. 4 shows the output of all algorithm components on a sample image. We also computed results with the mutex watershed Wolf et al. (2020), a typical algorithm for boundary based instance segmentation in microscopy. Texture within objects and structured background are inherently difficult for region-growing algorithms, but our approach can exploit higher-level reasoning along with low-level information and achieve a good segmentation.
REAL DATA: NUCLEUS SEGMENTATION
Nuclei are a very frequent target of instance segmentation in microscopy, which is also reflected in the large amount of publicly available annotated data. The availability of training data sparked the development of popular pre-trained solutions, such as a generalist UNet Falk et al. (2019) or StarDist Schmidt et al. (2018) and CellPose Stringer et al. (2021), which both have an (implicit) shape prior. Also, due to the ubiquity of nuclei in microscopy, detailed prior knowledge exists on their shape and their appearance under different stainings. The experiments in this section aim to answer the following questions: i) given fully annotated groundtruth images for training, is there an advantage in using our end-to-end RL formulation with object-level rewards; ii) can rule-based rewards alone, without any object-level labels, yield accurate segmentations; and iii) how strongly do the results depend on the quality of the underlying superpixels? In the unsupervised setting, we compute the reward by combining several object descriptors: eccentricity, extent, minor diameter, perimeter, solidity as well as mean, maximum and minimum intensity per object. The object reward is then given by the normalized sum of square distances of these quantities and their expected value. Objects larger than 15,000 pixels are considered to belong to the background and are not assigned a reward. Since the superpixels serve as a fixed input into our model and do not get modified, the accuracy of our segmentations is bounded by their accuracy. To investigate their influence, we evaluate our approach with three different sets of superpixels: "GT", where we intersect the superpixels with the groundtruth object masks to ensure that a correct segmentation can be recovered, "UNET", where we compute the superpixels using predictions of a pre-trained U-Net as an edge detector, and "RAW", where we only take into account the raw image data. See App. A.12 for more details on superpixels and object descriptors.
Tab. 1 summarizes the results, with a comparison to popular generalist pre-trained nuclear segmentation methods: StarDist Schmidt et al. (2018), Cellpose Stringer et al. (2021) and UNet Falk et al. (2019). For StarDist and Cellpose, we use the pre-trained models provided with the papers. The UNet is trained on the same images as StarDist, the instance segmentation is recovered either by applying connected components to the boundary-subtracted foreground prediction ("UNet") or, to obtain a comparison conditioned on a particular set of superpixels, by using the UNet boundary predictions and superpixels described above as input to Multicut graph-based agglomeration ("UNet + MC").
Otsu thresholding serves as a simple unsupervised baseline Otsu (1979), where binarizing the image is followed by connected components to obtain the instance segmentation.
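This baseline amounts to a few lines of scikit-image; a sketch with a random stand-in image:

```python
# The unsupervised Otsu baseline: binarize with Otsu's threshold, then label
# connected components as instances.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label

image = np.random.rand(64, 64)                 # stand-in for a nucleus image
instances = label(image > threshold_otsu(image))
print(instances.max(), "instances")
```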
For the first question, we train our pipeline fully supervised ("ours (sup.)") as described in App. A.10: we use pixelwise groundtruth, but can also exploit our RL formulation where the loss is assigned to individual objects through the non-differentiable graph agglomeration step. Here, our method performs better than all baselines without RL, so there is clearly an advantage to using object-level supervision (as also demonstrated recently for non-RL setups, e.g. by Wolny et al. (2021)).
For the other two questions, we train our method using only rule-based rewards ("ours (unsup.)"). Given superpixels from which the groundtruth image can be recovered ("GT"), we then achieve better segmentation quality than the fully supervised baselines and the gap in performance between our unsupervised and supervised approach is smaller than the gap to the runner-up baseline. Of note, our unsupervised model also outperforms the "UNet + MC" baseline using the same "GT" superpixels, so its performance cannot be explained just by the use of groundtruth in superpixel generation. Example results and failure cases are shown in App. A.12.
In the third experiment, we use a pretrained UNet as an edge detector to create superpixels of "medium" quality and again obtain strong results, outperforming StarDist, CellPose and UNet+MC with "GT" superpixels. Finally, with our worst set of superpixels obtained directly from the raw data, the method can learn to exploit the rules, but is clearly hindered by the superpixel quality.
REAL DATA: CELL SEGMENTATION
Biomedical applications often require segmentation of objects of known morphology arranged in a regular pattern Thompson (1992). Such data presents the best use case for our algorithm, as the reward function can leverage such priors. We address a cell segmentation problem from developmental biology, where cells often follow stereotypical shapes and arrangement: 317 drosophila embryo images from Bhide et al. (2020), including 10 with expert annotations used as test data. Note that several pre-trained networks are available for cell segmentation in light microscopy Stringer et al. (2021).

Table 2: Cell segmentation results, measured by variation of information (VI) Meilă (2003). This entropy-based metric is commonly used to evaluate crowded segmentations in microscopy. We also report its merge and split components, which measure the over- and under-segmentation error, respectively. Lower values are better.
The rewards express that the cells are arranged in a radial pattern, with the average size known from other experiments (see Fig. 5). We set a high reward for merging superpixels that certainly belong to the background (close to the image boundary or center). For background edges near the foreground area, we modulate the reward by the circularity of the overall foreground contour. For the likely foreground edges, we compute object-level rewards by fitting a rotated bounding box to each object and comparing its radii and orientation to template values. We use a weight profile based on the known embryo width to combine object and background rewards (App. A.6).
More formally, the rewards are calculated as follows: for each edge, we define the position h as the average of the centers of the two incident superpixels. Given the image center c, the radius of a circle that approximately covers the foreground j and the (maximal) image border position m, we use a Gaussian kernel K(·) for weighting and define the edge reward r_edge:

$$r_{bg} = \begin{cases} K\!\left(\dfrac{\lVert h - c \rVert}{\gamma}\right)(1 - a), & \text{if } h \le j \\[6pt] K\!\left(\dfrac{\lVert m - h \rVert}{\eta}\right)(1 - a), & \text{otherwise} \end{cases} \quad (5)$$

$$r_{fg} = K\!\left(\frac{\lVert h - j \rVert}{\delta}\right) \max(r_{o1}, r_{o2}) \quad (6)$$

$$r_{edge} = r_{fg} + r_{bg} \quad (7)$$
Here γ, η and δ are normalization constants. The kernel function in Eq. 5 determines the background probability of an edge; 1 − a constitutes a reward that favors merges. It is scaled by the background probability. The object rewards r o are found by fitting a rotated bounding box to the object and then comparing orientation and extent to expected values known from previous experiments. They are mapped to edge rewards r o1 , r o2 using the maximum value of the two incident objects.
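A numpy sketch of Eqs. (5)-(7), simplified to radial scalar positions for readability (the paper uses 2D positions and norms); the Gaussian kernel K and all parameter values are illustrative assumptions.

```python
# Sketch of the embryo edge reward: a Gaussian kernel weights background
# rewards by distance to the image center c / border m, and foreground rewards
# by distance to the foreground ring radius j. The object rewards r_o1, r_o2
# of the two incident superpixels are assumed given.
import numpy as np

def K(x):
    return np.exp(-0.5 * x ** 2)

def edge_reward(h, a, r_o1, r_o2, c, j, m, gamma, eta, delta):
    if h <= j:                                   # inside the foreground ring
        r_bg = K(abs(h - c) / gamma) * (1 - a)   # Eq. (5), first case
    else:
        r_bg = K(abs(m - h) / eta) * (1 - a)     # Eq. (5), second case
    r_fg = K(abs(h - j) / delta) * max(r_o1, r_o2)   # Eq. (6)
    return r_fg + r_bg                           # Eq. (7)

# edge halfway between center and ring, action favoring a merge
print(edge_reward(h=40, a=0.1, r_o1=0.8, r_o2=0.3,
                  c=0, j=60, m=100, gamma=20, eta=20, delta=10))
```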
We pre-compute superpixels by using boundary predictions as input to a watershed seeded from local maxima. We use the UNet from Wolny et al. (2020), which was trained on roughly similar images. As it was trained on plant cells in a different microscope modality, its prediction is far from perfect, especially around the inner circle, see Fig. 5, "Edge prediction". We combine the learned node features with hand-crafted features: the normalized polar coordinate of the superpixel center and the normalized superpixel size. Fig. 5 shows visualisations of the learned and hand-crafted features. Interestingly, the learned features converge to a representation that resembles a semantic segmentation of boundaries.
Tab. 2 shows the results: "ours" is the method described above; for "ours (semisup.)" we train a model that additionally receives direct supervision from groundtruth for a single patch using the reward from App. A.10, and for "ours (handcrafted)" we only use the hand-crafted features and not the learned features. We include the UNet from Wolny et al. (2020) as a baseline. Since only 10 images of the dataset are annotated, we cannot efficiently finetune any of the popular cell segmentation networks on this dataset. We also project the superpixels to their respective groundtruth cluster ("sp gt") to indicate the best possible solution that can be achieved with the given superpixels. Our approach clearly outperforms the baseline methods trained on the data from Wolny et al. (2020). While predictions are not perfect (white arrows in Fig. 5), prior rules turn out to be sufficient to assemble most cells correctly. The remaining errors are caused by objects not fully conforming to the priors ("bent" rather than straight oval cells) or by a very weak boundary prediction. Furthermore, we see that the learned features significantly improve results and that the semi-supervised approach provides a large boost, even with a single patch used for direct supervision. We only report results for the best model as measured by the reward on a validation set across several training runs. App. Fig. 8 shows that the validation reward curves consistently improve during training for all random seeds.
DISCUSSION AND OUTLOOK
We introduced an end-to-end instance segmentation algorithm that can exploit non-differentiable loss functions and high-level prior information. Our novel RL approach is based on the stateless actor-critic and predicts the full segmentation at every step, allowing us to assign rewards to all objects and reach stable convergence. The segmentation problem is formulated as graph partitioning; we design a reward decomposition algorithm which maps object-and image-level rewards to sub-graphs for localized supervision. Our experiments demonstrate good segmentation quality on synthetic and real data using only rule-based supervision without any object-or pixel-level labels, such as centers, boxes or masks. Furthermore, in case of full supervision, our method enables end-to-end instance segmentation with direct object-level reasoning, which will allow for post-processing-aware training of segmentation CNNs. In the future, we plan to explore other tasks and reward functions and will further study the semi-supervised setup that showed very promising initial results.
Limitations Our method relies on superpixels which are fixed and not optimized jointly, so the upper bound on the performance is defined by superpixel accuracy. We believe an extension to pixels is possible and envision working on this in the future, but the current setup will not scale to the pixel level directly. Also, our method is limited to problems where consistent prior rules can be formulated for all instances. While this is the case for many applications in life science and medical imaging, not all object classes in the natural world can be described in this way. Here, our method could contribute by complementing training annotations with rules, reducing the overall labelling load in a semi-supervised setting. Finally, our approach requires non-trivial reward engineering as a trade-off for not performing any annotations.

A.1 SELF-SUPERVISED PRETRAINING OF THE FEATURE EXTRACTOR

Consider a region adjacency graph G = (V, E), where the nodes V = {1, . . . , n} correspond to the individual superpixels and the edges in E = {(i, j) | i ≠ j and i, j ∈ V} connect nodes with adjacent superpixels. In addition, consider edge weights W ∈ R^{|E|} associated with each edge. Here, we infer the weights from pixel-wise boundary probability predictions and normalize the weights such that Σ_{w∈W} w = 1 holds. We train a 2D UNet to predict embeddings for each node in V by pulling together pixel embeddings that belong to the same superpixel and pushing apart pixel embeddings for adjacent superpixels. The intensity of the push force is scaled by the weight of the respective edge. With pixel embeddings x_n and node embeddings f_i = (1/m_i) Σ_{k∈s_i} x_k, where m_i is the mass of the superpixel for node i and s_i is the set of indices for all pixels of the corresponding superpixel, and in accordance with the standard discriminative embedding loss, we formulate the loss as
$$L_{var} = \frac{1}{|N|} \sum_{i=1}^{|N|} \frac{1}{m_i} \sum_{n=1}^{m_i} \big[ d(f_i, x_n) - \delta_v \big]_+^2 \quad (8)$$

$$L_{dist} = \frac{1}{|E|} \sum_{(i,j)\in E} w_{(i,j)} \big[ 2\delta_d - d(f_i, f_j) \big]_+^2 \quad (9)$$

$$L_{feat} = L_{var} + L_{dist} \quad (10)$$
Here $[\cdot]_+$ denotes the maximum of the argument and 0. The forces are hinged by the distance limits $\delta_v$ and $\delta_d$. $d(\cdot)$ refers to the distance function in the embedding space. Since the feature extractor is trained self-supervised, we give it a smooth edge map of the superpixels as well as the raw data as input, see Fig. 6. The training of the feature extractor happens prior to training the agent for Method 2.
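A PyTorch sketch of Eqs. (8)-(10); shapes, margins and the edge-weight normalization are illustrative assumptions.

```python
# Sketch of the self-supervised embedding loss: pull pixel embeddings towards
# their superpixel mean (L_var) and push means of adjacent superpixels apart,
# scaled by the edge weight (L_dist). Distances are Euclidean.
import torch

def feat_loss(x, sp, edges, w, delta_v=0.5, delta_d=2.0):
    # x: (P, D) pixel embeddings, sp: (P,) superpixel ids,
    # edges: (E, 2) adjacent superpixel pairs, w: (E,) edge weights
    n_sp = int(sp.max()) + 1
    f = torch.stack([x[sp == i].mean(0) for i in range(n_sp)])   # node means f_i
    l_var = torch.stack([                                        # Eq. (8)
        ((x[sp == i] - f[i]).norm(dim=1) - delta_v).clamp(min=0).pow(2).mean()
        for i in range(n_sp)]).mean()
    d = (f[edges[:, 0]] - f[edges[:, 1]]).norm(dim=1)
    l_dist = (w * (2 * delta_d - d).clamp(min=0).pow(2)).mean()  # Eq. (9)
    return l_var + l_dist                                        # Eq. (10)

x = torch.randn(200, 8, requires_grad=True)
sp = torch.arange(200) % 5                                       # 5 superpixels
edges = torch.tensor([[0, 1], [1, 2], [3, 4]])
feat_loss(x, sp, edges, torch.tensor([0.5, 0.3, 0.2])).backward()
```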
A.2 REWARD GENERATION
We seek to express the rewards based on prior rules derived from topology, shape, texture, etc. Rules are typically formulated per-object. Section A.7 describes the object-to-sub-graph reward mapping. The reward function is part of the environment and the critic learns to approximate it via Q-learning, enabling the use of non-differentiable functions.
This approach can also be extended to semantic instance segmentation, where in addition to the instance labeling a semantic label is to be predicted. To this end, each predicted object is softly assigned to one of the possible classes and the reward is generated specifically for the predicted class. We make use of this extension by separating the objects into a foreground and background class in our experiments.

Figure 7: Object-level rewards. We accumulate edge rewards over each object where we consider all edges that have at least one node within the respective object. E.g. for o_1 we consider all edges that are covered by the light blue object as well as all the red "split" edges.
In addition to the sub-graph rewards, our approach can also be extended to global rewards by global pooling of the output of the critic GNN and adding the squared difference of global action value and reward to Equation 4. Alternatively, the global reward can be distributed onto the sub-graph rewards via a weighted sum of sub-graph reward and global reward. In the second approach, a different global reward can be specified per class in the case of the semantic instance segmentation formulation. We make use of the per-class global reward to encode a reward for the correct number of predicted objects in the experiments on synthetic data (Subsection 4.1).
The biggest challenge in designing the reward function is to avoid local optima. Since the reward is derived from each predicted object, we define the reward by extracting shape features, position, orientation and size of objects and compare them with our expectation of the true object's features. This similarity score should be monotonically increasing as the objects fit our expectation better. All used similarity functions are to a certain extent linear; however, an exponential reward function can speed up learning significantly. Consider an object-level reward r ∈ [0, 1], which is linear. We calculate the exponential reward by
$$r_{exp}(r) = \frac{\exp(r\theta)}{\exp(\theta)} \quad (11)$$
where the factor θ determines the range of the gradient in the output. We also find that it is better to compute the reward as a "distance function" of all relevant features rather than decomposing it into the features and simply summing up the corresponding rewards. In our experiments the latter approach behaved quite unpredictably and often generated local optima which the agent could not escape.
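Eq. (11) in code, with an illustrative θ; note how the mapping compresses low scores and steepens the gradient near r = 1:

```python
# Exponential reward shaping of Eq. (11): a linear object score r in [0, 1] is
# mapped to exp(r*theta)/exp(theta).
import numpy as np

def exp_reward(r, theta=8.0):
    return np.exp(r * theta) / np.exp(theta)

print([round(exp_reward(r), 4) for r in (0.0, 0.5, 0.9, 1.0)])
# [0.0003, 0.0183, 0.4493, 1.0]
```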
A.3 OBJECT LEVEL REWARDS
We have tested generating the rewards based directly on the object scores instead of using the subgraph decomposition described in Section A.7. Since rewards are mainly derived from the features of the predicted objects it seems reasonable to formulate the supervision signal directly for objects. To this end we calculate a scalar reward per object as sketched in Figure 7. In this case, the critic uses a second GNN to predict the per-object action values. It is applied to an object's subgraph, which is composed of all edges that have at least one node in common with the respective object. The graph convolutions are followed by a global pooling operation which yields the scalar action value. This GNN replaces the MLPs used in the case of the reward subgraph decomposition. After extensive testing, we found that this approach is always inferior to the subgraph decomposition.
A.4 IMPACT OF DIFFERENT FEATURE SPACE CAPACITIES
In Tab. 3 we compare the performance of our method, using different dimensionalities of the learned feature space (the number of channels in the output of the feature extractor UNet). We find that the reduced capacity of small feature spaces improves the agent's performance.

Table 3: Quantitative evaluation of our method using different feature space dimensionalities. We use the same metrics for evaluation as in Tab. 2 and compare all results on the validation set.
A.5 RANDOM SEED EVALUATION

Figure 8 shows the evolution of the average subgraph reward during training for different random seeds. The model performance depends on the chosen seed, and for the final comparisons we select the runs based on the best score. The seed is generated randomly for each run.

Figure 8: Running the same setup for different random seeds reveals a stable trend towards larger rewards. We select the model for comparison based on the best achieved reward (magenta line in Fig. 8a and green line in Fig. 8b).
A.6 GAUSSIAN WEIGHTING SCHEME

Fig. 9 shows the weighting scheme which was used to generate the rewards for the fruit fly embryo data (Subsection 4.3). It can be seen as a very approximate semantic segmentation and serves the purpose of generating a reward maximum at the approximate foreground locations.
A.7 RANDOMLY GENERATED SUBGRAPHS
We select subgraphs using Alg. 1. Subgraphs are selected randomly, starting from random nodes and continuously adding edges to the subgraph until the desired size is reached. The size of the subgraph is defined by the number of edges in the graph. The algorithm selects edges such that the subgraphs are connected and such that their density is high (low number of nodes in the subgraph).

A.8 MULTI-STEP FORMULATION

We tested several methods that use multiple steps within one episode. In this formulation we predict the changes starting from an initial state rather than predicting absolute values for the edge weights. For example, we can start from a state defined by edge weights derived from a boundary map. Given that this state should be somewhat close to the desired state, we expect that a few small steps within one episode should be sufficient. In our experiments, we have typically used three steps per episode and used actions that can change the weight per edge by the values in [−0.1, 0.1]. This approach generates an action space that is exponentially larger than in the stateless formulation. A priori, this setup might still be more stable because it is not possible to diverge from a given solution so fast due to the incremental changes per step.
Let us first consider the case with groundtruth edge weights. In this case, we can give an accurate reward not only for the final segmentation but for every step. Hence, the path to the optimal state is linear. Take for example an initial edge with weight 0.3 and its respective ground truth edge with weight 0. We can give a reward that mirrors the correct confidence of the action by using the negative value of the predicted action: r = −a. This allows us to set the discount factor γ of the RL setup to 0, because the path to the correct edge weight is linear and the correct direction will be encoded in the reward at every step. Therefore the rewards for the following steps are not needed. Setting the discount to 0 generates a problem of the same size as the stateless RL method. However, for this approach the ground truth direction of the path must be known for each edge, so it can only be used in the case of full supervision.
To generalize the multi-step approach to the rule-based rewards, we need to choose a different setup where a constant reward is given at each non-terminal step and the rule-based one is given at the terminal step. This setup requires a discount factor γ > 0 and has an action space more complex than the stateless approach, because future steps are necessary to compute the reward. We tested this setup extensively against the stateless approach and found that it was not competitive.
A.9 NETWORK ARCHITECTURES, HYPERPARAMETERS AND EXPERIMENT DETAILS
The U-Net used for feature extraction is based on a slightly modified implementation of the standard 2D U-Net. It uses max-pooling to spatially downsample the features by a factor of 2 in the encoder and transposed convolutions to upsample them in the decoder. All convolutions have a 3x3 kernel size, followed by ReLU activations and group normalization. It has four levels, each level consisting of 2 convolutional blocks followed by downsampling (encoder) / upsampling (decoder); the features from the encoder are concatenated to the decoder inputs on the same level using skip connections. For the experiments we use feature maps of size 32, 64, 128, 256 for the different levels, except for the nucleus segmentation experiment where we use 16, 32, 64, 128.
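A hedged sketch of one building block of this feature extractor; the group count for GroupNorm is an assumption, as it is not stated above.

```python
# Sketch of the double-convolution block described above: two 3x3 convolutions,
# each followed by ReLU and group normalization. Levels would be chained with
# max-pooling / transposed convolutions (not shown).
import torch.nn as nn

def conv_block(c_in, c_out, groups=8):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.GroupNorm(groups, c_out),
        nn.Conv2d(c_out, c_out, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.GroupNorm(groups, c_out),
    )

encoder = nn.ModuleList([conv_block(c_in, c_out) for c_in, c_out in
                         [(1, 32), (32, 64), (64, 128), (128, 256)]])
```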
Both actor and critic are based on GNNs: the GNN for the actor predicts the action per edge and the critic has a GNN that also predicts per-edge outputs. Both GNNs use an architecture of depth two and each layer consists of an MLP with three hidden layers and 2028 units per hidden layer. In the case of the critic we have an additional MLP per sub-graph size that takes the edge outputs of the GNN as input and predicts the action value for a given sub-graph. Each of these MLPs has two hidden layers with 2028 hidden units each.
We use the Adam optimizer with a learning rate of 0.0001 and the PyTorch default values for all other hyperparameters. We have trained the networks on different GPUs, depending on the problem size (which is defined by the number of superpixels): Nvidia Geforce GTX 2080 TIs for the synthetic data experiments, Nvidia Geforce RTX 3090s for the nucleus segmentation experiments and Nvidia A100s for the cell segmentation experiments. A single training run always used a single GPU.
The cell segmentation dataset is available on the general-purpose open-access repository Zenodo.¹
A.10 DIRECT SUPERVISION
We have also implemented a set-up for direct supervision that can be applied if any of the images have a groundtruth pixelwise segmentation. We investigated both full supervision (Fig. 10) and mixed supervision, where one fully segmented image was used in addition to the prior rules (Fig. 11). Under full supervision with a set of groundtruth edge weights, we compute the Dice score Shamir et al. (2019) of the predicted edge weights a and the ground truth â for each sub-graph and use it as the reward. We find this approach to be robust against class imbalance. In both cases, the agent learns to segment the circles correctly, demonstrating fast and robust convergence. Note that learned pixel features converge to a state which strongly resembles a semantic segmentation of the image. We have also used the mixed-supervision approach in Subsection 4.3.

Figure 10: An example of a fully supervised prediction on the synthetic dataset. Here, we use the edge-based Dice score over subgraphs and make use of Method 1, i.e. joint training of the feature extractor, but we initialized it using the weights pretrained by self-supervision. The features for the circles are significantly more pronounced after training.

Figure 11: An example of a prediction with mixed supervision on the synthetic dataset. The reward was defined as follows: for all but one image we use the unsupervised CHT reward and for one image we make use of ground truth and the Dice score. We find that this mixed reward setting leads to improved performance compared to the unsupervised CHT reward.
A.11 SYNTHETIC CIRCLE EXPERIMENTS
For the synthetic circular data we implement a reward function based on the circular Hough transform (CHT). The object rewards r_fg are computed as follows: we define a threshold γ for the CHT value (Fig. 3 shows the reward surface for γ = 0.8). Let c ∈ [0, 1] be the CHT value for the given object, let k be the number of expected objects and n be the number of predicted objects. We then define the local and global reward per object as follows.
$$r_{local} = \begin{cases} \sigma\!\left(\left(\dfrac{c-\gamma}{1-\gamma} - 0.5\right) \cdot 6\right) \cdot 0.4, & \text{if } c \ge \gamma \\[6pt] 0, & \text{otherwise} \end{cases} \quad (12)$$

$$r_{global} = \begin{cases} 0.6\,\dfrac{k}{n}, & \text{if } n \ge k \\[6pt] 0.6, & \text{otherwise} \end{cases} \quad (13)$$

$$r_{fg} = r_{local} + r_{global} \quad (14)$$
Here σ(·) is the sigmoid function. We assume that the largest object is the background and assign it the background reward:
$$r_{bg} = \begin{cases} \dfrac{n}{k}, & \text{if } n \le k \\[6pt] 1, & \text{otherwise} \end{cases} \quad (15)$$
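Eqs. (12)-(15) as a numpy sketch; the demo values are illustrative.

```python
# Sketch of the synthetic-data reward: c is the CHT value of an object, gamma
# the CHT threshold, k the expected and n the predicted number of objects.
import numpy as np

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))             # logistic sigmoid

def fg_reward(c, n, k, gamma=0.8):
    r_local = sigma(((c - gamma) / (1 - gamma) - 0.5) * 6) * 0.4 if c >= gamma else 0.0
    r_global = 0.6 * k / n if n >= k else 0.6
    return r_local + r_global                   # Eq. (14)

def bg_reward(n, k):
    return n / k if n <= k else 1.0             # Eq. (15)

print(fg_reward(c=0.9, n=12, k=10), bg_reward(n=12, k=10))
```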
A.12 DSB EXPERIMENTS
We compute the per-object rewards based on the following properties: the eccentricity, the extent, the minor axis length, the perimeter, the solidity as well as the mean, maximal and minimal intensity. The properties are computed using the "regionprops" functionality from skimage. Using these properties p i and their expected valuesp i determined for simplicity from ground-truth objects, we compute the reward r o as r o = 1 + cs( pi ni ,p î ni ) 2
.
Here, n i andn i are normalization factors to bring each property into the range [0, 1] and cs(·, ·) is the cosine similarity. Note that objects that are identified as background, using a simple size criterion, do not receive any reward. Projecting this reward to edges using the maximum (see Section 3.2) yields r oe , and the final edge-level reward r e is constructed by the sum r e = α r oe + βr e .
Here α and β are scaling factors and $\tilde{r}_e$ is a reward based on the distance of the action to the difference of the mean intensities μ of the superpixels incident to edge (i, j):
$$\tilde{r}_{e,ij} = \left|(1 - a_{ij}) - |\mu_i - \mu_j|\right|. \tag{18}$$
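A sketch of the object and edge rewards above using skimage regionprops; the normalization vectors, property list ordering, and function names are illustrative placeholders (the expected values would in practice come from the prior rules or ground-truth objects):

```python
import numpy as np
from skimage.measure import regionprops  # regions need an intensity_image for intensity props

PROPS = ["eccentricity", "extent", "minor_axis_length", "perimeter", "solidity",
         "mean_intensity", "max_intensity", "min_intensity"]

def object_reward(region, expected: np.ndarray, norm: np.ndarray) -> float:
    """Eq. (16): r_o = (1 + cosine_similarity(p / n, p_hat / n_hat)) / 2."""
    p = np.array([region[name] for name in PROPS]) / norm
    q = expected / norm
    cos_sim = p @ q / (np.linalg.norm(p) * np.linalg.norm(q) + 1e-12)
    return (1 + cos_sim) / 2

def edge_intensity_reward(a_ij: float, mu_i: float, mu_j: float) -> float:
    """Eq. (18): reward actions consistent with the mean-intensity difference."""
    return abs((1 - a_ij) - abs(mu_i - mu_j))
```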
If the action for an edge is close to 1, i.e. strongly favoring a split, and the difference in the mean intensity is high, there will be a large reward and vice versa.
For the DSB experiments we use three sets of superpixels:
• "GT": these superpixels take into account ground-truth information. They are used to judge the performance of our method without being limited by the quality of the underlying superpixels. For these superpixels we compute a height map based on the gradient of the input image and then perform a seeded watershed transform with seeds at the minima of the height map. The resulting superpixels are intersected with the groundtruth, further breaking into pieces the superpixels which cover more than one object or that spill into the background.
• "UNET": these superpixels are based on predictions from a pre-trained U-Net and allow the performance of our method when having access to such a pre-trained network. The U-Net predicts foreground and boundary probabilties and the superpixels are computed via a watershed that uses the minima of the boundary probabilities as seeds and also uses these probabilties as heightmap. Befor computing the seeds the boundary probability are smoothed, using a higher smoothing factor in the background than in the foreground, which is determined based on the foreground predictions.
• "RAW": these superpixels are computed based on raw image data only. They are computed similar to the "UNET" superpixels, but the gradient image of the input is used to compute seeds and as heightmap instead of the boundary predictions. To determine foreground vs. background the otsu thresholded intensity image is used.
Furthermore, we use ground-truth objects to determine the expected values of the priors. In practice, these are usually known from biology (shape priors) or can be determined from bulk measurements without segmentation (intensity priors). Fig. 12 shows segmentation results for the methods from Tab. 1 for several images from the test set. Here, red arrows mark segmentation errors that wrongly split a nucleus, purple arrows mark segmentation errors that wrongly merge nuclei, and yellow arrows mark segmentation errors that either omit nuclei or segment them with a completely wrong shape. Note that these errors were manually annotated and are not exhaustive for the shown images. We observe that different methods suffer from different systematic errors: our proposed method suffers from merges, sometimes omits nuclei, and in some cases wrongly splits off a small superpixel from a nucleus. The UNet predominantly suffers from merges. The combination of UNet and Multicut, which uses the same superpixels as our method, suffers from merges and omissions; it also systematically splits off superpixels located on the boundary of nuclei, which can especially be seen for the images in the third and fourth row. Stardist and Cellpose, which use a strong shape prior, do not suffer from merges. Instead, they often wrongly split up nuclei that do not adhere to the shape prior and sometimes omit nuclei or predict them with a very wrong shape. Note that both methods with shape priors result in round shapes that may not match the ground-truth objects for high intersection thresholds, even if they are visually matching well.

Figure 12: Example segmentation results for our method and baselines. The arrows mark segmentation errors: red are false splits, purple are false merges and yellow are omissions or nuclei segmented with a very wrong shape. Note that the errors were annotated manually to give an impression of the different kinds of systematic errors and are not exhaustive for these images.
Figure 3: An example reward landscape for Circle Hough Transform (CHT) rewards. High rewards are given if the overall number of predicted objects is not too high and if the respective object has a large CHT value.
Figure 4: Synthetic data. a) Top left to right: groundtruth segmentation, raw data, superpixels and visualization of the actions (merge actions in green, split actions in red). Bottom left to right: pretrained pixel embeddings, superpixel edges, segmentation result and visualization of the rewards (light green for high rewards, dark red for low rewards). b) Comparison of segmentation from our method and the mutex watershed.
Figure 5: Cell segmentation experiment. Top left to right: groundtruth segmentation; raw data; boundary predictions; superpixel over-segmentation; visualization for the actions on every edge (green = merge action, red = split action). Bottom left to right: a) handcrafted features per superpixel; b) learned features averaged over superpixels; c) learned features per pixel; Multicut segmentations; visualization of the rewards (light green = high reward, dark red = low reward). For all features, we use the first 3 PCA components for visualisation. White arrows point to remaining errors.
As baselines, we use the U-Net of Wolny et al. (2020) with Multicut for instance segmentation ("UNet + MC"), as well as the method of De Brabandere et al. (2017) trained on the same data as Wolny et al. (2020) ("contrastive").
Figure 6: Training setup of the feature extractor. The input is a concatenation of the raw data and a smoothed edge map of the superpixels. The superpixel over-segmentation is used in the loss again as the supervision for learning the embedding space.

For self-supervised pre-training, we use a method based on the contrastive loss formulation of De Brabandere et al. (2017). Consider a graph G = (V, E), where the nodes in V = {1, 2, . . . , |V|} correspond to the superpixels.
(a) Setup 1: features of size 16, 10 seeds. (b) Setup 2: features of size 12, 8 seeds.
Figure 9: Weighting scheme for object rewards and merge affinity rewards, roughly encoding foreground location in 4.3. Left: weights for object rewards in green and for merge affinity rewards in red; both have a Gaussian profile and are concentric. Right: corresponding light-sheet image.

Algorithm 1: Dense subgraphs in a RAG. Given G = (V, E) and a size l, the algorithm returns subgraphs as sets of l edges: starting from an unvisited seed edge, it repeatedly adds adjacent edges, managed via a priority queue over the incident nodes, until the subgraph reaches l edges, then continues with the next seed until all edges are covered (see the sketch below).

A.8 MULTISTEP REINFORCEMENT LEARNING
We ask: i) how does our RL formulation with object-level rewards compare to commonly used fully supervised baselines? ii) given superpixels that can be combined into the correct solution, but no other direct supervision, can our approach learn to combine the superpixels correctly only from high-level rules? iii) what happens if superpixels are suboptimal? For data, we turn to the dataset of Caicedo et al. (2019) and select images that contain nuclei of medium size (175 for training and 22 for test).

| Method | Superpixel | mAP | IoU50 | IoU75 |
|---|---|---|---|---|
| UNet | - | 0.710 | 0.900 | 0.756 |
| StarDist | - | 0.645 | 0.938 | 0.736 |
| Cellpose | - | 0.666 | 0.931 | 0.776 |
| UNet + MC | GT | 0.674 | 0.806 | 0.702 |
| ours (sup.) | GT | 0.766 | 0.907 | 0.799 |
| Otsu | - | 0.554 | 0.763 | 0.579 |
| ours (unsup.) | GT | 0.743 | 0.916 | 0.787 |
| ours (unsup.) | UNET | 0.671 | 0.872 | 0.704 |
| ours (unsup.) | RAW | 0.453 | 0.785 | 0.439 |
| sp gt | UNET | 0.793 | 0.98 | 0.852 |
| sp gt | RAW | 0.554 | 0.969 | 0.505 |

Table 1: Nuclei segmentation: for mAP and IoU higher values are better. Methods above the first middle line were trained fully supervised. Methods below the first middle line were trained without groundtruth; the results below the second middle line indicate the quality of the superpixels projected to the groundtruth (the best possible result that can be achieved with the given superpixels).
Pretrained models are also available from von Chamier et al. (2021) and Wolny et al. (2020); however, they produce sub-par results on this data due to experimental differences.

| Method | VI | VI merge | VI split |
|---|---|---|---|
| sp gt | 1.266 | 0.672 | 0.594 |
| ours | 2.213 | 0.839 | 1.374 |
| ours (semisup.) | 1.634 | 0.733 | 0.901 |
| ours (handcrafted) | 2.523 | 0.987 | 1.536 |
| UNet + MC | 3.361 | 3.019 | 0.342 |
| contrastive | 4.440 | 1.155 | 3.28 |

Table 2: Cell segmentation results in terms of Variation of Information (VI; lower is better), split into merge and split errors.
The input to σ(·) is normalized to the interval [−3, 3]. The rewards are always in [0, 1], the local reward is in range [0, 0.4] and the global reward is in range [0, 0.6].
https://zenodo.org/record/4899944#.YORWq0xRVEZ
Ahmed Abbas and Paul Swoboda. Combinatorial optimization for panoptic segmentation: A fully differentiable approach. Advances in Neural Information Processing Systems, 34, 2021.
Bjoern Andres, Thorben Kroeger, Kevin L Briggman, Winfried Denk, Natalya Korogod, Graham Knott, Ullrich Koethe, and Fred A Hamprecht. Globally optimal closed-surface segmentation for connectomics. In European Conference on Computer Vision, pp. 778-791. Springer, 2012.
Nikita Araslanov, Constantin Rothkopf, and Stefan Roth. Actor-critic instance segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
Drew Bagnell and Andrew Ng. On local rewards and scaling distributed reinforcement learning. In Y. Weiss, B. Schölkopf, and J. Platt (eds.), Advances in Neural Information Processing Systems, volume 18. MIT Press, 2006. URL https://proceedings.neurips.cc/paper/2005/file/02180771a9b609a26dcea07f272e141f-Paper.pdf.
Alberto Bailoni, Constantin Pape, Steffen Wolf, Thorsten Beier, Anna Kreshuk, and Fred A Hamprecht. A generalized framework for agglomerative clustering of signed graphs applied to instance segmentation. arXiv preprint arXiv:1906.11713, 2019.
Thorsten Beier, Constantin Pape, Nasim Rahaman, Timo Prange, Stuart Berg, Davi D Bock, Albert Cardona, Graham W Knott, Stephen M Plaza, Louis K Scheffer, et al. Multicut brings automated neurite segmentation closer to human performance. Nature methods, 14(2):101-102, 2017.
Maxim Berman, Amal Rannen Triki, and Matthew B Blaschko. The lovász-softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4413-4421, 2018.
Sourabh Bhide, Ralf Mikut, Maria Leptin, and Johannes Stegmaier. Semi-automatic generation of tight binary masks and non-convex isosurfaces for quantitative analysis of 3d biological samples, 2020.
Juan C Caicedo, Allen Goodman, Kyle W Karhohs, Beth A Cimini, Jeanelle Ackerman, Marzieh Haghighi, CherKeng Heng, Tim Becker, Minh Doan, Claire McQuin, et al. Nucleus segmentation across imaging experiments: the 2018 data science bowl. Nature methods, 16(12):1247-1253, 2019.
Liang-Chieh Chen, Huiyu Wang, and Siyuan Qiao. Scaling wide residual networks for panoptic segmentation, 2021.
Dorin Comaniciu and Peter Meer. Mean shift: A robust approach toward feature space analysis. IEEE Transactions on pattern analysis and machine intelligence, 24(5):603-619, 2002.
Bert De Brabandere, Davy Neven, and Luc Van Gool. Semantic instance segmentation with a discriminative loss function. arXiv preprint arXiv:1708.02551, 2017.
Ricard Delgado-Gonzalo, Virginie Uhlmann, Daniel Schmitter, and Michael Unser. Snakes on a plane: A perfect snap for bioimage analysis. IEEE Signal Processing Magazine, 32(1):41-48, 2014.
Cicero Nogueira dos Santos, Kahini Wadhawan, and Bowen Zhou. Learning loss functions for semi-supervised learning via discriminative adversarial networks, 2017.
Elad Eban, Mariano Schain, Alan Mackey, Ariel Gordon, Ryan Rifkin, and Gal Elidan. Scalable learning of non-decomposable objectives. In Artificial intelligence and statistics, pp. 832-840. PMLR, 2017.
Thorsten Falk, Dominic Mai, Robert Bensch, Özgün Çiçek, Ahmed Abdulkadir, Yassine Marrakchi, Anton Böhm, Jan Deubner, Zoe Jäckel, Katharina Seiwald, et al. U-net: deep learning for cell counting, detection, and morphometry. Nature methods, 16(1):67-70, 2019.
Jan Funke, Fabian Tschopp, William Grisaitis, Arlo Sheridan, Chandan Singh, Stephan Saalfeld, and Srinivas C Turaga. Large scale image segmentation with structured loss based deep learning for connectome reconstruction. IEEE transactions on pattern analysis and machine intelligence, 41(7):1669-1680, 2018.
Naiyu Gao, Yanhu Shan, Yupei Wang, Xin Zhao, Yinan Yu, Ming Yang, and Kaiqi Huang. Ssap: Single-shot instance segmentation with affinity pyramid. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 642-651, 2019.
Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. CoRR, abs/1704.01212, 2017. URL http://arxiv.org/abs/1704.01212.
Josif Grabocka, Randolf Scholz, and Lars Schmidt-Thieme. Learning surrogate losses. arXiv preprint arXiv:1905.10108, 2019.
Tuomas Haarnoja, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan, Vikash Kumar, Henry Zhu, Abhishek Gupta, Pieter Abbeel, and Sergey Levine. Soft actor-critic algorithms and applications. CoRR, abs/1812.05905, 2018. URL http://arxiv.org/abs/1812.05905.
Allam Shehata Hassanein, Sherien Mohammad, Mohamed Sameer, and Mohammad Ehab Ragab. A survey on hough transform, theory, techniques and applications. CoRR, abs/1502.02160, 2015. URL http://arxiv.org/abs/1502.02160.
Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, pp. 2961-2969, 2017.
Xiaoling Hu, Fuxin Li, Dimitris Samaras, and Chao Chen. Topology-preserving deep image segmentation. In Advances in Neural Information Processing Systems, volume 32, 2019. URL https://proceedings.neurips.cc/paper/2019/file/2d95666e2649fcfc6e3af75e09f5adb9-Paper.pdf.
Viren Jain, Srinivas Turaga, Kevin Briggman, Moritz Helmstaedter, Winfried Denk, and Hyunjune Seung. Learning to agglomerate superpixel hierarchies. Advances in Neural Information Processing Systems, 24, 2011.
Brian W Kernighan and Shen Lin. An efficient heuristic procedure for partitioning graphs. The Bell system technical journal, 49(2):291-307, 1970.
Hoel Kervadec, Jose Dolz, Meng Tang, Eric Granger, Yuri Boykov, and Ismail Ben Ayed. Constrained-cnn losses for weakly supervised segmentation. Medical Image Analysis, 54:88-99, 2019. ISSN 1361-8415. doi: 10.1016/j.media.2019.02.009.
Kisuk Lee, Jonathan Zung, Peter Li, Viren Jain, and H. Sebastian Seung. Superhuman accuracy on the snemi3d connectomics challenge, 2017.
Jeremy B Maitin-Shepard, Viren Jain, Michal Januszewski, Peter Li, and Pieter Abbeel. Combinatorial energy learning for image segmentation. In D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc., 2016. URL https://proceedings.neurips.cc/paper/2016/file/31857b449c407203749ae32dd0e7d64a-Paper.pdf.
Leland McInnes and John Healy. Accelerated hierarchical density based clustering. In 2017 IEEE International Conference on Data Mining Workshops (ICDMW), pp. 33-42. IEEE, 2017.
Marina Meilă. Comparing clusterings by the variation of information. In Learning theory and kernel machines, pp. 173-187. Springer, 2003.
Davy Neven, Bert De Brabandere, Marc Proesmans, and Luc Van Gool. Instance segmentation by jointly optimizing spatial embeddings and clustering bandwidth. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8837-8845, 2019.
Ozan Oktay, Enzo Ferrante, Konstantinos Kamnitsas, Mattias Heinrich, Wenjia Bai, Jose Caballero, Stuart A. Cook, Antonio de Marvao, Timothy Dawes, Declan P. O'Regan, Bernhard Kainz, Ben Glocker, and Daniel Rueckert. Anatomically constrained neural networks (acnns): Application to cardiac image enhancement and segmentation. IEEE Transactions on Medical Imaging, 37(2):384-395, 2018. doi: 10.1109/TMI.2017.2743464.
S. Osher and N. Paragios. Geometric Level Set Methods in Imaging, Vision, and Graphics. Springer New York, 2007. ISBN 9780387218106. URL https://books.google.de/books?id=ZWzrBwAAQBAJ.
Nobuyuki Otsu. A threshold selection method from gray-level histograms. IEEE transactions on systems, man, and cybernetics, 9(1):62-66, 1979.
Constantin Pape, Alex Matskevych, Adrian Wolny, Julian Hennies, Giulia Mizzon, Marion Louveaux, Jacob Musser, Alexis Maizel, Detlev Arendt, and Anna Kreshuk. Leveraging domain knowledge to improve microscopy image segmentation with lifted multicuts. Frontiers in Computer Science, 1:6, 2019.
David Pfau and Oriol Vinyals. Connecting generative adversarial networks and actor-critic methods. CoRR, abs/1610.01945, 2016. URL http://arxiv.org/abs/1610.01945.
William M Rand. Objective criteria for the evaluation of clustering methods. Journal of the American Statistical association, 66(336):846-850, 1971.
Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. CoRR, abs/1505.04597, 2015. URL http://arxiv.org/abs/1505.04597.
Uwe Schmidt, Martin Weigert, Coleman Broaddus, and Gene Myers. Cell detection with star-convex polygons. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 265-273. Springer, 2018.
Reuben R. Shamir, Yuval Duchin, Jinyoung Kim, Guillermo Sapiro, and Noam Harel. Continuous dice coefficient: a method for evaluating probabilistic segmentations. CoRR, abs/1906.11031, 2019. URL http://arxiv.org/abs/1906.11031.
Jie Song, Bjoern Andres, Michael J Black, Otmar Hilliges, and Siyu Tang. End-to-end learning for graph decomposition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 10093-10102, 2019.
Yang Song, Alexander G. Schwing, Richard S. Zemel, and Raquel Urtasun. Training deep neural networks via direct loss minimization. International Conference on Machine Learning, 2016.
Carsen Stringer, Tim Wang, Michalis Michaelos, and Marius Pachitariu. Cellpose: a generalist algorithm for cellular segmentation. Nature Methods, 18(1):100-106, 2021.
Peter Sunehag, Guy Lever, Audrunas Gruslys, Wojciech Marian Czarnecki, Vinicius Zambaldi, Max Jaderberg, Marc Lanctot, Nicolas Sonnerat, Joel Z. Leibo, Karl Tuyls, and Thore Graepel. Value-decomposition networks for cooperative multi-agent learning, 2017.
Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. The MIT Press, second edition, 2018. URL http://incompleteideas.net/book/the-book-2nd.html.
D'Arcy Wentworth Thompson. On Growth and Form. Canto. Cambridge University Press, 1992. doi: 10.1017/CBO9781107325852.
Lucas von Chamier, Romain F Laine, Johanna Jukkala, Christoph Spahn, Daniel Krentzel, Elias Nehme, Martina Lerche, Sara Hernández-Pérez, Pieta K Mattila, Eleni Karinou, Séamus Holden, Ahmet Can Solak, Alexander Krull, Tim-Oliver Buchholz, Martin L Jones, Loïc A Royer, Christophe Leterrier, Yoav Shechtman, Florian Jug, Mike Heilemann, Guillaume Jacquemet, and Ricardo Henriques. Democratising deep learning for microscopy with ZeroCostDL4Mic. Nature Communications, 2021.
Steffen Wolf, Alberto Bailoni, Constantin Pape, Nasim Rahaman, Anna Kreshuk, Ullrich Köthe, and Fred A Hamprecht. The mutex watershed and its objective: Efficient, parameter-free graph partitioning. IEEE transactions on pattern analysis and machine intelligence, 2020.
Adrian Wolny, Lorenzo Cerrone, Athul Vijayan, Rachele Tofanelli, Amaya Vilches Barro, Marion Louveaux, Christian Wenzl, Sören Strauss, David Wilson-Sánchez, Rena Lymbouridou, et al. Accurate and versatile 3d segmentation of plant tissues at cellular resolution. Elife, 9:e57613, 2020.
Adrian Wolny, Qin Yu, Constantin Pape, and Anna Kreshuk. Sparse object-level supervision for instance segmentation with pixel embeddings. arXiv preprint arXiv:2103.14572, 2021.
Huazhe Xu, Boyuan Chen, Yang Gao, and Trevor Darrell. Scoring-aggregating-planning: Learning task-agnostic priors from interactions and sparse rewards for zero-shot generalization. CoRR, abs/1910.08143, 2019. URL http://arxiv.org/abs/1910.08143.
| []
|
[
"High-resolution remote thermography using luminescent low-dimensional tin- halide perovskites",
"High-resolution remote thermography using luminescent low-dimensional tin- halide perovskites"
]
| [
"Sergii Yakunin [email protected] \nDepartment of Chemistry and Applied Biosciences\nLaboratory of Inorganic Chemistry\nETH Zürich\nCH-8093ZürichSwitzerland\n\nLaboratory for Thin Films and Photovoltaics\nEmpa -Swiss Federal Laboratories for Materials Science and Technology\nCH-8600DübendorfSwitzerland\n",
"Bogdan M Benin \nDepartment of Chemistry and Applied Biosciences\nLaboratory of Inorganic Chemistry\nETH Zürich\nCH-8093ZürichSwitzerland\n\nLaboratory for Thin Films and Photovoltaics\nEmpa -Swiss Federal Laboratories for Materials Science and Technology\nCH-8600DübendorfSwitzerland\n",
"Yevhen Shynkarenko \nDepartment of Chemistry and Applied Biosciences\nLaboratory of Inorganic Chemistry\nETH Zürich\nCH-8093ZürichSwitzerland\n\nLaboratory for Thin Films and Photovoltaics\nEmpa -Swiss Federal Laboratories for Materials Science and Technology\nCH-8600DübendorfSwitzerland\n",
"Olga Nazarenko \nDepartment of Chemistry and Applied Biosciences\nLaboratory of Inorganic Chemistry\nETH Zürich\nCH-8093ZürichSwitzerland\n\nLaboratory for Thin Films and Photovoltaics\nEmpa -Swiss Federal Laboratories for Materials Science and Technology\nCH-8600DübendorfSwitzerland\n",
"Maryna I Bodnarchuk \nDepartment of Chemistry and Applied Biosciences\nLaboratory of Inorganic Chemistry\nETH Zürich\nCH-8093ZürichSwitzerland\n\nLaboratory for Thin Films and Photovoltaics\nEmpa -Swiss Federal Laboratories for Materials Science and Technology\nCH-8600DübendorfSwitzerland\n",
"Dmitry N Dirin \nDepartment of Chemistry and Applied Biosciences\nLaboratory of Inorganic Chemistry\nETH Zürich\nCH-8093ZürichSwitzerland\n\nLaboratory for Thin Films and Photovoltaics\nEmpa -Swiss Federal Laboratories for Materials Science and Technology\nCH-8600DübendorfSwitzerland\n",
"Christoph Hofer \nSwiss Center for Electronics and Microtechnology (CSEM)\nCenter LandquartCH-7302LandquartSwitzerland\n",
"Stefano Cattaneo \nSwiss Center for Electronics and Microtechnology (CSEM)\nCenter LandquartCH-7302LandquartSwitzerland\n",
"Maksym V Kovalenko *e-mail:[email protected] \nDepartment of Chemistry and Applied Biosciences\nLaboratory of Inorganic Chemistry\nETH Zürich\nCH-8093ZürichSwitzerland\n\nLaboratory for Thin Films and Photovoltaics\nEmpa -Swiss Federal Laboratories for Materials Science and Technology\nCH-8600DübendorfSwitzerland\n"
]
| [
"Department of Chemistry and Applied Biosciences\nLaboratory of Inorganic Chemistry\nETH Zürich\nCH-8093ZürichSwitzerland",
"Laboratory for Thin Films and Photovoltaics\nEmpa -Swiss Federal Laboratories for Materials Science and Technology\nCH-8600DübendorfSwitzerland",
"Department of Chemistry and Applied Biosciences\nLaboratory of Inorganic Chemistry\nETH Zürich\nCH-8093ZürichSwitzerland",
"Laboratory for Thin Films and Photovoltaics\nEmpa -Swiss Federal Laboratories for Materials Science and Technology\nCH-8600DübendorfSwitzerland",
"Department of Chemistry and Applied Biosciences\nLaboratory of Inorganic Chemistry\nETH Zürich\nCH-8093ZürichSwitzerland",
"Laboratory for Thin Films and Photovoltaics\nEmpa -Swiss Federal Laboratories for Materials Science and Technology\nCH-8600DübendorfSwitzerland",
"Department of Chemistry and Applied Biosciences\nLaboratory of Inorganic Chemistry\nETH Zürich\nCH-8093ZürichSwitzerland",
"Laboratory for Thin Films and Photovoltaics\nEmpa -Swiss Federal Laboratories for Materials Science and Technology\nCH-8600DübendorfSwitzerland",
"Department of Chemistry and Applied Biosciences\nLaboratory of Inorganic Chemistry\nETH Zürich\nCH-8093ZürichSwitzerland",
"Laboratory for Thin Films and Photovoltaics\nEmpa -Swiss Federal Laboratories for Materials Science and Technology\nCH-8600DübendorfSwitzerland",
"Department of Chemistry and Applied Biosciences\nLaboratory of Inorganic Chemistry\nETH Zürich\nCH-8093ZürichSwitzerland",
"Laboratory for Thin Films and Photovoltaics\nEmpa -Swiss Federal Laboratories for Materials Science and Technology\nCH-8600DübendorfSwitzerland",
"Swiss Center for Electronics and Microtechnology (CSEM)\nCenter LandquartCH-7302LandquartSwitzerland",
"Swiss Center for Electronics and Microtechnology (CSEM)\nCenter LandquartCH-7302LandquartSwitzerland",
"Department of Chemistry and Applied Biosciences\nLaboratory of Inorganic Chemistry\nETH Zürich\nCH-8093ZürichSwitzerland",
"Laboratory for Thin Films and Photovoltaics\nEmpa -Swiss Federal Laboratories for Materials Science and Technology\nCH-8600DübendorfSwitzerland"
]
| []
| bolometric detectors, owing to advancements in MEMS-technologies (Micro-Electro-Mechanical Systems)6,7, have already entered the consumer electronics market, and are able to record thermal images with both high speed and high resolution. However, their thermographic performance, based on measurements of IR radiation intensity, is inherently limited by the transparency and emissivity/reflectivity of an observed object and, more importantly, by any material and medium (window, coating, matrix, solvent etc.) situated within the path between the detector and an object (Fig. 1). As one of the major consequences, IR thermography cannot be easily combined with conventional optical microscopy or other enclosed optical systems such as cryostats or microfluidic cells.An alternative method for remote thermography, which is unhindered by enclosures or IRabsorptive media, utilizes temperature sensitive luminophores (i.e. fluorophores or phosphors) with PL in the visible spectral range(Fig. 1c)that are deposited onto, or incorporated into, the object of interest as temperature probes 8-16 . To probe an object's temperature, the luminophore is then excited by an ultraviolet or visible (UV-Vis) pulsed source (e.g. laser or light-emitting diode) and the temperature-dependent PL lifetime decay is then analyzed by time-resolving detectors.This PL-lifetime approach exhibits several benefits: the excitation power and, consequently the PL intensity, can be adjusted to a value appropriate for the dynamic range of the detector.Additionally, the use of UV-Vis light, rather than mid-to long-wavelength IR radiation, allows for the direct integration of this method with conventional optical spectroscopy and microscopy applied in biological studies and materials research. Furthermore, higher spatial resolutions can be obtained with visible light (400-700 nm) as the diffraction-limit is ca. 20-times sharper than for LWIR (7-14 µm); this potentially extends the utility of remote thermography to intracellular, in vitro, and in vivo studies 17 . | 10.1038/s41563-019-0416-2 | [
"https://export.arxiv.org/pdf/1905.08727v1.pdf"
]
| 160,009,781 | 1905.08727 | 16c7a73eae94e6222075d8ee73a72563e59852ef |
High-resolution remote thermography using luminescent low-dimensional tin- halide perovskites
Sergii Yakunin [email protected]
Department of Chemistry and Applied Biosciences
Laboratory of Inorganic Chemistry
ETH Zürich
CH-8093ZürichSwitzerland
Laboratory for Thin Films and Photovoltaics
Empa -Swiss Federal Laboratories for Materials Science and Technology
CH-8600DübendorfSwitzerland
Bogdan M Benin
Department of Chemistry and Applied Biosciences
Laboratory of Inorganic Chemistry
ETH Zürich
CH-8093ZürichSwitzerland
Laboratory for Thin Films and Photovoltaics
Empa -Swiss Federal Laboratories for Materials Science and Technology
CH-8600DübendorfSwitzerland
Yevhen Shynkarenko
Department of Chemistry and Applied Biosciences
Laboratory of Inorganic Chemistry
ETH Zürich
CH-8093ZürichSwitzerland
Laboratory for Thin Films and Photovoltaics
Empa -Swiss Federal Laboratories for Materials Science and Technology
CH-8600DübendorfSwitzerland
Olga Nazarenko
Department of Chemistry and Applied Biosciences
Laboratory of Inorganic Chemistry
ETH Zürich
CH-8093ZürichSwitzerland
Laboratory for Thin Films and Photovoltaics
Empa -Swiss Federal Laboratories for Materials Science and Technology
CH-8600DübendorfSwitzerland
Maryna I Bodnarchuk
Department of Chemistry and Applied Biosciences
Laboratory of Inorganic Chemistry
ETH Zürich
CH-8093ZürichSwitzerland
Laboratory for Thin Films and Photovoltaics
Empa -Swiss Federal Laboratories for Materials Science and Technology
CH-8600DübendorfSwitzerland
Dmitry N Dirin
Department of Chemistry and Applied Biosciences
Laboratory of Inorganic Chemistry
ETH Zürich
CH-8093ZürichSwitzerland
Laboratory for Thin Films and Photovoltaics
Empa -Swiss Federal Laboratories for Materials Science and Technology
CH-8600DübendorfSwitzerland
Christoph Hofer
Swiss Center for Electronics and Microtechnology (CSEM)
Center LandquartCH-7302LandquartSwitzerland
Stefano Cattaneo
Swiss Center for Electronics and Microtechnology (CSEM)
Center LandquartCH-7302LandquartSwitzerland
Maksym V Kovalenko *e-mail:[email protected]
Department of Chemistry and Applied Biosciences
Laboratory of Inorganic Chemistry
ETH Zürich
CH-8093ZürichSwitzerland
Laboratory for Thin Films and Photovoltaics
Empa -Swiss Federal Laboratories for Materials Science and Technology
CH-8600DübendorfSwitzerland
High-resolution remote thermography using luminescent low-dimensional tin- halide perovskites
Main. Remote thermal imaging, or thermography, lends itself to numerous applications ranging from medicine 1 and defense to biological research 2 and the diagnosis of technical failures 3 . In all these applications, remote thermal detection falls into two main categories: infrared (IR) or visible.
IR-based detectors exploit the long-wave IR emission (LWIR, e.g. thermal emission) of a studied object, using the fact that the integral radiation intensity emitted from a blackbody scales with its temperature (T) as ~T 4 . This emission can be recorded with rather costly photodetector arrays composed of narrow-bandgap semiconductors (InSb, In1-xGaxAs and Hg1-xCdxTe) 4,5 , and this method is predominantly used for scientific and military purposes. Alternative and less expensive bolometric detectors, owing to advancements in MEMS-technologies (Micro-Electro-Mechanical Systems) 6,7 , have already entered the consumer electronics market, and are able to record thermal images with both high speed and high resolution. However, their thermographic performance, based on measurements of IR radiation intensity, is inherently limited by the transparency and emissivity/reflectivity of an observed object and, more importantly, by any material and medium (window, coating, matrix, solvent etc.) situated within the path between the detector and an object (Fig. 1). As one of the major consequences, IR thermography cannot be easily combined with conventional optical microscopy or other enclosed optical systems such as cryostats or microfluidic cells. An alternative method for remote thermography, which is unhindered by enclosures or IR-absorptive media, utilizes temperature-sensitive luminophores (i.e. fluorophores or phosphors) with PL in the visible spectral range (Fig. 1c) that are deposited onto, or incorporated into, the object of interest as temperature probes 8-16 . To probe an object's temperature, the luminophore is excited by an ultraviolet or visible (UV-Vis) pulsed source (e.g. laser or light-emitting diode) and the temperature-dependent PL lifetime decay is analyzed by time-resolving detectors. This PL-lifetime approach exhibits several benefits: the excitation power and, consequently, the PL intensity can be adjusted to a value appropriate for the dynamic range of the detector. Additionally, the use of UV-Vis light, rather than mid- to long-wavelength IR radiation, allows for the direct integration of this method with conventional optical spectroscopy and microscopy applied in biological studies and materials research. Furthermore, higher spatial resolutions can be obtained with visible light (400-700 nm) as the diffraction limit is ca. 20-times sharper than for LWIR (7-14 µm); this potentially extends the utility of remote thermography to intracellular, in vitro, and in vivo studies 17 . To promote the advancement and widespread use of remote thermography, a much broader portfolio of luminescent, thermally sensitive, and temperature-range tunable materials is required.
These emitters must exhibit a fully reproducible radiative lifetime vs. temperature dependence, and demonstrate an invariant behavior towards excitation-light intensity. While emitters satisfying these conditions do exist, the precision of thermometry utilizing them has so far been reported to be only in the 0.1-1 °C range 9,15,18 . This level of precision can again be attributed to two factors: the often moderate thermal sensitivity of available thermographic luminophores 18 , and the PL lifetime measurement techniques that have traditionally been applied in industry. As a result, progress in the development of PL-lifetime thermography has suffered from the exclusive use of expensive, bulky techniques that are in fact single-point measurements, which prohibit fast image acquisition. To address these shortcomings, we present (i) a new family of low-dimensional tin-halide luminophores well-suited for remote thermography due to their strongly temperature-dependent, compound-specific PL lifetimes, (ii) a high thermographic precision down to 0.05 °C with operation in a broad temperature range from -100 to 110 ºC, and (iii) a new thermographic method utilizing ToF cameras 19,20 for cost-effective, high-resolution, fast thermal imaging.
Results
Low-dimensional tin-halide perovskite-derived luminophores. In the search for suitable thermographic luminophores, we analyzed the potential of lead- or tin-halides with perovskite or lower-dimensional perovskite-like crystal structures. These compounds are formed by metal-halide polyhedral anions, typically MX6 4- octahedra (M = Pb, Sn), which are either fully isolated and surrounded by positively-charged counterions (so-called 0D-compounds) or connected into extended one-, two- or three-dimensional (1D-3D) frameworks through corner-, edge- or face-sharing 21,22 . In particular, 3D lead-halide perovskites have recently emerged as prominent optoelectronic materials for photovoltaics and photodetectors 23-27 , hard-radiation detection 28-30 , as well as bright light emitters 31-34 . Although bright luminescence has been reported at all dimensionalities for metal-halides, we have found that only 0D- and 1D-compounds exhibit a suitable set of optical characteristics for remote thermography.
In low-dimensional, and specifically zero-dimensional perovskites, the electronic structure evolves from disperse electronic bands in 2D-3D materials to more localized, molecule-like states in 0D-1D compounds 21 . Stemming from this, the mechanism of luminescence is also drastically different, and ranges from rather large, delocalized Wannier-type excitons in 3D materials to ultrasmall Frenkel-like self-trapped excitons (STEs) in the 0D-compounds 35-38 . We have tested a range of previously reported and new highly-luminescent, fully-inorganic and hybrid organic-inorganic 0D and 1D tin-halide compounds for their suitability for thermography in the range of -40 to 120 °C. Based on temperature-dependent PL and PL-lifetime measurements, three suitable candidates were identified: Cs4SnBr6, [C(NH2)3]2SnBr4, and (C4N2H14I)4SnI6 (Table 1). All of these compounds exhibit temperature-dependent PL lifetimes in the range of 1 ns - 1 µs, and thus ideally match the optimal modulation range for most commercial ToF sensors (e.g. from tens of kHz to tens of MHz). This is in sharp contrast to the thermographic luminophores based on rare-earth-doped oxides 9,13,16,41,42 , which are characterized by much slower emission that is typically in the ms-range. Furthermore, the absorption coefficients of tin halides for the UV-A range (315-400 nm, convenient for optical excitation) are high, and they are comparable to the bandgap absorption values (Fig. 2) as each octahedron within the structure can act as an absorber/emitter. In contrast, the UV-A range absorption of rare-earth-doped oxide luminophores is weak due to a limited concentration of dopants (activator) or absorbing centers 9 .
Both solid-state and solution-based methods were used to prepare the three types of luminophores utilized within this work. The fully-inorganic, 0D Cs4SnBr6 was synthesized through a solid-state approach in which a mixture of CsBr and SnBr2 were repeatedly pressed and heated at sub-melting temperatures as described in our recent report 39 .

Optical properties of low-dimensional tin-halides. Although diverse in their compositions, structures and syntheses, all present materials are qualitatively unified by their broadband and highly Stokes-shifted PL. Their optical absorption behavior was determined through the measurement of both PL excitation (PLE) and absorption via the application of the Kubelka-Munk (K-M) transformation to diffuse reflectance spectra (Fig. 2d-f). The absorption of 0D tin-halides, Cs4SnBr6 and (C4N2H14I)4SnI6, appears as molecular-like bands that coincide with the PLE spectra.
Additionally, both materials exhibit absorption features at shorter wavelengths that do not contribute to emission (Fig. 2d,f). This differs from the 1D compound, [C(NH2)3]2SnBr4, which instead exhibits a continuous PLE spectrum (Fig. 2e). Such behavior can be associated with the partial band dispersion that occurs along the polyhedral chain in 1D metal-halides 38 .
These molecular-like characteristics of the absorption spectra of low-dimensional compounds are reflected in their emission. Unlike CsSnBr3 (a 3D perovskite semiconductor) that exhibits weakly Stokes shifted and narrow PL (as a result of excitonic-recombination) with very low quantum yield (QY) 44,45 , Cs4SnBr6 shows room-temperature (RT), green broadband PL centered at 535 nm, with a full-width at half-maximum (FWHM) of 120 nm, and a QY of about 20% (Fig. 2b, Table 1). Such broadband and highly Stokes shifted emission has been observed for other low-dimensional metal-halide compounds such as [C(NH2)3]2SnBr4, (C4N2H14Br)4SnBr6, and (C4N2H14I)4SnI6, and is commonly associated with STE recombination (Fig. 2b,d,f and Supplementary Fig. 4) 35-38 .
The materials typically used in remote thermometry are either rare-earth-doped phosphors exhibiting emission from charge-transfer states 9 or transition metal oxides with STE based emission 46 . As a result of thermal de-trapping, the emission from thermographic luminophores is generally mono-exponential and strongly dependent on the material's temperature; the typical relaxation time for this process is in the range of µs to ms. With the low-dimensional tin-halides, presented here, we have also found the emission lifetime strongly temperature-sensitive, and we associate this with STE de-trapping. The corresponding energy diagram in Fig. 3a depicts this process: upon photon absorption (1), an electron is promoted to an excited state and, after its thermalization (2), is trapped (3) in a long-lived STE state. This trapping is then followed by a radiative recombination with broadband emission (4). A thermally assisted de-trapping pathway (5), followed by fast non-radiative recombination, is also present and plays a key role in the temperature-dependence of the PL characteristics. During de-trapping, a distorted lattice around an STE can be returned back to its original state through exciton-phonon coupling. Thus higher temperatures facilitate de-trapping and assist relaxation via a fast non-radiative channel. In agreement with this mechanism, a strong thermally-driven acceleration of the PL lifetime is observed (measured with time-resolved PL, TRPL, Fig. 3b). Furthermore, the PL intensity scales with the change in PL lifetime ( Supplementary Fig. 5), and its change is fully reversible with temperature ( Supplementary Fig. 6). The absolute PL QYs within the thermal sensitivity range vary linearly between ca. 100 % at lower temperatures to less than 1% at higher temperatures. The highly useful, practical assets of this STE-based emission are (i) the independence of PL lifetime from excitation intensity ( Supplementary Fig. 7) and (ii) the fact that the PL lifetime is the same throughout the whole emission band ( Supplementary Fig. 8). In addition, the PL lifetime is also independent of the material's environment (freestanding or encapsulated within a polymer matrix, Supplementary Fig. 9), and it is highly reproducible from batch-to-batch ( Supplementary Fig. 10) despite differences in the degree of crystallinity and purity. This suggests that the origin of the temperature-dependence is not related to the freezing-out of defect or trap states, but rather a phonon-assisted de-trapping process that is followed by fast non-radiative relaxation. This set of PL characteristics is clearly advantageous over semiconductive metal-halides with delocalized electronic structures (2D, 3D-compounds), wherein the PL decay is a complex function of the density, types, and depths of defect states, as well as the electronic doping level, the state of the surface, and size-quantization. As a result, there is often high batch-to-batch variability; even within similar synthetic methods.
Such behavior of the STE emission, in terms of PL lifetime and PL QY, is not unique to Cs4SnBr6, but is also shared by the other tested 0D and 1D metal-halides (Fig. 3c). Due to the phonon-assisted thermal effect, the temperature range for thermal sensing can be compositionally and/or structurally engineered to either higher or lower temperatures than in the case of Cs4SnBr6.
In the temperature range from -200 °C to 110 °C, [C(NH2)3]2SnBr4 begins to exhibit PL lifetime acceleration at the lowest temperatures, with a sensitivity range of -100 to -30 °C; Cs4SnBr6 has a sensitivity range from -30 °C to 40 °C, whereas (C4N2H14I)4SnI6 exhibits sensitivity from 40 °C up to 110 °C (these ranges are shown as colored areas in Fig. 3c). Surprisingly, (C4N2H14Br)4SnBr6 also exhibits broadband STE emission, but our measurement setup was not able to reach the thermally sensitive range. This suggests that this range lies at much higher temperatures (Supplementary Fig. 11), and the use of this material as a thermographic luminophore would be rather limited by its thermal stability.
Highly reproducible variation of the PL lifetime, by several orders of magnitude (e.g. 2 orders over a 100 °C range for Cs4SnBr6), makes these metal-halides potent luminophores with high thermometric precision. To assess the practically achievable resolution for thermographic applications, we considered the uncertainty in the monoexponential fitting for Cs4SnBr6 near RT (<1 ns, or ca. 0.2 % of the absolute lifetime value) and applied this error to each point of the PL lifetime vs. temperature graph (Fig. 3d, inset). Remarkably, these experimentally determined lifetimes follow a near-linear trend with a similar uncertainty of 1 ns, which validates the above estimation. In the vicinity of RT, the lifetime vs. temperature dependence for Cs4SnBr6 gives a sensitivity of about 17 ns °C -1 , estimated as the slope $|d\tau/dT|$ (inset to Fig. 3d). This then yields a thermometric precision of 0.05 °C, which is several times better than previous estimates for fluorescent lifetime thermography 18 . Furthermore, an additional figure-of-merit for thermographic luminophores is the specific sensitivity, $S = \frac{1}{\tau}\left|\frac{d\tau}{dT}\right|$, which in the case of low-dimensional tin-halides reaches values of 0.06 °C -1 (Supplementary Fig. 12). This is among the highest reported values for thermographic luminophores 47 . Higher resolutions have been demonstrated, but only for a specific case where operation was limited to a narrow temperature range around a phase transition 47 . Compared to conventional bolometric imaging (Fig. 4c,d), the ToF-FLI method showed a much higher lateral thermographic resolution.
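As a worked check of the quoted precision, using only the numbers stated above:

$$\sigma_T \approx \frac{\sigma_\tau}{|d\tau/dT|} = \frac{1\ \text{ns}}{17\ \text{ns}\ ^\circ\mathrm{C}^{-1}} \approx 0.06\ ^\circ\mathrm{C},$$

in line with the reported precision of about 0.05 °C.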
While a still image demonstrates the ability of this system to precisely measure a 2D map of PL-lifetime, observing dynamic processes requires the ability to record video and to measure at sufficiently high rates. To demonstrate the potential of our ToF-FLI prototypewhose sensor can record with a rate of up to 100 frames per secondfor thermographic video acquisition, we recorded a video of the thermal response of a (C4N2H14I)4SnI6 powder through 1 mm of a glass substrate to the brief contact of a soldering pin (temperature at the apex was approx. 120 °C; ToF-FLI specifications in Supplementary Note 4; Supplementary Video 1). Indeed, it was possible to observe the dynamic temperature changes that occurred as well as the heat transfer through and along the substrate -a challenge for pixel-by-pixel scanning technologies.
In summary, we discovered that the de-trapping process of STEs in low-dimensional tin-halides exhibits extreme thermal sensitivity over a compositionally tunable range of temperatures.
In particular, such emission is characterized by monoexponential decays with a steep dependence of PL lifetime on temperature (up to 20 ns °C -1 ). We then applied these features to high-precision thermometric measurements over a wide temperature range (-100 °C to 110 °C), and furthermore demonstrated a novel approach to remote optical thermography by combining these low-dimensional tin-halide luminophores with ToF-FLI. By doing so, we have succeeded in achieving low-cost, precise, and high-speed PL-lifetime thermographic imaging.
Methods
Synthesis of Cs4SnBr6. All chemicals were used as received without further purification. All manipulations were performed air-free inside a glovebox with H2O and O2 levels <0.1 ppm. CsBr The ampule was opened in the glovebox, and the above process was repeated once more. The pseudo-binary CsBr-SnBr2 phase diagram 43
Data availability
The data that support the findings of this study are available from the corresponding authors upon reasonable request.
Supplementary Information
High-resolution remote thermography using luminescent low-dimensional tin-halide perovskites

Supplementary Note 1. Models for the fitting of PL lifetime vs. temperature dependence.

In order to understand the nature of the thermal quenching of emission and the lifetime acceleration in tin-halide luminophores, we fit temperature-dependent PL-lifetime data with several suitable models (Figs. S13-S15; Tables S1-S3). With the first model, we suggested a thermal activation process according to the Mott model:

τ(T) = τ₀ / (1 + A·exp(−T_a/T)),

where τ₀ is the intrinsic radiative lifetime, A is the prefactor of the non-radiative rate, T_a is the activation temperature, and T is the temperature in K. Although the model agrees with the experimental data, we found that it is difficult to provide a physical interpretation for the fitting parameters A and T_a, which are exceedingly large (10⁷ and 6000 K, respectively; Fig. S13, Table S1).
Next, we chose the Boltzmann-sigmoid model, which also provided a proper fit for the experimental data according to the equation:
τ(T) = τ₀ / (1 + e^((T − T_B)/ΔT)),

where τ₀ is the intrinsic radiative lifetime, T_B and ΔT are respectively the center and half-width of the temperature-sensitive range, and T is the temperature in K (Fig. S14). Despite the fact that the fitting parameters in this model (Table S2) do indeed yield realistic values, the model itself cannot be attributed to any physical quenching process.
Therefore, as a compromise between simplicity and the ability to realistically interpret the model, we chose the exciton-phonon scattering model:

τ(T) = [1/τ₀ + Γ_ph·(e^(E_ph/kT) − 1)^(−m)]^(−1),

where τ₀ is the intrinsic radiative lifetime, Γ_ph is the exciton-phonon scattering probability, E_ph is the phonon energy, m is the number of phonons, k is the Boltzmann constant, and T is the temperature in K (Fig. S15, Table S3) 1. This model succeeds in providing phonon energies that can be converted to physically relevant activation temperatures (Table S3, column where E_ph is shown in units of K).

Supplementary Figure 16. Photographs of the depth standard for ToF imaging in Fig. 4a.
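As an aside, a minimal sketch of how such a fit could be set up follows (our own illustration, not the fitting code used for Figs. S13-S15; the data below are synthetic placeholders generated from the Mott model itself, with parameters of the same order as Table S1):

# Sketch: fit tau(T) with the Mott model tau(T) = tau0 / (1 + A*exp(-Ta/T)).
import numpy as np
from scipy.optimize import curve_fit

def mott(T, tau0, A, Ta):
    return tau0 / (1.0 + A * np.exp(-Ta / T))

# synthetic "measurements" from the model itself (placeholder parameters
# of the same order as Table S1), with a little noise added
rng = np.random.default_rng(0)
T = np.linspace(240.0, 330.0, 15)                                  # K
tau = mott(T, 1700.0, 1.9e7, 4800.0) + rng.normal(0, 5.0, T.size)  # ns

popt, _ = curve_fit(mott, T, tau, p0=(1500.0, 1e7, 4000.0), maxfev=50000)
print("tau0 = %.0f ns, A = %.2e, Ta = %.0f K" % tuple(popt))

The same scaffold applies to the Boltzmann-sigmoid and exciton-phonon models by swapping the model function; the strong correlation between A and T_a is one symptom of the interpretability problem discussed above.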
Supplementary Note 2. Basic principles of ToF-FLI.
Depth images such as those produced by the ToF sensor in the Kinect 2.0 from Microsoft Corp. are measured by the phase shift Δϕ (equivalent to the delay time) of the reflected light relative to the IR reference beam due to light propagation over the distance x (Supplementary Fig. S17a). For depth imaging we take into account the fact that reflection is an immediate process, in contrast to PL, which has an intrinsic delay associated with the PL lifetime. The ToF measurement of the delay caused by the luminophore's PL lifetime τ makes ToF-FLI possible (Supplementary Fig. S17b).
To do so, we switched to UV excitation (as an analog to the IR reference beam used in the Kinect 2.0) and rejected any scattered or reflected UV light by optical cut-off filters to exclusively record the PL delay. However, an additional reference measurement without optical filters (to register UV light reflected by the sample) is required to determine the initial delay time Δt₀. ToF-FLI is based on a mathematical model in which the observed PL emission trace is expressed as a convolution of a harmonically modulated excitation signal with a mono-exponential emission relaxation decay:
S(t) = ∫₀ᵗ B · (1 + M₀·sin(2πνt′ + ϕ₀)) · e^(−(t−t′)/τ) dt′,  (1)
where B is a scaling coefficient, ν is the excitation modulation frequency, and τ is an exponential decay parameter. The result of the convolution (1) is a harmonically oscillating trace that has a certain phase delay ϕ from the excitation trace (Fig. 4b).
In the case of a mono-exponential decay, the lifetime is determined 2 from the phase delay by:

τ = tan(Δϕ) / (2πν).  (2)
The working principle for a ToF image sensor is based on the acquisition of four phase-locked images at 0°, 90°, 180° and 270° phase delay with respect to the excitation signal (I0, I1, I2 and I3 in Fig. 4b) 3. From these four images, the spatial distribution of the PL intensity I, modulation index M, and phase angle Δϕ can be computed.
The lifetime is then calculated using Eq. (2). In the ToF-FLI prototype used in this study (Fig. S18), each pixel of the imaging chip is capable of performing this measurement in parallel, thus circumventing the need for bulky and expensive scanning systems.
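For concreteness, a minimal numerical sketch of this frequency-domain pipeline is given below (our own illustration, not the CSEM firmware). The four-bucket relations used here, I = (I0 + I1 + I2 + I3)/4, M = sqrt((I1 − I3)² + (I0 − I2)²)/(2I) and Δϕ = arctan[(I1 − I3)/(I0 − I2)], are the standard lock-in demodulation formulas; the exact bucket sign convention varies between sensors and is an assumption here, as are the toy numbers.

# Sketch: four-bucket lock-in demodulation and lifetime via Eq. (2).
import numpy as np

def demodulate(I0, I1, I2, I3, nu):
    """I0..I3: images at 0/90/180/270 deg phase; nu: modulation freq (Hz).
    A calibrated reference offset Dt0 would be subtracted from dphi in
    practice (see Supplementary Note 2); it is omitted in this sketch."""
    I = (I0 + I1 + I2 + I3) / 4.0
    M = np.hypot(I1 - I3, I0 - I2) / (2.0 * I)
    dphi = np.arctan2(I1 - I3, I0 - I2)
    tau = np.tan(dphi) / (2.0 * np.pi * nu)   # Eq. (2), mono-exponential
    return I, M, tau

# toy single-pixel example: 20 MHz modulation, 10 ns true lifetime
nu, tau_true = 20e6, 10e-9
phi = np.arctan(2 * np.pi * nu * tau_true)
I0, I1, I2, I3 = (1 + 0.5 * np.cos(phi - k * np.pi / 2) for k in range(4))
print(demodulate(I0, I1, I2, I3, nu))   # recovers tau ~ 1e-8 s

Because every pixel of the imager performs this demodulation in parallel, the full lifetime map is obtained in a single exposure rather than by scanning.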
While metal-halide perovskites have recently revolutionized research in optoelectronics through a unique combination of performance and synthetic simplicity, their low-dimensional counterparts can further expand the field with hitherto unknown and practically useful optical functionalities. In this context, we present the strong temperature dependence of the photoluminescence (PL) lifetime of low-dimensional, perovskite-like tin-halides, and apply this property to thermal imaging with a high precision of 0.05 °C. The PL lifetimes are governed by the heat-assisted de-trapping of self-trapped excitons, and their values can be varied over several orders of magnitude by adjusting the temperature (up to 20 ns °C⁻¹). Typically, this sensitive range spans up to one hundred degrees centigrade, and it is both compound-specific and shown to be compositionally and structurally tunable from -100 to 110 °C going from [C(NH2)3]2SnBr4 to Cs4SnBr6 and (C4N2H14I)4SnI6. Finally, through the innovative implementation of cost-effective hardware for fluorescence lifetime imaging (FLI), based on time-of-flight (ToF) technology, these novel thermoluminophores have been used to record thermographic videos with high spatial and thermal resolution.
The following compounds have been shortlisted: [C(NH2)3]2SnBr4 [C(NH2)3 = guanidinium; CCDC code 1854819], Cs4SnBr6 39 and (C4N2H14I)4SnI6 40 (Fig. 2; Table 1).
Synthesis of [C(NH2)3]2SnBr4. [C(NH2)3]2SnBr4 was crystallized from hydrobromic acid (HBr, 48 % water solution, Acros) under inert conditions (Ar atmosphere) in a 20-mL Schlenk vessel. For this, Sn powder (0.250 g, 2.1 mmol, 99.8 %, ~325 mesh, from Acros) was dissolved in 3 mL of HBr (degassed under stirring in Ar atmosphere for ~20 min beforehand). First, the mixture was stirred for 10 min at RT and then heated with a glycerol bath at 80 °C. When all Sn had dissolved, 0.378 g of guanidinium carbonate, [C(NH2)3]2CO3 (99+ %, Acros), was carefully added; a strong evolution of gas was observed. The reaction mixture was then stirred for an additional 5 min at 80 °C, resulting in a clear colorless solution, followed by natural cooling. Within 8-10 hr, a white crystalline powder of [C(NH2)3]2SnBr4 in the shape of thin needles was separated by vacuum filtration under Ar flow and dried under vacuum.

Synthesis of (C4N2H14I)4SnX6 (X = Br, I). C4N2H14X2 was prepared according to Ref. 40 with small modifications. 7.4 mL (0.056 mol) of hydroiodic acid (HI, 57 % water solution, ABCR) or 6.4 mL (0.056 mol) of hydrobromic acid (HBr, 48 % water solution, Acros) was added to 3 mL (0.028 mol) of N,N'-dimethylethylenediamine in 20 mL of ethanol at 0 °C and stirred overnight. A white-yellowish powder of C4N2H14X2 was obtained after removing the solvent under vacuum. The salt was washed several times with diethyl ether and dried under vacuum. C4N2H14X2 was stored in the glovebox for future use. A precursor solution was prepared by mixing 0.1 mmol of SnX2 and 0.4 mmol of C4N2H14X2 in 1 mL of degassed DMF overnight. 0.5 mL of the precursor solution was then rapidly injected, under stirring, into either 3 mL of anhydrous toluene with 30 mL of trioctylphosphine (for X = I) or into 3 mL of anhydrous toluene (X = Br). This was followed by the immediate formation of the precipitate, (C4N2H14I)4SnX6. The mixture was stirred at RT for another 15 min, the crude solution was centrifuged, and the supernatant was discarded. The precipitate was washed with anhydrous toluene two more times, followed by drying under vacuum. The whole procedure was performed air-free and (C4N2H14I)4SnX6 was stored in the glovebox.

Optical characterization. UV-Vis absorbance spectra were obtained using the Kubelka-Munk transformation of the diffuse reflectance of the microcrystalline powders, collected using a Jasco V670 spectrophotometer equipped with a deuterium (D2) lamp (190-350 nm) for UV, a halogen lamp (330-2700 nm) for UV/NIR, and an integrating sphere (ILN-725) with a working wavelength range of 220-2200 nm.

Photoluminescence emission, excitation and quantum yield. PL and PLE spectra were measured with a Fluorolog iHR 320 Horiba Jobin Yvon spectrofluorometer equipped with a Xe lamp and a PMT detector. Absolute values of PL QY were measured using a Quantaurus-QY spectrometer from Hamamatsu in powder mode.

Time-resolved and steady-state photoluminescence temperature dependence. Samples were located on top of a 4-stage Peltier cooling/heating element in an evacuated chamber with a quartz window. The sample temperature was adjusted and stabilized using a self-made electronic scheme based on an Arduino microcontroller and a thermocouple sensor. The current through the Peltier was reversible; hence the setup provided a wide working temperature range of -40 °C to 120 °C.
This is an open-source project by the authors, which is deposited and described in detail at https://www.researchgate.net/project/High-power-thermoelectric-cooler-TEC-controller-with-4stage-Peltier-refrigerator-heater. Measurements in the low temperature range (78-300 K) were performed using a Joule-Thomson cryostat (MMR Technologies) with a cooling/heating rate of about 2-5 K/min. TRPL traces were recorded with a heating rate of 2 °C/min. A 355 nm excitation source (a frequency-tripled, picosecond Nd:YAG laser, Duetto from Time-Bandwidth) and a CW diode laser with an excitation wavelength of 405 nm (for steady-state PL) were used. Scattered emission from the lasers was filtered out using dielectric long-pass filters with cut-offs at 400 nm and 450 nm, respectively. For TRPL, the signal was acquired using a time-correlated single photon counting (TCSPC) setup, equipped with a SPC-130-EM counting module (Becker & Hickl GmbH) and an IDQ-ID-100-20-ULN avalanche photodiode (Quantique) for recording the decay traces.

ToF thermography imaging. As a heating element we used a patterned ITO thin film deposited on a microscopy cover slide (Structure Probe, Inc., USA). The patterning was performed by a standard "wet" lithography process using a positive photoresist and a contact mask that was inkjet-printed onto a transparent polymer film. The heat distribution in the obtained ITO pattern was checked with an LWIR camera (Seek Thermal, customized with a ZnSe lensed macro-objective made according to the project https://www.thingiverse.com/thing:525605). Cs4SnBr6 powder was then dispersed on top of the ITO pattern and encapsulated with a photopolymer glue (ZLD-312 UV, Zhanlida Co., Ltd.) and a second glass microscopy cover slide under inert atmosphere in a nitrogen-filled glovebox. The ToF-FLI prototype includes a modulated light source (LED, 365 nm emission wavelength), a CMOS ToF imager (256 x 256 pixels) originally developed by CSEM for 3D imaging, dedicated FPGA-based electronics, and optical components for illuminating the probe and collecting the PL emission. The camera electronics are based on a stacked PCB approach, including a base board, an FPGA processing module and a sensor head PCB. A MATLAB GUI running on a separate PC is used to set the measurement parameters (such as modulation frequency, illumination intensity and integration time) and display the results. The LED modulation frequency can be varied between 3 kHz and 20 MHz, allowing the measurement of PL lifetimes from hundreds of microseconds down to a few nanoseconds with sub-nanosecond precision. With the current optics an area of 5.3 × 5.3 mm is imaged. The emission wavelength can be selected by exchangeable spectral filters. More detailed technical characteristics of the ToF-FLI image sensor setup are listed in Supplementary Note 3 and Supplementary Reference 4.
Acknowledgements. M.K. acknowledges financial support from the European Union through the FP7 (ERC Starting Grant NANOSOLID, GA No. 306733). C.H. and S.C. thank the Swiss Nano-Tera program (projects FlusiTex and FlusiTex Gateway) and the Swiss Commission for Technology and Innovation CTI (project SecureFLIM) for financing the development of the ToF-FLI imager. The authors thank Gabriele Rainò and Stefan T. Ochsenbein for fruitful discussions.

Author contributions. This work originated from continuing interactions between the research groups at ETH Zurich and CSEM. S.Y., B.B. and Y.S. performed measurements; C.H. and S.C. developed and adapted the FLI reader; B.B., O.N., D.D. and M.B. synthesized the tin-halide thermographic luminophores; S.Y. and Y.S. analyzed the results; S.Y., B.B. and M.K. wrote the manuscript. M.K. supervised the work. S.Y. and B.B. contributed equally to this work. All authors discussed the results and commented on the manuscript.

Additional information. Competing financial interests: the authors declare no competing financial interest. Reprints and permission information is available online at http://npg.nature.com/reprintsandpermissions
Figure 1. Visible-light and infrared (IR) thermography comparison. Differences in transmittance for visible and IR light are demonstrated in (a) photograph, and in (b) IR thermogram captured using a commercial Seek Thermal Compact Pro™ LWIR bolometry camera. (c) The transparency ranges of two ubiquitous optical media.
Figure 2. Crystallographic structures and basic optical properties of select thermographic luminophores. Crystal structure of (a) Cs4SnBr6, (b) [C(NH2)3]2SnBr4 and (c) (C4N2H14I)4SnI6 and their corresponding absorption (Kubelka-Munk transformed spectrum), PL, and PLE spectra.
Figure 3. Thermal effects on PL lifetime variation of self-trapped excitonic (STE) emission in low-dimensional tin-halides. (a) Energy diagram depicting the STE processes: 1 - photon absorption, 2 - thermalization, 3 - trapping, 4 - radiative recombination, 5 - thermally-assisted de-trapping followed by non-radiative recombination. (b) Temperature evolution of TRPL traces for Cs4SnBr6 excited at 355 nm. (c) PL lifetime temperature dependence for [C(NH2)3]2SnBr4 (blue curve), Cs4SnBr6 (green curve), (C4N2H14I)4SnI6 (red curve). (d) The monoexponential model demonstrates a high precision in lifetime fitting of about 1 ns. The inset to Fig. 3d illustrates the accuracy of the PL lifetime vs. temperature dependence within error bars of 1 ns, which corresponds to a precision of 0.05 °C.
Figure 4. Demonstration of the principles for remote thermography based on ToF sensors. (a) 3D depth image acquired with the ToF camera of a Kinect 2.0™ showing a depth resolution of 2 mm in the inset. This is equivalent to a precision of about 10 ps in the time-domain. (b) Recalculation of phase-locked intensities to a phase-shift in the frequency domain. (c) Thermographic image of a sample that consists of encapsulated Cs4SnBr6 powder placed between patterned ITO and a glass coverslip, and then acquired with a commercial Seek Thermal Compact Pro™ LWIR bolometry camera equipped with a ZnSe lensed macro-objective. The image was taken as a current passed through the ITO resulting in resistive heating. (d) ToF-FLI thermogram of the same sample under the same heating conditions from (c). Scale bars in (c) and (d) are 3 mm.

21. Mao, L., Guo, P., Kepenekian, M., Hadar, I., Katan, C., Even, J., Schaller, R. D., Stoumpos, C. C. & Kanatzidis, M. G. Structural diversity in white-light-emitting hybrid lead bromide perovskites. J. Am. Chem. Soc. 140, 13078-13088 (2018).
22. Quintero-Bermudez, R., Gold-Parker, A., Proppe, A. H., Munir, R., Yang, Z., Kelley, S. O., Amassian, A., Toney, M. F. & Sargent, E. H. Compositional and orientational control in metal halide perovskites of reduced dimensionality. Nat. Mater. 17, 900-907 (2018).
23. Lee, M. M., Teuscher, J., Miyasaka, T., Murakami, T. N. & Snaith, H. J. Efficient hybrid solar cells based on meso-superstructured organometal halide perovskites. Science 338, 643-647 (2012).
24. Kim, H.-S., Lee, C.-R., Im, J.-H., Lee, K.-B., Moehl, T., Marchioro, A., Moon, S.-J., Humphry-Baker, R., Yum, J.-H., Moser, J. E., Grätzel, M. & Park, N.-G. Lead iodide perovskite sensitized all-solid-state submicron thin film mesoscopic solar cell with efficiency exceeding 9%. Sci. Rep. 2, 591 (2012).
25. Hao, F., Stoumpos, C. C., Cao, D. H., Chang, R. P. H. & Kanatzidis, M. G. Lead-free solid-state organic-inorganic halide perovskite solar cells. Nat. Photonics 8, 489 (2014).
26. Yakunin, S., Shynkarenko, Y., Dirin, D. N., Cherniukh, I. & Kovalenko, M. V. Non-dissipative internal optical filtering with solution-grown perovskite single crystals for full-colour imaging. NPG Asia Mater. 9, e431 (2017).
27. Tan, H., Jain, A., Voznyy, O., Lan, X., García de Arquer, F. P., Fan, J. Z., Quintero-Bermudez, R., Yuan, M., Zhang, B., Zhao, Y., Fan, F., Li, P., Quan, L. N., Zhao, Y., Lu, Z.-H., Yang, Z., Hoogland, S. & Sargent, E. H. Efficient and stable solution-processed planar perovskite solar cells via contact passivation. Science 355, 722-726 (2017).
28. Yakunin, S., Sytnyk, M., Kriegner, D., Shrestha, S., Richter, M., Matt, G. J., Azimi, H., Brabec, C. J., Stangl, J., Kovalenko, M. V. & Heiss, W. Detection of X-ray photons by solution-processed lead halide perovskites. Nat. Photonics 9, 444-449 (2015).
29. Yakunin, S., Dirin, D. N., Shynkarenko, Y., Morad, V., Cherniukh, I., Nazarenko, O., Kreil, D., Nauser, T. & Kovalenko, M. V. Detection of gamma photons using solution-grown single crystals of hybrid lead halide perovskites. Nat. Photonics 10, 585-589 (2016).
30. He, Y., Matei, L., Jung, H. J., McCall, K. M., Chen, M., Stoumpos, C. C., Liu, Z., Peters, J. A., Chung, D. Y., Wessels, B. W., Wasielewski, M. R., Dravid, V. P., Burger, A. & Kanatzidis, M. G. High spectral resolution of gamma-rays at room temperature by perovskite CsPbBr3 single crystals. Nat. Commun. 9, 1609 (2018).
31. Cho, H., Jeong, S.-H., Park, M.-H., Kim, Y.-H., Wolf, C., Lee, C.-L., Heo, J. H., Sadhanala, A., Myoung, N., Yoo, S., Im, S. H., Friend, R. H. & Lee, T.-W. Overcoming the electroluminescence efficiency limitations of perovskite light-emitting diodes. Science 350, 1222-1225 (2015).
32. Yakunin, S., Protesescu, L., Krieg, F., Bodnarchuk, M. I., Nedelcu, G., Humer, M., De Luca, G., Fiebig, M., Heiss, W. & Kovalenko, M. V. Low-threshold amplified spontaneous emission and lasing from colloidal nanocrystals of caesium lead halide perovskites. Nat. Commun. 6, 8056 (2015).
33. Quan, L. N., García de Arquer, F. P., Sabatini, R. P. & Sargent, E. H. Perovskites for Light Emission. Adv. Mater. 0, 1801996.
34. Kovalenko, M. V., Protesescu, L. & Bodnarchuk, M. I. Properties and potential optoelectronic applications of lead halide perovskite nanocrystals. Science 358, 745-750 (2017).
35. Lin, H., Zhou, C., Tian, Y., Siegrist, T. & Ma, B. Low-Dimensional Organometal Halide Perovskites. ACS Energy Lett. 3, 54-62 (2018).
36. Zhou, C., Tian, Y., Wang, M., Rose, A., Besara, T., Doyle, N. K., Yuan, Z., Wang, J. C., Clark, R., Hu, Y., Siegrist, T., Lin, S. & Ma, B. Low-Dimensional Organic Tin Bromide Perovskites and Their Photoinduced Structural Transformation. Angew. Chem. Int. Ed. 56, 9018-9022 (2017).
37. Smith, M. D., Jaffe, A., Dohner, E. R., Lindenberg, A. M. & Karunadasa, H. I. Structural origins of broadband emission from layered Pb-Br hybrid perovskites. Chem. Sci. 8, 4497-4504 (2017).
Table of Contents

Supplementary Figure 1. Powder X-ray diffraction pattern of Cs4SnBr6.
Supplementary Figure 2. Powder X-ray diffraction pattern of [C(NH2)3]2SnBr4.
Supplementary Figure 3. Powder X-ray diffraction pattern of (C4N2H14I)4SnI6.
Supplementary Figure 4. Optical characterization of (C4N2H14Br)4SnBr6: photoluminescence excitation (PLE, red) and photoluminescence (PL, green) spectra.
Supplementary Figure 5. Temperature dependence of Cs4SnBr6 PL emission lifetime and intensity.
Supplementary Figure 6. Reversible thermal quenching for Cs4SnBr6 PL emission.
Supplementary Figure 8. Time-resolved PL traces for Cs4SnBr6 at various emission ranges.
Supplementary Figure 9. Time-resolved PL traces for Cs4SnBr6 under different environmental conditions.
Supplementary Figure 10. Time-resolved PL traces of Cs4SnBr6 for several synthetic batches.
Supplementary Figure 11. PL lifetime temperature dependence for (C4N2H14Br)4SnBr6.
Supplementary Note 1. Models for the fitting of PL lifetime vs. temperature dependence.
Supplementary Figure 13. Fitting by the Mott model.
Supplementary Table 1. Fitting parameters for the Mott model.
Supplementary Figure 14. Fitting by the Boltzmann-sigmoid model.
Supplementary Figure 15. Fitting by the exciton-phonon scattering model.
Supplementary Table 3. Fitting parameters for the exciton-phonon scattering model.
Supplementary Figure 16. Photographs of the depth standard for ToF imaging in Fig. 4a.
Supplementary Note 2. Basic principles of ToF-FLI.
Supplementary Figure 17. Scheme describing optical ToF measurements.
Supplementary Note 3. Frequency domain PL lifetime measurement by phase-shift.
Supplementary Figure 18. Compact stand-alone ToF-FLI prototype.
Supplementary Note 4. Key specifications of ToF-FLI image sensor used in the setup.
Supplementary Figure 19. A thermographic image of a patterned ITO glass slide with a bolometric camera.
Supplementary Video 1.
Supplementary References.
Furthermore, these temperatures agree extremely well with the observed onset of lifetime acceleration in each case. This model, however, predicts a rather high number of phonons involved in the scattering process (m ~ 10). Such high values of m might be explained through collective processes that involve high numbers of phonons, but this is also unrealistic given the low probability of such an event occurring. It is possible that there exists an essentially nonlinear temperature dependence in the phonon-exciton interaction, and the self-trapped excitons within such systems require another model to provide a satisfactory description of their thermal behavior.

Supplementary Figure 13. Fitting by the Mott model. PL lifetime temperature dependence for [C(NH2)3]2SnBr4 (blue squares), Cs4SnBr6 (green triangles), (C4N2H14I)4SnI6 (red circles) by fitting with the Mott model (colored lines): τ(T) = τ₀ / (1 + A·exp(−T_a/T)).
Supplementary Figure 15. Fitting by the exciton-phonon scattering model. PL lifetime temperature dependence for [C(NH2)3]2SnBr4 (blue squares), Cs4SnBr6 (green triangles), (C4N2H14I)4SnI6 (red circles) by fitting with the exciton-phonon scattering model (colored lines): τ(T) = [1/τ₀ + Γ_ph·(e^(E_ph/kT) − 1)^(−m)]^(−1).
This delay time Δt₀ occurs as a result of the optical path of the measurement scheme as well as variations which result from inhomogeneities in a sample's topography. This must be correctly accounted for to precisely determine the PL lifetime, τ.

Supplementary Figure 17. Scheme describing optical ToF measurements. (a) The estimation of distance x by the measured phase shift Δϕ. (b) Similar ToF hardware principles apply to the estimation of PL decay.

Supplementary Note 3. Frequency domain PL lifetime measurement by phase-shift.
Cs4SnBr6 (R-3c space group) is composed of [SnBr6]4- octahedra separated by Cs+ cations (Fig. 2a) 43. A new 1D hybrid organic-inorganic compound, guanidinium tin-bromide, [C(NH2)3]2SnBr4 (Pna21 space group), was crystallized from a solution of the respective ions in concentrated HBr. Its tin-halide backbone consists of corner-sharing [SnBr5]2- square pyramids (Fig. 2b). Another 0D hybrid compound, (C4N2H14I)4SnI6, was prepared by co-precipitation from a dimethylformamide solution according to Ref. 40. (C4N2H14I)4SnI6 comprises isolated [SnI6]4- octahedra surrounded by large, organic cations (Fig. 2c). Powder X-ray diffraction patterns of all three materials indicate high phase-purity (Supplementary Figs. 1-3).
Further work is needed to shed light into the physics of the STE emission of these novel luminophores. Herein, we have applied several models to fit the experimental temperature dependencies of the PL lifetime (Supplementary Note 1; Supplementary Figs. 13-15; Supplementary Tables S1-3). Although they all provide a satisfactory fit, each has difficulties providing physically meaningful fitting parameters. For instance, the Mott model, commonly used for thermal luminophores 8, results in rather unphysical values for the activation temperature, whereas the exciton-phonon scattering model 48 suggests the participation of up to 13 phonons per single de-trapping event.

Time-of-flight thermography using low-dimensional tin-halides. Although several time-resolved measurement techniques like PL decay trace measurements, TCSPC (Fig. 3b-d) or phase fluorometry (frequency-domain time-resolved fluorescence) could be used to precisely measure the PL lifetime of these materials, all of these have traditionally been limited by the fact that they only use a single-channel detector 49. Consequently, lengthy acquisition through point-by-point scanning is required, and this severely limits the thermographic image capture rate.

As an innovative solution to this problem, we adopted the use of ToF-FLI 19,20 (a frequency-domain time-resolved technique that can be used to acquire a 2D map of PL lifetimes 50) and combined it with thermally sensitive luminophores. ToF-FLI is thus for the first time used for thermographic imaging. This approach offers both the rapid acquisition speed and the excellent depth precision of ToF detectors such as those found in consumer electronics (e.g. the Kinect 2.0), which we confirmed to be several mm at a distance of about 2 m (inset to Fig. 4a; Supplementary Fig. 16) 51. By converting this depth variation into an equivalent delay time (details in Supplementary Note 2 and Supplementary Fig. 17), we find that such a technique could have a precision in the range of tens-to-hundreds of picoseconds, and this suggested that lifetime precisions approaching those of conventional TCSPC methods were possible. Furthermore, this level of precision is achieved in real time with video recording. Briefly, the working principle for such a ToF-FLI image sensor is based on the acquisition of four phase-locked images at 0°, 90°, 180° and 270° phase differences with respect to the excitation signal (I0, I1, I2 and I3 in Fig. 4b), followed by the recalculation of the average intensity I, modulation depth M and phase delay Δϕ (details described in Supplementary Note 3).

To demonstrate the concept of affordable ToF-FLI thermography with low-dimensional tin-halides, we used a compact, stand-alone prototype developed by some of the co-authors from CSEM (Switzerland), in which all the necessary hardware components for wide-field frequency-domain FLI were incorporated (Supplementary Fig. 18, Supplementary Note 4). As a test of the thermographic performance of our system, we deposited Cs4SnBr6 powder over a resistively heated pattern, enclosed the material with a glass coverslip, and then measured the resulting lifetime image of the heated pattern.
Table 1. Structural and optical characteristics of tin-halide thermographic luminophores.

Composition        | Space group | Structure | Absorpt. max. (nm / eV) | Emiss. max. (nm / eV) | Emission FWHM (nm / eV) | Stokes shift (eV) | QY @RT (%) | Temp. range (°C)
[C(NH2)3]2SnBr4    | Pna21       | 1D        | 350 / 3.55              | 555 / 2.24            | 125 / 0.5               | 1.31              | 2          | [-100, -30]
Cs4SnBr6           | R-3c        | 0D        | 345 / 3.6               | 535 / 2.32            | 120 / 0.51              | 1.28              | 20         | [-30, 40]
(C4N2H14I)4SnI6    | P-1(#2)     | 0D        | 400 / 3.11              | 630 / 2.0             | 125 / 0.4               | 1.14              | 75         | [40, 110]
(C4N2H14Br)4SnBr6  | P-1(#2)     | 0D        | 335 / 3.71              | 575 / 2.18            | 107 / 0.4               | 1.54              | ~100       | N/A
Supplementary Table 1. Fitting parameters for the Mott model.

Composition      | τ₀ (ns) | A        | T_a (K) | Temperature sensitivity range, °C (K)
Cs4SnBr6         | 1669    | 1.9 × 10^7 | 4759    | -30 to 40 (243 to 323)
[C(NH2)3]2SnBr4  | 1837    | 1.5 × 10^6 | 2863    | -100 to -30 (173 to 243)
(C4N2H14I)4SnI6  | 1131    | 5.7 × 10^8 | 6945    | 40 to 110 (313 to 383)

Supplementary Figure 14. Fitting by the Boltzmann-sigmoid model. PL lifetime temperature dependence for [C(NH2)3]2SnBr4 (blue squares), Cs4SnBr6 (green triangles), (C4N2H14I)4SnI6 (red circles) by fitting with the Boltzmann-sigmoid model (colored lines): τ(T) = τ₀ / (1 + e^((T − T_B)/ΔT)).

Supplementary Table 2. Fitting parameters for the Boltzmann-sigmoid model. Temperature sensitivity ranges are determined as T_B ± 2ΔT.
Supplementary Table 3. Fitting parameters for the exciton-phonon scattering model.

Composition      | τ₀ (ns) | Γ_ph (10^-6 s^-1) | E_ph (meV) | E_ph (K) | m    | Temperature sensitivity range, °C (K)
Cs4SnBr6         | 1681    | 0.053             | 21.3       | 247      | 10.7 | -30 to 40 (243 to 323)
[C(NH2)3]2SnBr4  | 1851    | 0.03              | 15.6       | 181      | 8.7  | -100 to -30 (173 to 243)
(C4N2H14I)4SnI6  | 1135    | 0.13              | 25.2       | 292      | 13   | 40 to 110 (313 to 383)
Supplementary Figure 19. A thermographic image of a patterned ITO glass slide with a bolometric camera. The sample was heated with a passing electrical current, and acquired with a commercial Seek Thermal Compact Pro™ LWIR bolometry camera equipped with a ZnSe lensed macro-objective.

Supplementary Video 1. Thermographic video. The dynamics of temperature change in the sample and heat transfer through the substrates due to brief contact with a hot soldering pin (the temperature of the pin apex was approx. 120 °C). The sample is (C4N2H14I)4SnI6 powder encapsulated between two relatively thick (1 mm) glass substrates.
References

1. Lahiri, B. B., Bagavathiappan, S., Jayakumar, T. & Philip, J. Medical applications of infrared thermography: a review. Infrared Phys. Technol. 55, 221-235 (2012).
2. Jones, H. G., Serraj, R., Loveys, B. R., Xiong, L. Z., Wheaton, A. & Price, A. H. Thermal infrared imaging of crop canopies for the remote diagnosis and quantification of plant responses to water stress in the field. Funct. Plant Biol. 36, 978-989 (2009).
3. Bagavathiappan, S., Lahiri, B. B., Saravanan, T., Philip, J. & Jayakumar, T. Infrared thermography for condition monitoring - a review. Infrared Phys. Technol. 60, 35-55 (2013).
4. Tang, X., Ackerman, M. M. & Guyot-Sionnest, P. Thermal Imaging with Plasmon Resonance Enhanced HgTe Colloidal Quantum Dot Photovoltaic Devices. ACS Nano 12, 7362-7370 (2018).
5. Lhuillier, E., Keuleyan, S., Rekemeyer, P. & Guyot-Sionnest, P. Thermal properties of mid-infrared colloidal quantum dot detectors. J. Appl. Phys. 110, 033110 (2011).
6. Rogalski, A. Progress in focal plane array technologies. Prog. in Quant. Electron. 36, 342-473 (2012).
7. Peterson, B. J. Infrared imaging video bolometer. Rev. Sci. Instrum. 71, 3696-3701 (2000).
8. Mykhaylyk, V. B., Wagner, A. & Kraus, H. Non-contact luminescence lifetime cryothermometry for macromolecular crystallography. J. Synchrotron Radiat. 24, 636-645 (2017).
9. Allison, S. W. & Gillies, G. T. Remote thermometry with thermographic phosphors: Instrumentation and applications. Rev. Sci. Instrum. 68, 2615-2650 (1997).
10. Marciniak, L., Prorok, K., Frances-Soriano, L., Perez-Prieto, J. & Bednarkiewicz, A. A broadening temperature sensitivity range with a core-shell YbEr@YbNd double ratiometric optical nanothermometer. Nanoscale 8, 5037-5042 (2016).
11. Salem, M., Staude, S., Bergmann, U. & Atakan, B. Heat flux measurements in stagnation point methane/air flames with thermographic phosphors. Exp. Fluids 49, 797-807 (2010).
12. Alaruri, S. D., Brewington, A. J., Thomas, M. A. & Miller, J. A. High-temperature remote thermometry using laser-induced fluorescence decay lifetime measurements of Y2O3:Eu and YAG:Tb thermographic phosphors. IEEE T. Instrum. Meas. 42, 735-739 (1993).
13. Brübach, J., Pflitsch, C., Dreizler, A. & Atakan, B. On surface temperature measurements with thermographic phosphors: a review. Prog. Energy Combust. Sci. 39, 37-60 (2013).
14. Wang, X.-d., Wolfbeis, O. S. & Meier, R. J. Luminescent probes and sensors for temperature. Chem. Soc. Rev. 42, 7834-7869 (2013).
15. Brübach, J., Kissel, T., Frotscher, M., Euler, M., Albert, B. & Dreizler, A. A survey of phosphors novel for thermography. J. Lumin. 131, 559-564 (2011).
16. Sun, T., Zhang, Z. Y., Grattan, K. T. V., Palmer, A. W. & Collins, S. F. Temperature dependence of the fluorescence lifetime in Pr3+:ZBLAN glass for fiber optic thermometry. Rev. Sci. Instrum. 68, 3447-3451 (1997).
17. Okabe, K., Inada, N., Gota, C., Harada, Y., Funatsu, T. & Uchiyama, S. Intracellular temperature mapping with a fluorescent polymeric thermometer and fluorescence lifetime imaging microscopy. Nat. Commun. 3, 705 (2012).
18. Abram, C., Fond, B. & Beyrau, F. Temperature measurement techniques for gas and liquid flows using thermographic phosphor tracer particles. Prog. Energy Combust. Sci. 64, 93-156 (2018).
19. Bhandari, A., Barsi, C. & Raskar, R. Blind and reference-free fluorescence lifetime estimation via consumer time-of-flight sensors. Optica 2, 965-973 (2015).
20. Li, D. D.-U., Ameer-Beg, S., Arlt, J., Tyndall, D., Walker, R., Matthews, D. R., Visitkul, V., Richardson, J. & Henderson, R. K. Time-domain fluorescence lifetime imaging techniques suitable for solid-state imaging sensor arrays. Sensors 12, 5650-5669 (2012).
38. Yuan, Z., Zhou, C., Tian, Y., Shu, Y., Messier, J., Wang, J. C., van de Burgt, L. J., Kountouriotis, K., Xin, Y., Holt, E., Schanze, K., Clark, R., Siegrist, T. & Ma, B. One-dimensional organic lead halide perovskites with efficient bluish white-light emission. Nat. Commun. 8, 14051 (2017).
39. Benin, B. M., Dirin, D. N., Morad, V., Worle, M., Yakunin, S., Raino, G., Nazarenko, O., Fischer, M., Infante, I. & Kovalenko, M. V. Highly Emissive Self-Trapped Excitons in Fully Inorganic Zero-Dimensional Tin Halides. Angew. Chem. Int. Ed. 57, 11329-11333 (2018).
40. Zhou, C., Lin, H., Tian, Y., Yuan, Z., Clark, R., Chen, B., van de Burgt, L. J., Wang, J. C., Zhou, Y., Hanson, K., Meisner, Q. J., Neu, J., Besara, T., Siegrist, T., Lambers, E., Djurovich, P. & Ma, B. Luminescent zero-dimensional organic metal halide hybrids with near-unity quantum efficiency. Chem. Sci. 9, 586-593 (2018).
41. Hansel, R., Allison, S. & Walker, G. in Mater. Res. Soc. Symp. Proc. Vol. 1076, 181-187 (Cambridge University Press, 2008).
42. Allison, S. W., Buczyna, J. R., Hansel, R. A., Walker, D. G. & Gillies, G. T. Temperature-dependent fluorescence decay lifetimes of the phosphor Y3(Al0.5Ga0.5)5O12:Ce 1%. J. Appl. Phys. 105, 036105 (2009).
43. Andrews, R. H., Clark, S. J., Donaldson, J. D., Dewan, J. C. & Silver, J. Solid-state properties of materials of the type Cs4MX6 (where M = Sn or Pb and X = Cl or Br). J. Chem. Soc. Dalton Trans., 767-770 (1983).
44. Voloshinovskii, A. S., Myagkota, S. V., Ostrovskii, I. P. & Pidzyrailo, N. S. Electronic states and luminescence properties of CsSnBr3 crystal. Opt. Spectrosc. 72, 486-488 (1992).
45. Jellicoe, T. C., Richter, J. M., Glass, H. F. J., Tabachnyk, M., Brady, R., Dutton, S. E., Rao, A., Friend, R. H., Credgington, D., Greenham, N. C. & Böhm, M. L. Synthesis and optical properties of lead-free cesium tin halide perovskite nanocrystals. J. Am. Chem. Soc. 138, 2941-2944 (2016).
46. Mikhailik, V. B., Kraus, H., Itoh, M., Iri, D. & Uchida, M. Radiative decay of self-trapped excitons in CaMoO4 and MgMoO4 crystals. J. Phys. Condens. Matter 17, 7209 (2005).
47. Savchuk, O. A., Haro-González, P., Carvajal, J. J., Jaque, D., Massons, J., Aguiló, M. & Díaz, F. Er:Yb:NaY2F5O up-converting nanoparticles for sub-tissue fluorescence lifetime thermal sensing. Nanoscale 6, 9727-9733 (2014).
48. Man, M. T. & Lee, H. S. Discrete states and carrier-phonon scattering in quantum dot population dynamics. Sci. Rep. 5, 8267 (2015).
49. Rowley, M. I., Coolen, A. C., Vojnovic, B. & Barber, P. R. Robust Bayesian fluorescence lifetime estimation, decay model selection and instrument response determination for low-intensity FLIM imaging. PLoS One 11, e0158404 (2016).
50. Boens, N., Qin, W., Basarić, N., Hofkens, J., Ameloot, M., Pouget, J., Lefèvre, J.-P., Valeur, B., Gratton, E., vandeVen, M., Silva, N. D., Engelborghs, Y., Willaert, K., Sillen, A., Rumbles, G., Phillips, D., Visser, A. J. W. G., van Hoek, A., Lakowicz, J. R., Malak, H., Gryczynski, I., Szabo, A. G., Krajcarski, D. T., Tamai, N. & Miura, A. Fluorescence Lifetime Standards for Time and Frequency Domain Fluorescence Spectroscopy. Anal. Chem. 79, 2137-2149 (2007).
51. He, Y., Liang, B., Zou, Y., He, J. & Yang, J. Depth errors analysis and correction for time-of-flight (ToF) cameras. Sensors 17, 92 (2017).
Supplementary Figure 18. Compact stand-alone ToF-FLI prototype developed by CSEM (Switzerland) for real-time, wide-field fluorescence lifetime imaging in the nano- to micro-second range.
Supplementary References

1. Man, M. T. & Lee, H. S. Discrete states and carrier-phonon scattering in quantum dot population dynamics. Sci. Rep. 5, 8267 (2015).
2. Hansard, M., Lee, S., Choi, O. & Horaud, R. Time-of-Flight Cameras: Principles, Methods and Applications (Springer Publishing Company, Incorporated, 2012).
3. Foix, S., Alenya, G. & Torras, C. Lock-in Time-of-Flight (ToF) Cameras: A Survey. IEEE Sens. J. 11, 1917-1926 (2011).
4. Bonjour, L. E., Singh, A., Baechler, T. & Kayal, M. in 2011 IEEE SENSORS Proceedings, 724-727 (IEEE).
| []
|
[
"Color-Dressed Generalized Biadjoint Scalar Amplitudes: Local Planarity",
"Color-Dressed Generalized Biadjoint Scalar Amplitudes: Local Planarity"
]
| [
"Freddy Cachazo [email protected] \nPerimeter Institute for Theoretical Physics\nN2L 2Y5WaterlooONCanada\n",
"Nick Early [email protected] \nMax Planck Institute for Mathematics in the Sciences\nLeipzigGermany\n",
"Yong Zhang [email protected] \nPerimeter Institute for Theoretical Physics\nN2L 2Y5WaterlooONCanada\n"
]
| [
"Perimeter Institute for Theoretical Physics\nN2L 2Y5WaterlooONCanada",
"Max Planck Institute for Mathematics in the Sciences\nLeipzigGermany",
"Perimeter Institute for Theoretical Physics\nN2L 2Y5WaterlooONCanada"
]
| []
| The biadjoint scalar theory has cubic interactions and fields transforming in the biadjoint representation of SU (N ) × SU (Ñ ). Amplitudes are "color" decomposed in terms of partial amplitudes computed using Feynman diagrams which are simultaneously planar with respect to two orderings. In 2019, a generalization of biadjoint scalar amplitudes based on generalized Feynman diagrams (GFDs) was introduced. GFDs are collections of Feynman diagrams derived by incorporating an additional constraint of "local planarity" into the construction of the arrangements of metric trees in combinatorics. In this work we propose a natural generalization of color orderings which leads to color-dressed amplitudes. A generalized color ordering (GCO) is defined as a collection of standard color orderings that is induced, in a precise sense, from an arrangement of projective lines on RP 2 . We present results for n ≤ 9 generalized color orderings and GFDs, uncovering new phenomena in each case. We discover generalized decoupling identities and propose a definition of the "colorless" generalized scalar amplitude. We also propose a notion of GCOs for arbitrary RP k−1 , discuss some of their properties and comment on their GFDs. In a companion paper, we explore the definition of partial amplitudes using CEGM integral formulas.Here T a are generators of SU (N ), which is traditionally called the "color" group. | null | [
"https://export.arxiv.org/pdf/2212.11243v2.pdf"
]
| 254,926,546 | 2212.11243 | e2f043f61e5653ba6bfe3a58fdd8e3172c0a7a7f |
Color-Dressed Generalized Biadjoint Scalar Amplitudes: Local Planarity
14 May 2023
Freddy Cachazo [email protected]
Perimeter Institute for Theoretical Physics
N2L 2Y5WaterlooONCanada
Nick Early [email protected]
Max Planck Institute for Mathematics in the Sciences
LeipzigGermany
Yong Zhang [email protected]
Perimeter Institute for Theoretical Physics
N2L 2Y5WaterlooONCanada
Color-Dressed Generalized Biadjoint Scalar Amplitudes: Local Planarity
14 May 2023
The biadjoint scalar theory has cubic interactions and fields transforming in the biadjoint representation of SU (N ) × SU (Ñ ). Amplitudes are "color" decomposed in terms of partial amplitudes computed using Feynman diagrams which are simultaneously planar with respect to two orderings. In 2019, a generalization of biadjoint scalar amplitudes based on generalized Feynman diagrams (GFDs) was introduced. GFDs are collections of Feynman diagrams derived by incorporating an additional constraint of "local planarity" into the construction of the arrangements of metric trees in combinatorics. In this work we propose a natural generalization of color orderings which leads to color-dressed amplitudes. A generalized color ordering (GCO) is defined as a collection of standard color orderings that is induced, in a precise sense, from an arrangement of projective lines on RP 2 . We present results for n ≤ 9 generalized color orderings and GFDs, uncovering new phenomena in each case. We discover generalized decoupling identities and propose a definition of the "colorless" generalized scalar amplitude. We also propose a notion of GCOs for arbitrary RP k−1 , discuss some of their properties and comment on their GFDs. In a companion paper, we explore the definition of partial amplitudes using CEGM integral formulas.Here T a are generators of SU (N ), which is traditionally called the "color" group.
Introduction
Tree-level scattering amplitudes of gluons are organized in terms of partial amplitudes and color structures as follows (see [1] for a review)

A_n({k_i, ε_i, a_i}) = Σ_{σ∈S_n/Z_n} tr(T^{a_σ(1)} T^{a_σ(2)} · · · T^{a_σ(n)}) A(σ(1), σ(2), . . . , σ(n)).  (1.1)

When gluons are replaced by scalars, SU(N) is called a "flavor" group but in this work we do not make a distinction and uniformly use "color". We are mainly interested in the biadjoint cubic scalar theory [2]. This theory carries a group SU(N) × SU(Ñ) and its tree amplitudes can be organized in terms of partial amplitudes as follows

M_n({k_i, a_i, ã_i}) = Σ_{α,β∈S_n/Z_n} tr(T^{a_α(1)} T^{a_α(2)} · · · T^{a_α(n)}) tr(T̃^{ã_β(1)} T̃^{ã_β(2)} · · · T̃^{ã_β(n)}) m_n(α, β).  (1.2)

The expressions (1.1) and (1.2) are color decompositions of the corresponding amplitudes.
Biadjoint partial amplitudes m_n(α, β) have a simple formula, as the sum over tree-level φ^3 Feynman diagrams which are planar with respect to both orderings. Alternatively, m_n(α, β) has a Cachazo-He-Yuan (CHY) formulation in terms of an integral over the configuration space of n points on CP^1 with integrands that depend on the orderings in a simple manner [2, 3].
In 2019, Guevara, Mizera, and the first two authors proposed a generalization of the CHY construction to integrals over the configuration space of n points on CP^{k−1} [4] (see [5-10] for related work and connections to cluster algebras). The standard biadjoint theory corresponds to k = 2 while k > 2 leads to Cachazo-Early-Guevara-Mizera (CEGM) generalized biadjoint amplitudes. Also in [4], a connection to the tropical Grassmannian [11], Trop G(k, n), was proposed and proven for the positive part Trop^+ G(3, 6) [12]. Tropical Grassmannians are closely related to the space of arrangements of metric trees [13], which provide the formulation of a special class of generalized biadjoint amplitudes in terms of planar generalized Feynman diagrams (GFDs) as defined in [14].
While the first steps towards generalizing Feynman diagrams to k > 2 have been taken, the notion of color factors has been missing completely. In this work we fill this gap by proposing a notion of k > 2 color factors, using techniques inspired by the theory of oriented matroids [15]. We first concentrate on k = 3, and define a k = 3 color ordering as a collection of n k = 2 orderings on n − 1 labels which can be derived from a generic arrangement of projective lines on RP^2. For example,

Σ := (σ^(1), σ^(2), σ^(3), σ^(4), σ^(5)) = ((2345), (1345), (1245), (1235), (1234)),  (1.3)

is one of the twelve possible color orderings for (k, n) = (3, 5) amplitudes. A convenient way to draw the arrangement of n lines is by taking a chart of RP^2 as a plane with a circle at infinity with its antipodal points identified. The genericity assumption guarantees that each line is intersected by the others in a way that defines a (k = 2, n − 1) color ordering. Figure 1 shows a representation of Σ.
Associated with each ordering there is a color factor c(Σ) so that the color-dressed generalized biadjoint amplitude is given by

M^(3)_n({k_i, a_i, ã_i}) = Σ_{I,J=1}^{N_{3,n}} c(Σ_I) c̃(Σ_J) m^(3)_n(Σ_I, Σ_J),  (1.4)

where N_{3,n} is the number of k = 3 color orderings for n points.

Figure 1: Top: a representation of Σ = ((2345), (1345), (1245), (1235), (1234)). The dashed circle is at infinity; points on it with the same label are identified. Each line defines a k = 2 color ordering by identifying its points on the boundary to make a circle. Bottom: Five (k, n) = (2, 4) color orderings obtained from Σ by the order in which lines intersect a given one.
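To make the construction concrete, the following is a small, self-contained sketch of our own (not code from the paper, and the five sample lines are hypothetical): given n generic lines in an affine chart of RP^2, it reads off, for each line i, the order in which the other n − 1 lines intersect it, which is the raw data of a (3, n) generalized color ordering; each tuple is then understood up to the k = 2 cyclic/reflection equivalence, since a projective line closes up into a circle at infinity.

# Sketch: induce the n tuples of a (3, n) generalized color ordering from
# a generic line arrangement. Each line i is given as (a, b, c) with
# a*x + b*y + c = 0; for line i we sort the intersection points with the
# other lines by a parameter along line i, giving an ordering of n-1 labels.
import numpy as np

def induced_orderings(lines):
    lines = np.asarray(lines, dtype=float)
    n = len(lines)
    orderings = []
    for i in range(n):
        a, b, c = lines[i]
        d = np.array([-b, a])                    # direction vector of line i
        params = []
        for j in range(n):
            if j == i:
                continue
            A = np.array([lines[i][:2], lines[j][:2]])
            rhs = -np.array([lines[i][2], lines[j][2]])
            p = np.linalg.solve(A, rhs)          # intersection (genericity!)
            params.append((float(d @ p), j + 1))  # 1-based line labels
        orderings.append(tuple(j for _, j in sorted(params)))
    return orderings

# five generic (hypothetical) lines; compare with the shape of (1.3)
lines = [(1, 0, 0), (0, 1, 0), (1, 1, -3), (1, -1, 1), (2, 1, -5)]
for i, s in enumerate(induced_orderings(lines), 1):
    print(f"sigma({i}) = {s}")

The genericity assumption (no two lines parallel in the chart, no three concurrent) is exactly what makes each np.linalg.solve well-posed and each induced ordering unambiguous.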
Most results in the literature only deal with partial amplitudes associated with the positive tropical Grassmannian (c.f. [12, 16-22]). As we show in section 5, these correspond to partial amplitudes in which both color orderings satisfy a very restrictive property, a "global" notion of planarity.
Having a precise definition of generalized color orderings (GCOs) allows us to provide a complete characterization of the generalized Feynman diagrams needed to fully characterize and compute m^(3)_n(Σ_I, Σ_J). We define a GFD as an arrangement of metric trees which is compatible with at least one generalized color ordering, in the sense that each Feynman diagram in the collection is planar with respect to the corresponding ordering. This is the notion of local planarity.

In general, we then define m^(3)_n(Σ_I, Σ_J) as a sum over GFDs that are compatible, or locally planar, with both color orderings Σ_I and Σ_J.
In order to illustrate the concepts, we study M^(3)_6 in detail. We find N_{3,6} = 372 distinct color orderings. Unlike the standard k = 2 case in which all (n − 1)!/2 color orderings are related by relabelings, for (k, n) = (3, 6) there are four types of orderings, with 60, 180, 120, and 12 orderings in each type respectively. For (k, n) = (3, 7) we find N_{3,7} = 27 240 color orderings that fall into eleven types. Now (k, n) = (3, 7) also provides the first example of an arrangement of metric trees which is not a generalized Feynman diagram since it is not compatible with any color ordering. This also leads to the first examples of GFDs which contain trees with both degree-four and degree-three vertices in their collections.
We provide a list of representatives for each of the 135 types of color orderings for (k, n) = (3, 8) in an appendix, whose permutations give N_{3,8} = 4 445 640 color orderings in total. We present the 4381 types of (3, 9) color orderings in an ancillary file; using relabelings one finds N_{3,9} = 1 553 388 480 color orderings in total.
The reader familiar with the configuration space of points in the projective plane, X(3, n), would recognize the numbers of color orderings for n = 5, 6, 7 and their partition into types as the same as the number of chambers and their types, as well as the numbers of reorientation classes of oriented uniform matroids (for a detailed discussion of n = 6 see [23] and for n = 7 see [24]). This is not an accident. In fact, the CEGM construction directly computes partial amplitudes as integrals over X(3, n). In a companion paper [25], we explore color-dressed amplitudes from the CEGM integral viewpoint, uncovering fascinating connections to canonical forms [17], reorientation classes of uniform oriented matroids, the tropical Grassmannian, and the hypersimplex.
In section 7, we arrive at the first non-trivial application of generalized color orderings by introducing the notion of decoupling and its corresponding identities among partial amplitudes, in analogy with the famous U(1) decoupling identities in gauge theories (see [1] for a review).
In section 10, we introduce a generalization of the single scalar field φ^3 theory. This theory has two natural definitions, which we claim are equivalent. The first is as a sum over all (k, n) generalized Feynman diagrams while the second is as a sum over all diagonal generalized biadjoint amplitudes.
In section 11, we discuss the generalization of GCOs beyond k = 3; having moved from k = 2 color orderings to k = 3 GCOs, the further step to generalize to k = 4 and beyond is relatively straightforward: arrangements of projective lines in RP^2 are replaced by arrangements of RP^{k−2}'s in RP^{k−1}. We will also discuss GCOs for general k, their duality, their GFDs and their partial amplitudes in the later part of this paper.
There are of course new phenomena for higher k GCOs and GFDs, which requires exploration. A particular one concerns relations between GCOs and GFDs for different values of k. For example, in [26], making use of the property that every column or row of a k = 4 planar matrix of Feynman diagrams, i.e., GFDs contributing to a k = 4 type 0 GCO, must be a k = 3 planar arrangement of metric trees, a new bootstrap algorithm was developed to find all such GFDs.
Our goal in this paper is to find all GFDs needed to give the combinatorial construction of the partial amplitude for an arbitrary pair of GCOs. A companion paper [25] is devoted to the development of a method to find all integrands needed in the CEGM integral in order to produce any such partial amplitude. In that paper, we verify that partial amplitudes obtained using both formulations match for (k, n) = (3,6), (3,7), (4,7) and (3,8).
The rest of this paper is organized as follows: Section 2 is a review of the standard color decomposition but with a slightly new version of a color factor. Section 3 defines (3, n) generalized color orderings while section 4 defines (3, n) generalized Feynman diagrams. Combining the results from sections 3 and 4, in section 5 we introduce color dressed amplitudes. Section 6 contains some properties of color ordering and illustrative examples. In section 7, we discuss the generalization of the U (1) decoupling identities. In section 8 and 9, we show how to bootstrap all GFDs for each color ordering and vice versa. In section 10, we discuss the generalization of the single scalar field φ 3 theory. We generalize the GCOs and GFDs for higher k in section 11 and 12 respectively. We end with future directions in section 13, where we introduce a new family of objects called chirotopal Tropical Grassmannians. The positive tropical Grassmannian is the simplest member of the family.
Most data is presented either in the appendices or in an ancillary file.
Standard Color Decomposition
Tree-level scattering amplitudes of gluons in SU (N ) Yang-Mills theory can be color decomposed into partial amplitudes as (see [1] for a review)
A n ({k i , i , a i }) = σ∈Sn/Zn tr (T a σ(1) T a σ(2) · · · T a σ(n) ) A(σ(1), σ(2), . . . , σ(n)). (2.1)
Partial amplitudes have two important properties,
A(1, 2, . . . , n−1, n) = A(n, 1, . . . , n−2, n−1), A(1, 2, . . . , n−1, n) = (−1) n A(n, n−1, . . . , 2, 1).
(2.
2) The sum in (2.1) is over cyclic orderings and therefore any given partial amplitude appears twice. This motivates the following definitions.
Definition 2.1. A color ordering is an equivalence class of n-tuples (σ(1), σ(2), . . . , σ(n)) with σ ∈ S n such that (σ(1), σ(2), . . . , σ(n)) ∼ (σ(n), σ(1), . . . , σ(n − 1)), (σ(1), σ(2), . . . , σ(n)) ∼ (σ(n), σ(n − 1), . . . , σ(1)).
In the following we choose a canonical representative to have σ(1) = 1 and σ(2) < σ(n). Definition 2.2. Given a color ordering (σ(1), σ(2), . . . , σ(n)) in its canonical form, its associated color factor is c(σ) := tr (T a σ(1) T a σ(2) · · · T a σ(n) ) + (−1) n tr (T a σ(n) T a σ(n−1) · · · T a σ(1) ) .
(2.3)
Note that there are (n − 1)!/2 such color factors and their orderings σ are called planar orderings. Also, when n is even, the color factor is independent of the representative σ ∈ S n , but when n is odd the color factor can differ by a sign and hence the need to define it using the canonical representative.
In terms of these color factors, (1.1) and (1.2) can be written as (1), σ(2), . . . , σ(n))A(σ(1), σ(2), . . . , σ(n)) (2.4) and
A n ({k i , i , a i }) = σ∈Sn/Zn×Z 2 c(σM n ({k i , a i ,ã i }) = α,β∈Sn/Zn×Z 2 c(α(1), α(2)
, . . . , α(n)) c(β(1), β(2), . . . , β(n)) m n (α, β).
(2.5) The definition of m n (α, β) in terms of Feynman diagrams is simply given by [2] m n (α, β) = (−1) w(α,β)
T∈O(α)∩O(β) 1 e∈T s e ,(2.6)
where w(α, β) is an integer, the winding number, that depends only on the number of relative descents in the pair of cycles [27], O(γ) is the set of all trees that are planar with respect to γ, the product is over all internal edges e of the tree T and s e is the standard kinematic invariant associated to a propagator.
Generalized Color Orderings
A standard color ordering, σ, admits a simple pictorial representation: Drawing the points {σ(1), σ(2), . . . , σ(n)} on the boundary of a disk makes the dihedral symmetry 1 of the color factor c(σ) manifest. An equivalent description of a (2, n) color ordering is as an arrangement of n points (or RP 0 's) on RP 1 . Our proposal for a (3, n) generalized color ordering is simply given by an arrangement of n lines (or RP 1 ) on RP 2 . An equivalent definition which parallels more closely that of generalized Feynman diagrams is the following.
Definition 3.1. A (k, n) = (3, n) generalized color ordering is an n-tuple Σ = (σ (1) , σ (2) , . . . , σ (n) ), (3.1)
of (k = 2, n − 1) color orderings such that there exists an arrangement of n lines on RP 2 , {L 1 , L 2 , . . . , L n }, so that σ (i) is the (2, n − 1) color ordering on line L i defined by the points
L j ∩ L i for j ∈ [n] \ {i}.
Any (3, n) color ordering, Σ, admits n natural projections to the set of (3, n − 1) color orderings by simply deleting one line from the arrangement. Let us define the projections as π i (Σ) = (π i (σ (1) ), π i (σ (2) ), . . . ,σ (i) , . . . , π i (σ (n) )), (3.2) whereσ (i) indicates that this ordering is removed, while the projections π i on a k = 2 ordering remove the label i from the set regardless of its position. For example, It is tempting to think that the projections could lead to a purely combinatorial recursive definition of k = 3 color orderings. However, as shown in section 6.1, starting at n = 9, there are examples of collections of k = 2 orderings with valid projections but which do not correspond to any arrangement of lines.
Before ending this section, we note that there is a set of generalized color orderings with a very special property.
Definition 3.2. A (3, n) generalized color ordering Σ = (σ (1) , σ (2) , . . . , σ (n) ) is said to descend from a (2, n) color ordering σ if σ (i) = π i (σ). We also say that Σ is a descendant of σ. Figure 2 shows four examples of color orderings for (3,6) which cannot be related to each other by relabeling (see Chapter 8 of [23] for a connection to configuration spaces). This is the first feature in which k = 3 color orderings differ from k = 2 ones. Reading the k = 2 color orderings associated to each line in the arrangements gives the four k = 3 color orderings in table 1. Only the first one of the four is a color ordering which descends from a k = 2 ordering (in this case it is σ = (123456)).
The k = 3 color orderings in table 1 can be treated as representatives of four types of (3, 6) orderings. Using relabelling, they give rise to all the (3, 6) color orderings, totaling 372. In section 6 we explain how to find such representatives. The main tool is a recursive procedure in section 6.1 which can also be applied to higher points.
Type
Color Ordering Representative Table 1. All four types of (3, 6) color orderings. The second column provides a representative that can be used to obtain the rest by applying permutations of labels. The last column contains the number of distinct permutations.
Generalized Feynman Diagrams
In [14], the notion of a generalized Feynman diagram was introduced for k = 3, building on arrangements of metric trees [13], as a collection of Feynman diagrams. The construction in [14] focused on collections of trees that satisfy a special notion of planarity which, in the terminology introduced in the previous section, turns out to correspond to descendants of k = 2 color orderings. Let us review the construction starting with that of a standard Feynman diagram in a biadjoint φ 3 theory. A Feynman diagram is a pair, consisting of a graph together with given kinematic data; the Feynman rules assign to that pair a function. We are interested in tree diagrams with n external vertices (or leaves) and trivalent internal vertices.
The kinematic data consists of a symmetric n × n matrix, s ab , such that its diagonal elements are zero, s aa = 0, and the sum of its rows vanishes, b s ab = 0.
Given a tree, T , assign lengths (i.e. positive real numbers) to each of its edges so that e a denotes the length of the a th external edge and f I the length of the I th internal edge. Also, denote the minimal distance from leaf a to leaf b by d ab . A tree T with a "metric" d ab is called a metric tree. Now consider the integral
R + n−3 I=1 df I exp − a<b s ab d ab = R + n−3 I=1 df I exp − I q I f I = n−3 I=1 1 q I , (4.1)
where an internal edge I partitions the set of leaves into L I ∪ R I = [n] and
q I = a∈L I , b∈R I s ab . (4.2)
It is easy to show that a<b s ab d ab = I q I f I by using b s ab = 0. In particular, note that all external lengths e a drop out. Also, the middle expression in (4.1) is the Laplace transform of the space of internal lengths of the graph, which coincides with the Schwinger parameter representation of Feynman propagators. While the integral representation is only valid for q I > 0, the rational function on the left defines the value of the Feynman diagram for any q I = 0.
Let us now turn to k = 3 Feynman diagrams. The space of k = 3 kinematic invariants is given by rank 3 symmetric tensors s abc such that s aab = 0 and bc s abc = 0. 13]). An arrangement of metric trees is an n-tuple T = (T 1 , T 2 , . . . , T n ) such that T i is a metric tree with n − 1 leaves in the set [n] \ {i} and metric d (i) ab so that the following compatibility condition is satisfied One property with special physical significance is that any arrangement of metric trees admits n natural projections to arrangements of metric trees with one less leaf. Not surprisingly, the definition follows that for color orderings, i.e., π i (T ) = (π i (T 1 ), π i (T 2 ), . . . ,T i , . . . , π i (T n )) (4.4) whereT i indicates that the i th tree is removed and π i (T j ) means the tree obtained from T j by pruning the leaf with the label i. Once again, it is tempting to use the recursive property as a way to find arrangements of metric trees. However, not all arrangements of trees satisfying the recursive property admit a non-degenerate metric. The first example is for n = 9. Nevertheless, checking that a metric exists is easy since it only involves solving a set of linear equations. It is not obvious that this does not happen for n < 9 but an exhaustive search shows that to be the case (for a discussion which uses the connection to tropical geometry see [13]).
d (a) bc = d (b) ac = d (c) ab , ∀ {a, b, c} ⊂ [n].
In parallel to the discussion of arrangements of lines (see Definition 3.2), we note that there is also a family of arrangements of metric trees with a very special property. Definition 4.2. A (3, n) arrangement of metric trees T = (T 1 , T 2 , . . . , T n ) is said to descend from a metric T if T (i) = π i (T ). We also say that T is a descendant of T . Now we are ready to present a very elementary definition of k = 3 generalized Feynman diagrams for n < 9. Higher values of n require a more technical definition and it is outside the scope of this work. Definition 4.3. A (k = 3, n < 9) generalized Feynman diagram is a pair, consisting of given kinematic data, together with an arrangement of metric trees T = (T 1 , T 2 , . . . , T n ) that satisfies the following two properties:
• There exists at least one generalized color ordering Σ = (σ (1) , σ (2) , . . . , σ (n) ) such that T i is planar with respect to σ (i) for all i ∈ [n]. In this case we say that T is compatible with Σ.
• The arrangement T has exactly 2(n − 4) independent internal edge lengths 2 .
Moreover, the rational function associated to T is . . , f 2(n−4) )), simply enforce that all internal lengths must be non-negative. Note that the definition does not restrict the kind of trees that participate in a collection. For (k, n) = (3, 6), all GFDs are collections of trees with only degree-three internal vertices but starting at (k, n) = (3, 7) there can be trees with mixed kinds of internal vertices.
R(T ) := R + 2(n−4)
It is important to point out that the notions of descendant color ordering (see Definition 3.2) and descendant generalized Feynman diagram (using 12.2 for its arrangement) are independent. In other words, there are descendant generalized Feynman diagrams which are compatible with non-descendant color orderings and non-descendant generalized Feynman diagrams which are compatible with descendant color orderings. In fact, most of the GFDs studied in [14] are examples of the latter.
Let us end this section with examples that illustrate the Definition 4.3. In section 3.2 of [14], several GFDs compatible with the (3, 7) color ordering which descends from the canonical ordering (1234567) were presented. All of them are collections of seven tree-diagrams with 2(7 − 4) = 6 independent internal lengths but which evaluate to rational functions R(T ) with different number of poles. We reproduce here the example with seven poles. If the internal lengths of each tree diagram in the collection are ordered from left to right, then their expressions can be recorded in a 3 × 7 matrix with i th column [f
(i) 1 , f (i) 2 , f (i) 3 ] T , x x y y x + y z z w w w + x p + w + x p q q u u p v v p + v p ,(4.7)
with p + x + y = u + z, q + z = x + y. Selecting any six independent variables, a straightforward computation using (4.5) gives, with
R(T ) = W 1234567 +t A = {a,b,c}∈( A 3 )
s abc , R ab,cd,ef g = t abef g + s abc + s abd , W abcdef g = t abcd + t f gab + s abe . (4.10)
For later convenience, we define R ab,cd,ef = t abef +s abc +s abd . Let us now give an example of an arrangement of trees in which all trees have degree-three internal vertices and yet it has seven independent internal lengths; because it has more than six independent parameters it is not a valid generalized Feynman diagram, An amplitude with the complete color structure is called a color dressed amplitude in the literature. Here we generalize the k = 2 color dressed biadjoint amplitude presented in eq. (2.5) to k = 3. Let CO 3,n denote the set of all k = 3 color orderings for n labels and N 3,n = |CO 3,n | the number of such orderings. A typical element Σ ∈ CO 3,n is given by,
Σ = σ(1)
(2) , σ
(3) , . . . , σ
(n) , σ
(1) , σ
(3) , . . . , σ
(n) , . . . , σ
(1) , σ
(2) , . . . , σ (n) (n−1)
. (5.1)
To each such color ordering, we associate a color factor c(Σ). In this work, we treat c(Σ) purely as a bookkeeping device.
Definition 5.1. The color dressed (k = 3, n) biadjoint amplitude is
M (3) n = Σ,Σ ∈ CO 3,n c(Σ)c(Σ) m (3) n (Σ,Σ),(5.
2)
is important to mention that the Groebner cone data for T 0 in their first table is not directly for T 0 but for one of its co-dimension one boundaries. Here O(Σ) is the set of all GFDs which are compatible with Σ. The notion of compatibility is simply that if T = (T (1) , T (2) , . . . , T (n) ), then we require T (i) ∈ O(σ (i) ) for all i.
Let us comment on the notion of compatibility. Another way to explain it is to say that a GFD T is compatible or planar w.r.t. to a generalized color ordering Σ, if the i th -tree is planar w.r.t. to the i th -ordering. This is a local notion of planarity while the one introduced in [14] is global.
We do not provide a definition for the sign function here. We suspect that an explicit realization of color factors might be required to obtain a consistent definition.
We have obtained all GFDs for (3,6), (3,7), and (3,8) and provided a Mathematica notebook as an ancillary file to generate any color ordered (k = 3, n < 9) biadjoint amplitudes with two arbitrary orderings. In the compapion paper [25], we found all of their integrands needed in the CEGM integrals to compute the color ordered amplitudes. We have verified the amplitudes obtained from both sides match, which is a strong consistency check both for the GFDs and the integrands. We present some brief examples of amplitudes next and more details of GFDs are given in section 8.
Examples
We have computed the full color dressed (3,6) Here and in the remainder of this work we omit the overall sign in (5.2). There are two GFDs that are compatible with both of these orderings, See definition of R in eq. (4.10).
In section 8, we classify all (3, 6) GFDs and introduce several operations that connect them.
We end with a (3, 7) example. As we have mentioned before, the two GCOs given in eq. (4.14) and eq. (4.15) are both compatible with the generalized Feynman diagram in eq. (4.13) which has both cubic and quartic vertices in its trees. In fact, this GFD is the only one that contributes to both GCOs at the same time, resulting in m (3) 7 (eq. (4.14), eq.
Properties of k = 3 Color Orderings and Examples
In this section we study some properties of k = 3 color orderings as well as bootstrap methods for constructing them. We illustrate the techniques by applying them to k = 3 with n ≤ 9.
Arrangements of Lines vs. Arrangements of Pseudo-lines
While generalized color orderings are constructed out of arrangements of lines on RP 2 , this definition is not an effective way to construct them. Instead, we use a result from the theory of uniform oriented matroids which states that if lines are allowed to bend slightly, i.e., become pseudo-lines, then arrangements satisfy a recursive definition (for details see Chapter 6 of [15]).
Theorem 6.1. An n-tuple of standard (or k = 2) color orderings such that the i th one is defined on the set [n] \ {i} can be represented as an arrangement of pseudo-lines on RP 2 if the following holds:
• For n = 5 the 5-tuple is one of the 12 descendants of the (2, 5) color orderings.
• For n > 5 and for any j ∈ [n], removing the j th color ordering from the n-tuple and then deleting the label j from the n − 1 remaining ones must produce a (n − 1)-tuple of color orderings that can be represented by an arrangement of n − 1 pseudo-lines.
Note that the second condition is simply the statement that the projection π j defined in (3.2) produces an arrangement of pseudo-lines for all j ∈ [n].
Let us refer to these as (3, n) pseudo-color orderings. Clearly, the set of pseudo-color orderings contains all (3, n) color orderings. As mentioned in section 3, the first time one finds a (3, n) pseudo-color ordering which is not a (3, n) color ordering is for n = 9 [15]. This means that we can use the recursive construction (6.1) to search for (3, n)-color orderings by constructing all pseudo-color orderings and then discarding the ones that are not valid.
Consider the n = 5 case. Given that (3, 5) generalized biadjoint amplitudes are mapped to (2, 5) amplitudes by simply replacing s abc with s de with {a, b, c, d, e} = [5], it is not surprising that there exists a bijection between (3, 5) and (2, 5) color orderings. More explicitly, one can show that each arrangement of five lines in RP 2 is in fact the descendant of a configuration of five points on RP 1 . There are (n − 1)!/2 = 12 (2, 5) color orderings for n = 5. The example given in the introduction (1.3) Σ := (σ (1) , σ (2) , σ (3) , σ (4) , σ (5) ) = ((2435), (1435), (1254), (1253), (1234)) (6.1)
is the descendant of σ = (12534) since π 1 (σ) = (2534) = (2435), π 2 (σ) = (1534) = (1435),
π 3 (σ) = (1254), π 4 (σ) = (1253), π 5 (σ) = (1234). (6.
2)
The first non-trivial case is (k, n) = (3, 6). Here a simple algorithm that starts with an ansatz (σ (1) , σ (2) , . . . , σ (6) ), where each σ (i) is one of the twelve possible (2, 5) orderings with labels in [6] \ {i}, and then uses the recursive definition (6.1) to select the (3, 6) orderings is fast enough to obtain all (3, 6) orderings. We have implemented a slightly more efficient version of this algorithm in Mathematica and found exactly 372 (3, 6) color orderings.
Moreover, the 372 (3, 6) color orderings split into four types modulo relabeling and this is how we obtained the results in table 1. There are 60 of type 0, 180 of type I, 120 of type II and 12 of type III. This means that each type has a symmetry group of order 12, 4, 6 and 60 respectively.
One advantage of the recursive definition (6.1) is that it is purely combinatorial. Once we get all (pseudo-)color orderings, they can be turned into figures showing the arrangements of (pseudo-)lines, such as the four figures in fig. 2. In that figure, we manifestly see that all line arrangements reduce to the (3, 5) arrangement in fig. 1 when the sixth (black) line is removed.
Note that type 0 has exactly (6 − 1)!/2 = 60 orderings. This is because all type 0 color orderings are descendants of (2, 6)-color orderings. This type obviously generalizes to arbitrary (3, n) where their type 0 has (n − 1)!/2 generalized color orderings.
In a companion paper [25], we explain the connection between these (3, 6) color orderings and the 372 chambers which the space of six points (or six lines) on the real projective plane decomposes into.
Type
Color Ordering Representative With the (3, 6) color orderings at hand, one can further produce all (k, n) = (3, 7) color orderings using again Theorem 6.1. Here however, a brute force search is impractical. Instead, one can start with a given (3, 6) color ordering (one each type), and then list all possible ways of adding label 7 to construct n = 7 color ordering candidates. Using Theorem 6.1 the valid (3, 7) color orderings are then selected. There are 27 240 (3, 7) color orderings in total, which split into 11 types modulo relabeling. We tally the numbers of their distinct permutations here, {360, 1}, {840, 1}, {1680, 2}, {2520, 5}, {5040, 2}, e.g., there are five types with 2520 elements, i.e., with a symmetry group of order 2. Note that the type 0 has a symmetry group of order 14, the dihedral group. A representative of each type is given in table 2. The two GCOs in eq. (4.14) and eq. (4.15) are both of type X.
Further improvements in the algorithm produces 4 445 640 (3,8) color orderings, which fall into 135 types. Their representatives are presented in appendix B whose numbers of distinct permutations are tallied as {2520,
1}, {2880, 1}, {5040, 1}, {10080, 4}, {20160, 38}, {40320, 90}.
We also find 4382 types of (3, 9) pseudo-color orderings but one fails to be a colorordering and so there are 4381 types, whose representatives are provided in an ancillary file. We also tally their numbers of distinct permutations here,
{20160, 1}, {60480, 6}, {120960, 24}, {181440, 158}, {362880, 4193}
, which in total gives 1 553 388 480 (3,9) color orderings 4 .
Finally, let us write down a representative of the type of the (3,9) pseudo color orderings which cannot be represented as arrangements of lines and hence are not (3,9) color orderings (see figure 13 in [30]), This type has 120 960 distinct permutations.
Connecting Color Orderings Using Triangle Flips
Realizing k = 3 color orderings as arrangements of lines on the projective plane gives rise to figures in which various polygons appear as regions bounded by the lines. If lines L i , L j , L k bound a triangle then one can deform the arrangement until the triangle shrinks to zero size and can then be opened up in a different configuration (see fig. 4). When these flips are possible, they connect color orderings. See fig. 2 as an explicit example where the two line arrangements on the top are related by a flip via the triangle bounded by L 1 , L 2 and L 6 . This property can be used to improve algorithms for constructing color ordering. With that in mind, it is convenient to be able to recognize triangles in a color ordering without having to find the arrangement of lines since as n increases the arrangements can become quite complicated.
Claim: The arrangement of lines associated to a color ordering Σ = (σ (1) , σ (2) , . . . , σ (n) ) has a triangle bounded by lines L i , L j , L k if and only if labels in the sets {i, j}, {j, k}, and {k, i} are consecutive in σ (k) , σ (i) , and σ (j) respectively. The proof is left as an exercise for the reader. Not only is it easy to recognize a triangle but it is also simple to perform a flip. However, the result might not be a color ordering but only a pseudo color ordering. The procedure is the following.
Assume that Σ has a triangle bounded by lines L i , L j , L k , then a flip sends
Σ −→ (σ (1) ,σ (2) , . . . ,σ (n) ) (6.4) withσ (l) = σ (l) if l / ∈ {i, j, k},σ (i) = σ (i) | j↔k ,σ (j) = σ (j) | i↔k , andσ (k) = σ (k) | j↔i .
One then has to check whether (σ (1) ,σ (2) , . . . ,σ (n) ) can be represented as an arrangement of lines so that it can become a new color orderingΣ.
Using triangle flips, one can bootstrap all (k = 3, n) color orderings starting with a single one. A convenient choice to start with is a descendant of a (k = 2, n) color ordering. We have been able to reproduce all color orderings for (3,6), (3,7), (3,8) and (3,9) derived in section 6.1 using this technique.
Generalized Decoupling Identity
Color-decomposing amplitudes in the biadjoint scalar theory and in Yang-Mills has the advantage of reducing the complexity of the computation by restricting the set of Feynman diagrams to those that contribute to a given partial amplitude. The price to pay for the simplification is that it obscures some properties of the full amplitude which are simple to verify before performing the decomposition. One such property is the following. When the color of one of the particles, say T an , is in the commutant of the rest, then the full amplitude trivially vanishes since each color-dressed Feynman diagram contains at least one su(N ) structure constant of the form f ijn , which vanishes since [T an , T a i ] = 0 by assumption. On the other hand, partial amplitudes A n (σ) do not carry individual color information and traces of products of generators do not generically vanish. Instead, some traces that are generically distinct now become identified. The flip side of this is that the vanishing of the full amplitude, A n , must then imply identities among partial amplitudes that always hold regardless of the color structure.
The simplest example is when T an = I, i.e., the n th particle is a "photon". The set of identities so derived are called U (1) decoupling identities. Here we treat any T an such that [T an , T a i ] = 0 for all i ∈ {1, 2, . . . , n − 1} as a photon with respect to the rest.
Substituting this into (2.4) one has to collect like-terms by noting that, e.g.
c(1, 2, . . . , i, n, i + 1, . . . , n − 1) = ±c(1, 2, . . . , i, i + 1, . . . , n − 1, n). (7.1)
The sign depends on the parity of n as well as its location in the ordering on the left. In other words, up to a sign, the position of label n in the color factor becomes inconsequential.
Imposing that A n ({k i , i , a i }) = 0 implies identities such as (see e.g. [1]) n−1 i=1 A n (1, 2, . . . , i, n, i + 1, . . . , n − 1) = 0. (7.2)
Similarly, biadjoint amplitudes satisfy U (1)-decoupling identities as well,
n−1 i=1 m (2) n ((1, 2, . . . , i, n, i + 1, . . . , n − 1), β) = 0, ∀ β ∈ CO 2,n , (7.3)
where CO 2,n is the set of all (2, n) color orderings. Note that m
(2) n (α, β) = m (2)
n (β, α) and therefore we do not get new identifies by using the second ordering.
In this section we initiate the study of decoupling identities in generalized biadjoint amplitudes. Here we only scratch the surface since a deeper discussion requires the CEGM formulation and it is presented in the companion paper [25].
We do not yet have a satisfactory realization of k = 3 color factors in terms of Lie algebraic objects, but we can use the k = 2 case (7.1) as a guide to uncover the decoupling properties of k = 3 color factors.
It is clear that decoupling a particle in a k = 3 generalized amplitude requires making the position of the particle in the k = 2 orderings in the collection irrelevant. However, this leaves the prescription for what happens to the k = 2 ordering in the collection which does not contain the particle label undefined. The two natural choices are to either keep the order intact or make it irrelevant. We will study (3,5) and (3,6) to discover that the latter is the correct prescription.
Let us start with the (3, 5) case and the prescription where we keep the order intact. Even though it is not the correct prescription for all n, it illustrates an important point. Recall that there are exactly 12 distinct (3, 5) color orderings. A decoupling identity is obtained by selecting one, say, and identifying with it any other that differs from Σ 0 by the position of label 5 in the first four ordering and agrees on the fifth. In this case we find exactly three: 2354) and so it is only the ordering in (•) that groups them in three groups of four. Of course, (•) equals one of the three possible (2, 4) orderings.
Σ 1 = ((
As mentioned in section 6.1, all 12 k = 3 color orderings are descendants of (2, 5) color orderings. This means that we can find a bijection between the two sets. More explicitly,
Σ 0 ↔ (12345), Σ 1 ↔ (12354), Σ 2 ↔ (12534), Σ 3 ↔ (15234) = (14325). (7.6)
Note that the four (2, 5) color orderings in the expression above are exactly the ones appearing in the standard U (1) decoupling identity (7.2). It is clear that had we chosen the second prescription for the decoupling of particle 5, i.e., in addition to making the position of 5 irrelevant, also making the choice of (•) irrelevant, we would have found a single 12-term identity. However, this identity is not irreducible since it is a linear combination of the standard four-term identities.
As it turns out the definition that led to three four-term identities for (3, 5) does not generalize to higher points while the one that gives 12-term identities does.
Here we make a proposal for what the correct k = 3 color decoupling is and then we apply it to (3,6) and (3,7) amplitudes and discover that indeed it leads to identities among partial amplitudes.
Definition 7.1. Given a (3, n) color-dressed amplitude, M(3)
n , the operation that identifies any two (3, n)-color orderings, Σ andΣ, if their i th projections are the same, i.e., π i (Σ) = π i (Σ), is called decoupling the i th -particle.
Recall that M (3)
n is defined in (5.2) and the projection operator π i in (3.2). This definition has a beautiful combinatorial interpretation given that a (3, n) color ordering is represented by an arrangement of n lines in RP 2 . Definition 7.1 implies that line L i can be freely moved on the plane so that all color orderings generated in the process are identified. Let us refer to the set of these color orderings as a decoupling set where L i moves freely. This is the analog to the k = 2 interpretation of the U (1) decoupling identities in which particle i is free to move on the boundary of the circle defining the ordering.
The decoupling identities have a close relation to the recursive properties of arrangements of pseudo-lines described in Theorem 6.1 because if we remove line L i from the plane, the resulting arrangement of lines still corresponds to a color ordering of lower points according to the theorem. Let us refer to the resulting color ordering as an induced lower-point GCO, which are useful when classifying the decoupling identities.
Decoupling Identities for (3, 6) Color Factors
Applying Definition 7.1 to (3, 6) and decoupling, e.g., particle 6, we find that the 372 color orderings are partitioned into 12 decoupling sets of 31 orderings each. For example, consider the k = 3 color ordering that contains the ordering 12345)).
(7.7)
Note that Σ 0 belongs to type 0 and it is therefore the descendant of a k = 2 color factor, in this case, σ = (123456). The set of 31 (3,6) color orderings that contains Σ 0 is composed of 5 type 0, 15 type I, 10 type II and 1 type III color orderings. See their explicit expressions in appendix A. It is interesting to note that the 5 type 0 orderings are descendants of exactly the five k = 2 orderings that participate in the k = 2 decoupling identity. This hints the fact that type 0 color orderings are somehow disconnected and that the other types are needed to fill out the "holes". In the next section we give more evidence that this picture is correct while in [25] we show the geometric meaning in terms of the structure of X(3, 6) and its fibration over X(3, 5) (see figure 8.7 of [23] and figure 1 of [4] for two dual descriptions). Finally, our prescription so far only tells us which color factors to group together but it does not fix the relative signs. From the (3, 5) case, we expect that the 31-term identities are not irreducible. In fact, we find that the set of 372 equations
Σ∈D 6 (−1) k Σ m(3)
6 (Σ,Σ) = 0 ∀Σ ∈ CO 3,6 , (7.8)
where D 6 is the decoupling set of 31 orderings given in eq. (A.1) has 30 solutions for k Σ ∈ {0, 1} up to an overall rescaling. Every one of the 1005 (3, 6) GFDs either does not appear at all in a decoupling identity (7.8) or appears four times with two positive coefficients of (−1) k Σ and two negative ones. In this sense, we say that these identities hold at the level of GFDs, i.e., the GFDs T which contribute to the rational function R(T ) in the amplitudes cancel pairwise.
In [25] we study decoupling identities more deeply and find a geometric interpretation which leads to the irreducible identities that arise from (7.8).
Decoupling Identities for (3, 7) Color Factors
The partition of (3, 6) color orderings into 12 decoupling sets of 31 orderings each makes its tempting to think that the 27 240 (3, 7) orderings could also partition evenly. However, the structure we find is much more interesting, revealing that the 27 240 GCOs are partitioned inhomogeneously by a decoupling.
One can classify these identities by studying their induced GCOs when decoupling, e.g., particle 7. There is no doubt that the 27 240 color orderings produce 372 induced (3, 6) color orderings, which themselves are classified into four types as shown in table 1. Hence, there are four types of decoupling sets for (3, 7) as well. As shown in table 3, it turns out that 74 (3, 7) color orderings including 6 of type 0, 18 of type II, etc., reduce to a (3, 6) color ordering of type 0, while 72 (3, 7) color orderings reduce to a (3, 6) color ordering of type I, etc. Their explicit expressions are presented in the ancillary file. In total, the 27 240 color orderings partition into 372 decoupling sets including 60 type 0 decoupling sets of 74 elements, 180 type I decoupling sets of 72 elements, 120 type II decoupling sets of 74 elements, and 12 type III decoupling sets of 80 elements. We put the representatives of four types of decoupling sets in an ancillary file. Table 3. Four types of decoupling of (3,7). The numbers represent how many (3,7) color orderings of a particular type take part in a certain decoupling.
As in the (3,6) case, the six color orderings of type 0 which are the descendants of six (k = 2, n = 7) color orderings that participate in the k = 2 decoupling identity appear together in a decoupling identity. This time, 68 other color orderings of other types are needed to make up a (3,7) decoupling of type 0.
From table 3, we also see that a type 0 ordering can only take part in the decoupling of type 0 while a type I ordering can participate in decouplings of both type 0 and I by decoupling different labels. It is worth mentioning that orderings of type VI appear in any type of decoupling while the type III decoupling only contains color orderings of types VI and IX.
A similar analysis reveals 11 types of decoupling identities for (3,8). We do not present them here because, as in previous cases, all of the identities in terms of partial amplitudes they lead to are reducible ones and we postpone a more complete discussion to [25].
Bootstrapping GFDs via Flips Compatible with Color Orderings
Listing all generalized Feynman diagrams for any (k, n) is a daunting problem. Even for k = 3 the number of diagrams grows very fast with n. In [14] and [26], a "bootstrap" method for producing new GFDs out of old ones using the notion of global planarity was introduced. In other words, their construction was based only on type 0 color orderings, i.e., those that descend from k = 2 ones. As we will see, there are GFDs that are not compatible with any type 0 color ordering and therefore cannot be generated in that way.
In this section we extend the bootstrap methods starting from the assumption that we have access to all color orderings. This will allow us to generate all GFDs. We show this for (3, n) with n ≤ 8.
The approach in this section is the analog of the triangle flip moves explained for generalized color ordering in the previous section. Triangle flips was one of the two techniques for generating color orderings, the other one was based on a recursive property. The analog for GFDs is based on the recursive property (4.4) but we find the flip moves to be the more efficient option.
Flips in k = 2 Feynman Diagrams
In order to introduce the main idea, let us start with the trivial but illustrative case of standard biadjoint φ 3 Feynman diagrams.
Consider any Feynman diagram, T , with n leaves, n − 2 degree-three vertices and n − 3 internal edges. Each internal edge has a length f I , with I ∈ {1, 2, . . . , n − 3}. If any of the lengths is set to zero, the diagram loses two degree-three vertices and gains a degree-four vertex. There are three ways of "resolving" a degree-four vertex into two degree-three ones. One of the three ways leads back to T while the other two lead to two different Feynman diagrams T and T . This can be done for any internal edge and therefore T is connected to 2(n − 3) other trees this way.
More generally, we call a flip the operation of sending a length to zero to produce a degeneration of a diagram to connect it to a different diagram that shares the same degeneration.
It is worth noting that this process is analog to what is known in mathematical physics as a flop transition, in which a cycle in a manifold is sent to zero size (Kahler volume) and then replaced by another cycle that grows in size. In fact, when this is done in a toric variety, the description of a flop is identical to that of a mutation in a triangulation of a polygon representation of a Feynman diagram.
Flips of Feynman diagrams are very useful in scattering amplitudes, for example in the study of Bern-Carrasco-Johansson double copy relations [31][32][33][34] and their geometric explanations [35][36][37][38][39]. Here we focus on generalizing them to higher k.
Flips in k = 3 Feynman Diagrams
Flips of k = 3 GFDs are also defined using degenerations produced by sending internal edge lengths to zero. Recall that a (3, n) GFD has n(n − 4) internal edges, n − 4 for each tree in the collection, but only 2(n − 4) internal lengths are independent due to the compatibility condition that produces the k = 3 metric d abc . This notion also gives a natural definition of GFDs connected by a flip for all (k, n). A codim-1 degenerate GFD usually contains k = 2 FDs with cubic or quartic vertices but starting from n = 7, it may also contain quintic or higher degree vertices. In practice, to get flips of a GFD, one can degenerate it first and then blow it up. However, unlike the k = 2 case, not all ways of blowing up quartic or higher degree vertices in the various diagrams in the collection lead to valid a GFD. The reason is that randomly resolving quartic or higher degree vertices does not guarantee that the new arrangement of trees will have a valid non-degenerate metric. While this might seem to make the problem complicated, it is actually a simplification.
Consider the set of all GCOs a GFD is compatible with. Any of its degenerations must be compatible with a larger set of GCOs. A necessary condition for the resolution of the degeneration to be allowed is that the new candidate GFD be compatible with at least one of the GCOs of the degenerate GFD. In most cases, the two GFDs connected by flips also share a common GCO.
For example, consider the following GFDs at n = 6, By considering all degenerations and all compatible color orderings, one can find eight flips in total and eight new GFDs. The reason why the number is much less than 4×16 = 64 is that two compatible orderings may lead to the same flips, which is already the case for k = 2. In order to avoid producing the same GFDs several times, one can also ignore all GCOs at first and blow up a degenerate GFD in all possible topological ways, resulting in many arrangements of metric trees. Then, select those with correct number of independent internal lengths and verify if they share a common GCO with the original GFD or its degenartions. This is an equivalent way to find all flips of a GFD.
Starting at n = 7, there are some GFDs whose degenerations contain a k = 2 Feynman diagram with quintic or higher degree vertices and there is no unique way to resolve these high-degree vertices anymore guided by the compatible GCOs of the degenerate GFDs. One has to consider all possible ways to resolve the high-degree vertices and select the new GFDs from the resulting arrangements of metric trees whose metrics have the correct number of independent internal lengths 5 . For example, the GFD with quartic vertices in eq. (4.13), which we present here again for convenience The 64+64 = 128 GCOs together make up the set of compatible GCOs for the degeneration in eq. (8.11).
In general, each (k = 3, n) GFD has at least 2(n − 4) degenerations and 4(n − 4) flips by considering all compatible color orderings. This has been verified for (3, n) up to n = 8.
Bootstrap Algorithms
Here we present two algorithms for computing GFD. Both are based on the idea that using flips all GFDs be can generated starting from a seed. Of course, making use of relabelling simplifies the procedure and it is a step that can be included in the algorithm.
For example, starting with the seed T A in eq. (8.1), which is the descendant of a k = 2
Feynman diagram, The permutation of labels of these seven classes of GFDs gives rise to 1005 GFDs in total, which is consistent with the result in the literature [11], see also [13]. For the reader's convenience, we also include additional information for each representative, including their contributions R(T ) to the amplitudes via eq. (4.5), in Table 5 shows what classes these GFDs belong to.
Some noteworthy facts are the following. Class F GFDs are not covered by the type 0 color orderings and type III color orderings only contain GFDs of classes F and G. class A class B class C class D class E class F class G type 0 type I type II type III Table 5. Compatibilities between different types of color orderings and different classes of GFDs Any GFD is compatible with the same number of color orderings, 16 in total, but the number of GCOs in each type can vary. As also shown in table 5, only two types of color orderings support GFDs in the same class as T A . So do GCOs for T E . While GFDs in the classes defined by T B , T C , T D , T F are compatible with three types of GCOs. Finally, T G has the very interesting property of being compatible with color orderings of any type and so it is universal.
If we are interested only in computing a biadjoint partial amplitude m n (Σ, Σ), with Σ a particular GCO, and a seed is known that is compatible with Σ, we can adopt a simpler bootstrap.
Bootstrap II:
1. Start with a GFD as a seed that is compatible with a certain color ordering Σ.
2. For every GFD in the list, degenerate it and blow it up in a way compatible with Σ.
Repeat step 2 until no new GFDs are produced.
This bootstrap can be thought of as a generalization of that in [14] and [26] to any other type of color orderings. In section 13.1, we use this construction as a motivation for introducing chirotopal tropical Grassmannians as a natural extension of positive tropical Grassmannians.
Application to (3, 7)
Starting at (3,7), there are GFDs with quartic or higher degree vertices. One example was given in eq. (4.13). We can still apply the bootstrap algorithms to trees with higher multiplicities but efficiency might be compromised as special attention is needed to find the flips of such GFDs. Fortunately, computations are still within reach with modest computational resources for (3,7) and (3,8) as the number of GFDs with mixed trees is still small.
For (3,7), we reproduced all 211 155 + 210 = 211 365 GFDs for (3, 7) presented in [13]. Here 210 is the number of GFDs with mixed trees. All GFDs fall into 93 + 1 classes where 93 of them only contain cubic vertices. The only class with quartic vertices is the one generated by eq. (4.13) via relabeling. As mentioned in section 4, this exceptional GFD is a codim-1 boundary of another arrangement, eq. (4.11), whose metric has 7 independent internal lengths. Such an arrangement failed to be a GFD because it is not compatible with any GCOs and has the wrong dimension.
On the one hand, the eleven color orderings in table 2 have 693, 534, 563, 447, 541, 509, 520 + 2, 556 + 2, 393, 440 + 1 and 423 + 1 GFDs respectively. Here 423 + 1 means that the type X GCO has 423 compatible GFDs with only cubic vertices and one GFD with quartic vertices. These are the numbers of GFDs needed to compute the corresponding m whose contribution to the amplitude is given by 1/(s 126 s 347 s 567 t 1236 R 47,123,56 W 1347265 ) with R, W defined in eq. (4.10). All representatives of the (3,7) GFDs, accompanied by one compatible GCO each, their poles, and their contributions to the amplitudes are put in an ancillary file. (3,8) For (3,8), as one can imagine, there would be more GFDs with mixed vertices, whose flips are much more complicated than those of GFDs with only cubic vertices. So in practice we apply the bootstrap I more effectively by bootstrapping all GFDs with only cubic vertices first, and then bootstrapping the remaining GFDs with mixed vertices based on them.
Application to
Besides, there is another problem for (3,8). Given a GFD it is very time-consuming to determine all GCOs it is compatible with. The reason is that there are 4 445 640 (3,8) GCOs. Fortunately, there is a simple method to generate all compatible GCOs for a given GFD based on one of its compatible GCOs. We postpone the explanation of this method to the next section and now we just apply it to our bootstrap. For most cases, two GFDs connected by flips share a common GCO. So in the bootstrap, whenever we generate a new candidate of GFD T by blowing up a degeneration of a GFD T , we can check whether it is compatible with any of the compatible set of GCOs of T . Otherwise, we have to check whether it is compatible with any of the 4 445 640 (3,8) GCOs.
In this way, we obtained 4734 classes of normal GFDs with pure cubic vertices first. Their flips produce 55 new classes of arrangements of metric trees with mixed vertices and correct number of independent lengths. 28 of them share a common GCO with the normal GFDs whose flips produce them. For the last 27 candidates, we found that 3 of them are GFDs by checking whether they are compatible with any of the 4 445 640 (3,8) GCOs. It turns out that the flips of the 31 classes of GFDs don't produce any new classes of GFDs.
In one word, we found 4734 + 31 = 4765 classes of GFDs in total, whose permutations give 116 849 565 + 604800 = 117 454 365 GFDs.
All representatives of the (3,8) GFDs, accompanied by one compatible GCO each, their poles, and their contributions to the amplitudes are put in an ancillary file.
We have checked many (3,8) GFDs and verified that they are compatible with 256 GCOs. We conjecture it to be true for all (3,8) GFDs. In the next section, we will explain how to get all compatible GCOs for each given GFD efficiently and here we present some relevant results. It turns out a (3,8) GFD always contributes to at least 4 but at most 112 types of GCOs.
Let us now assume that Bootstrap I has already been performed and we have obtained all GFDs and their compatible GCOs. Next we describe an efficient way to compute a given partial amplitude m(Σ, Σ) which replaces Bootstrap II.
This method works for general cases but for definiteness, let us concentrate on the present interest of this section, (k, n) = (3,8). By assumption we have all compatible GCOs for a representative of each of the 4765 classes of GFDs. We can decompose them as a list of pairs, one compatible GCO and one representative GFD, (Σ , T ). Then for the GCO of interest, Σ, we just need to select all pairs such that the GCO Σ is of the same type as Σ. This means that for each selected pair (Σ , T ), there exists one or more permutations of labels ρ such that Σ → Σ | ρ = Σ. We relabel the pair (Σ , T ) simultaneously under every ρ to get a set of (Σ, T | ρ ). Gathering all distinct T | ρ obtained in this way, we find all compatible GFDs for every given GCO Σ.
As shown in [26], there are 13612 GFDs contributing to a type 0 GCO, and in this paper we find that this is the maximum number among all classes of GCOs. In particular, among these, the type 18 GCO given in table 6 has the smallest number 3356 of compatible GFDs. A general code to give any (3,8) color ordering amplitudes is provided in an ancillary Mathematica notebook file.
Let us point out an interesting contrast. As above, we find that there are 4765 permutation classes of GFDs, each being compatible with some GCO; by using the metric tree parameterization of the Dressian this translates to having 4765 permutation classes of maximal cones in the tropical Grassmannian Trop G(3, 8).
Now, in [40,Theorem 4.6], the authors find 4766 symmetry classes of maximal cones in the tropical Grassmannian of (3,8). We have identified the extra symmetry class of metric tree arrangement in Equation (4.16). It is not compatible with any GCO. On the other hand, in [40] the cone parameterized by our metric tree arrangement corresponds to a certain non-binomial saturated initial ideal, see [40,Remark 4.5].
Generating GCOs from GFDs using Twists
In the previous section we discussed how to generate new GFDs starting from known ones by using flips guided by generalized color orderings. In every example we studied, all GFDs up to (3,8) have 2 2(n−4) compatible GCOs, which naturally leads us to conjecture that it holds in general. This section is devoted to providing what we believe is a promising direction to prove this important conjecture. In fact, we turn things around and use GFDs to generate new generalized color orderings. Very nicely, this also helps us to improve the bootstrap of GFDs as already applied in section 8.3.2.
Let us start with a simple observation.
Proposition 9.2. Let T be a n-point tree Feynman diagram in a φ 3 scalar field theory. T is compatible with exactly 2 n−3 color orderings.
Before providing the proof, let us define a useful operation on the planar embeddings of a tree. Proof. Any tree, T , in a φ 3 theory has n − 3 internal edges and n leaves. Draw T on a plane and read the order in which the leaves appear. This gives one color ordering. Let us denote it by σ 0 . For each internal edge there is a twist associated to it. Applying a twist generates a different embedding of T on the plane and hence a different color ordering. The number of all possible compositions of any number of twists is clearly 2 n−3 , hence the number of color orderings.
We would like to generalize the construction above to (k, n) Feynman diagrams. Let us illustrate the procedure for k = 3.
Consider a (3, n) Feynman diagram, T = (T 1 , T 2 , . . . , T n ), by definition, T is compatible with at least one GCO, Σ 0 = (σ 1 , σ 2 , . . . , σ n ).
Recall that R(T ) is a rational function in the kinematic invariants s abc . The number of poles in R(T ) satisfies n P ≥ 2(n − 4). Each pole is produced by a one dimensional integral in the space internal lengths along a particular direction. Let t ∈ [0, ∞) be the parameter along one of such directions. When t → ∞ all internal lengths are either O(t) or O(1).
Now consider the embedding of each T i in T on a plane according to the σ i ordering. Select one of the n P possible directions as t → ∞ and twist T i along the internal edges which are O(t). The resulting procedure is a (3, n) twist to T .
This means that the GFD T has n P twists. However, since the space of internal lengths is R
2(n−4) +
, there is a set of 2(n − 4) twists that generates the rest. Now we can perform the counting of GCOs associated to T . For each of the 2(n − 4) independent twists one gets a new GCO. The number of all possible compositions of any number of such twists is clearly 2 2(n−4) , hence the number of (3, n) color orderings. This is a strong argument in favor of Conjecture 9.1, which states that every (k = 3, n) GFD is compatible with 2 2(n−4) color orderings.
Let us illustrate the discussion above with a (3, 7) GFD which has nine poles. This is an example taken from section 3.2 of [14],
1 , f (i) 2 , f (i) 3 ] T , x x y y p + v p p w w r r u u + v v v v u u s s s + u , (9.2) with x + w = r + s + u, r + y = p + v + w. (9.3)
The way to find the directions that correspond to a pole is by taking any of the variables, send it to infinity and solve the constraints (9.3) in all possible ways recalling that all variables must be positive. For example, sending x → ∞ in x + w = r + s + u implies that either r, s, or u must also be sent to infinity. Choosing r → ∞ and using r + y = p + v + w implies that p, v or w must be sent to infinity. However, w appears in x + w = r + s + u invalidating the choice. This means that the directions defined by {x, r, p} or {x, r, v} give rise to poles. A short exercise reveals only nine possibilities, as expected, 6 {{x, r, p}, {x, r, v}, {x, s}, {x, u}, {w, r}, {w, s, y}, {w, u, y}, {y, p}, {y, v}} . (9.4)
Now we can think of each variable as defining a k = 2 twist on the trees where it appears. It is clear that any such twist squares to the identity, i.e., x 2 = I, y 2 = I, etc. Moreover, any two commute, i.e., xy = yx. Each of the n P = 9 allowed directions becomes a valid For example, xrv = (xrp)(yp)(yv). Now, using that the GFD (9.1) is compatible with the (3, 7) color ordering that descends from (1234567), one can apply any combination of the operations in (9.6) to produce a total of 2 6 = 64 GCOs.
In this section, we have explained an efficient way to find all compatible GCOs for a given GFD. As explained at the end of section 8.3.2, once this is done, it is easy to carry out the opposite procedure, i.e., finding all GFDs compatible with a given GCO.
We emphasize here again that all (3,8) partial amplitudes were computed using these techniques and they are consistent with those computed from CEGM integrals [25], which in turn provides more support for Conjecture 9.1.
(3, n) Minimal Scalar Amplitudes
The standard biadjoint theory can be thought of as a theory of multiple massless scalar fields with interactions constrained by a Lie algebraic structure. A theory with a simpler lagrangian is obtained by considering a single massless scalar field φ and no Lie algebra structure. In addition, one can set the interaction to be the simplest non-trivial possible one, i.e. a φ 3 term. We call this the k = 2 minimal scalar theory.
The tree-level amplitudes of this minimal scalar theory are computed by summing over all Feynman diagrams with cubic interactions. For n external points, there are (2n − 5)!! such diagrams. It is natural to ask whether there is a way of obtaining m (2) minimal n from 6 Sending several internal lengths to infinity is equivalent to shrinking the other internal lengths to zero. Hence the nine directions defined by eq. (9.5) can also be obtained by finding out all dim-1 degenerations of the GFD, which is another way to get poles of the amplitudes. See more details in [41]. m (2) n (α, β). In [42], Dolan and Goddard noticed that
m (2) minimal n = 1 2 n−3 σ∈CO 2,n m (2) n (σ, σ) (10.1)
where CO 2,n is the set of all (2, n) color orderings. In this section we propose two independent definitions for m Recall that Conjecture 9.1 states that each (3, n) GFD is compatible with exactly 2 2(n−4) color orderings and so each GFD appears the same number of times in the sum (10.2).
In the remaining of this section we study properties of m
(n − 5)-dimensional Residues
In [43], a simple but surprising connection between m . Even more surprising is that for n = 7 any two-dimensional residue defined by {s abc , t abcd } of m We have checked that if the sum in (10.2) were replaced by any subset of color orderings, for example, by only those of type 0, then the residues would not agree with the corresponding m (2) minimal n amplitudes.
Comments on Residues
One of the most striking properties of m
6 (Σ 0 , Σ 0 ) is that it contains a new class of poles with no direct analog in k = 2 amplitudes, the so-called R-pole. Just as the pole is novel to k = 3, so is the behavior of the residue of m 4 (σ 0 , σ 0 ) amplitudes. This 3-split is achieved in codimension one and hence the novelty. Note that m (2) 4 (σ 0 , σ 0 ) only has two terms (two planar Feynman diagrams) and hence the straightforward computation of the residue from the GFDs gives rise to eight contributions whose sum factors into three amplitudes.
It turns out that $m^{(3)\,\mathrm{minimal}}_6$ exhibits an even more surprising behavior. The residue where some $R_{a_1a_2,b_1b_2,c_1c_2} = 0$ is a product of three $m^{(2)\,\mathrm{minimal}}_4$ amplitudes, each made out of three terms. This means that $3^3 = 27$ contributions from GFDs conspire to perfectly produce the factorization.
Once again, this behavior is not observed if any proper subset of color orderings is used in the definition. Note that this is a non-trivial statement, as a sum over all color orderings of a given type would be permutation invariant and hence a reasonable object by itself.
In light of the recent work [44] by one of the authors on factorization for the standard globally planar CEGM amplitudes, the behavior of $m^{(3)}_6$ on the R-pole is expected to generalize very beautifully to higher k. Many mysteries remain unanswered about factorization; most relevantly, these include investigating residues of the CEGM amplitudes considered in this work. Such questions are left to the future.
Higher k Color Orderings
In this paper, the main focus is on k = 3 color orderings but they can be straightforwardly generalized to higher k.
Definition 11.1. A (k, n) generalized color ordering is an $\binom{n}{k-2}$-tuple
$$\Sigma^{[k]} = \{\sigma_{(i_1,i_2,\cdots,i_{k-2})}\,|\,\{i_1,\cdots,i_{k-2}\}\subset [n]\}\,, \qquad (11.1)$$
where $\sigma_{(i_1,i_2,\cdots,i_{k-2})}$ is a (2, n − k + 2) color ordering constructed as follows. Let $\{H_1, H_2, \ldots, H_n\}$ be an arrangement of n projective (k−2)-planes in generic position in $\mathbb{RP}^{k-1}$. Intersecting any (k−2) such H's, $\{H_{i_1}, H_{i_2}, \ldots, H_{i_{k-2}}\}$, produces a line $L_{(i_1,i_2,\cdots,i_{k-2})}$. The line so defined intersects the remaining (n−k+2) H's each at a point, resulting in a sequence of points on the line which defines a (2, n−k+2) color ordering $\sigma_{(i_1,i_2,\cdots,i_{k-2})}$.

By definition, removing a (k−2)-plane, say $H_i$, from an arrangement with n > k + 2 must result in another arrangement, but with n−1 (k−2)-planes. Therefore, the operation must give a (k, n − 1) color ordering. This generalizes the k-preserving projection given in eq. (3.2),
$$\pi_i \Sigma^{[k]} := \{\pi_i\,\sigma_{(i_1,i_2,\cdots,i_{k-2})}\,|\,\{i_1,\cdots,i_{k-2}\}\subset [n]\setminus\{i\}\}\,. \qquad (11.2)$$
On the other hand, in Definition 11.1 we chose to construct $\Sigma^{[k]}$ out of (2, n−k+2) color orderings. However, it is sometimes convenient to note that since each $H_i$ is an $\mathbb{RP}^{k-2}\subset\mathbb{RP}^{k-1}$, the intersection $H_i\cap H_j$ is a (k−3)-plane for all $j\in[n]\setminus\{i\}$.
This means that we have an arrangement of n − 1 (k−3)-planes in $H_i\cong\mathbb{RP}^{k-2}$, i.e., a (k−1, n − 1) color ordering, which we call a k-decreasing projection,
$$\pi^{(i)}\Sigma^{[k]} \equiv \{\sigma_{(i,i_2,\cdots,i_{k-2})}\,|\,\{i_2,\cdots,i_{k-2}\}\subset[n]\setminus\{i\}\}\,. \qquad (11.3)$$
Clearly, we have
$$\Sigma^{[k]} = \bigcup_{i=1}^{n} \pi^{(i)}\Sigma^{[k]}\,, \qquad (11.4)$$
where the union implies that duplicates are not included. Let us also extend the notion of descendant. For any (k, n), there is always a set of generalized color orderings with a very special property.
Definition 11.2. A (k, n) generalized color ordering $\Sigma^{[k]} = \{\sigma_{(i_1,i_2,\cdots,i_{k-2})}\,|\,\{i_1,\cdots,i_{k-2}\}\subset[n]\}$ is said to descend from a (2, n) color ordering σ if $\sigma_{(i_1,i_2,\cdots,i_{k-2})} = \pi_{i_1}\pi_{i_2}\cdots\pi_{i_{k-2}}(\sigma)$. We also say that $\Sigma^{[k]}$ is a descendant of σ.
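For concreteness, a GCO can be stored as a dictionary mapping each (k−2)-subset of [n] to its (2, n−k+2) ordering; with that (entirely conventional) choice of representation, the projections (11.2), (11.3) and the descendant map of Definition 11.2 become the following sketch (ours, not from the ancillary files of this work).

```python
from itertools import combinations

def project_label(sigma, i):
    """pi_i on a standard (2, m) color ordering: delete label i."""
    return tuple(a for a in sigma if a != i)

def k_preserving(gco, i):
    """Eq. (11.2): drop tuples indexed by i, delete label i elsewhere."""
    return {idx: project_label(sigma, i)
            for idx, sigma in gco.items() if i not in idx}

def k_decreasing(gco, i):
    """Eq. (11.3): keep tuples indexed by i, re-indexed by the other labels."""
    return {tuple(sorted(set(idx) - {i})): sigma
            for idx, sigma in gco.items() if i in idx}

def descendant(sigma, n, k):
    """Definition 11.2: the (k, n) GCO descending from a (2, n) ordering."""
    out = {}
    for idx in combinations(range(1, n + 1), k - 2):
        s = sigma
        for i in idx:
            s = project_label(s, i)
        out[idx] = s
    return out
```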
General pseudo-GCOs
Equations (11.2) and (11.4) suggest a recursive way to obtain the (k, n) color orderings, and, similar to the k = 3 pseudo-GCO defined just below Theorem 6.1, we can define a general-k pseudo-GCO.

Definition 11.3. An $\binom{n}{k-2}$-tuple of standard color orderings with n > k + 2 is said to be a (k, n) pseudo-GCO if all its k-preserving projections are (k, n − 1) pseudo-GCOs, while the (k, k + 2) pseudo-GCOs are all descendants of (2, k + 2) color orderings.
Just as for a GCO one finds an arrangement of n projective (k − 2)-planes, for a pseudo-GCO one can also find an arrangement of n projective (k − 2)-pseudo-planes. If, upon straightening all such pseudo-planes, there are k of them intersecting at the same point, we call the corresponding pseudo-GCO non-realizable. Otherwise, it is a GCO.
One can also define the k-decreasing projection of a pseudo-GCO, just as in (11.3).
Theorem 11.4. Each k-decreasing projection of a pseudo-GCO is also a pseudo-GCO.
Proof. Denote an n-point pseudo-GCO by Σ and its k-decreasing projection by $\pi^{(j)}(\Sigma)$ for any $j\in[n]$. Obviously, the theorem is true for n = k + 2. Suppose it is true for n − 1 points, which means any k-decreasing projection $\pi^{(j)}(\pi_i(\Sigma))$ with j ≠ i of the k-preserving projection $\pi_i(\Sigma)$ is a pseudo-GCO. Note that the k-decreasing projection $\pi^{(j)}(\pi_i(\Sigma)) = \pi_i\,\pi^{(j)}(\Sigma)$ is also a k-preserving projection of $\pi^{(j)}(\Sigma)$. Hence any k-preserving projection of $\pi^{(j)}(\Sigma)$ is a pseudo-GCO, which means $\pi^{(j)}(\Sigma)$ itself is also a pseudo-GCO according to Definition 11.3. By mathematical induction, the theorem follows.
Theorem 11.5. An $\binom{n}{k-2}$-tuple of standard color orderings with k ≥ 4 is a pseudo-GCO if every one of its k-decreasing projections is a pseudo-GCO.
The proof is similar.
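Definition 11.3 translates directly into a recursive membership test. The sketch below (reusing `descendant` and `k_preserving` from above) fixes the first label of the candidate parent ordering to quotient by rotations; reflections of color orderings are ignored for brevity, so a full implementation would also have to identify mirrored orderings.

```python
from itertools import permutations

def is_base_pseudo_gco(gco, k):
    """(k, k+2) base case of Definition 11.3: the tuple must descend
    from some (2, k+2) color ordering."""
    n = k + 2
    return any(descendant((1,) + rest, n, k) == gco
               for rest in permutations(range(2, n + 1)))

def is_pseudo_gco(gco, n, k):
    """Recurse through all k-preserving projections."""
    if n == k + 2:
        return is_base_pseudo_gco(gco, k)
    return all(is_pseudo_gco(k_preserving(gco, i), n - 1, k)
               for i in range(1, n + 1))
```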
Duality between (k, n) and (n − k, n) GCOs
Obviously, all (n − 2, n) GCOs are just descendants of (2, n) GCOs, based on which we can construct the general duality between (k, n) and (n − k, n) GCOs. Given any (k, n) GCO $\Sigma^{[k]}$, we can get its dual (n − k, n) GCO $\Sigma^{[n-k]}$ by
$$\Sigma^{[k]} \sim \Sigma^{[n-k]} = \left\{ \mathrm{Dual}\big(\pi_{i_1}\pi_{i_2}\cdots\pi_{i_{n-k-2}}\Sigma^{[k]}\big)\,\middle|\, \{i_1, i_2, \cdots, i_{n-k-2}\}\subset[n] \right\}\,, \qquad (11.5)$$
where we first project out n − k − 2 labels of $\Sigma^{[k]}$, leading to a (k, k + 2) GCO $\pi_{i_1}\pi_{i_2}\cdots\pi_{i_{n-k-2}}\Sigma^{[k]}$, whose dual GCO $\mathrm{Dual}(\pi_{i_1}\pi_{i_2}\cdots\pi_{i_{n-k-2}}\Sigma^{[k]})$ is just a (k + 2)-point standard color ordering $\sigma_{(i_1 i_2\cdots i_{n-k-2})}$; these together constitute the dual (n − k, n) GCO $\Sigma^{[n-k]}$. The dual of a non-realizable pseudo-GCO is also a non-realizable pseudo-GCO and can be obtained similarly.
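The duality map itself is easy to prototype. In the sketch below (again reusing the helpers above, with the same caveat that literal tuple comparison ignores the cyclic/reflection equivalence of color orderings), `dual_base` recovers the (2, k+2) parent of a (k, k+2) GCO by search, and `dual_gco` assembles (11.5).

```python
from itertools import combinations, permutations

def dual_base(gco, k):
    """Dual of a (k, k+2) GCO: the (2, k+2) ordering it descends from."""
    n = k + 2
    for rest in permutations(range(2, n + 1)):
        sigma = (1,) + rest
        if descendant(sigma, n, k) == gco:
            return sigma
    raise ValueError("not a descendant of any (2, k+2) ordering")

def dual_gco(gco, n, k):
    """Eq. (11.5): project out n-k-2 labels in all ways and dualize each."""
    out = {}
    for idx in combinations(range(1, n + 1), n - k - 2):
        proj = gco
        for i in idx:
            proj = k_preserving(proj, i)
        out[idx] = dual_base(proj, k)
    return out  # the dual (n-k, n) GCO, indexed by (n-k-2)-subsets
```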
The duality (11.5) can be easily proved by mathematical induction.
Proof. First we prove that (11.5) holds at the level of pseudo-GCOs. Obviously, it holds for n = k + 2. Supposing the duality already holds for n − 1 points, i.e.,
$$\pi_j(\Sigma^{[k]}) \sim \pi_j(\Sigma^{[n-k]}) = \left\{\mathrm{Dual}\big(\pi_{i_1}\pi_{i_2}\cdots\pi_{i_{n-k-3}}\pi_j\Sigma^{[k]}\big)\,\middle|\,\{i_1, i_2, \cdots, i_{n-k-3}\}\subset[n]\setminus\{j\}\right\}, \quad \forall\, j\in[n]\,, \qquad (11.6)$$
it follows that any projection of $\Sigma^{[n-k]}$ is a valid (n − k, n − 1) pseudo-GCO, which confirms that $\Sigma^{[n-k]}$ is a pseudo-GCO. Since all (k, n) and all (n − k, n) pseudo-GCOs are in bijection and the dual of a non-realizable pseudo-GCO is also non-realizable, the duality (11.5) holds at the level of GCOs.
Based on (11.5), it is clear that a k-preserving projection of $\Sigma^{[k]}$ is dual to a k-decreasing projection of its dual $\Sigma^{[n-k]}$,
$$\pi_j(\Sigma^{[k]}) \sim \pi^{(j)}(\Sigma^{[n-k]})\,, \qquad \pi_j(\Sigma^{[n-k]}) \sim \pi^{(j)}(\Sigma^{[k]})\,. \qquad (11.7)$$
The first non-trivial duality is the one between (3, 6) GCOs themselves where a GCO of any type is dual to another GCO of the same type.
The duality allows us to get all (4, 7) GCOs for free from those of (3, 7). Here we list a (4, 7) GCO as an example,
((35746), (24675), (26357), (23674), (24375), (23654), (14675), (16357), (13674), (14375), (13654), (12576), (12674), (12754), (12654), (12376), (12753), (12356), (12347), (12346), (12345)) , (11.8)
which is dual to the last (3, 7) GCO of type X in table 2. We see that the projection of the (3, 7) GCO with respect to both 1 and 2 is ((4657), (3576), (3647), (3475), (3546)), which is dual to the first entry of (11.8), (35746). The dual GCOs for the remaining types are given in the ancillary file.
The (4,8) GCOs are dual to themselves, so we have to work them out independently, which is explained in the next subsection.
Bootstrapping k = 4 pseudo-GCOs
The recursive definition of the pseudo-GCOs strongly suggests using a bootstrap method to generate them. Indeed, we reproduced all (4, 7) GCOs and generated all (4, 8) pseudo-GCOs in this way.
Let us illustrate the idea by starting with an ansatz of (4, n) color orderings of the form
$$\Lambda^{[4]} = \{\lambda_{(1,2)}(34\cdots n),\, \lambda_{(1,3)}(24\cdots n),\, \cdots,\, \lambda_{(n-1,n)}(12\cdots n\!-\!2)\}\,, \qquad (11.9)$$
where $\lambda_{(i,j)}$ is a (2, n − 2) color ordering. Also note that we reserve the notation Σ and σ for valid color orderings, and so we have used Λ and λ for the ansatz. In eq. (11.2) and eq. (12.4) we have defined two operations which act on generalized color orderings, but they can obviously be generalized to act on any ansatz as well. To explain the operations more intuitively, it is useful to present $\Lambda^{[4]}$ in a slightly redundant way as an n × n symmetric matrix,
$$\Lambda^{[4]} \sim \begin{pmatrix} 0 & \lambda_{(2,1)} & \lambda_{(3,1)} & \cdots & \lambda_{(n,1)} \\ \lambda_{(1,2)} & 0 & \lambda_{(3,2)} & \cdots & \lambda_{(n,2)} \\ \lambda_{(1,3)} & \lambda_{(2,3)} & 0 & \cdots & \lambda_{(n,3)} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \lambda_{(1,n)} & \lambda_{(2,n)} & \lambda_{(3,n)} & \cdots & 0 \end{pmatrix}\,, \qquad (11.10)$$
where $\lambda_{(i,j)} = \lambda_{(j,i)}$ and we have suppressed the dependence on the n − 2 labels. Now one can interpret (11.10) as a collection of k = 3 GCOs, with $\pi^{(i)}(\Lambda)$ corresponding to the i-th row or column of (11.10), while its projection $\pi_i(\Lambda)$ corresponds to the (n − 1) × (n − 1) submatrix obtained by deleting the i-th row and column of (11.10) and projecting out the label i in the remaining submatrix. Now one can impose the following two conditions on the ansatz $\Lambda^{[4]}$ to obtain a candidate color ordering:
(a) Its k-preserving projection π i (Λ) for any label i is a valid (4, n − 1) pseudo-GCO.
(b) Its k-decreasing projection π (j) (Λ) for any label j is a valid (3, n − 1) pseudo-GCO.
According to Theorems 11.4 and 11.5, these two conditions are equivalent. In practice, the second provides a faster way to find candidates.
For (4, 7) color orderings, we start with an ansatz of a 7 × 7 matrix of the form (11.10). Requiring that each of its rows or columns corresponds to a valid (3, 6) color ordering leads to exactly 27 240 choices of $\{\lambda_{(1,2)}, \lambda_{(1,3)}, \cdots, \lambda_{(6,7)}\}$. Hence we reproduced all (4, 7) GCOs. The 27 240 (4, 7) color orderings fall into 11 types, as expected.
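Schematically, the consistency check behind this count looks as follows (a sketch only: `gco_k3` stands for a hypothetical precomputed set of valid (3, n−1) GCOs in the hashable form produced by `freeze`).

```python
def freeze(gco):
    """Hashable form of a GCO dictionary."""
    return frozenset(gco.items())

def rows_are_valid(lam, n, gco_k3):
    """Condition (b): every k-decreasing projection of the ansatz (11.9),
    i.e. every row of the matrix (11.10), must be a valid (3, n-1) GCO.
    `lam` maps unordered pairs {i, j} (as frozensets) to (2, n-2) orderings."""
    for i in range(1, n + 1):
        row = {(j,): lam[frozenset((i, j))] for j in range(1, n + 1) if j != i}
        if freeze(row) not in gco_k3:
            return False
    return True
```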
The bootstrap of (4, 8) pseudo-GCOs is combinatorially more involved but poses no conceptual challenges. We obtained 2628 types of pseudo-GCOs in total, which is consistent with the results in the literature [45], where it is claimed that there are 2604 types of realizable uniform matroids and 24 types of non-realizable ones.
Comparing results from both sides, we distinguish the realizable and non-realizable pseudo-GCOs and present them in separate ancillary files. The permutations of the 2604 GCOs give 100 086 840 distinct ones in total. We tally the numbers of their permutations here,
{{1680, 2}, {2520, 1}, {3360, 1}, {5040, 6}, {6720, 3}, {10080, 16}, {13440, 10}, {20160, 183}, {40320, 2382}} ,
e.g., there are two types with 1680 elements, i.e., with a symmetry group of order 24. Similarly, the numbers of distinct permutations of the 24 non-realizable pseudo-GCOs are
{{1680, 1}, {3360, 3}, {5040, 3}, {6720, 3}, {10080, 5}, {13440, 3}, {20160, 3}, {40320, 3}} ,
which gives 319 200 distinct non-realizable pseudo-GCOs in total. The 2604 types of (4, 8) GCOs are dual to themselves, which is a consistency check of our results. We mention that a (4, 8) GCO might be dual to another GCO of a different type. The 24 types of non-realizable pseudo-GCOs are also dual to themselves.
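The quoted totals can be cross-checked in a few lines:

```python
realizable = {1680: 2, 2520: 1, 3360: 1, 5040: 6, 6720: 3,
              10080: 16, 13440: 10, 20160: 183, 40320: 2382}
non_realizable = {1680: 1, 3360: 3, 5040: 3, 6720: 3,
                  10080: 5, 13440: 3, 20160: 3, 40320: 3}

# Cross-check the totals quoted in the text.
assert sum(realizable.values()) == 2604
assert sum(size * count for size, count in realizable.items()) == 100_086_840
assert sum(non_realizable.values()) == 24
assert sum(size * count for size, count in non_realizable.items()) == 319_200
```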
Higher k Feynman Diagrams
In parallel to the previous section, we discuss higher-k Feynman diagrams here. The generalization of Definition 4.1 is straightforward.

Definition 12.1 ([13]). A (k, n) arrangement of metric trees is an $\binom{n}{k-2}$-tuple
$$T^{[k]} = \{T^{i_1,i_2,\cdots,i_{k-2}}\,|\,\{i_1, i_2, \cdots, i_{k-2}\}\subset[n]\} \qquad (12.1)$$
such that $T^{i_1,i_2,\cdots,i_{k-2}}$ is a metric tree with n − k + 2 leaves in the set $[n]\setminus\{i_1, i_2, \cdots, i_{k-2}\}$ and metric $d^{(i_1,i_2,\cdots,i_{k-2})}_{ab}$
so that the following compatibility condition is satisfied
$$d^{(i_3,i_4,\cdots,i_k)}_{i_1,i_2} = d^{(i_2,i_4,\cdots,i_k)}_{i_1,i_3} = \cdots = d^{(i_1,i_2,\cdots,i_{k-2})}_{i_{k-1},i_k}\,, \qquad \forall\,\{i_1, i_2, \cdots, i_k\}\subset[n]\,. \qquad (12.2)$$
Denote by d the symmetric tensor with entries $d_{i_1,i_2,\cdots,i_k} := d^{(i_3,i_4,\cdots,i_k)}_{i_1,i_2}$.
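For k = 3 the compatibility condition (12.2) is easy to test numerically. In the sketch below (the data layout is ours), `trees[i]` holds the leaf metric of the tree $T^i$ with leaf i pruned, as a nested dictionary of pairwise distances.

```python
from itertools import combinations

def is_compatible(trees, n):
    """k = 3 case of (12.2): for every triple {a, b, c} the three ways of
    reading off the distance must agree, d^(c)_ab = d^(b)_ac = d^(a)_bc."""
    for a, b, c in combinations(range(1, n + 1), 3):
        if not (trees[c][a][b] == trees[b][a][c] == trees[a][b][c]):
            return False
    return True
```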
In what follows, motivated by our construction of k = 3 GFDs, we present conditions which are necessary in order for an arrangement of metric trees, T [k] , to define generalized Feynman diagrams for k ≥ 4.
• There are exactly (k − 1)(n − k − 1) independent internal edge lengths after imposing the compatibility conditions (12.2).
• There exists at least one GCO $\Sigma^{[k]} = \{\sigma_{(i_1,i_2,\cdots,i_{k-2})}\,|\,\{i_1,\cdots,i_{k-2}\}\subset[n]\}$ such that $T^{i_1,i_2,\cdots,i_{k-2}}$ is planar with respect to $\sigma_{(i_1,i_2,\cdots,i_{k-2})}$ for all $\{i_1,\cdots,i_{k-2}\}\subset[n]$. In this case we say that $T^{[k]}$ is compatible with $\Sigma^{[k]}$.
Moreover, the rational function associated to $T^{[k]}$ is
$$R(T) := \int_{\mathbb{R}_+^{(k-1)(n-k-1)}} \prod_{I=1}^{(k-1)(n-k-1)} df_I \prod_{J=(k-1)(n-k-1)+1}^{\binom{n}{k-2}(n-k-1)} \theta\big(f_J(f_1, \ldots, f_{(k-1)(n-k-1)})\big)\, \exp\Big(-\!\!\sum_{\{i_1,i_2,\cdots,i_k\}\subset[n]} s_{i_1,i_2,\cdots,i_k}\, d_{i_1,i_2,\cdots,i_k}\Big)\,. \qquad (12.3)$$
The conditions in the integrand, $\theta(f_J(f_1, \ldots, f_{(k-1)(n-k-1)}))$, simply enforce that all internal lengths are non-negative. The $s_{i_1,i_2,\cdots,i_k}$ are generic completely symmetric rank-k tensors subject to $s_{i,i,i_3,\cdots,i_k} = 0$ and $\sum_{\{i_2,\cdots,i_k\}\subset[n]\setminus\{i\}} s_{i,i_2,\cdots,i_k} = 0$ for any i.

A k-decreasing projection of a GCO is also a GCO. However, a k-decreasing projection of $T^{[k]}$,
$$\pi^{(i)}(T) = \{T^{i,i_2,\cdots,i_{k-2}}\,|\,\{i_2,\cdots,i_{k-2}\}\subset[n]\setminus\{i\}\}\,, \qquad (12.4)$$
is not necessarily a GFD. On the one hand, if T is compatible with Σ, by definition, π (i) (T ) must be compatible with π (i) (Σ). On the other hand, the conditions (12.2) are not sufficient to guarantee that π (i) (T ) has enough independent internal lengths. So π (i) (T ) could also be a degenerate GFD. Similarly, we define the projections of arrangements of metric trees as follows,
$$\pi_i(T) = \{\pi_i(T^{i_1,i_2,\cdots,i_{k-2}})\,|\,\{i_1,\cdots,i_{k-2}\}\subset[n]\setminus\{i\}\}\,, \qquad (12.5)$$
which might be GFDs or their degenerations.
Definition 12.2. A (k, n) arrangement of metric trees $T^{[k]} = \{T^{i_1,i_2,\cdots,i_{k-2}}\,|\,\{i_1, i_2, \cdots, i_{k-2}\}\subset[n]\}$ is said to descend from a metric tree T if $T^{i_1,i_2,\cdots,i_{k-2}} = \pi_{i_1}\pi_{i_2}\cdots\pi_{i_{k-2}}(T)$. We also say that $T^{[k]}$ is a degree-k descendant of T.
Note that here the metric tree T could be a cubic tree or its degeneration, i.e., a Feynman diagram with quartic or higher degree vertices.
In particular, we say that the standard metric tree T with n = k + 2 external leaves and its degree-k descendant are dual to each other.
In parallel to (11.5), we can explain the general duality between (k, n) and (n − k, n) GFDs. Given any (k, n) GFD T [k] , we conjecture that its dual (n − k, n) GFD T [n−k] is given by
$$T^{[k]} \sim T^{[n-k]} = \left\{\mathrm{Dual}\big(\pi_{i_1}\pi_{i_2}\cdots\pi_{i_{n-k-2}}T^{[k]}\big)\,\middle|\,\{i_1, i_2, \cdots, i_{n-k-2}\}\subset[n]\right\}\,, \qquad (12.6)$$
where we first project out n − k − 2 labels of $T^{[k]}$, leading to a (k, k + 2) arrangement of metric trees $\pi_{i_1}\pi_{i_2}\cdots\pi_{i_{n-k-2}}T^{[k]}$, whose dual arrangement $\mathrm{Dual}(\pi_{i_1}\pi_{i_2}\cdots\pi_{i_{n-k-2}}T^{[k]})$ is just a (k + 2)-point standard Feynman diagram $T^{(i_1 i_2\cdots i_{n-k-2})}$ with possible quartic or higher degree vertices; these together constitute the dual (n − k, n) GFD $T^{[n-k]}$. Based on (12.6), it is clear that a projection of $T^{[k]}$ is dual to a codim-1 component of its dual $T^{[n-k]}$,
$$\pi_i(T^{[k]}) \sim \pi^{(i)}(T^{[n-k]})\,, \qquad \pi_i(T^{[n-k]}) \sim \pi^{(i)}(T^{[k]})\,, \qquad (12.7)$$
which was the original way to define the duality between planar GFDs in [26].
(4, 7) GFDs
Using duality, one can get all (4, 7) GFDs from those of (3, 7). For example, the dual (4, 7) GFD of the (3, 7) GFD (4.13), which has both degree-three and degree-four vertices, is shown in Figure 5; there the (4, 7) GFD is presented in a more redundant way as a symmetric matrix such that the Feynman diagram in the i-th row and j-th column has leaves i, j pruned. One can check that $\pi_1\pi_2$(4.13) is dual to the second Feynman diagram in the first row of the (4, 7) GFD. Similarly, a k-preserving projection of the (4, 7) GFD is dual to a Feynman diagram in (4.13). The (4, 7) GFD in fig. 5 is compatible with the (4, 7) GCO dual to (4.14) (eq. (12.8)), which is also presented in a symmetric but redundant way. The contribution of the (4, 7) GFD to the amplitudes according to (12.3) is given by $1/(s_{3457}\, s_{2367}\, s_{1567}\, s_{1346}\, s_{1247}\, s_{1235})$.

Recall that all (3, 6) GFDs contain only cubic vertices. So it is clear that some rows or columns of the (4, 7) GFD in fig. 5 are just degenerated (3, 6) GFDs; i.e., even though some rows or columns may not have enough independent internal lengths to be GFDs on their own, the whole matrix has enough independent internal lengths to be a k = 4 GFD.
Because of this subtlety, it is conceptually more complicated to bootstrap all k = 4 GFDs than what we did for k = 4 GCOs in section 11.3. If we start with a 7 × 7 matrix of Feynman diagrams as an ansatz and require that each row or column is a (3, 6) GFD, we get 93 classes of (4, 7) GFDs, as well as one extra class of matrices of Feynman diagrams which have seven independent internal lengths and are dual to the class of (4.11); only upon degeneration does it yield the (4, 7) GFD in fig. 5. The 94 classes of (4, 7) GFDs obtained in this way are indeed dual to those of (3, 7) in the way explained in (12.6), and we present them in an ancillary file.
Future Directions
In this work we defined generalized color orderings (GCOs) and started the study of their properties. Combining generalized Feynman diagrams (GFDs) and GCOs, we finally constructed the complete color-dressed generalized biadjoint amplitudes. Perhaps a surprising feature of generalized biadjoint amplitudes is that their GFDs are collections of trees which are not necessarily φ^3 Feynman diagrams. In fact, one should consider Feynman diagrams in a theory with all possible powers of φ. In mathematical terminology, generic trees in the collection might not be trivalent: a priori, any metric tree may be a member of the arrangement leading up to the GFD.
This work is the first one of a series of papers where we start the study of a "triality" among partial (k, n) biadjoint amplitudes (as defined in this work using GFDs), CEGM integrands and their integrals on the configuration space of n points in CP k−1 [25], and new objects we call chirotopal tropical Grassmannians.
The mathematics of arrangements of metric trees is naturally connected to tropical geometry, in particular to the Dressian and the tropical Grassmannian [13]. We end this work with a preview of some of the directions on chirotopal tropical Grassmannians that will be explored in the future.
The tropical Grassmannian is a complicated object. However, it contains a relatively simple object known as the positive part, $\mathrm{Trop}^+ G(k, n)$. We will argue that $\mathrm{Trop}^+ G(k, n)$ is nothing but one member of a family of objects, each determined by a chirotope [15], whose study seems to be within reach.
Given the importance of such a family of objects, we give a preview of their definition here.
Chirotopal Tropical Grassmannians
The tropical Grassmannian Trop G(k, n), introduced in [11], parametrizes realizable tropical linear spaces; it is the tropical variety of the Plücker ideal of the Grassmannian G(k, n). While Trop G(2, n) is completely characterized by the tropicalization of the 3-term Plücker relations, for general (k, n) the Plücker ideal contains higher degree generators, and calculating Trop G(k, n) quickly becomes an intractable problem. On the other hand, in [12], Speyer-Williams introduced the positive tropical Grassmannian, which was later shown [16, 17] to be characterized by the 3-term tropical Plücker relations, $\pi_{Lac} + \pi_{Lbd} = \min\{\pi_{Lab} + \pi_{Lcd},\, \pi_{Lad} + \pi_{Lbc}\}$, which depend on the given global cyclic order (1, 2, . . . , n).
In this section, motivated by the observation that the CEGM formula for the generalized biadjoint scalar with integrand the (squared) k-Parke-Taylor factor (that is, the canonical function on the nonnegative Grassmannian [46]) is equal to the Laplace transform of the positive tropical Grassmannian, we define the chirotopal tropical Grassmannian Trop χ G(k, n), by relaxing the requirement that the cyclic order be global; by this we mean that we replace the usual notion of the cyclic order on G(k, n) with certain compatible collections of n k−2 cyclic orders, which we called generalized color orderings in Definition 11.1.
Therefore we not only generalize the positive tropical Grassmannian to other realizable oriented uniform matroids, but we also present two a priori completely different ways to represent it. The first is purely combinatorial and uses generalized Feynman diagrams as discussed in this work, while the second uses the CEGM formula to reconstitute the cones from higher dimensional residues as in [44] in the context of factorization.
Definition 13.1 ([11]). Given $e = (e_1, \ldots, e_N)\in\mathbb{Z}^N_{\geq 0}$, denote $x^e = x_1^{e_1}\cdots x_N^{e_N}$. Let $E\subset\mathbb{Z}^N_{\geq 0}$. If $f = \sum_{e\in E} f_e x^e$ is nonzero, denote by Trop(f) the set of all points $(X_1, \ldots, X_N)$ such that, for the collection of numbers $\sum_{i=1}^N e_i X_i$ with e ranging over E, the minimum of the collection is achieved at least twice. We say that Trop(f) is the tropical hypersurface associated to f. The tropical Grassmannian Trop G(k, n) is the intersection of all tropical hypersurfaces Trop(f) where f ranges over all elements in the Plücker ideal.
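The defining condition, the minimum achieved at least twice, is a one-line test (a minimal sketch; the function name and data layout are ours):

```python
def in_tropical_hypersurface(exponents, point, tol=0.0):
    """Definition 13.1: X lies in Trop(f) iff the minimum of e . X over
    the exponent vectors e of f is attained at least twice."""
    values = sorted(sum(ei * xi for ei, xi in zip(e, point)) for e in exponents)
    return len(values) >= 2 and values[1] - values[0] <= tol
```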
The Dressian $\mathrm{Dr}_{k,n}$ is the tropical (pre)variety obtained by tropicalizing only the 3-term Plücker relations. (The term Dressian was coined in [13]; $\mathrm{Dr}_{k,n}$ was called the tropical pre-Grassmannian in [11].)
Thus, the Dressian Dr(k, n) consists of all tropical Plücker vectors; a tropical Plücker vector is said to be realizable if it is in the tropical Grassmannian Trop G(k, n).
We first define the chirotopal Dressian, as a generalization of the positive Dressian and then intersect with the tropical Grassmannian in order to obtain our main definition, the chirotopal tropical Grassmannian.
Definition 13.2. Given any generic point in the real Grassmannian G(k, n), that is, one where all maximal k × k minors $\Delta_J$ are nonzero, let $\chi\in\{-1, 1\}^{\binom{n}{k}}$ be defined coordinate-wise by
$$\chi_J = \mathrm{sign}(\Delta_J)\,.$$
Any such vector χ arising in this way is called a realizable (uniform) chirotope [15]. Given a realizable chirotope $\chi\in\{-1,1\}^{\binom{n}{k}}$, a point $\pi\in\mathbb{R}^{\binom{n}{k}}$ is said to be a χ-tropical Plücker vector provided that, for any $L\in\binom{[n]}{k-2}$ and any cyclic order $(j_1, j_2, j_3, j_4)$ in $[n]\setminus L$ such that
$$\big(\chi_{Lj_1j_2}\,\chi_{Lj_3j_4}\,\chi_{Lj_1j_3}\,\chi_{Lj_2j_4}\,,\;\chi_{Lj_1j_4}\,\chi_{Lj_2j_3}\,\chi_{Lj_1j_3}\,\chi_{Lj_2j_4}\big) = (1, 1)\,,$$
then
$$\pi_{Lj_1j_3} + \pi_{Lj_2j_4} = \min\{\pi_{Lj_1j_2} + \pi_{Lj_3j_4},\; \pi_{Lj_1j_4} + \pi_{Lj_2j_3}\}\,.$$
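The condition of Definition 13.2 can be tested mechanically. The sketch below (ours) assumes both π and χ are given on sorted k-element index sets, as is customary for uniform chirotopes, and simply scans all ordered 4-tuples rather than only genuinely cyclic ones; the sign gate makes the extra checks harmless.

```python
from itertools import combinations, permutations

def is_chi_tropical_plucker(pi, chi, n, k):
    """Sketch of Definition 13.2: pi and chi are dicts keyed by sorted
    k-subsets of [n]; test the 3-term tropical relation wherever the
    chirotope sign condition holds."""
    def key(L, *js):
        return tuple(sorted(L + js))
    for L in combinations(range(1, n + 1), k - 2):
        rest = [j for j in range(1, n + 1) if j not in L]
        for j1, j2, j3, j4 in permutations(rest, 4):
            s1 = (chi[key(L, j1, j2)] * chi[key(L, j3, j4)]
                  * chi[key(L, j1, j3)] * chi[key(L, j2, j4)])
            s2 = (chi[key(L, j1, j4)] * chi[key(L, j2, j3)]
                  * chi[key(L, j1, j3)] * chi[key(L, j2, j4)])
            if (s1, s2) == (1, 1):
                lhs = pi[key(L, j1, j3)] + pi[key(L, j2, j4)]
                rhs = min(pi[key(L, j1, j2)] + pi[key(L, j3, j4)],
                          pi[key(L, j1, j4)] + pi[key(L, j2, j3)])
                if lhs != rhs:
                    return False
    return True
```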
Here we denote by $\mathrm{Dr}^\chi_{k,n}$ the χ-Dressian, consisting of all χ-tropical Plücker vectors.

Definition 13.3. Fix a chirotope $\chi\in\{-1, 1\}^{\binom{n}{k}}$ as usual. The χ-tropical Grassmannian $\mathrm{Trop}_\chi G(k, n)$ is the set of all realizable χ-tropical Plücker vectors, i.e., it is the intersection of the chirotopal Dressian with the tropical Grassmannian,
$$\mathrm{Trop}_\chi G(k, n) = \mathrm{Dr}^\chi_{k,n} \cap \mathrm{Trop}\, G(k, n)\,.$$

Remark 13.4. Note that we are not making assertions about the Gröbner fan. For that, a possible related proposal was made in [40, Conjecture 7.1].

We also emphasize that both the chirotopal Dressian $\mathrm{Dr}^\chi_{k,n}$ and the χ-tropical Grassmannian $\mathrm{Trop}_\chi G(k, n)$ depend only on the reorientation class of χ, that is, they are invariant under the torus action, say $t_j : \chi_J \to -\chi_J$ whenever $j\in J$, and otherwise $\chi_J \to \chi_J$.
It is natural to ask whether chirotopal tropical Plücker vectors are always realizable, as is the case for positive tropical Plücker vectors.
Our conjecture below proposes to generalize to all realizable chirotopes (as in Definition 13.2) the following recent characterization of the positive tropical Grassmannian.

Theorem 13.5 ([16, 17]). The positive tropical Grassmannian is completely characterized by the 3-term tropical Plücker relations, that is,
$$\mathrm{Trop}^+ G(k, n) = \mathrm{Dr}^+_{k,n}\,,$$
where "+" is the standard notation for the chirotope χ with all entries +1, that is, χ = (1, 1, . . . , 1).
Below, in Conjecture 13.6, in order to be consistent with the CEGM formula, we mod out $\mathbb{R}^{\binom{n}{k}}$ by the lineality space, which consists of all vectors with coordinates $\pi_J = \sum_{j\in J} x_j$ for $x\in\mathbb{R}^n$. We speculate that chirotopal tropical Grassmannians are better behaved than both the full Dressian and the tropical Grassmannian. Conjecture 13.6 ventures to assert that every k = 3 chirotopal tropical Plücker vector is already the tropicalization of a linear space: it is already in the tropical Grassmannian.
Conjecture 13.6. Each $\mathrm{Trop}_\chi G(3, n)$ is a pure (3 − 1)(n − 3 − 1)-dimensional polyhedral fan. (A polyhedral complex is pure if all maximal cones have the same dimension.)
Moreover, we have the equality $\mathrm{Trop}_\chi G(3, n) = \mathrm{Dr}^\chi_{3,n}$, that is, chirotopal tropical Plücker vectors are realizable. Finally, if we fix a given maximal cone in Trop G(3, n), then either it is not contained in any chirotopal tropical Grassmannian, or it is contained in exactly $2^{(3-1)(n-3-1)}$ of them (note that this is equivalent to Conjecture 9.1).
We hope, in stating the Conjecture, to stimulate progress around a question that we believe should be rigorously investigated.
We emphasize that for k ≥ 3 the tropical Grassmannian is not in general covered by chirotopal tropical Grassmannians, as we have seen in Trop G(3, 8)!
Indeed, as noted in [40], there is a permutation class of maximal simplicial cones in the tropical Grassmannian Trop G(3, 8) which do not belong to any chirotopal tropical Grassmannian (for example, the cone with rays spanned by the eight vectors $e_{126}, e_{135}, e_{178}, e_{237}, e_{248}, e_{346}, e_{457}, e_{568}\in\mathbb{R}^{\binom{8}{3}}$); see the GFD in Equation (4.16). These cones are characterized by a (saturated) initial ideal which is not binomial [40, Remark 4.5].
This motivates the following important question.
Question 13.7. Is there a simple characterization of chirotopal tropical Plücker vectors in terms of properties of the Plücker ideal?
Let us summarize some evidence in support of our conjecture.
1. Using generalized color orderings, we have confirmed the classification of the realizable uniform chirotopes for n = 6, 7, 8, 9 given at https://www-imai.is.s.u-tokyo.ac.jp/~hmiyata/oriented_matroids/.
2. Using the data from (1), we have made a highly nontrivial numerical validation: for k = 3 and n = 6, 7 we find exact agreement between our combinatorial (GFD) expressions and the values obtained from the CEGM integrals for all 4 and 11 types of GCOs, respectively. For n = 8, the numerical evaluation of CEGM integrals is much more difficult given the large number of solutions to the scattering equations; nonetheless we find agreement well within the margin of error.
We also have computed the maximal cones, parametrized with generalized Feynman diagrams, or in mathematical terminology, metric tree arrangements subject to the additional requirement they must be compatible with at least one GCO. For example, the latter requirement can be seen to remove the seven-dimensional cones in the Dressian Dr 3,7 , though for a given seven-dimensional cone, its seven codimension one facets remain: each belongs to some chirotopal tropical Grassmannian.
It is of course natural to ask the even more ambitious question of whether (or to what extent) Conjecture 13.6 can be extended to larger k.
Question 13.8. Do the statements in Conjecture 13.6 hold if we replace k = 3 with k ≥ 4? Are chirotopal tropical Plücker vectors realizable in general?
A natural strategy to approach the proof of Conjecture 13.6 would be to try to generalize the method of proof used in [17] and in [16] for the so-called positive configuration space; both proofs rely on the existence of a certain surjectively positive parameterization [47] of the Grassmannian, as in [48]. Unfortunately, it does not seem obvious how to find such parameterizations for components of configuration spaces going beyond the positive configuration space to reorientation classes of other oriented uniform matroids, which suggests that some new ideas may be required.
It is also important to note that when χ is not isomorphic to the standard positive chirotope, the finest regular matroid subdivisions induced by a χ-tropical Plücker vector do not in general saturate Speyer's f-vector theorem [49], as can be seen already in $\mathrm{Trop}_\chi G(3, 6)$. For example, the collection of trees $T_F$ in Table 4 induces a matroid subdivision with only five maximal cells, one less than the maximum $\binom{6-2}{2} = 6$ for regular matroid subdivisions. This is compatible with the following observation: the vertices of the largest cell label the 16 basis elements of the graphic matroid of the complete graph $K_4$, whose presence as some face of the subdivision was shown in [49] to be the condition under which the f-vector of a regular matroid subdivision is not maximized.
In [25], we exploit the CEGM formulation and its relation to X(3, n) to construct irreducible decoupling identities. Imposing that such decoupling identities have realizations in terms of GCO is a powerful clue in the quest to finding their explicit realization in terms of some Lie algebraic structure or generalization thereof.
Figure 1. Top: Arrangement of lines corresponding to the generalized color ordering Σ = ((2345), (1345), (1245), (1235), (1234)).

Table 1. Representatives of the four types of (3, 6) color orderings. The last column denotes the number of distinct permutations.
0 | ((23456), (13456), (12456), (12356), (12346), (12345)) | 60
I | ((25436), (15436), (12456), (12356), (12346), (12543)) | 180
II | ((23465), (13465), (12456), (12356), (12634), (12534)) | 120
III | ((23645), (13465), (12456), (15326), (12634), (13524)) | 12

Figure 2. Representatives of arrangements of lines of different types for (3, 6).
There are 1005 (3, 6) generalized Feynman diagrams, and the 372 × 372 matrix of partial amplitudes $m^{(3)}_6(\Sigma_I, \Sigma_J)$ can be obtained directly by listing the generalized Feynman diagrams compatible with both orderings. Let us start with an example which leads to a single GFD in the set compatible with both orderings. Considering two color orderings of type I (see table 1),
Σ = ((23564), (13564), (12456), (12653), (12436), (12435)), Σ̃ = ((23456), (13645), (12645), (15326), (14326), (13245)), (5.5)
the only GFD compatible with both orderings is shown in figure 3. Incidentally, this GFD is not compatible with any type 0 orderings.

Figure 3. Blue cycles represent the k = 2 ordering in Σ, red cycles represent the k = 2 ordering in Σ̃. Following the CHY diagrammatic description of biadjoint amplitudes, the dual to each red cycle gives rise to the possible Feynman diagrams compatible with both orderings. In this case, there is a single Feynman diagram in each entry and it is drawn in black. Together they make up a collection of Feynman diagrams with a non-vanishing metric, i.e., a GFD.

Consider another set of two color orderings, this time of type II and III respectively,
Σ = ((23456), (13456), (12465), (12365), (12634), (12534)), Σ̃ = ((23654), (13564), (12546), (12635), (14326), (13425)). (5.7)
Table 2. The 11 types of (3, 7) color orderings. The last column denotes the number of distinct permutations.
0 | ((234567), (134567), (124567), (123567), (123467), (123457), (123456)) | 360
I | ((234567), (134567), (124567), (123567), (123476), (123475), (123465)) | 2520
II | ((234567), (134567), (124567), (123576), (123476), (123745), (123645)) | 5040
III | ((234567), (134567), (124567), (123756), (123746), (123745), (123654)) | 2520
IV | ((234567), (134567), (124576), (123756), (123746), (127345), (126354)) | 2520
V | ((234567), (134576), (124576), (123756), (123746), (154327), (145326)) | 2520
VI | ((234756), (134576), (124567), (123567), (164327), (127345), (143625)) | 2520
VII | ((234756), (134756), (124567), (123567), (127346), (127345), (125634)) | 840
VIII | ((234567), (134576), (124576), (123765), (123764), (145327), (145326)) | 1680
IX | ((234567), (134576), (124756), (123765), (127364), (145327), (143526)) | 1680
X | ((234576), (134576), (124756), (123675), (127364), (123574), (125364)) | 5040

Figure 4. Left: lines L_i, L_j, L_k bound a triangle with line L_k at the bottom. Center: line L_k moves up until the triangle becomes a point where all three lines intersect. Right: the three lines bound a triangle again, but with L_k bounding the top.
Table 4. Contribution of GFDs of different types to the amplitude. The definitions of t, R are given in eq. (4.10). The last two columns denote the numbers of flips and distinct permutations of a GFD, respectively.
GFD | Contribution R(T) to the amplitude | # of flips | # of perm.
T_A | $1/(s_{123}\,s_{456}\,t_{1236}\,t_{3456})$ | 8 | 90
T_B | $1/(R_{45,12,36}\,s_{123}\,t_{1236}\,t_{3456})$ | 8 | 180
T_C | $1/(s_{123}\,s_{345}\,t_{1236}\,t_{3456})$ | 8 | 90
T_D | $1/(R_{45,12,36}\,s_{123}\,s_{145}\,t_{1236})$ | 8 | 360
T_E | $(R_{12,45,36} + R_{45,12,36})/(R_{12,45,36}\,R_{45,12,36}\,t_{1236}\,t_{1245}\,t_{3456})$ | 12 | 15
T_F | $1/(s_{123}\,s_{145}\,s_{246}\,s_{356})$ | 8 | 30
T_G | $1/(R_{45,12,36}\,s_{123}\,s_{145}\,s_{356})$ | 8 | 240

With these 1005 GFDs at hand, one can pick out those that are compatible with a particular ordering. The four representatives of color orderings in table 1 have 48, 41, 44 and 45 compatible GFDs, respectively.
Footnotes:
• Up to a sign when n is odd.
• For higher values of n, we suspect that this has to be replaced by the requirement that the metric $d_{abc}$ associated with T defines a cone in the tropical Grassmannian Trop G(3, n).
• Note that up to (k, n) = (3, 9), the number of color orderings agrees with the number of uniform matroids over $\mathbb{F}_q$ when continued to q = −1, as defined in [28]. This is in contrast to the continuation to q = 1, which leads to the Euler characteristic of X(3, n) (see appendix A of [29]). A connection to the number of realizable oriented uniform matroids is explored in [25].
• One can resolve all quartic or higher degree vertices into purely cubic vertices first, and then degenerate the resulting arrangements of metric trees, which contain too many independent internal lengths, until they become valid GFDs.
A   31 GCOs in a (3, 6) Decoupling Set

Below is a list of 31 GCOs participating in a (3, 6) decoupling, grouped by types:
((23456), (13456), (12456), (12356), (12346), (12345)),
((23465), (13465), (12465), (12365), (12346), (12345)),
((23645), (13645), (12645), (12365), (12364), (12345)),
((25436), (15436), (12645), (12635), (12634), (12345)),
((23456), (15436), (15426), (15326), (14326), (12345)),
((25436), (15436), (12456), (12356), (12346), (12543)),
((23456), (13456), (12456), (12365), (12364), (12354)),
((23645), (13645), (12645), (12356), (12346), (12354)),
((23456), (13456), (12645), (12635), (12634), (12543)),
((23456), (13645), (12645), (15326), (14326), (13245)),
((25436), (13645), (15426), (12635), (12634), (13245)),
((23645), (15436), (15426), (12365), (12364), (13245)),
((23456), (13465), (12465), (12365), (14326), (14325)),
((25436), (13456), (15426), (15326), (14326), (12543)),
((23465), (13456), (12456), (12356), (14326), (14325)),
((25436), (15436), (12465), (12365), (12634), (12435)),
((23645), (13645), (12465), (12635), (12364), (12435)),
((23465), (13465), (12645), (12635), (12346), (12435)),
((23465), (13465), (12465), (12356), (12364), (12354)),
((23465), (15436), (15426), (15326), (12346), (14325)),
((23465), (13465), (12456), (12356), (12634), (12534)),
((23456), (13456), (12465), (12365), (12634), (12534)),
((25436), (13465), (15426), (15326), (12634), (13425)),
((23456), (13465), (12645), (12635), (14326), (13425)),
((23645), (15436), (15426), (12356), (12346), (13254)),
((23645), (13645), (12456), (12635), (12634), (12453)),
((25436), (15436), (12456), (12365), (12364), (12453)),
((23645), (13456), (12456), (15326), (14326), (13254)),
((23465), (13645), (12645), (15326), (12346), (14235)),
((23645), (13465), (12465), (15326), (12364), (14235)),
((23645), (13465), (12456), (15326), (12634), (13524)). (A.1)

The first GCO in each group is also shown in table 1, whose arrangement of lines is presented in fig. 2, which manifestly reduces to the common arrangement of five lines in fig. 1 when line 6 is removed.
B   (3, 8) Color Orderings

In this appendix, we complete the 135 types of GCOs for (3, 8) in table 6. The second column provides a representative that can be used to obtain the rest by applying permutations of labels. The last column contains the number of distinct permutations. (A few rows were not recoverable from the source.)

Table 6: All 135 Types of (3, 8) Color Orderings
Type | Color Ordering Representative | #
1 | ((2345678), (1345678), (1245678), (1238576), (1238476), (1237845), (1236845), (1236754)) | 40320
2 | ((2345867), (1345867), (1284567), (1283567), (1283476), (1234758), (1234658), (1254376)) | 40320
3 | ((2345867), (1345867), (1284567), (1283756), (1283746), (1237458), (1236548), (1254376)) | 20160
4 | ((2345678), (1345678), (1245678), (1235678), (1234867), (1234857), (1234856), (1234765)) | 40320
5 | ((2345678), (1345678), (1284567), (1283567), (1283476), (1283475), (1283465), (1276543)) | 20160
6 | ((2345678), (1345687), (1245687), (1235687), (1234876), (1234875), (1564328), (1564327)) | 40320
7 | ((2345678), (1384567), (1284567), (1765328), (1674328), (1574328), (1564328), (1324567)) | 10080
8 | ((2345678), (1345678), (1245678), (1235678), (1234876), (1234875), (1234865), (1234765)) | 10080
9 | ((2345678), (1345678), (1245678), (1235678), (1234678), (1234587), (1234586), (1234576)) | 20160
10 | ((2345678), (1345678), (1245678), (1238567), (1238476), (1238475), (1238465), (1237654)) | 40320
11 | ((2345678), (1345678), (1248567), (1238567), (1283476), (1283475), (1283465), (1276534)) | 20160
12 | ((2345678), (1345687), (1245687), (1238567), (1238476), (1238475), (1564328), (1456327)) | 40320
13 | ((2345678), (1345687), (1245687), (1238576), (1238476), (1238745), (1546328), (1456327)) | 40320
14 | ((2345678), (1345867), (1245867), (1238756), (1238746), (1547328), (1456328), (1453267)) | 40320
15 | ((2345678), (1348567), (1248567), (1238576), (1674328), (1547328), (1546328), (1432567)) | 40320
16 | ((2345678), (1348567), (1248567), (1238756), (1647328), (1547328), (1456328), (1432567)) | 20160
17 | ((2345687), (1345687), (1284567), (1283567), (1283476), (1283475), (1234658), (1265437)) | 40320
18 | ((2345687), (1348576), (1248576), (1237865), (1467328), (1453278), (1485326), (1473256)) | 40320
19 | ((2345867), (1345867), (1284567), (1283576), (1283476), (1237458), (1236458), (1254376)) | 40320
20 | ((2345867), (1384567), (1284567), (1765328), (1674328), (1234758), (1234658), (1542376)) | 40320
21 | ((2348567), (1765438), (1765428), (1675328), (1234876), (1237485), (1236485), (1432576)) | 40320
22 | ((2384567), (1384567), (1284567), (1235678), (1234786), (1234785), (1234658), (1237564)) | 40320
23 | ((2765438), (1765438), (1245687), (1235876), (1234876), (1238745), (1283645), (1254637)) | 40320
24 | ((2765438), (1765438), (1245867), (1235876), (1234876), (1283745), (1283645), (1254367)) | 40320
25 | ((2345678), (1345678), (1245678), (1235687), (1234876), (1234875), (1238465), (1237465)) | 40320
29 | ((2345678), (1345687), (1284567), (1283567), (1283476), (1283475), (1564328), (1345627)) | 40320
30 | ((2345678), (1345867), (1284567), (1283567), (1283476), (1574328), (1564328), (1345267)) | 40320
31 | ((2345678), (1348567), (1284567), (1283567), (1674328), (1574328), (1564328), (1342567)) | 40320
32 | ((2345687), (1345687), (1245687), (1235867), (1234786), (1238475), (1234685), (1236475)) | 40320
33 | ((2345687), (1345687), (1245867), (1235867), (1234786), (1283475), (1234685), (1263475)) | 20160
34 | ((2345867), (1345867), (1245678), (1235678), (1234768), (1283475), (1283465), (1267345)) | 40320
35 | ((2345867), (1345867), (1245867), (1235678), (1234768), (1238475), (1238465), (1236745)) | 40320
36 | ((2348567), (1384567), (1284567), (1765328), (1234876), (1234875), (1234865), (1423567)) | 40320
37 | ((2384567), (1384567), (1245687), (1283576), (1283476), (1283745), (1238645), (1245637)) | 40320
38 | ((2384567), (1384567), (1284567), (1235678), (1234876), (1234785), (1234685), (1235764)) | 40320
39 | ((2345678), (1345678), (1245678), (1235876), (1234876), (1237845), (1236845), (1236745)) | 20160
40 | ((2348567), (1345678), (1245678), (1235678), (1674328), (1574328), (1564328), (1432765)) | 40320
41 | ((2345678), (1345678), (1245678), (1235678), (1234687), (1234587), (1234856), (1234756)) | 40320
42 | ((2345678), (1345678), (1245687), (1235687), (1234687), (1234587), (1238456), (1237456)) | 20160
43 | ((2345678), (1345678), (1245687), (1238567), (1238476), (1238475), (1283465), (1273654)) | 40320
44 | ((2345678), (1345678), (1245687), (1238576), (1238476), (1238745), (1283645), (1273654)) | 40320
45 | ((2345678), (1345678), (1245867), (1238567), (1238476), (1283475), (1283465), (1276354)) | 40320
46 | ((2345678), (1345867), (1245876), (1238756), (1238746), (1543728), (1453628), (1453267)) | 40320
47 | ((2345687), (1345876), (1248576), (1237865), (1283764), (1453278), (1485326), (1473526)) | 40320
48 | ((2345687), (1384567), (1284567), (1765328), (1674328), (1574328), (1234658), (1654237)) | 20160
49 | ((2345867), (1348567), (1284567), (1283567), (1674328), (1234758), (1234658), (1524376)) | 40320
50 | ((2345867), (1348567), (1284567), (1283756), (1647328), (1237458), (1236548), (1524376)) | 20160
51 | ((2345867), (1384567), (1284567), (1675328), (1674328), (1237458), (1236458), (1542376)) | 20160
52 | ((2348567), (1348567), (1245687), (1237568), (1283746), (1283745), (1238654), (1256374)) | 40320
53 | ((2348567), (1384567), (1284567), (1765328), (1234786), (1234785), (1234658), (1423756)) | 40320
54 | ((2384567), (1765438), (1765428), (1235786), (1234876), (1237845), (1236485), (1325746)) | 40320
55 | ((2765438), (1765438), (1245678), (1235768), (1234876), (1237485), (1236485), (1257643)) | 40320
56 | ((2765438), (1765438), (1245678), (1235786), (1234876), (1237845), (1236485), (1257463)) | 40320
57 | ((2345678), (1345678), (1245678), (1235687), (1234867), (1234857), (1238456), (1237465)) | 40320
58 | ((2345678), (1345678), (1245678), (1235867), (1234876), (1238475), (1238465), (1237645)) | 40320
59 | ((2345678), (1345678), (1245687), (1235687), (1234867), (1234857), (1283456), (1273465)) | 40320
60 | ((2345678), (1345678), (1245687), (1235876), (1234876), (1238745), (1283645), (1273645)) | 40320
61 | ((2345678), (1345678), (1245867), (1235867), (1234876), (1283475), (1283465), (1276345)) | 40320
62 | ((2345678), (1345678), (1245867), (1235876), (1234876), (1283745), (1283645), (1276345)) | 40320
63 | ((2345678), (1345867), (1245867), (1238576), (1238476), (1547328), (1546328), (1453267)) | 40320
64 | ((2345678), (1345867), (1284567), (1283576), (1283476), (1547328), (1546328), (1345267)) | 40320
65 | ((2345678), (1348567), (1284567), (1283576), (1674328), (1547328), (1546328), (1342567)) | 20160
66 | ((2345867), (1345867), (1248567), (1238576), (1283476), (1237458), (1236458), (1253476)) | 40320
67 | ((2345867), (1348567), (1248567), (1238576), (1674328), (1237458), (1236458), (1523476)) | 20160
68 | ((2348567), (1384567), (1284567), (1765328), (1234876), (1234785), (1234685), (1423576)) | 40320
69 | ((2384567), (1345687), (1245687), (1675328), (1674328), (1547328), (1238645), (1456237)) | 40320
70 | ((2384567), (1345687), (1245687), (1765328), (1674328), (1574328), (1238465), (1456237)) | 40320
71 | ((2348567), (1348567), (1245678), (1235678), (1283476), (1283475), (1283465), (1256734)) | 20160
72 | ((2384567), (1345678), (1245678), (1675328), (1674328), (1547328), (1546328), (1327654)) | 20160
73 | ((2384567), (1384567), (1245867), (1283576), (1283476), (1238745), (1238645), (1245367)) | 20160
74 | ((2345867), (1345867), (1245678), (1235678), (1234678), (1283457), (1283456), (1267345)) | 20160
75 | ((2345678), (1345687), (1245867), (1238567), (1238476), (1283475), (1564328), (1453627)) | 20160
76 | ((2345678), (1345687), (1245867), (1238756), (1238746), (1283745), (1456328), (1453627)) | 40320
77 | ((2345678), (1345687), (1248567), (1238576), (1283476), (1283745), (1546328), (1435627)) | 40320
78 | ((2345678), (1345867), (1248567), (1238576), (1283476), (1547328), (1546328), (1435267)) | 40320
79 | ((2345678), (1345867), (1248567), (1238756), (1283746), (1547328), (1456328), (1435267)) | 40320
80 | ((2345687), (1345867), (1245867), (1237856), (1238746), (1547328), (1236584), (1475326)) | 40320
81 | ((2345687), (1345867), (1284567), (1283567), (1283476), (1574328), (1234658), (1625437)) | 40320
82 | ((2345687), (1348567), (1284567), (1283567), (1674328), (1574328), (1234658), (1652437)) | 40320
83 | ((2345867), (1348567), (1284567), (1283576), (1674328), (1237458), (1236458), (1524376)) | 40320
84 | ((2348567), (1348576), (1245768), (1237568), (1283746), (1543827), (1453826), (1257634)) | 20160
85 | ((2345678), (1345678), (1245687), (1235867), (1234867), (1238457), (1283456), (1273645)) | 20160
86 | ((2345678), (1345678), (1245867), (1238576), (1238476), (1283745), (1283645), (1276354)) | 20160
87 | ((2345678), (1345687), (1245687), (1235867), (1234876), (1238475), (1564328), (1546327)) | 40320
88 | ((2345678), (1345687), (1245867), (1235867), (1234876), (1283475), (1564328), (1543627)) | 40320
89 | ((2345687), (1345687), (1248567), (1238576), (1283476), (1283745), (1236458), (1265347)) | 40320
90 | ((2345687), (1348567), (1248567), (1238576), (1674328), (1547328), (1236458), (1652347)) | 40320
91 | ((2345867), (1345867), (1245687), (1235678), (1234768), (1283475), (1238465), (1263745)) | 40320
92 | ((2348567), (1348567), (1245678), (1235768), (1283476), (1283745), (1283645), (1256734)) | 40320
93 | ((2348567), (1348567), (1245687), (1235768), (1283476), (1283745), (1238645), (1256374)) | 40320
94 | ((2348567), (1348567), (1245867), (1235786), (1283476), (1237485), (1236845), (1253746)) | 40320
95 | ((2384567), (1345867), (1245867), (1675328), (1674328), (1238745), (1238645), (1452367)) | 20160
96 | ((2345687), (1345867), (1284567), (1283576), (1283476), (1547328), (1236458), (1625437)) | 40320
97 | ((2345687), (1348567), (1284567), (1283576), (1674328), (1547328), (1236458), (1652437)) | 20160
98 | ((2348567), (1345867), (1245678), (1237568), (1647328), (1283745), (1283654), (1437625)) | 40320
99 | ((2348567), (1345867), (1245687), (1237568), (1647328), (1283745), (1238654), (1473625)) | 40320
100 | ((2348567), (1384567), (1284576), (1657328), (1237468), (1273458), (1263548), (1423765)) | 5040
101 | ((2345678), (1345678), (1245687), (1235867), (1234876), (1238475), (1283465), (1273645)) | 40320
102 | ((2345678), (1345687), (1245867), (1238576), (1238476), (1283745), (1546328), (1453627)) | 40320
103 | ((2384567), (1345687), (1245678), (1765328), (1674328), (1574328), (1283465), (1372654)) | 40320
104 | ((2384567), (1345867), (1245678), (1765328), (1674328), (1283475), (1283465), (1376254)) | 40320
105 | ((2384567), (1345867), (1245687), (1675328), (1674328), (1283745), (1238645), (1452637)) |
106 | ((2384567), (1345867), (1245687), (1765328), (1674328), (1283475), (1238465), (1452637)) | 20160
107 | ((2384567), (1348567), (1245678), (1765328), (1283476), (1283475), (1283465), (1376524)) | 20160
108 | ((2384567), (1348567), (1245687), (1675328), (1283476), (1283745), (1238645), (1425637)) | 40320
109 | ((2384567), (1348567), (1245867), (1675328), (1283476), (1237845), (1236845), (1425376)) | 40320
110 | ((2345687), (1348567), (1248567), (1238756), (1647328), (1547328), (1236548), (1652347)) | 10080
111 | ((2348567), (1345687), (1245678), (1235678), (1674328), (1574328), (1283465), (1437265)) | 40320
112 | ((2348567), (1345687), (1245687), (1235678), (1674328), (1574328), (1238465), (1473265)) | 40320
113 | ((2348567), (1345867), (1245678), (1235678), (1674328), (1283475), (1283465), (1437625)) | 40320
114 | ((2348567), (1345867), (1245867), (1235768), (1674328), (1238745), (1238645), (1476325)) | 20160
115 | ((2348567), (1348567), (1245867), (1235768), (1283476), (1238745), (1238645), (1253674)) | 40320
116 | ((2345867), (1345687), (1245678), (1235678), (1234678), (1754328), (1283456), (1543726)) | 20160
117 | ((2345867), (1345687), (1245687), (1235678), (1234678), (1754328), (1238456), (1547326)) | 20160
118 | ((2345867), (1345867), (1245687), (1235678), (1234678), (1283457), (1238456), (1263745)) | 20160
119 | ((2348567), (1345867), (1245876), (1237568), (1647328), (1273845), (1263854), (1467325)) | 20160
120 | ((2345678), (1345867), (1248576), (1238756), (1283746), (1543728), (1453628), (1435267)) | 20160
121 | ((2345687), (1345867), (1248567), (1238576), (1283476), (1547328), (1236458), (1625347)) | 40320
122 | ((2348567), (1345867), (1245678), (1235768), (1674328), (1283745), (1283645), (1437625)) | 40320
123 | ((2348567), (1345867), (1245687), (1235768), (1674328), (1283745), (1238645), (1473625)) | 40320
124 | ((2348567), (1345876), (1245768), (1237568), (1647328), (1543827), (1453826), (1436725)) | 40320
125 | ((2348567), (1348567), (1245786), (1237568), (1283746), (1273845), (1268354), (1257364)) | 20160
126 | ((2384567), (1345687), (1245678), (1675328), (1674328), (1547328), (1283645), (1372654)) | 40320
127 | ((2384567), (1345867), (1245678), (1675328), (1674328), (1283745), (1283645), (1376254)) | 20160
128 | ((2384567), (1348567), (1245867), (1675328), (1283476), (1238745), (1238645), (1425367)) | 40320
129 | ((2348567), (1345867), (1245786), (1237568), (1647328), (1273845), (1268354), (1463725)) | 20160
130 | ((2345687), (1345867), (1248567), (1238756), (1283746), (1547328), (1236548), (1625347)) | 20160
131 | ((2348567), (1345867), (1245687), (1235678), (1674328), (1283475), (1238465), (1473625)) | 20160
132 | ((2348567), (1345867), (1245687), (1235678), (1764328), (1283457), (1238456), (1473625)) | 2880
133 | ((2348567), (1345876), (1245786), (1237568), (1647328), (1548327), (1453826), (1463725)) | 20160
134 | ((2345687), (1345867), (1248576), (1238756), (1283746), (1543728), (1263548), (1625347)) | 10080
| []
|
[
"A Class of Parallel Doubly Stochastic Algorithms for Large-Scale Learning",
"A Class of Parallel Doubly Stochastic Algorithms for Large-Scale Learning"
]
| [
"Aryan Mokhtari [email protected] \nDepartment of Electrical and Systems Engineering\nUniversity of Pennsylvania Philadelphia\n19104PAUSA\n",
"Alec Koppel [email protected] \nDepartment of Electrical and Systems Engineering\nUniversity of Pennsylvania Philadelphia\n19104PAUSA\n",
"Alejandro Ribeiro [email protected] \nDepartment of Electrical and Systems Engineering\nUniversity of Pennsylvania Philadelphia\n19104PAUSA\n"
]
| [
"Department of Electrical and Systems Engineering\nUniversity of Pennsylvania Philadelphia\n19104PAUSA",
"Department of Electrical and Systems Engineering\nUniversity of Pennsylvania Philadelphia\n19104PAUSA",
"Department of Electrical and Systems Engineering\nUniversity of Pennsylvania Philadelphia\n19104PAUSA"
]
| []
| We consider learning problems over training sets in which both, the number of training examples and the dimension of the feature vectors, are large. To solve these problems we propose the random parallel stochastic algorithm (RAPSA). We call the algorithm random parallel because it utilizes multiple parallel processors to operate on a randomly chosen subset of blocks of the feature vector. We call the algorithm stochastic because processors choose training subsets uniformly at random. Algorithms that are parallel in either of these dimensions exist, but RAPSA is the first attempt at a methodology that is parallel in both the selection of blocks and the selection of elements of the training set. In RAPSA, processors utilize the randomly chosen functions to compute the stochastic gradient component associated with a randomly chosen block. The technical contribution of this paper is to show that this minimally coordinated algorithm converges to the optimal classifier when the training objective is convex. Moreover, we present an accelerated version of RAPSA (ARAPSA) that incorporates the objective function curvature information by premultiplying the descent direction by a Hessian approximation matrix. We further extend the results for asynchronous settings and show that if the processors perform their updates without any coordination the algorithms are still convergent to the optimal argument. RAPSA and its extensions are then numerically evaluated on a linear estimation problem and a binary image classification task using the MNIST handwritten digit dataset. | null | [
"https://arxiv.org/pdf/1606.04991v1.pdf"
]
| 14,404,559 | 1606.04991 | 7e21556941de87a50bd89a3f29840e6659f561cb |
A Class of Parallel Doubly Stochastic Algorithms for Large-Scale Learning
Aryan Mokhtari [email protected]
Department of Electrical and Systems Engineering
University of Pennsylvania Philadelphia
19104PAUSA
Alec Koppel [email protected]
Department of Electrical and Systems Engineering
University of Pennsylvania Philadelphia
19104PAUSA
Alejandro Ribeiro [email protected]
Department of Electrical and Systems Engineering
University of Pennsylvania Philadelphia
19104PAUSA
A Class of Parallel Doubly Stochastic Algorithms for Large-Scale Learning
Editor:
We consider learning problems over training sets in which both, the number of training examples and the dimension of the feature vectors, are large. To solve these problems we propose the random parallel stochastic algorithm (RAPSA). We call the algorithm random parallel because it utilizes multiple parallel processors to operate on a randomly chosen subset of blocks of the feature vector. We call the algorithm stochastic because processors choose training subsets uniformly at random. Algorithms that are parallel in either of these dimensions exist, but RAPSA is the first attempt at a methodology that is parallel in both the selection of blocks and the selection of elements of the training set. In RAPSA, processors utilize the randomly chosen functions to compute the stochastic gradient component associated with a randomly chosen block. The technical contribution of this paper is to show that this minimally coordinated algorithm converges to the optimal classifier when the training objective is convex. Moreover, we present an accelerated version of RAPSA (ARAPSA) that incorporates the objective function curvature information by premultiplying the descent direction by a Hessian approximation matrix. We further extend the results for asynchronous settings and show that if the processors perform their updates without any coordination the algorithms are still convergent to the optimal argument. RAPSA and its extensions are then numerically evaluated on a linear estimation problem and a binary image classification task using the MNIST handwritten digit dataset.
Introduction
Learning is often formulated as an optimization problem that finds a vector of parameters x^* ∈ R^p that minimizes the average of a loss function across the elements of a training set. For a precise definition consider a training set with N elements and let f_n : R^p → R be a convex loss function associated with the n-th element of the training set. The optimal parameter vector x^* ∈ R^p is defined as the minimizer of the average cost F(x) := (1/N) Σ_{n=1}^N f_n(x),

x^* := argmin_{x∈R^p} F(x) := argmin_{x∈R^p} (1/N) Σ_{n=1}^N f_n(x). (1)
Problems such as support vector machine classification, logistic and linear regression, and matrix completion can be put in the form of problem (1). In this paper, we are interested in large scale problems where both the number of features p and the number of elements N in the training set are very large, which arise, e.g., in text (Sampson et al., 1990), image (Mairal et al., 2010), and genomic (Taşan et al., 2014) processing.

Figure 1: RAPSA is shown here to converge to the optimal argument x^* of (1).
When N and p are large, the parallel processing architecture in Figure 1 becomes of interest. In this architecture, the parameter vector x is divided into B blocks, each of which contains p_b ≪ p features, and a set of I ≪ B processors work in parallel on randomly chosen parameter blocks while using a stochastic subset of elements of the training set. In the schematic shown, Processor 1 fetches functions f_1 and f_n to operate on block x_b, and Processor i fetches functions f_{n′} and f_{n″} to operate on block x_{b′}. Other processors select other elements of the training set and other blocks, with the majority of blocks remaining unchanged and the majority of functions remaining unused. The blocks chosen for update and the functions fetched for determination of block updates are selected independently at random in subsequent slots.
Problems that operate on blocks of the parameter vectors or subsets of the training set, but not on both blocks and subsets, exist. Block coordinate descent (BCD) is the generic name for methods in which the variable space is divided into blocks that are processed separately. Early versions operate by cyclically updating all coordinates at each step (Luo and Tseng, 1992; Tseng, 2001; Xu and Yin, 2014), while more recent parallelized versions of coordinate descent have been developed to accelerate convergence of BCD (Richtárik and Takáč, 2015; Lu and Xiao, 2013; Nesterov, 2012; Beck and Tetruashvili, 2013). Closer to the architecture in Figure 1, methods in which subsets of blocks are selected at random have also been proposed (Liu et al., 2015; Yang et al., 2013; Nesterov, 2012; Lu and Xiao, 2015). BCD, whether serial, parallel, or random, can handle cases where the parameter dimension p is large but requires access to all N training samples at each iteration.
Parallel implementations of block coordinate methods have been developed initially in this setting for composite optimization problems (Richtárik and Takáč, 2015). A collection of parallel processors update randomly selected blocks concurrently at each step. Several variants that select blocks in order to maximize the descent at each step are proposed in (Scherrer et al., 2012; Facchinei et al., 2015; Shalev-Shwartz and Zhang, 2013). The aforementioned works require that parallel processors operate on a common time index. In contrast, asynchronous parallel methods, originally proposed in Bertsekas and Tsitsiklis (1989), have been developed to solve optimization problems where processors are not required to operate with a common global clock. This work focused on solving a fixed point problem over a separable convex set, but under assumptions more restrictive than standard convexity. For a standard strongly convex optimization problem, in contrast, Liu et al. (2015) establish linear convergence to the optimum. All of these works are developed for optimization problems with deterministic objectives.
To handle the case where the number of training examples N is very large, methods have been developed to only process a subset of sample points at a time. These methods are known by the generic name of stochastic approximation and rely on the use of stochastic gradients. In plain stochastic gradient descent (SGD), the gradient of the aggregate function is estimated by the gradient of a randomly chosen function f_n (Robbins and Monro, 1951). Since convergence of SGD is slow more often than not, various recent developments have been aimed at accelerating its convergence. These attempts include methodologies to reduce the variance of stochastic gradients (Schmidt et al., 2013; Johnson and Zhang, 2013; Defazio et al., 2014) and the use of ideas from quasi-Newton optimization to handle difficult curvature profiles (Schraudolph et al., 2007; Bordes et al., 2009; Ribeiro, 2014, 2015). More pertinent to the work considered here are the use of cyclic block SGD updates (Xu and Yin, 2015) and the exploitation of sparsity properties of feature vectors to allow for parallel updates (Recht et al., 2011). These methods are suitable when the number of elements in the training set N is large but do not allow for parallel feature processing unless parallelism is inherent to the problem's structure.
The random parallel stochastic algorithm (RAPSA) proposed in this paper represents the first effort at implementing the architecture in Fig. 1 that randomizes over both parameters and sample functions, and may be implemented in parallel. In RAPSA, the functions fetched by a processor are used to compute the stochastic gradient component associated with a randomly chosen block (Section 2). The processors do not coordinate in either choice except to avoid selection of the same block. Our main technical contribution is to show that RAPSA iterates converge to the optimal classifier x * when using a sequence of decreasing stepsizes and to a neighborhood of the optimal classifier when using constant stepsizes (Section 5). In the latter case, we further show that the rate of convergence to this optimality neighborhood is linear in expectation. These results are interesting because only a subset of features are updated per iteration and the functions used to update different blocks are, in general, different. We propose two extensions of RAPSA. Firstly, motivated by the improved performance results of quasi-Newton methods relative to gradient methods in online optimization, we propose an extension of RAPSA which incorporates approximate second-order information of the objective, called Accelerated RAPSA. We also consider an extension of RAPSA in which parallel processors are not required to operate on a common time index, which we call Asynchronous RAPSA. We further show how these extensions yield an accelerated doubly stochastic algorithm for an asynchronous system. We establish that the performance guarantees of RAPSA carry through to asynchronous computing architectures. We then numerically evaluate the proposed methods on a large-scale linear regression problem as well as the MNIST digit recognition problem (Section 6).
Random Parallel Stochastic Algorithm (RAPSA)
We consider a more general formulation of (1) in which the number N of functions f n is not necessarily finite. Introduce then a random variable θ ∈ Θ ⊂ R q that determines the choice of the random smooth convex function f (·, θ) : R p → R. We consider the problem of minimizing the expectation of the random functions F (x) := E θ [f (x, θ)],
x^* := argmin_{x∈R^p} F(x) := argmin_{x∈R^p} E_θ[f(x, θ)]. (2)
Problem (1) is a particular case of (2) in which each of the functions f_n is drawn with probability 1/N. Observe that when θ = (z, y) with feature vector z ∈ R^p and target variable y ∈ R^q or y ∈ {0, 1}, the formulation in (2) encapsulates generic supervised learning problems such as regression or classification, respectively. We refer to f(·, θ) as instantaneous functions and to F(x) as the average function.

Algorithm 1 Random Parallel Stochastic Algorithm (RAPSA)
1: for t = 0, 1, 2, . . . do
2:   loop in parallel, processors i = 1, . . . , I execute:
3:     Select block b_i^t ∈ {1, . . . , B} uniformly at random from set of blocks
4:     Choose training subset Θ_i^t for block x_b
5:     Compute stochastic gradient: ∇_{x_b} f(x^t, Θ_i^t) = (1/L) Σ_{θ∈Θ_i^t} ∇_{x_b} f(x^t, θ), b = b_i^t [cf. (3)]
6:     Update the coordinates b_i^t of the decision variable: x_b^{t+1} = x_b^t − γ^t ∇_{x_b} f(x^t, Θ_i^t)
7:   end loop; Transmit updated blocks i ∈ I^t ⊂ {1, . . . , B} to shared memory
8: end for

RAPSA utilizes I processors to update a random subset of blocks of the variable x, with each of the blocks relying on a subset of randomly and independently chosen elements of the training set; see Figure 1. Formally, decompose the variable x into B blocks to write x = [x_1; . . . ; x_B], where block b has length p_b so that we have x_b ∈ R^{p_b}. At iteration t, processor i selects a random index b_i^t for updating and a random subset Θ_i^t of L instantaneous functions. It then uses these instantaneous functions to determine stochastic gradient components for the subset of variables x_b = x_{b_i^t} as an average of the components of the gradients of the functions f(x^t, θ) for θ ∈ Θ_i^t,
∇_{x_b} f(x^t, Θ_i^t) = (1/L) Σ_{θ∈Θ_i^t} ∇_{x_b} f(x^t, θ), b = b_i^t. (3)
Note that L can be interpreted as the mini-batch size for the gradient approximation. The stochastic gradient block in (3) is then modulated by a possibly time-varying stepsize γ^t and used by processor i to update the block
x_b = x_{b_i^t},

x_b^{t+1} = x_b^t − γ^t ∇_{x_b} f(x^t, Θ_i^t), b = b_i^t. (4)
RAPSA is defined by the joint implementation of (3) and (4) across all I processors, and is summarized in Algorithm 1. We would like to emphasize that the number of updated blocks, which equals the number of processors I, is not necessarily equal to the total number of blocks B. In other words, we may update only a fraction I/B < 1 of the coordinates at each iteration. We define r := I/B as the ratio of updated blocks to the total number of blocks, which is smaller than 1. The selection of blocks is coordinated so that no processors operate on the same block. The selection of elements of the training set is uncoordinated across processors. The fact that at any point in time a random subset of blocks is being updated utilizing a random subset of elements of the training set means that RAPSA requires almost no coordination between processors. The contribution of this paper is to show that this very lean algorithm converges to the optimal argument x^*, as shown in Section 5.
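To make the updates (3)-(4) concrete, the following minimal Python sketch simulates RAPSA on a synthetic least-squares problem. The problem sizes, the quadratic loss, and all variable names are illustrative assumptions rather than part of the algorithm specification.

```python
import numpy as np

# Minimal RAPSA sketch [cf. Algorithm 1 and (3)-(4)] on a synthetic
# least-squares problem. Sizes, loss, and names are illustrative assumptions.
rng = np.random.default_rng(0)
N, p, B, I, L = 1000, 64, 16, 4, 10   # samples, dimension, blocks, processors, mini-batch
H = rng.standard_normal((N, p))
x_true = rng.standard_normal(p)
z = H @ x_true + 0.1 * rng.standard_normal(N)
blocks = np.split(np.arange(p), B)    # requires p divisible by B

x, gamma = np.zeros(p), 1e-2
for t in range(2000):
    updates = []
    # I processors draw distinct blocks and independent mini-batches; every
    # processor reads the same iterate x^t, so the writes below commute.
    for b in rng.choice(B, size=I, replace=False):
        idx = blocks[b]
        batch = rng.choice(N, size=L, replace=False)    # mini-batch Theta_i^t
        resid = H[batch] @ x - z[batch]
        updates.append((idx, 2.0 * H[batch][:, idx].T @ resid / L))  # [cf. (3)]
    for idx, g in updates:
        x[idx] -= gamma * g                             # block update [cf. (4)]
print(np.mean((H @ x - z) ** 2))      # approaches the least-squares optimum
```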
Accelerated Random Parallel Stochastic Algorithm (ARAPSA)
As we mentioned in Section 2, RAPSA operates on first-order information, which may lead to slow convergence in ill-conditioned problems. We introduce Accelerated RAPSA (ARAPSA) as a parallel doubly stochastic algorithm that incorporates second-order information of the objective by separately approximating the function curvature for each block. We do this by implementing the oLBFGS algorithm for different blocks of the variable x. For related approaches, see, for instance, Broyden et al. (1973); Byrd et al. (1987); Dennis and Moré (1974); Li and Fukushima (2001). Define B̂_b^t as an approximation for the Hessian inverse of the objective function that corresponds to the block b with the corresponding variable x_b. If we consider b_i^t as the block that processor i chooses at step t,
Algorithm 2 Computation of the ARAPSA step d̂_b^t = B̂_b^t ∇_{x_b} f(x^t, Θ_i^t) for block x_b
1: function d̂_b^t = q^τ = ARAPSA Step(B̂_b^{t,0}, p^0 = ∇_{x_b} f(x^t, Θ_i^t), {v_b^u, r̂_b^u}_{u=t−τ}^{t−1})
2: for u = 0, 1, . . . , τ − 1 do {Loop to compute constants α^u and sequence p^u}
3:   Compute and store scalar α^u = ρ̂_b^{t−u−1} (v_b^{t−u−1})^T p^u
4:   Update sequence vector p^{u+1} = p^u − α^u r̂_b^{t−u−1}
5: end for
6: Multiply p^τ by initial matrix: q^0 = B̂_b^{t,0} p^τ
7: for u = 0, 1, . . . , τ − 1 do {Loop to compute constants β^u and sequence q^u}
8:   Compute scalar β^u = ρ̂_b^{t−τ+u} (r̂_b^{t−τ+u})^T q^u
9:   Update sequence vector q^{u+1} = q^u + (α^{τ−u−1} − β^u) v_b^{t−τ+u}
10: end for {return d̂_b^t = q^τ}
then the update of ARAPSA is defined as multiplication of the descent direction of RAPSA by B̂_b^t, i.e.,

x_b^{t+1} = x_b^t − γ^t B̂_b^t ∇_{x_b} f(x^t, Θ_i^t), b = b_i^t. (5)

Subsequently, we define the descent direction d̂_b^t := B̂_b^t ∇_{x_b} f(x^t, Θ_i^t). We next detail how to properly specify the block approximate Hessian B̂_b^t so that it behaves in a manner comparable to the true Hessian. To do so, define for each block coordinate x_b at step t the variable variation v_b^t and the stochastic gradient variation r̂_b^t as

v_b^t = x_b^{t+1} − x_b^t,   r̂_b^t = ∇_{x_b} f(x^{t+1}, Θ_i^t) − ∇_{x_b} f(x^t, Θ_i^t). (6)
Observe that the stochastic gradient variation r̂_b^t is defined as the difference of stochastic gradients at times t + 1 and t corresponding to the block x_b for a common set of realizations Θ_i^t. The term ∇_{x_b} f(x^t, Θ_i^t) is the same as the stochastic gradient used at time t in (5), while ∇_{x_b} f(x^{t+1}, Θ_i^t) is computed only to determine the stochastic gradient variation r̂_b^t. An alternative and perhaps more natural definition for the stochastic gradient variation is ∇_{x_b} f(x^{t+1}, Θ_i^{t+1}) − ∇_{x_b} f(x^t, Θ_i^t). However, as pointed out in Schraudolph et al. (2007), this formulation is insufficient for establishing the convergence of stochastic quasi-Newton methods. We proceed to develop a block-coordinate quasi-Newton method by first noting an important property of the true Hessian, and design our approximate scheme to satisfy this property. The secant condition may be interpreted as stating that the stochastic gradient of a quadratic approximation of the objective function evaluated at the next iteration agrees with the stochastic gradient at the current iteration. We select a Hessian inverse approximation matrix associated with block x_b such that it satisfies the secant condition B̂_b^{t+1} r̂_b^t = v_b^t, and thus behaves in a comparable manner to the true block Hessian. The oLBFGS Hessian inverse update rule maintains the secant condition at each iteration by using information of the last τ ≥ 1 pairs of variable and stochastic gradient variations {v_b^u, r̂_b^u}_{u=t−τ}^{t−1}. To state the update rule of oLBFGS for revising the Hessian inverse approximation matrices of the blocks, define a matrix B̂_b^{t,0} := η_b^t I for each block b and t, where the constant η_b^t for t > 0 is given by
η_b^t := (v_b^{t−1})^T r̂_b^{t−1} / ‖r̂_b^{t−1}‖², (7)
while the initial value is η_b^0 = 1. The matrix B̂_b^{t,0} is the initial approximation for the Hessian inverse associated with block x_b. The approximate matrix B̂_b^t is computed by updating the initial matrix B̂_b^{t,0} using the last τ pairs of curvature information {v_b^u, r̂_b^u}_{u=t−τ}^{t−1}. We define the approximate Hessian inverse B̂_b^t = B̂_b^{t,τ} corresponding to block x_b at step t as the outcome of τ recursive applications of the update

B̂_b^{t,u+1} = (Ẑ_b^{t−τ+u})^T B̂_b^{t,u} (Ẑ_b^{t−τ+u}) + ρ̂_b^{t−τ+u} (v_b^{t−τ+u}) (v_b^{t−τ+u})^T, (8)

where the scalars ρ̂_b^{t−τ+u} and the matrices Ẑ_b^{t−τ+u} for u = 0, . . . , τ − 1 are defined as

ρ̂_b^{t−τ+u} = 1 / ((v_b^{t−τ+u})^T r̂_b^{t−τ+u})   and   Ẑ_b^{t−τ+u} = I − ρ̂_b^{t−τ+u} r̂_b^{t−τ+u} (v_b^{t−τ+u})^T. (9)

Algorithm 3 Accelerated RAPSA (ARAPSA)
1: for t = 0, 1, 2, . . . do
2:   loop in parallel, processors i = 1, . . . , I execute:
3:     Select block b_i^t uniformly at random from set of blocks {1, . . . , B}
4:     Choose a set of realizations Θ_i^t for the block x_b
5:     Compute stochastic gradient: ∇_{x_b} f(x^t, Θ_i^t) = (1/L) Σ_{θ∈Θ_i^t} ∇_{x_b} f(x^t, θ) [cf. (3)]
6:     Compute the initial Hessian inverse approximation: B̂_b^{t,0} = η_b^t I
7:     Compute descent direction: d̂_b^t = ARAPSA Step(B̂_b^{t,0}, ∇_{x_b} f(x^t, Θ_i^t), {v_b^u, r̂_b^u}_{u=t−τ}^{t−1})
8:     Update the coordinates of the decision variable: x_b^{t+1} = x_b^t − γ^t d̂_b^t
9:     Compute updated stochastic gradient: ∇_{x_b} f(x^{t+1}, Θ_i^t) = (1/L) Σ_{θ∈Θ_i^t} ∇_{x_b} f(x^{t+1}, θ) [cf. (3)]
10:    Update variations v_b^t = x_b^{t+1} − x_b^t and r̂_b^t = ∇_{x_b} f(x^{t+1}, Θ_i^t) − ∇_{x_b} f(x^t, Θ_i^t) [cf. (6)]
11:  end loop
12: end for
The block-wise oLBFGS update defined by (6)-(9) is summarized in Algorithm 2. The computation cost of B̂_b^t in (8) is in the order of O(p_b²); however, for the update in (5) only the descent direction d̂_b^t := B̂_b^t ∇_{x_b} f(x^t, Θ_i^t) is required. Liu and Nocedal (1989) introduce an efficient implementation of the product B̂_b^t ∇_{x_b} f(x^t, Θ_i^t) that requires computation complexity of order O(τ p_b). We use the same idea for computing the descent direction of ARAPSA for each block; more details are provided below. Therefore, the computation complexity of updating each block for ARAPSA is in the order of O(τ p_b), while RAPSA requires O(p_b) operations. On the other hand, ARAPSA accelerates the convergence of RAPSA by incorporating the second order information of the objective function for the block updates, as may be observed in the numerical analyses provided in Section 6.
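A compact Python rendition of this two-loop recursion (cf. Algorithm 2) is sketched below. The function name, the list-based curvature memory, and the toy call at the end are our own illustrative choices.

```python
import numpy as np

def arapsa_step(grad_b, mem, eta):
    """Two-loop oLBFGS recursion [cf. Algorithm 2] for one block.

    grad_b: stochastic gradient of the block; mem: list of the last tau
    curvature pairs (v, r_hat), oldest first; eta: scalar of B^{t,0} = eta*I.
    Returns d = B_hat @ grad_b using O(tau * p_b) operations.
    """
    p_vec = grad_b.copy()
    coeffs = []                                  # (alpha, rho), newest pair first
    for v, r in reversed(mem):
        rho = 1.0 / (v @ r)
        alpha = rho * (v @ p_vec)
        coeffs.append((alpha, rho))
        p_vec = p_vec - alpha * r
    q = eta * p_vec                              # multiply by initial matrix B^{t,0}
    for (alpha, rho), (v, r) in zip(reversed(coeffs), mem):  # oldest pair first
        beta = rho * (r @ q)
        q = q + (alpha - beta) * v
    return q

# Toy call with a single curvature pair; eta here follows (7).
v, r = np.array([0.1, 0.0]), np.array([0.2, 0.1])
print(arapsa_step(np.array([1.0, 2.0]), [(v, r)], eta=(v @ r) / (r @ r)))
```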
For reference, ARAPSA is also summarized in algorithmic form in Algorithm 3. Steps 2 and 3 are devoted to assigning random blocks to the processors. In Step 2 a subset of available blocks I^t is chosen. These blocks are assigned to different processors in Step 3. In Step 5 processors compute the partial stochastic gradient corresponding to their assigned blocks ∇_{x_b} f(x^t, Θ_i^t) using the samples acquired in Step 4. Steps 6 and 7 are devoted to the computation of the ARAPSA descent direction d̂_b^t. In Step 6 the approximate Hessian inverse B̂_b^{t,0} for block x_b is initialized as B̂_b^{t,0} = η_b^t I, which is a scaled identity matrix using the expression for η_b^t in (7) for t > 0. The initial value of η_b^t is η_b^0 = 1. In Step 7 we use Algorithm 2 for efficient computation of the descent direction d̂_b^t = B̂_b^t ∇_{x_b} f(x^t, Θ_i^t). The descent direction d̂_b^t is used to update the block x_b^t with stepsize γ^t in Step 8. Step 9 determines the value of the partial stochastic gradient ∇_{x_b} f(x^{t+1}, Θ_i^t), which is required for the computation of the stochastic gradient variation r̂_b^t. In Step 10 the variable variation v_b^t and stochastic gradient variation r̂_b^t associated with block x_b are computed to be used in the next iteration.
Asynchronous Architectures
Up to this point, the RAPSA method dictates that distinct parallel processors select blocks b_i^t ∈ {1, . . . , B} uniformly at random at each time step t as in Figure 1. However, the requirement that each processor operates on a common time index is burdensome for parallel operations on large computing clusters, as it means that nodes must wait for the processor which has the longest computation time at each step before proceeding. Remarkably, we are able to extend the methods developed in Sections 2 and 3 to the case where the parallel processors need not operate on a common time index (lock-free) and establish that their performance guarantees carry through, so long as the degree of their asynchronicity is bounded in a certain sense. In doing so, we alleviate the computational bottleneck in the parallel architecture, allowing processors to continue processing data as soon as their local task is complete.

Algorithm 4 Asynchronous RAPSA at processor i
1: while t < T do
2:   Processor i ∈ {1, . . . , I} at time index t executes the following steps:
3:     Select block b_i^t uniformly at random from set of blocks {1, . . . , B}
4:     Choose a set of realizations Θ_i^t for the block x_b, b = b_i^t
5:     Compute stochastic gradient: ∇_{x_b} f(x^t, Θ_i^t) = (1/L) Σ_{θ∈Θ_i^t} ∇_{x_b} f(x^t, θ) [cf. (3)]
6:     Update the coordinates of the decision variable: x_b^{t+τ+1} = x_b^{t+τ} − γ^{t+τ} ∇_{x_b} f(x^t, Θ_i^t)
7:     Send updated parameters x_b^{t+1} associated with block b = b_i^t to shared memory
8:     If another processor is also operating on block b_i^t at time t, randomly overwrite
9: end while
Asynchronous RAPSA
Consider the case where each node operates asynchronously. In this case, at an instantaneous time index t, only one processor executes an update, as all others are assumed to be busy. If two processors complete their prior task concurrently, then they draw the same time index at the next available slot, in which case the tie is broken at random. Suppose processor i selects block b_i^t ∈ {1, . . . , B} at time t. Then it grabs the associated component of the decision variable x_b^t and computes the stochastic gradient ∇_{x_b} f(x^t, Θ_i^t) associated with the samples Θ_i^t. This process may take time, and during this process other processors may overwrite the variable x_b. Consider the case that the processing time of computing the stochastic gradient, or equivalently the descent direction, is τ. Thus, when processor i updates the block b using the evaluated stochastic gradient ∇_{x_b} f(x^t, Θ_i^t), it performs the update
x_b^{t+τ+1} = x_b^{t+τ} − γ^{t+τ} ∇_{x_b} f(x^t, Θ_i^t), b = b_i^t. (10)
Thus, the descent direction evaluated based on the available information at step t is used to update the variable at time t + τ. Asynchronous RAPSA is summarized in Algorithm 4. Note that the delay comes from the asynchronous implementation of the algorithm and the fact that other processors are able to modify the variable x_b during the time that processor i computes its descent direction. We assume that the random time τ that each processor requires to compute its descent direction is bounded above by a constant ∆, i.e., τ ≤ ∆ (see Assumption 4). Despite the minimal coordination of the asynchronous random parallel stochastic algorithm in (10), we may establish the same performance guarantees as that of RAPSA in Section 2. These analytical properties are investigated at length in Section 5.
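The effect of the read-compute-write delay in (10) can be emulated in a few lines. The following single-process Python sketch applies each stochastic gradient τ steps after it was computed on a stale iterate; the toy quadratic objective, the fixed delay, and all names are assumptions made for illustration.

```python
import numpy as np
from collections import deque

# Emulation of the lock-free update (10): gradients are computed on a stale
# iterate and written back tau steps later. The diagonal quadratic objective,
# the fixed delay, and all names are illustrative assumptions.
rng = np.random.default_rng(1)
p, B, tau, gamma = 64, 16, 3, 1e-2
lam = np.linspace(0.5, 2.0, p)          # Hessian diagonal of F(x) = sum lam_j x_j^2 / 2
blocks = np.split(np.arange(p), B)

x = rng.standard_normal(p)
pending = deque()                       # (write time, block indices, stale gradient)
for t in range(20000):
    idx = blocks[rng.integers(B)]       # a free processor grabs a random block ...
    grad = lam[idx] * x[idx] + 0.01 * rng.standard_normal(idx.size)
    pending.append((t + tau, idx, grad))  # ... and its write lands tau steps later
    while pending and pending[0][0] <= t:
        _, j, g = pending.popleft()
        x[j] -= gamma * g               # delayed write [cf. (10)]
print(np.linalg.norm(x))                # decays toward 0 despite stale gradients
```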
Remark 1 One may raise the concern that there could be instances where two or more processors work on the same block. Although this event is not very likely since I << B, there is a positive chance that it might happen. This is true since the available processor picks the block that it wants to operate on uniformly at random from the set {1, . . . , B}. We show that this event does not cause any issues and the algorithm can eventually converge to the optimal argument even if more than one processor works on a specific block at the same time; see Section 5.2. Functionally, this means that if one block is worked on concurrently by two processors, the memory coordination requires that the result of one of the two processors is written to memory with probability 1/2. This random overwrite rule applies to the case that three or more processors are operating on the same block as well. In this case, the result of one of the conflicting processors is written to memory with probability 1/C, where C is the number of conflicting processors.

Algorithm 5 Asynchronous Accelerated RAPSA at processor i
1: while t < T do
2:   Processor i ∈ {1, . . . , I} at time index t executes the following steps:
3:     Select block b_i^t uniformly at random from set of blocks {1, . . . , B}
4:     Choose a set of realizations Θ_i^t for the block x_b, b = b_i^t
5:     Compute stochastic gradient: ∇_{x_b} f(x^t, Θ_i^t) = (1/L) Σ_{θ∈Θ_i^t} ∇_{x_b} f(x^t, θ) [cf. (3)]
6:     Compute the initial Hessian inverse approximation: B̂_b^{t,0} = η_b^t I
7:     Compute descent direction: d̂_b^t = ARAPSA Step(B̂_b^{t,0}, ∇_{x_b} f(x^t, Θ_i^t), {v_b^u, r̂_b^u}_{u=t−τ}^{t−1})
8:     Update the coordinates of the decision variable: x_b^{t+τ+1} = x_b^{t+τ} − γ^{t+τ} d̂_b^t
9:     Compute updated stochastic gradient: ∇_{x_b} f(x^{t+τ+1}, Θ_i^t) = (1/L) Σ_{θ∈Θ_i^t} ∇_{x_b} f(x^{t+τ+1}, θ) [cf. (3)]
10:    Update variations v_b^t = x_b^{t+τ+1} − x_b^t and r̂_b^t = ∇_{x_b} f(x^{t+τ+1}, Θ_i^t) − ∇_{x_b} f(x^t, Θ_i^t) [cf. (12)]
11:    Overwrite the oldest pairs of v_b and r̂_b in local memory by v_b^t and r̂_b^t, respectively
12:    Send updated parameters x_b^{t+1}, {v_b^u, r̂_b^u}_{u=t−τ}^{t−1} to shared memory
13:    If another processor is operating on block b_i^t, choose to overwrite with probability 1/2
14: end while
Asynchronous ARAPSA
In this section, we study the asynchronous implementation of accelerated RAPSA (ARAPSA). The main difference between the synchronous implementation of ARAPSA in Section 3 and the asynchronous version is in the update of the variable x_b^t corresponding to the block b. Consider the case that processor i finishes its previous task at time t, chooses the block b = b_i^t, and reads the variable x_b^t. Then, it computes the stochastic gradient ∇f(x^t, Θ_i^t) using the set of random variables
Θ_i^t. Further, processor i computes the descent direction B̂_b^t ∇_{x_b} f(x^t, Θ_i^t) using the last τ sets of curvature information {v_b^u, r̂_b^u}_{u=t−τ}^{t−1} as shown in Algorithm 1. If we assume that the required time to compute the descent direction B̂_b^t ∇_{x_b} f(x^t, Θ_i^t) is τ, processor i updates the variable x_b^{t+τ} as

x_b^{t+τ+1} = x_b^{t+τ} − γ^{t+τ} B̂_b^t ∇_{x_b} f(x^t, Θ_i^t), b = b_i^t. (11)
Note that the update in (11) differs from the synchronous version in (5) in the time index of the variable that is updated using the available information at time t. In other words, in the synchronous implementation the descent direction B̂_b^t ∇_{x_b} f(x^t, Θ_i^t) is used to update the variable x_b^t with the same time index, while this descent direction is executed to update the variable x_b^{t+τ} in asynchronous ARAPSA. Note that the definitions of the variable variation v_b^t and the stochastic gradient variation r̂_b^t are different in the asynchronous setting; they are given by
v_b^t = x_b^{t+τ+1} − x_b^t,   r̂_b^t = ∇_{x_b} f(x^{t+τ+1}, Θ_i^t) − ∇_{x_b} f(x^t, Θ_i^t). (12)
This modification comes from the fact that the stochastic gradient ∇_{x_b} f(x^t, Θ_i^t) is already evaluated for the descent direction in (11). Thus, we define the stochastic gradient variation by computing the difference of the stochastic gradient ∇_{x_b} f(x^t, Θ_i^t) and the stochastic gradient associated with the same random set Θ_i^t evaluated at the most recent iterate, which is x_b^{t+τ+1}. Likewise, the variable variation is redefined as the difference x_b^{t+τ+1} − x_b^t.
The steps of asynchronous ARAPSA are summarized in Algorithm 5.
Convergence Analysis
We show in this section that the sequence of objective function values F(x^t) generated by RAPSA approaches the optimal objective function value F(x^*). We further show that the convergence guarantees for synchronous RAPSA generalize to the asynchronous setting. In establishing this result we define the set S^t corresponding to the components of the vector x associated with the blocks selected at step t, defined by the indexing set I^t ⊂ {1, . . . , B}. Note that the components of the set S^t are chosen uniformly at random from the set of blocks {x_1, . . . , x_B}. With this definition, and for convenience in analyzing the proposed methods, we rewrite the time evolution of the RAPSA iterates (Algorithm 1) as
x_i^{t+1} = x_i^t − γ^t ∇_{x_i} f(x^t, Θ_i^t) for all x_i ∈ S^t, (13)

while the rest of the blocks remain unchanged, i.e., x_i^{t+1} = x_i^t for x_i ∉ S^t.
Since the number of updated blocks is equal to the number of processors, the ratio of updated blocks is r := |I t |/B = I/B. To prove convergence of RAPSA, we require the following assumptions.
Assumption 1
The instantaneous objective functions f (x, θ) are differentiable and the average function F (x) is strongly convex with parameter m > 0.
Assumption 2 The average objective function gradients ∇F(x) are Lipschitz continuous with respect to the Euclidean norm with parameter M, i.e., for all x, x̂ ∈ R^p, it holds that

‖∇F(x) − ∇F(x̂)‖ ≤ M ‖x − x̂‖. (14)
Assumption 3 The second moment of the norm of the stochastic gradient is bounded for all x, i.e., there exists a constant K such that for all variables x, it holds

E_θ[‖∇f(x^t, θ^t)‖² | x^t] ≤ K. (15)
Notice that Assumption 1 only enforces strong convexity of the average function F, while the instantaneous functions f_i may not even be convex. Further, notice that since the instantaneous functions f_i are differentiable, the average function F is also differentiable. The Lipschitz continuity of the average function gradients ∇F is customary in proving objective function convergence for descent algorithms. The restriction imposed by Assumption 3 is a standard condition in the stochastic approximation literature (Robbins and Monro, 1951), its intent being to limit the variance of the stochastic gradients (Nemirovski et al., 2009).
Convergence of RAPSA
We turn our attention to the random parallel stochastic algorithm defined in (3)-(4) in Section 2, establishing performance guarantees in both the diminishing and constant algorithm step-size regimes. Our first result comes in the form of an expected descent lemma that relates the expected difference of subsequent iterates to the gradient of the average function.
Lemma 2 Consider the random parallel stochastic algorithm defined in (3)-(4). Recall the definitions of the set of updated blocks I t which are randomly chosen from the total B blocks. Define F t as a sigma algebra that measures the history of the system up until time t. Then, the expected value of the difference x t+1 − x t with respect to the random set I t given F t is
E_{I^t}[x^{t+1} − x^t | F^t] = −r γ^t ∇f(x^t, Θ^t). (16)
Moreover, the expected value of the squared norm ‖x^{t+1} − x^t‖² with respect to the random set I^t given F^t can be simplified as

E_{I^t}[‖x^{t+1} − x^t‖² | F^t] = r (γ^t)² ‖∇f(x^t, Θ^t)‖². (17)
Proof See Appendix A.1.
Notice that in the regular stochastic gradient descent method the difference of two consecutive iterates x^{t+1} − x^t is equal to the stochastic gradient ∇f(x^t, Θ^t) times the stepsize γ^t. Based on the first result in Lemma 2, the expected value of the stochastic gradients with respect to the random set of blocks I^t is the same as the one for SGD except that it is multiplied by the fraction of updated blocks r. The expression in (17) shows the same relation for the expected value of the squared difference ‖x^{t+1} − x^t‖². These relationships confirm that in expectation RAPSA behaves as SGD, which allows us to establish the global convergence of RAPSA.
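The identity (16) is straightforward to verify numerically: averaging the increment x^{t+1} − x^t over many draws of the random block set I^t recovers the stochastic gradient scaled by r = I/B. The following sketch, with an arbitrary stand-in gradient vector, is an illustrative check rather than part of the analysis.

```python
import numpy as np

# Monte Carlo check of Lemma 2: with B blocks and I processors, the expected
# increment equals -r * gamma * grad with r = I/B [cf. (16)]. The gradient
# vector and all sizes are arbitrary stand-ins.
rng = np.random.default_rng(2)
p, B, I, gamma = 32, 8, 2, 0.1
blocks = np.split(np.arange(p), B)
grad = rng.standard_normal(p)          # stands in for grad f(x^t, Theta^t)

mean_step = np.zeros(p)
trials = 100000
for _ in range(trials):
    step = np.zeros(p)
    for b in rng.choice(B, size=I, replace=False):
        step[blocks[b]] = -gamma * grad[blocks[b]]
    mean_step += step / trials
print(np.max(np.abs(mean_step - (-(I / B) * gamma * grad))))  # close to 0
```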
Proposition 3 Consider the random parallel stochastic algorithm defined in (3)-(4). If Assumptions 1-3 hold, then the objective function error sequence F(x^t) − F(x^*) satisfies

E[F(x^{t+1}) − F(x^*) | F^t] ≤ (1 − 2mrγ^t) (F(x^t) − F(x^*)) + rMK(γ^t)²/2. (18)

Proof See Appendix A.2.
Proposition 3 leads to a supermartingale relationship for the sequence of objective function errors F(x^t) − F(x^*). In the following theorem we show that if the sequence of stepsizes satisfies standard stochastic approximation diminishing step-size rules (non-summable and squared summable), the sequence of objective function errors F(x^t) − F(x^*) converges to null almost surely. Considering the strong convexity assumption, this result implies almost sure convergence of the sequence ‖x^t − x^*‖² to null.
Theorem 4 Consider the random parallel stochastic algorithm defined in (3)-(4) (Algorithm 1). If Assumptions 1-3 hold true and the sequence of stepsizes is non-summable, Σ_{t=0}^∞ γ^t = ∞, and square summable, Σ_{t=0}^∞ (γ^t)² < ∞, then the sequence of variables x^t generated by RAPSA converges almost surely to the optimal argument x^*,

lim_{t→∞} ‖x^t − x^*‖² = 0 a.s. (19)
Moreover, if the stepsize is defined as γ^t := γ^0 T_0/(t + T_0) and the stepsize parameters are chosen such that 2mrγ^0 T_0 > 1, then the expected average function error E[F(x^t) − F(x^*)] converges to null at least with a sublinear convergence rate of order O(1/t),

E[F(x^t) − F(x^*)] ≤ C/(t + T_0), (20)

where the constant C is defined as

C = max{ rMK(γ^0 T_0)² / (4mrγ^0 T_0 − 2), T_0 (F(x^0) − F(x^*)) }. (21)
Proof See Appendix A.3.
The result in Theorem 4 shows that when the sequence of stepsizes is diminishing as γ^t = γ^0 T_0/(t + T_0), the sequence of average objective function values F(x^t) converges to the optimal objective value F(x^*) with probability 1. Further, the rate of convergence in expectation is at least in the order of O(1/t).¹ Diminishing stepsizes are useful when exact convergence is required; however, for the case that we are interested in a specific accuracy, the more efficient choice is using a constant stepsize. In the following theorem we study the convergence properties of RAPSA for a constant stepsize γ^t = γ.

¹ The expectation on the left hand side of (32), and throughout the subsequent convergence rate analysis, is taken with respect to the full algorithm history F^0, which includes all realizations of both Θ^t and I^t for all t ≥ 0.
Theorem 5 Consider the random parallel stochastic algorithm defined in (3)-(4) (Algorithm 1). If Assumptions 1-3 hold true and the stepsize is constant, γ^t = γ, then a subsequence of the variables x^t generated by RAPSA converges almost surely to a neighborhood of the optimal argument x^* as

lim inf_{t→∞} F(x^t) − F(x^*) ≤ γMK/(4m) a.s. (22)

Moreover, if the constant stepsize γ is chosen such that 2mrγ < 1, then the expected average function value error E[F(x^t) − F(x^*)] converges linearly to an error bound as

E[F(x^t) − F(x^*)] ≤ (1 − 2mγr)^t (F(x^0) − F(x^*)) + γMK/(4m). (23)

Proof See Appendix A.4.
Notice that according to the result in (23) there exists a trade-off between accuracy and speed of convergence. Decreasing the constant stepsize γ leads to a smaller error bound γMK/(4m) and a more accurate convergence, while the linear convergence constant (1 − 2mγr) increases and the convergence rate becomes slower. Further, note that the error of convergence γMK/(4m) is independent of the ratio of updated blocks r, while the constant of linear convergence 1 − 2mγr depends on r. Therefore, updating a fraction of the blocks at each iteration decreases the speed of convergence for RAPSA relative to SGD, which updates all of the blocks; however, both of the algorithms reach the same accuracy.

To achieve accuracy ε, the sum of the two terms on the right hand side of (23) should be smaller than ε. Let us consider φ as a positive constant that is strictly smaller than 1, i.e., 0 < φ < 1. Then, we want to have
γMK/(4m) ≤ φε,   (1 − 2mγr)^t (F(x^0) − F(x^*)) ≤ (1 − φ)ε. (24)
Therefore, to satisfy the first condition in (24) we set the stepsize as γ = 4mφε/(MK). Applying this substitution to the second inequality in (24) and using the inequality a + ln(1 − a) < 0 for 0 < a < 1, we obtain
t ≥ (MK)/(8m²rφε) ln( (F(x^0) − F(x^*)) / ((1 − φ)ε) ). (25)
The lower bound in (25) shows the minimum number of iterations required for RAPSA to achieve accuracy ε.
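As a concrete instance of this calculation, the following snippet evaluates the stepsize rule γ = 4mφε/(MK) and the iteration bound (25) for assumed problem constants; all numbers are hypothetical.

```python
import numpy as np

# Illustrative evaluation of the stepsize choice and iteration bound (24)-(25).
# The constants m, M, K, r, the accuracy eps, phi, and F(x^0) - F(x^*) are all
# assumed values for the example.
m, M, K, r = 1.0, 10.0, 100.0, 0.25
eps, phi, f0_gap = 1e-3, 0.5, 10.0

gamma = 4 * m * phi * eps / (M * K)                 # first condition in (24)
t_min = M * K / (8 * m**2 * r * phi * eps) * np.log(f0_gap / ((1 - phi) * eps))
print(f"gamma = {gamma:.2e}, required iterations t >= {t_min:.2e}")
```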
Convergence of Asynchronous RAPSA
In this section, we study the convergence of Asynchronous RAPSA (Algorithm 4) developed in Section 4 and we characterize the effect of delay in the asynchronous implementation. To do so, the following condition on the delay τ is required.
Assumption 4 The random variable τ which is the delay between reading and writing for processors does not exceed the constant ∆, i.e., τ ≤ ∆.
The condition in Assumption 4 implies that processors can finish their tasks in a time that is bounded by the constant ∆. This assumption is typical in the analysis of asynchronous algorithms.
To establish the convergence properties of asynchronous RAPSA recall the set S t containing the blocks that are updated at step t with associated indices I t ⊂ {1, . . . , B}. Therefore, the update of asynchronous RAPSA can be written as
x_i^{t+1} = x_i^t − γ^t ∇_{x_i} f(x^{t−τ}, Θ_i^{t−τ}) for all x_i ∈ S^t, (27)

and the rest of the blocks remain unchanged, i.e., x_i^{t+1} = x_i^t for x_i ∉ S^t.
Note that the random set I^t and the associated block set S^t are chosen at time t − τ in practice; however, for the sake of analysis we can assume that these sets are chosen at time t. In other words, we can assume that at step t − τ processor i computes the full (for all blocks) stochastic gradient ∇f(x^{t−τ}, Θ_i^{t−τ}) and, after finishing this task at time t, it chooses uniformly at random the block that it wants to update. Thus, the block x_i in (27) is chosen at step t. This new interpretation of the update of asynchronous RAPSA is only important for the convergence analysis of the algorithm, and we use it in the proof of the following lemma, which is similar to the result in Lemma 2 for synchronous RAPSA.
Lemma 6 Consider the asynchronous random parallel stochastic algorithm (Algorithm 4) defined in (10). Recall the definitions of the set of updated blocks I t which are randomly chosen from the total B blocks. Define F t as a sigma algebra that measures the history of the system up until time t. Then, the expected value of the difference x t+1 − x t with respect to the random set I t given F t is
E_{I^t}[x^{t+1} − x^t | F^t] = −(γ^t/B) ∇f(x^{t−τ}, Θ^{t−τ}). (28)

Moreover, the expected value of the squared norm ‖x^{t+1} − x^t‖² with respect to the random set S^t given F^t satisfies the identity

E_{I^t}[‖x^{t+1} − x^t‖² | F^t] = ((γ^t)²/B) ‖∇f(x^{t−τ}, Θ^{t−τ})‖². (29)
Proof: See Appendix B.1.
The result in Lemma 6 is a natural extension of the result in Lemma 2 to the lock-free setting, since in the asynchronous scheme only one of the blocks is updated at each iteration and the ratio r simplifies to 1/B. We use the result in Lemma 6 to characterize the decrement in the expected sub-optimality in the following proposition.
Proposition 7 Consider the asynchronous random parallel stochastic algorithm defined in (10) (Algorithm 4). If Assumptions 1-3 hold, then for any arbitrary ρ > 0 we can write that the objective function error sequence F(x^t) − F(x^*) satisfies

E[F(x^{t+1}) − F(x^*) | F^{t−τ}] ≤ (1 − (2mγ^t/B)(1 − ρM/2)) E[F(x^t) − F(x^*) | F^{t−τ}] + MK(γ^t)²/(2B) + τ²MKγ^t(γ^{t−τ})²/(2ρB²). (30)
Proof: See Appendix B.2.
We proceed to use the result in Proposition 7 to prove that the sequence of iterates generated by asynchronous RAPSA converges to the optimal argument x * defined by (2).
Theorem 8 Consider the asynchronous RAPSA defined in (10) (Algorithm 4). If Assumptions 1-3 hold true and the sequence of stepsizes is non-summable, Σ_{t=0}^∞ γ^t = ∞, and square summable, Σ_{t=0}^∞ (γ^t)² < ∞, then the sequence of variables x^t generated by RAPSA converges almost surely to the optimal argument x^*,

lim inf_{t→∞} ‖x^t − x^*‖² = 0 a.s. (31)
Moreover, if the stepsize is defined as γ^t := γ^0 T_0/(t + T_0) and the stepsize parameters are chosen such that (2mγ^0 T_0/B)(1 − ρM/2) > 1, then the expected average function error E[F(x^t) − F(x^*)] converges to null at least with a sublinear convergence rate of order O(1/t),

E[F(x^t) − F(x^*)] ≤ C/(t + T_0), (32)
where the constant C is defined as

C = max{ [MK(γ^0 T_0)²/(2B) + τ²MK(γ^0 T_0)³/(2ρB²)] / [(2mγ^0 T_0/B)(1 − ρM/2) − 1], T_0 (F(x^0) − F(x^*)) }. (33)
Proof: See Appendix B.3.
Theorem 8 establishes that the RAPSA algorithm, when run on a lock-free computing architecture, still yields convergence to the optimal argument x^* defined by (2). Moreover, the expected objective error sequence converges to null as O(1/t). These results, which correspond to the diminishing step-size regime, are comparable to the performance guarantees (Theorem 4) previously established for RAPSA on a synchronous computing cluster, meaning that the algorithm performance does not degrade significantly when implemented on an asynchronous system. This issue is explored numerically in Section 6.
Numerical analysis
In this section we study the numerical performance of the doubly stochastic approximation algorithms developed in Sections 2-4 by first considering a linear regression problem. We then use RAPSA to develop a visual classifier to distinguish between distinct hand-written digits.
Linear Regression
We consider a setting in which observations z_n ∈ ℝ^q are collected which are noisy linear transformations z_n = H_n x + w_n of a signal x ∈ ℝ^p which we would like to estimate, where w ∼ N(0, σ²I_q) is a Gaussian random variable. For a finite set of N samples, the optimal x^* is computed as the least squares estimate x^* := argmin_{x∈ℝ^p} (1/N) Σ_{n=1}^{N} ‖H_n x − z_n‖². We run RAPSA on LMMSE estimation problem instances where q = 1, p = 1024, and N = 10⁴ samples are given. The observation matrices H_n ∈ ℝ^{q×p}, when stacked over all n (an N × p matrix), are generated from a matrix normal distribution whose mean is a tri-diagonal matrix: the main diagonal is 2, while the super- and sub-diagonals are all set to −1/2. Moreover, the true signal has entries chosen uniformly at random from the fractions {1, . . . , p}/p. Additionally, the variance of the noise perturbing the observations is set to σ² = 10^{−2}. We assume that the number of processors I = 16 is fixed and each processor is in charge of one block. We consider different numbers of blocks B = {16, 32, 64, 128}. Note that when the number of blocks is B, there are p/B = 1024/B coordinates in each block.
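A minimal Python sketch of this synthetic setup follows. The exact matrix-normal sampler is not fully specified above, so row-wise unit-variance Gaussian noise around the tri-diagonal mean is an assumption made purely for illustration.

import numpy as np

rng = np.random.default_rng(0)
p, N, sigma2 = 1024, 10_000, 1e-2

# Tri-diagonal mean of the stacked N x p observation matrix:
# 2 on the main diagonal, -1/2 on the super- and sub-diagonals.
H_mean = 2 * np.eye(N, p) - 0.5 * np.eye(N, p, k=1) - 0.5 * np.eye(N, p, k=-1)
H = H_mean + rng.standard_normal((N, p))      # assumed unit-variance perturbation

x_true = rng.integers(1, p + 1, size=p) / p   # entries drawn from {1, ..., p}/p
z = H @ x_true + np.sqrt(sigma2) * rng.standard_normal(N)

# Least-squares benchmark x* = argmin_x (1/N) sum_n ||H_n x - z_n||^2.
x_star, *_ = np.linalg.lstsq(H, z, rcond=None)

def F(x):
    """Empirical objective F(x) = (1/N) ||H x - z||^2."""
    r = H @ x - z
    return float(r @ r) / N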
Results for RAPSA  We first consider the performance of RAPSA (Algorithm 1) when using a constant step-size γ^t = γ = 10^{−2}. The size of the mini-batch is set as L = 10 in the subsequent experiments. To determine the advantages of incomplete randomized parallel processing, we vary the number of coordinates updated at each iteration: in the cases B = 16, B = 32, B = 64, and B = 128, the number of updated coordinates per iteration is 1024, 512, 256, and 128, respectively. Notice that the case B = 16 can be interpreted as parallel SGD, which is mathematically equivalent to Hogwild! (Recht et al., 2011), since all the coordinates are updated at each iteration, while in the other cases B > 16 only a subset of the 1024 coordinates is updated. Fig. 2(a) illustrates the convergence path of RAPSA's objective error sequence F(x^t) − F(x^*), with F(x) = (1/N) Σ_{n=1}^{N} ‖H_n x − z_n‖², as compared with the number of iterations t. In terms of iteration t, we observe that the algorithm performance is best when the number of processors equals the number of blocks, corresponding to the parallelized stochastic gradient method. However, comparing algorithm performance over iteration t across varying numbers of block updates is unfair. If RAPSA is run on a problem for which B = 32, then at iteration t it has only processed half the data that parallel SGD, i.e., B = 16, has processed by the same iteration. Thus, for completeness, we also consider the algorithm performance in terms of the number of features processed, p̃^t, which is given by p̃^t = p t I/B.
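For concreteness, the following sketch simulates one RAPSA iteration serially. The block partition, batch size, and uniform block sampling follow the text; treating the I simultaneous block picks as a draw without replacement is an assumption of this sketch.

import numpy as np

def rapsa_step(x, H, z, gamma, B, I, L, rng):
    """Serial simulation of one RAPSA iteration: I processors each pick a
    random block out of B and a mini-batch of L samples, then take a
    stochastic gradient step on their block only."""
    p = x.shape[0]
    blocks = np.split(np.arange(p), B)                  # p must be divisible by B
    for b in rng.choice(B, size=I, replace=False):      # blocks updated this round
        idx = blocks[b]
        batch = rng.choice(H.shape[0], size=L, replace=False)
        residual = H[batch] @ x - z[batch]
        # Block component of the mini-batch gradient of (1/L) sum ||H_n x - z_n||^2.
        g_block = 2.0 * H[batch][:, idx].T @ residual / L
        x[idx] -= gamma * g_block
    return x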
In Fig. 2(b), we display the convergence of the excess mean square error F (x t ) − F (x * ) in terms of number of features processedp t . In doing so, we may clearly observe the advantages of updating fewer features/coordinates per iteration. Specifically, the different algorithms converge in a nearly identical manner, but RAPSA with I << B may be implemented without any complexity bottleneck in the dimension of the decision variable p (also the dimension of the feature space).
We observe a comparable trend when we run RAPSA with a hybrid step-size scheme γ^t = min(ε, εT̃₀/t), which is a constant ε = 10^{−1.5} for the first T̃₀ = 400 iterations, after which it diminishes as O(1/t). We again observe in Figure 3(a) that convergence is fastest in terms of excess mean square error versus iteration t when all blocks are updated at each step. However, for this step-size selection, we see that updating fewer blocks per step is faster in terms of the number of features processed. This result shows that updating fewer coordinates per iteration yields convergence gains in terms of the number of features processed. This advantage comes from the benefit of Gauss-Seidel style block selection schemes in block coordinate methods as compared with Jacobi schemes. In particular, it is well understood that for problem settings with specific conditioning, cyclic block updates are superior to parallel schemes, and one may respectively interpret RAPSA as compared to parallel SGD as executing variants of cyclic or parallel block selection schemes. We note that the magnitude of this gain depends on the condition number of the Hessian of the expected objective F(x).
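The hybrid rule used here can be written as a one-line helper (a sketch; the parameter defaults mirror the values quoted above):

def hybrid_stepsize(t, eps=10**-1.5, T0=400):
    """Constant step eps for t <= T0, then O(1/t) decay: min(eps, eps * T0 / t)."""
    return min(eps, eps * T0 / max(t, 1))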
Results for Accelerated RAPSA We now study the benefits of incorporating approximate second-order information about the objective F (x) into the algorithm in the form of ARAPSA (Algorithm 3). We first run ARAPSA for the linear regression problem outlined above when using a constant step-size γ t = γ = 10 −2 with fixed mini-batch size L = 10. Moreover, we again vary the number of blocks as B = 16, B = 32, B = 64, and B = 128, corresponding to updating all, half, one-quarter, and one-eighth of the elements of vector x per iteration, respectively. Fig. 4(a) displays the convergence path of ARAPSA's excess mean-square error F (x t ) − F (x * ) versus the number of iterations t. We observe that parallelized oL-BFGS (I = B) converges fastest in terms of iteration index t. On the contrary, in Figure 4(b), we may clearly observe that larger B, which corresponds to using fewer elements of x per step, converges faster in terms of number of features processed. The Gauss-Seidel effect is more substantial for ARAPSA as compared with RAPSA due to the fact that the argmin of the instantaneous objective computed in block coordinate descent is better approximated by its second-order Taylor-expansion (ARAPSA, Algorithm 3) as compared with its linearization (RAPSA, Algorithm 1).
We now consider the performance of ARAPSA when a hybrid algorithm step-size is used, i.e., γ^t = min(10^{−1.5}, 10^{−1.5} T̃₀/t) with attenuation threshold T̃₀ = 400. The results of this numerical experiment are given in Figure 5. We observe that the performance gains of ARAPSA as compared to parallelized oL-BFGS apparent in the constant step-size scheme are more substantial in the hybrid setting. That is, in Figure 5(a) we again see that parallelized oL-BFGS is best in terms of iteration index t: to achieve the benchmark F(x^t) − F(x^*) ≤ 10^{−4}, the algorithm requires t = 100, t = 221, t = 412, and t > 1000 iterations for B = 16, B = 32, B = 64, and B = 128, respectively. However, in terms of p̃^t, the number of elements of x processed, to reach the benchmark F(x^t) − F(x^*) ≤ 0.1, we require p̃^t > 1000, p̃^t = 570, p̃^t = 281, and p̃^t = 203, respectively, for B = 16, B = 32, B = 64, and B = 128.
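The curvature-aided direction in ARAPSA is computed per block from recent (variable, gradient)-difference pairs. A generic limited-memory BFGS two-loop recursion, which the block-wise updates of Algorithm 3 specialize, can be sketched as follows; this is the standard recursion, not the exact block formulas in (8).

def lbfgs_direction(grad, s_hist, y_hist, eps=1e-10):
    """Two-loop recursion: approximate (inverse Hessian) @ grad from curvature
    pairs s = x_new - x_old and y = g_new - g_old, stored oldest first."""
    q = grad.copy()
    n = len(s_hist)
    rhos = [1.0 / max(float(y @ s), eps) for s, y in zip(s_hist, y_hist)]
    alphas = [0.0] * n
    for j in range(n - 1, -1, -1):          # newest to oldest
        alphas[j] = rhos[j] * float(s_hist[j] @ q)
        q = q - alphas[j] * y_hist[j]
    if n:                                   # initial scaling (s'y / y'y) * I
        s, y = s_hist[-1], y_hist[-1]
        q = q * (float(s @ y) / max(float(y @ y), eps))
    for j in range(n):                      # oldest to newest
        beta = rhos[j] * float(y_hist[j] @ q)
        q = q + (alphas[j] - beta) * s_hist[j]
    return q  # descent step: x_block -= gamma * lbfgs_direction(g_block, S, Y)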
Comparison of RAPSA and ARAPSA We turn to numerically analyzing the performance of Accelerated RAPSA and RAPSA on the linear estimation problem for the case that parameter vectors x ∈ R p are p = 500 dimensional for N = 10 4 iterations in the constant step-size case γ = 10 −2 . Both algorithms are initialized as x 0 = 10 3 × 1 with mini-batch size L = 10, and ARAPSA uses the curvature memory level τ = 10. The number of processors is fixed again as I = 16, and the number of blocks is B = 64, meaning that r = 1/4 of the elements of x are operated on at each iteration.
The results of this numerical evaluation are given in Figure 6. We plot the objective error sequence versus iteration t in Figure 6(a). Observe that ARAPSA converges to within 10 −4 of the optimum by t = 300 iterations in terms of F (x t ) − F (x * ), whereas RAPSA, while descending slowly, approaches within 10 of the optimum by t = 10 4 iterations. The performance advantages of ARAPSA as compared to RAPSA are also apparent in Figure 6(b), which readjusts the results of Figure 6(a) to be in terms of actual elapsed time. We see that despite the higher complexity of ARAPSA per iteration, its empirical performance results in extremely fast convergence on linear estimation problems. That is, in about 3 seconds, the algorithm converges to within 10 −4 of the optimal estimator in terms of objective function evaluation.
Results for Asynchronous RAPSA  We turn to studying the empirical performance of the asynchronous variant of RAPSA (Algorithm 4) proposed in Section 4.1. The model we use for asynchronicity is motivated by a random delay phenomenon in physical communication systems in which each local server has a distinct clock that is not locked to the others. Each processor's clock begins at time t^i_0 = t_0 for all processors i = 1, . . . , I and selects subsequent times as t_k = t_{k−1} + w^i_k, where w^i_k ∼ N(µ, σ²) is a normal random variable with mean µ and variance σ². The variance in this model effectively controls the amount of variability between the clocks of distinct processors.
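A small sketch of this clock model (the function name is illustrative): merging the tick times of all processors gives the global order in which possibly stale block updates hit the shared memory.

import numpy as np

def simulate_clocks(I, K, mu, sigma, rng):
    """Each of I processors ticks K times with i.i.d. N(mu, sigma^2) gaps;
    returns the globally sorted (time, processor) update schedule."""
    gaps = mu + sigma * rng.standard_normal((I, K))
    times = np.cumsum(gaps, axis=1)                     # t_k = t_{k-1} + w_k
    events = [(times[i, k], i) for i in range(I) for k in range(K)]
    return sorted(events)

schedule = simulate_clocks(I=16, K=100, mu=1.0, sigma=0.1,
                           rng=np.random.default_rng(1))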
We run Asynchronous RAPSA for the linear estimation problem when the parameter vector x is p = 500 dimensional, for N = 10³ iterations, with no mini-batching (L = 1), in both the constant and diminishing step-size regimes, for the case that the noise distribution perturbing the collected observations has variance σ² = 10^{−2} and the observation matrix is as discussed at the outset of Section 6.1. Further, the algorithm is initialized as x⁰ = 10³ × 1. We run the algorithm for a few different instantiations of asynchronicity, that is, w^i_k ∼ N(µ, σ²) with µ = 1 or µ = 2, and σ = 0.1 or σ = 0.3.
Figure 7: Asynchronous RAPSA (Algorithm 4) on the linear estimation problem in the constant (γ = 10⁴, left) and diminishing (γ^t = 10⁶/(t + 250), right) step-size schemes with no mini-batching L = 1 for a binary training subset of size N = 10³ with no regularization λ = 0 when the algorithm is initialized as x⁰ = 10³ × 1. Varying the asynchronicity distribution has little effect, but we find that convergence is slower than for the synchronized counterpart, as expected.
The results of this numerical experiment are given in Figure 7 for both the constant and diminishing step-size schemes. We see that the performance of the asynchronous parallel scheme is comparable across different levels of variability among the local clocks of each processor. In particular, in Figure 7(a) which corresponds to the case where the algorithm is run with constant step-size γ = 10 −2 , we observe comparable performance in terms of the objective function error sequence F (x t ) − F (x * ) with iteration t -across the varying levels of asynchrony we have F (x t ) − F (x * ) ≤ 10 by t = 10 3 . This trend may also be observed in the diminishing step-size scheme γ t = 1/t which is given in Figure 7(b). That is, the distance to the optimal objective is nearly identical across differing levels of asynchronicity. In both cases, the synchronized algorithm performs better than its asynchronous counterpart.
Hand-Written Digit Recognition
We now make use of RAPSA for visual classification of written digits. To do so, let z ∈ R p be a feature vector encoding pixel intensities (elements of the unit interval [0, 1] with smaller values being closer to black) of an image and let y ∈ {−1, 1} be an indicator variable of whether the image contains the digit 0 or 8, in which case the binary indicator is respectively y = −1 or y = 1. We model the task of learning a hand-written digit detector as a logistic regression problem, where one aims to train a classifier x ∈ R p to determine the relationship between feature vectors z n ∈ R p and their associated labels y n ∈ {−1, 1} for n = 1, . . . , N . The instantaneous function f n in (1) for this setting is the λ-regularized negative log-likelihood of a generalized linear model of the odds ratio of whether the label is y n = 1 or y n = −1. The empirical risk minimization associated with training set T = {(z n , y n )} N n=1 is to find x * as the maximum a posteriori estimate
\[ x^* := \operatorname*{argmin}_{x\in\mathbb{R}^p}\; \frac{\lambda}{2}\, \|x\|^2 + \frac{1}{N} \sum_{n=1}^{N} \log\big(1 + \exp(-y_n x^T z_n)\big), \tag{34} \]
where the regularization term (λ/2)‖x‖² encodes a prior belief on the joint distribution of (z, y) and helps to avoid overfitting. We use the MNIST dataset (Lecun and Cortes), in which feature vectors z_n ∈ ℝ^p are p = 28² = 784 pixel images whose values are recorded as intensities, or elements of the unit interval [0, 1]. Considered here is the subset associated with digits 0 and 8, a training set T = {(z_n, y_n)}_{n=1}^N with N = 1.76 × 10⁴ sample points.

Figure 9: RAPSA on MNIST data with hybrid step-size γ^t = min(10^{−3/4}, 10^{−3/4} T̃₀/t), with T̃₀ = 300 and no mini-batching L = 1. As with the constant step-size selection, updating all blocks per iteration is best in terms of t, but in terms of elements of x updated, algorithm performance is nearly identical, meaning that no price is paid for breaking the complexity bottleneck in p.

Results for RAPSA  We run RAPSA on this training subset for the cases that B = 16, B = 32, B = 64, and B = 128, which are associated with updating p, p/2, p/4, and p/8 features per iteration. We consider the use of RAPSA with both constant and hybrid step-size selections. In Figure 8, we display the results when we select a constant learning rate γ^t = γ = 10^{−0.5} = 0.316. In Figure 8(a) we plot the objective F(x^t) versus iteration t, and observe that algorithm performance improves when using more elements of x per iteration. That is, using all p coordinates of x achieves superior convergence with respect to iteration t. However, as previously noted, iteration index t is an unfair comparator for objective convergence since the four different settings process different numbers of features per iteration. In Figure 8(b), we instead consider F(x^t) versus the number of coordinates of x processed, denoted p̃^t, and observe that algorithm performance is comparable across the different selections of B. This demonstrates that RAPSA breaks the computational bottleneck in p while suffering no reduction in convergence speed with respect to p̃^t.
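To make the training objective (34) concrete, a minimal sketch of the regularized logistic loss and a mini-batch stochastic gradient follows (the helper names are illustrative, not from the paper):

import numpy as np

def logistic_loss(x, Z, y, lam):
    """Objective (34): (lam/2)||x||^2 + (1/N) sum_n log(1 + exp(-y_n x' z_n))."""
    margins = y * (Z @ x)
    return 0.5 * lam * float(x @ x) + float(np.mean(np.logaddexp(0.0, -margins)))

def stochastic_gradient(x, Z, y, lam, batch):
    """Mini-batch gradient of (34); `batch` is an index array of size L."""
    m = y[batch] * (Z[batch] @ x)
    coef = -y[batch] / (1.0 + np.exp(m))   # d/dm log(1 + e^{-m}) = -1/(1 + e^{m})
    return lam * x + Z[batch].T @ coef / len(batch)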
We further consider the classification accuracy on a test subset of size Ñ = 5.88 × 10³, the results of which are shown in Fig. 9(c). We see that the result for classification accuracy on a test set is consistent with the results for the convergence of the objective function value, and asymptotically reaches approximately 98% across the different instances of RAPSA.

Figure 10: ARAPSA on MNIST data with constant step-size γ^t = γ = 10^{−2}, mini-batch size L = 10, curvature memory τ̃ = 10, and regularizer λ = 7.5 × 10^{−3}. Algorithm performance is comparable across different numbers of decision-variable coordinates updated per iteration t, but in terms of the number of features processed, ARAPSA performs best when using the least information per update.
In Figure 9 we show the result of running RAPSA for this logistic regression problem with hybrid step-size γ^t = min(10^{−3/4}, 10^{−3/4} T̃₀/t), with T̃₀ = 300 and no mini-batching L = 1. In Fig. 9(a), which displays the objective F(x^t) versus iteration t, we observe that using full stochastic gradients is better than updating only some of the coordinates in terms of the number of iterations t. In particular, to reach the objective benchmark F(x^t) ≤ 10^{−1}, we have to run RAPSA for t = 74, t = 156, t = 217, and t = 631 iterations for the cases B = 16, B = 32, B = 64, and B = 128, respectively. We illustrate the objective F(x^t) versus the number of features processed p̃^t in Fig. 9(b). Here we recover the advantages of randomized incomplete parallel processing: updating fewer blocks per iteration yields comparable algorithm performance.
We additionally display the algorithm's achieved test-set accuracy on a test subset of size Ñ = 5.88 × 10³ in Fig. 9(c) under the hybrid step-size regime. We again see that, after a burn-in period, the classifier achieves a highly accurate asymptotic error rate of between 1% and 2% across the different instantiations of RAPSA. We note that the test-set accuracy achieved by the hybrid scheme is superior to that of the constant step-size setting.
Results for Accelerated RAPSA We now run Accelerated RAPSA (Algorithm 3) as stated in Section 3 for this problem setting for the entire MNIST binary training subset associated with digits 0 and 8, with mini-batch size L = 10 and the level of curvature information set as τ = 10. We further select regularizer λ = 1/ √ N = 7.5 × 10 −3 , and consider both constant and hybrid step-size regimes. As before, we study the advantages of incomplete randomized parallel processing by varying the number of blocks B ∈ {16, 32, 64, 128} on an architecture with a fixed number |I t | = I = 16 of processors. This setup is associated with using all p entries of vector x at each iteration as compared with 1/2, 1/4, and 1/8 of its entries.
Figure 10 displays the results of this algorithm run when a constant step-size γ = 10^{−2} is used. Observe in Figure 10(a) that the algorithm achieves convergence across the differing numbers of blocks B in terms of iteration t, with faster learning rates achieved for smaller B. In particular, to reach the benchmark F(x^t) ≤ 10^{−1}, we require t = 145, t = 311, and t = 701 iterations for B = 16, B = 32, and B = 64, respectively, whereas the case B = 128 does not achieve this benchmark by t = 10³. This trend is inverted, however, in Figure 10(b), which displays the objective F(x^t) against p̃^t, the number of coordinates of x on which the algorithm operates per step. Observe that using fewer entries of x per iteration is better in terms of the number of features processed p̃^t. Furthermore, ARAPSA achieves comparable accuracy on a test set of images, approximately 98% across different selections of B, as displayed in Figure 10(c).

Figure 11: ARAPSA on MNIST data with hybrid step-size γ^t = min(10^{−1}, 10^{−1} T̃₀/t), with T̃₀ = 500, mini-batch size L = 10, curvature memory τ̃ = 10, and regularizer λ = 7.5 × 10^{−3}. Algorithm performance is comparable across different numbers of decision-variable coordinates updated per iteration t, but in terms of the number of features processed, ARAPSA performs best when using the least information per update.
We now run Accelerated RAPSA when the learning rate is hand-tuned to optimize performance via a hybrid scheme γ t = min(10 −1 , 10 −1T 0 /t), with attenuation thresholdT 0 = 500. The results of this experiment are given in Figure 11. In particular, in Figure 11(a) we plot the objective F (x t ) with iteration t when the number of blocks B is varied. We see that parallelized oL-BFGS (I = B so that r = 1) performs best in terms of t: to achieve the threshold condition F (x t ) ≤ 10 −1 , we require t = 278, t = 522 iterations for B = 16 and B = 32, respectively, whereas the cases B = 64 and B = 128 do not achieve this benchmark by t = 10 3 . However, the instance of ARAPSA with the fastest and most accurate convergence uses the least coordinates of x when we compare the objective withp t , as may be observed in Figure 11(b). This trend is corroborated in Figure 11(c), where we observe that ARAPSA with B = 128 achieves 99% test-set accuracy the fastest, followed by B = 64, B = 32, and B = 16.
Comparison of RAPSA and ARAPSA We now compare the performance of RAPSA and its accelerated variant on the MNIST digit recognition problem for a binary subset of the training data consisting of N = 10 5 samples. We run both algorithms on an I = 16 processor simulated architecture with B = 64 blocks, such that r = 1/4 of the elements of x are operated upon at each step. We consider the constant algorithm step-size scheme γ = 10 −2 with mini-batch size L = 10.
The results of this online training procedure are given in Figure 12, where we plot the objective optimality gap F(x^t) − F(x^*) versus the number of feature vectors processed tL (Figure 12(a)) and actual elapsed time (Figure 12(b)). We see that ARAPSA achieves superior convergence behavior with respect to RAPSA in terms of the number of feature vectors processed: to achieve the benchmark F(x^t) − F(x^*) ≤ 10^{−1}, ARAPSA requires fewer than tL = 200 feature vectors, whereas RAPSA requires tL = 4 × 10⁴ feature vectors. This relationship is corroborated in Figure 12(b), where we see that within a couple of seconds ARAPSA converges to within 10^{−1}, whereas after five times as long, RAPSA does not achieve this benchmark.
Results for Asynchronous RAPSA  We now evaluate the empirical performance of the asynchronous variant of RAPSA (Algorithm 4) proposed in Section 4.1 on the logistic regression formulation of the MNIST digit recognition problem. The model we use for asynchronicity is the one outlined in Section 6.1: each local processor has a distinct local clock which is not required to coincide with the others, begins at time t^i_0 = t_0 for all processors i = 1, . . . , I, and then selects subsequent times as t_k = t_{k−1} + w^i_k. Here w^i_k ∼ N(µ, σ²) is a normal random variable with mean µ and variance σ², which controls the amount of variability between the clocks of distinct processors. We run the algorithm with no regularization (λ = 0), no mini-batching (L = 1), and initialization x⁰ = 1.

Figure 13: Asynchronous RAPSA on MNIST data in the constant (γ = 10^{−2}, left) and diminishing (γ^t = 1/t, right) step-size schemes with no mini-batching L = 1 for a binary training subset of size N = 10³ with no regularization λ = 0 when the algorithm is initialized as x⁰ = 1. The variability in local processor clocks does not significantly impact performance in either the diminishing or the constant step-size setting; however, the synchronous algorithm converges at a faster rate.
The results of this numerical setup are given in Figure 13. We consider the expected risk F(x^t) in both the constant (γ = 10^{−2}, Figure 13(a)) and diminishing (γ^t = 1/t, Figure 13(b)) step-size schemes. We see that the level of asynchronicity does not significantly impact the performance in either scheme, and that the convergence guarantees established in Theorem 8 hold true in practice. We again observe that the version of RAPSA with synchronized computations converges at a faster rate than Asynchronous RAPSA.
Conclusions
We proposed the random parallel stochastic algorithm (RAPSA), a doubly stochastic approximation algorithm capable of handling learning problems in which both the number of predictive parameters and the sample size are huge-scale. RAPSA is doubly stochastic since each processor utilizes a random set of functions to compute the stochastic gradient associated with a randomly chosen set of variable coordinates. We showed that the proposed algorithm converges to the optimal solution sublinearly when the step-size is diminishing. Moreover, linear convergence to a neighborhood of the optimal solution can be achieved using a constant step-size. We further introduced accelerated and asynchronous variants of RAPSA, and presented convergence guarantees for asynchronous RAPSA.
A detailed numerical comparison between RAPSA and parallel SGD for learning a linear estimator and a logistic regressor is provided. The numerical results showcase the advantage of RAPSA with respect to parallel SGD. Further empirical results illustrate the advantages of ARAPSA with respect to parallel oL-BFGS, and that implementing the algorithm on a lock-free parallel computing cluster does not substantially degrade empirical performance.
Appendix A. Proof of Results Leading to Theorems 4 and 5
A.1 Proof of Lemma 2
Recall that the components of vector x^{t+1} are equal to the components of x^t for the coordinates that are not updated at step t, i.e., i ∉ I^t. For the updated coordinates i ∈ I^t we know that
\[ x_i^{t+1} = x_i^t - \gamma^t\, \nabla_{x_i} f(x^t, \theta^t). \]
Therefore, B − I blocks of the vector x^{t+1} − x^t are 0 and the remaining I randomly chosen blocks are given by −γ^t ∇_{x_i} f(x^t, θ^t). Notice that there are \(\binom{B}{I}\) different ways of picking I blocks out of the whole B blocks. Therefore, the probability of each combination of blocks is \(1/\binom{B}{I}\). Further, each block appears in \(\binom{B-1}{I-1}\) of the combinations. Therefore, the expected value can be written as
\[ \mathbb{E}_{\mathcal{I}^t}\big[x^{t+1} - x^t \,\big|\, \mathcal{F}^t\big] = \frac{\binom{B-1}{I-1}}{\binom{B}{I}} \big( -\gamma^t\, \nabla f(x^t, \Theta^t) \big). \tag{35} \]
Observe that simplifying the ratio on the right hand side of (35) leads to
\[ \frac{\binom{B-1}{I-1}}{\binom{B}{I}} = \frac{\;\frac{(B-1)!}{(I-1)!\,(B-I)!}\;}{\;\frac{B!}{I!\,(B-I)!}\;} = \frac{I}{B} = r. \tag{36} \]
Substituting the simplification in (36) into (35) yields the claim in (16). To prove the claim in (17) we can use the same argument as in the proof of (16) to show that
\[ \mathbb{E}_{\mathcal{I}^t}\big[\|x^{t+1} - x^t\|^2 \,\big|\, \mathcal{F}^t\big] = \frac{\binom{B-1}{I-1}}{\binom{B}{I}}\, (\gamma^t)^2\, \|\nabla f(x^t, \Theta^t)\|^2. \tag{37} \]
By substituting the simplification in (36) into (37), the claim in (17) follows.
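The combinatorial identity in (36) is easy to verify numerically; for instance (the specific values of B and I below are assumed for illustration):

from math import comb, isclose

B, I = 128, 16
ratio = comb(B - 1, I - 1) / comb(B, I)   # fraction of I-subsets containing a fixed block
assert isclose(ratio, I / B)              # equals r = I/B, as in (36)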
A.2 Proof of Proposition 3
By considering the Taylor expansion of F(x^{t+1}) near the point x^t and observing the Lipschitz continuity of the gradients ∇F with constant M, we obtain that the average objective function F(x^{t+1}) is bounded above by
\[ F(x^{t+1}) \le F(x^t) + \nabla F(x^t)^T (x^{t+1} - x^t) + \frac{M}{2}\, \|x^{t+1} - x^t\|^2. \tag{38} \]
Compute the expectation of both sides of (38) with respect to the random set I^t given the observed set of information F^t. Substitute E_{I^t}[x^{t+1} − x^t | F^t] and E_{I^t}[‖x^{t+1} − x^t‖² | F^t] with their simplifications in (16) and (17), respectively, to write
\[ \mathbb{E}_{\mathcal{I}^t}\big[F(x^{t+1}) \,\big|\, \mathcal{F}^t\big] \le F(x^t) - r\gamma^t\, \nabla F(x^t)^T \nabla f(x^t, \Theta^t) + \frac{rM(\gamma^t)^2}{2}\, \|\nabla f(x^t, \Theta^t)\|^2. \tag{39} \]
Notice that the stochastic gradient ∇f(x^t, Θ^t) is an unbiased estimate of the average function gradient ∇F(x^t). Therefore, we obtain E_{Θ^t}[∇f(x^t, Θ^t) | F^t] = ∇F(x^t). Observing this relation and considering the assumption in (15), the expected value of (39) with respect to the set of realizations Θ^t can be written as
\[ \mathbb{E}_{\mathcal{I}^t, \Theta^t}\big[F(x^{t+1}) \,\big|\, \mathcal{F}^t\big] \le F(x^t) - r\gamma^t\, \|\nabla F(x^t)\|^2 + \frac{rM(\gamma^t)^2 K}{2}. \tag{40} \]
Subtracting the optimal objective function value F(x^*) from both sides of (40) implies that
\[ \mathbb{E}_{\mathcal{I}^t, \Theta^t}\big[F(x^{t+1}) - F(x^*) \,\big|\, \mathcal{F}^t\big] \le F(x^t) - F(x^*) - r\gamma^t\, \|\nabla F(x^t)\|^2 + \frac{rM(\gamma^t)^2 K}{2}. \tag{41} \]
We proceed to find a lower bound for the gradient norm ‖∇F(x^t)‖ in terms of the objective value error F(x^t) − F(x^*). Assumption 1 states that the average objective function F is strongly convex with constant m > 0. Therefore, for any y, z ∈ ℝ^p we can write
\[ F(y) \ge F(z) + \nabla F(z)^T (y - z) + \frac{m}{2}\, \|y - z\|^2. \tag{42} \]
For fixed z, the right hand side of (42) is a quadratic function of y whose minimizing argument we can find by setting its gradient to zero. Doing so yields the minimizer ŷ = z − (1/m)∇F(z), implying that for all y we must have
\[ F(y) \ge F(z) + \nabla F(z)^T (\hat{y} - z) + \frac{m}{2}\, \|\hat{y} - z\|^2 = F(z) - \frac{1}{2m}\, \|\nabla F(z)\|^2. \tag{43} \]
Observe that the bound in (43) holds true for all y and z. Setting the values y = x^* and z = x^t in (43) and rearranging the terms yields a lower bound for the squared gradient norm ‖∇F(x^t)‖² as
\[ \|\nabla F(x^t)\|^2 \ge 2m\big(F(x^t) - F(x^*)\big). \tag{44} \]
Substituting the lower bound in (44) for the squared gradient norm ‖∇F(x^t)‖² in (41) yields the claim in (18).
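As a quick sanity check, the bound (44) holds with equality for the quadratic F(x) = (m/2)‖x‖² (a toy function assumed here for illustration), whose minimizer is x^* = 0:

import numpy as np

m = 0.5
x = np.array([1.0, -2.0, 3.0])
grad = m * x                    # gradient of F(x) = (m/2)||x||^2
gap = 0.5 * m * float(x @ x)    # F(x) - F(x*) with x* = 0
assert np.isclose(float(grad @ grad), 2 * m * gap)   # (44) with equality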
A.3 Proof of Theorem 4
We use the relationship in (18) to build a supermartingale sequence. To do so, define the stochastic process α^t as
\[ \alpha^t := F(x^t) - F(x^*) + \frac{rMK}{2} \sum_{u=t}^{\infty} (\gamma^u)^2. \tag{45} \]
Note that α^t is well-defined because \(\sum_{u=t}^{\infty} (\gamma^u)^2 \le \sum_{u=0}^{\infty} (\gamma^u)^2 < \infty\) is summable. Further define the sequence β^t with values
\[ \beta^t := 2m\gamma^t r\, \big(F(x^t) - F(x^*)\big). \tag{46} \]
The definitions of the sequences α^t and β^t in (45) and (46), respectively, and the inequality in (18) imply that the expected value of α^{t+1} given F^t can be written as
\[ \mathbb{E}\big[\alpha^{t+1} \,\big|\, \mathcal{F}^t\big] \le \alpha^t - \beta^t. \tag{47} \]
Since the sequences α^t and β^t are nonnegative, it follows from (47) that they satisfy the conditions of the supermartingale convergence theorem; see, e.g., Theorem E7.4 of Solo and Kong (1994). Therefore, we obtain that: (i) the sequence α^t converges almost surely to a limit; (ii) the sum \(\sum_{t=0}^{\infty} \beta^t < \infty\) is almost surely finite. The latter result yields
\[ \sum_{t=0}^{\infty} 2m\gamma^t r\, \big(F(x^t) - F(x^*)\big) < \infty \quad \text{a.s.} \tag{48} \]
Since the sequence of step sizes is non-summable, there exists a subsequence of the sequence F(x^t) − F(x^*) which converges to null. This observation is equivalent to almost sure convergence of lim inf F(x^t) − F(x^*) to null,
\[ \liminf_{t\to\infty}\; F(x^t) - F(x^*) = 0 \quad \text{a.s.} \tag{49} \]
Based on the martingale convergence theorem applied to the sequences α^t and β^t in relation (47), the sequence α^t almost surely converges to a limit. Consider the definition of α^t in (45). Observe that the sum \(\sum_{u=t}^{\infty} (\gamma^u)^2\) is deterministic and its limit is null. Therefore, the sequence of objective function value errors F(x^t) − F(x^*) almost surely converges to a limit. This observation in association with the result in (49) implies that the whole sequence F(x^t) − F(x^*) converges almost surely to null,
\[ \lim_{t\to\infty}\; F(x^t) - F(x^*) = 0 \quad \text{a.s.} \tag{50} \]
The last step is to prove almost sure convergence of the sequence ‖x^t − x^*‖² to null, as a result of the limit in (50). To do so, we proceed by proving a lower bound for the objective function value error F(x^t) − F(x^*) in terms of the squared norm error ‖x^t − x^*‖². According to the strong convexity assumption, we can write the following inequality:
\[ F(x^t) \ge F(x^*) + \nabla F(x^*)^T (x^t - x^*) + \frac{m}{2}\, \|x^t - x^*\|^2. \tag{51} \]
Observe that the gradient at the optimal point is the null vector, i.e., ∇F(x^*) = 0. This observation and rearranging the terms in (51) imply that
\[ F(x^t) - F(x^*) \ge \frac{m}{2}\, \|x^t - x^*\|^2. \tag{52} \]
The upper bound in (52) for the squared norm ‖x^t − x^*‖², in association with the fact that the sequence F(x^t) − F(x^*) almost surely converges to null, leads to the conclusion that the sequence ‖x^t − x^*‖² almost surely converges to zero. Hence, the claim in (19) is valid. The next step is to study the convergence rate of RAPSA in expectation. In this step we assume that the diminishing stepsize is defined as γ^t = γ⁰T⁰/(t + T⁰). Recall the inequality in (18). Substitute γ^t by γ⁰T⁰/(t + T⁰) and compute the expected value of (18) given F⁰ to obtain
\[ \mathbb{E}\big[F(x^{t+1}) - F(x^*)\big] \le \left(1 - \frac{2mr\gamma^0 T^0}{t + T^0}\right) \mathbb{E}\big[F(x^t) - F(x^*)\big] + \frac{rMK(\gamma^0 T^0)^2}{2(t + T^0)^2}. \tag{53} \]
We use the following lemma to show that the result in (53) implies sublinear convergence of the sequence of expected objective value errors E[F(x^t) − F(x^*)].
Lemma 9  Let c > 1, b > 0, and t₀ > 0 be given constants and u^t ≥ 0 be a nonnegative sequence that satisfies
\[ u^{t+1} \le \left(1 - \frac{c}{t + t_0}\right) u^t + \frac{b}{(t + t_0)^2}, \tag{54} \]
for all times t ≥ 0. The sequence u^t is then bounded as
\[ u^t \le \frac{Q}{t + t_0}, \tag{55} \]
for all times t ≥ 0, where the constant Q is defined as Q := max{b/(c − 1), t₀u⁰}.

Proof: See Section 2 in Nemirovski et al. (2009).
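Lemma 9 is straightforward to validate numerically by iterating (54) with equality and testing the envelope (55); the constants below are arbitrary illustrative choices satisfying c > 1:

c, b, t0, u0 = 2.0, 5.0, 10.0, 1.0
Q = max(b / (c - 1), t0 * u0)
u = u0
for t in range(10_000):
    assert u <= Q / (t + t0) + 1e-12                  # bound (55) at time t
    u = (1 - c / (t + t0)) * u + b / (t + t0) ** 2    # recursion (54) with equality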
Lemma 9 shows that if a sequence u^t satisfies the condition in (54), then it converges to null at least at the rate O(1/t). By assigning the values t₀ = T⁰, u^t = E[F(x^t) − F(x^*)], c = 2mrγ⁰T⁰, and b = rMK(γ⁰T⁰)²/2, the relation in (53) implies that the inequality in (54) is satisfied for the case that 2mrγ⁰T⁰ > 1. Therefore, the result in (55) holds and we can conclude that
\[ \mathbb{E}\big[F(x^t) - F(x^*)\big] \le \frac{C}{t + T^0}, \tag{56} \]
where the constant C is defined as
\[ C = \max\left\{ \frac{rMK(\gamma^0 T^0)^2}{4rm\gamma^0 T^0 - 2},\; T^0\big(F(x^0) - F(x^*)\big) \right\}. \tag{57} \]
A.4 Proof of Theorem 5
To prove the claim in (22) we use the relationship in (18) (Proposition 3) to construct a supermartingale. Define the stochastic process α^t with values
\[ \alpha^t := \big(F(x^t) - F(x^*)\big) \times \mathbb{1}\left\{ \min_{u \le t} F(x^u) - F(x^*) > \frac{\gamma M K}{4m} \right\}. \tag{58} \]
The process α^t tracks the optimality gap F(x^t) − F(x^*) until the gap becomes smaller than γMK/4m for the first time, at which point it becomes α^t = 0. Notice that the stochastic process α^t is always non-negative, i.e., α^t ≥ 0. Likewise, we define the stochastic process β^t as
\[ \beta^t := 2\gamma m r \left( F(x^t) - F(x^*) - \frac{\gamma M K}{4m} \right) \times \mathbb{1}\left\{ \min_{u \le t} F(x^u) - F(x^*) > \frac{\gamma M K}{4m} \right\}, \tag{59} \]
which follows 2γmr(F(x^t) − F(x^*) − γMK/4m) until the time that the optimality gap F(x^t) − F(x^*) becomes smaller than γMK/4m for the first time. After this moment the stochastic process β^t becomes null. According to the definition of β^t in (59), the stochastic process satisfies β^t ≥ 0 for all t ≥ 0. Based on the relationship (18) and the definitions of the stochastic processes α^t and β^t in (58) and (59), we obtain that for all times t ≥ 0
\[ \mathbb{E}\big[\alpha^{t+1} \,\big|\, \mathcal{F}^t\big] \le \alpha^t - \beta^t. \tag{60} \]
To check the validity of (60) we first consider the case that min_{u≤t} F(x^u) − F(x^*) > γMK/4m holds. In this scenario we can simplify the stochastic processes in (58) and (59) as α^t = F(x^t) − F(x^*) and β^t = 2γmr(F(x^t) − F(x^*) − γMK/4m). Therefore, according to the inequality in (18), the result in (60) is valid. The second scenario that we check is min_{u≤t} F(x^u) − F(x^*) ≤ γMK/4m. Based on the definitions of the stochastic processes α^t and β^t, both of these two sequences are equal to 0. Further, notice that when α^t = 0, it follows that α^{t+1} = 0. Hence, the relationship in (60) is true. Given the relation in (60) and the non-negativity of the stochastic processes α^t and β^t, we obtain that α^t is a supermartingale. The supermartingale convergence theorem yields: (i) the sequence α^t converges to a limit almost surely; (ii) the sum \(\sum_{t=1}^{\infty} \beta^t\) is finite almost surely. The latter result implies that the sequence β^t converges to null almost surely, i.e.,
\[ \lim_{t\to\infty} \beta^t = 0 \quad \text{a.s.} \tag{61} \]
Based on the definition of β^t in (59), the limit in (61) is true if one of the following events holds: (i) the indicator function is null for large t; (ii) the limit lim_{t→∞}(F(x^t) − F(x^*) − γMK/4m) = 0 holds true. Either of these two events implies that
\[ \liminf_{t\to\infty}\; F(x^t) - F(x^*) \le \frac{\gamma M K}{4m} \quad \text{a.s.} \tag{62} \]
Therefore, the claim in (22) is valid. The result in (62) shows that the objective function value sequence F(x^t) almost surely converges to a neighborhood of the optimal objective function value F(x^*). We proceed to prove the result in (23). Compute the expected value of (18) given F⁰ and set γ^t = γ to obtain
\[ \mathbb{E}\big[F(x^{t+1}) - F(x^*)\big] \le (1 - 2m\gamma r)\, \mathbb{E}\big[F(x^t) - F(x^*)\big] + \frac{rMK\gamma^2}{2}. \tag{63} \]
Notice that the expression in (63) provides an upper bound for the expected objective function error E[F(x^{t+1}) − F(x^*)] in terms of its previous value E[F(x^t) − F(x^*)] and an error term. Rewriting the relation in (63) for step t − 1 leads to
\[ \mathbb{E}\big[F(x^t) - F(x^*)\big] \le (1 - 2m\gamma r)\, \mathbb{E}\big[F(x^{t-1}) - F(x^*)\big] + \frac{rMK\gamma^2}{2}. \tag{64} \]
Substituting the upper bound in (64) for the expectation E[F(x^t) − F(x^*)] in (63) yields an upper bound for the expected error E[F(x^{t+1}) − F(x^*)]:
\[ \mathbb{E}\big[F(x^{t+1}) - F(x^*)\big] \le (1 - 2m\gamma r)^2\, \mathbb{E}\big[F(x^{t-1}) - F(x^*)\big] + \frac{rMK\gamma^2}{2}\big(1 + (1 - 2mr\gamma)\big). \tag{65} \]
By recursively applying the steps in (64)-(65) we can bound the expected objective function error E[F(x^{t+1}) − F(x^*)] in terms of the initial objective function error F(x⁰) − F(x^*) and the accumulation of the errors as
\[ \mathbb{E}\big[F(x^{t+1}) - F(x^*)\big] \le (1 - 2m\gamma r)^{t+1}\, \big(F(x^0) - F(x^*)\big) + \frac{rMK\gamma^2}{2} \sum_{u=0}^{t} (1 - 2mr\gamma)^u. \tag{66} \]
Substituting t by t − 1 and simplifying the sum on the right hand side of (66) yields
\[ \mathbb{E}\big[F(x^t) - F(x^*)\big] \le (1 - 2m\gamma r)^{t}\, \big(F(x^0) - F(x^*)\big) + \frac{MK\gamma}{4m}\big(1 - (1 - 2mr\gamma)^{t}\big). \tag{67} \]
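The unrolling from (63) to the closed form (67) can be checked numerically by iterating the recursion with equality; the constants below are arbitrary illustrative values:

m, r, gamma, M, K, e0 = 1.0, 0.25, 0.1, 2.0, 4.0, 3.0
a = 1 - 2 * m * gamma * r                 # contraction factor in (63)
cst = r * M * K * gamma ** 2 / 2          # additive error term in (63)
e = e0
for t in range(1, 51):
    e = a * e + cst                       # iterate (63) with equality
    closed = a ** t * e0 + (M * K * gamma / (4 * m)) * (1 - a ** t)   # (67)
    assert abs(e - closed) < 1e-9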
Observing that the term 1 − (1 − 2mrγ)^t on the right hand side of (67) is strictly smaller than 1 for the stepsize γ < 1/(2mr), the claim in (23) follows.

Appendix B. Proof of Results Leading to Theorem 8

B.1 Proof of Lemma 6

Proof: Recall that the components of vector x^{t+1} are equal to the components of x^t for the coordinates that are not updated at step t, i.e., i ∉ I^t. For the updated coordinates i ∈ I^t we know that
\[ x_i^{t+1} = x_i^t - \gamma^t\, \nabla_{x_i} f(x^{t-\tau}, \theta^{t-\tau}). \]
Therefore, B − 1 blocks of the vector x^{t+1} − x^t are 0 and only one block is given by −γ^t ∇_{x_i} f(x^{t−τ}, θ^{t−τ}). Since the corresponding processor picks its block uniformly at random from the B blocks, we obtain that the expected value of the difference x^{t+1} − x^t with respect to the index of the block at time t is given by
\[ \mathbb{E}_{\mathcal{I}^t}\big[x^{t+1} - x^t \,\big|\, \mathcal{F}^t\big] = \frac{1}{B}\big( -\gamma^t\, \nabla f(x^{t-\tau}, \Theta^{t-\tau}) \big). \tag{68} \]
Substituting the simplification in (68) in place of (35) in the proof of Lemma 2 and simplifying the resulting expression yields the claim in (28). To prove the claim in (29) we can use the same argument as in the proof of (28) to show that
\[ \mathbb{E}_{\mathcal{I}^t}\big[\|x^{t+1} - x^t\|^2 \,\big|\, \mathcal{F}^t\big] = \frac{(\gamma^t)^2}{B}\, \|\nabla f(x^{t-\tau}, \Theta^{t-\tau})\|^2, \tag{69} \]
which completes the proof.
B.2 Proof of Proposition 7
By considering the Taylor expansion of F(x^{t+1}) near the point x^t and observing the Lipschitz continuity of the gradients ∇F with constant M, we obtain that the average objective function F(x^{t+1}) is bounded above by
\[ F(x^{t+1}) \le F(x^t) + \nabla F(x^t)^T (x^{t+1} - x^t) + \frac{M}{2}\, \|x^{t+1} - x^t\|^2. \tag{70} \]
Compute the expectation of both sides of (70) with respect to the random indexing set I^t ⊂ {1, . . . , B} associated with the chosen blocks, given the observed set of information F^t. Substitute E_{I^t}[x^{t+1} − x^t | F^t] and E_{I^t}[‖x^{t+1} − x^t‖² | F^t] with their simplifications in (28) and (29), respectively, to write
\[ \mathbb{E}_{\mathcal{I}^t}\big[F(x^{t+1}) \,\big|\, \mathcal{F}^t\big] \le F(x^t) - \frac{\gamma^t}{B}\, \nabla F(x^t)^T \nabla f(x^{t-\tau}, \Theta^{t-\tau}) + \frac{M(\gamma^t)^2}{2B}\, \|\nabla f(x^{t-\tau}, \Theta^{t-\tau})\|^2. \tag{71} \]
Notice that the stochastic gradient ∇f(x^{t−τ}, Θ^{t−τ}) is an unbiased estimate of the average function gradient ∇F(x^{t−τ}). Therefore, we obtain E[∇f(x^{t−τ}, Θ^{t−τ}) | F^t] = ∇F(x^{t−τ}). Observing this relation and considering the assumption in (15), the expected value of (71) given the sigma algebra F^t can be written as
\[ \mathbb{E}\big[F(x^{t+1}) \,\big|\, \mathcal{F}^t\big] \le F(x^t) - \frac{\gamma^t}{B}\, \nabla F(x^t)^T \nabla F(x^{t-\tau}) + \frac{M(\gamma^t)^2 K}{2B}. \tag{72} \]
By adding and subtracting the term (γ^t/B)‖∇F(x^t)‖² on the right hand side of (72) we obtain
\[ \mathbb{E}\big[F(x^{t+1}) \,\big|\, \mathcal{F}^t\big] \le F(x^t) - \frac{\gamma^t}{B}\, \|\nabla F(x^t)\|^2 + \frac{\gamma^t}{B} \left( \|\nabla F(x^t)\|^2 - \nabla F(x^t)^T \nabla F(x^{t-\tau}) \right) + \frac{M(\gamma^t)^2 K}{2B}. \tag{73} \]
Observe that the third term on the right hand side of (73) is the directional error due to the presence of delays from asynchronicity. We proceed to find an upper bound for the expression ‖∇F(x^t)‖² − ∇F(x^t)^T∇F(x^{t−τ}), which shows that the error due to delay may be mitigated. To do so, notice that we can write
\[ \|\nabla F(x^t)\|^2 - \nabla F(x^t)^T \nabla F(x^{t-\tau}) = \nabla F(x^t)^T \big( \nabla F(x^t) - \nabla F(x^{t-\tau}) \big) \le \|\nabla F(x^t)\|\, \|\nabla F(x^t) - \nabla F(x^{t-\tau})\|, \tag{74} \]
where for the inequality we have used the Cauchy-Schwarz inequality. Apply the fact that the gradient of the objective function is M-Lipschitz continuous, which implies that ‖∇F(x^t) − ∇F(x^{t−τ})‖ ≤ M‖x^t − x^{t−τ}‖. Substituting the upper bound M‖x^t − x^{t−τ}‖ for ‖∇F(x^t) − ∇F(x^{t−τ})‖ into (74) we obtain
\[ \|\nabla F(x^t)\|^2 - \nabla F(x^t)^T \nabla F(x^{t-\tau}) \le M\, \|\nabla F(x^t)\|\, \|x^t - x^{t-\tau}\|. \tag{75} \]
The difference norm ‖x^t − x^{t−τ}‖ is equivalent to ‖\sum_{s=t−τ}^{t−1}(x^{s+1} − x^s)‖, which can be bounded above by \(\sum_{s=t-\tau}^{t-1}\|x^{s+1} - x^s\|\) by the triangle inequality. Therefore,
\[ \|\nabla F(x^t)\|^2 - \nabla F(x^t)^T \nabla F(x^{t-\tau}) \le M\, \|\nabla F(x^t)\| \sum_{s=t-\tau}^{t-1} \|x^{s+1} - x^s\|. \tag{76} \]
Substitute the upper bound in (76) for ‖∇F(x^t)‖² − ∇F(x^t)^T∇F(x^{t−τ}) into (73) to obtain
\[ \mathbb{E}\big[F(x^{t+1}) \,\big|\, \mathcal{F}^t\big] \le F(x^t) - \frac{\gamma^t}{B}\, \|\nabla F(x^t)\|^2 + \frac{M\gamma^t}{B}\, \|\nabla F(x^t)\| \sum_{s=t-\tau}^{t-1} \|x^{s+1} - x^s\| + \frac{M(\gamma^t)^2 K}{2B}. \tag{77} \]
Note that for any positive scalars a, b, and ρ the inequality ab ≤ (ρ/2)a² + (1/2ρ)b² holds. If we set a := ‖∇F(x^t)‖ and b := \(\sum_{s=t-\tau}^{t-1}\|x^{s+1} - x^s\|\), we obtain that
\[ \|\nabla F(x^t)\| \sum_{s=t-\tau}^{t-1} \|x^{s+1} - x^s\| \le \frac{\rho}{2}\, \|\nabla F(x^t)\|^2 + \frac{1}{2\rho} \left( \sum_{s=t-\tau}^{t-1} \|x^{s+1} - x^s\| \right)^2 \le \frac{\rho}{2}\, \|\nabla F(x^t)\|^2 + \frac{\tau}{2\rho} \sum_{s=t-\tau}^{t-1} \|x^{s+1} - x^s\|^2, \tag{78} \]
where the last inequality is an application of the Cauchy-Schwarz inequality to the second term on the right hand side of the first line in (78). Now substituting the upper bound in (78) into (77) yields
\[ \mathbb{E}\big[F(x^{t+1}) \,\big|\, \mathcal{F}^t\big] \le F(x^t) - \left( \frac{\gamma^t}{B} - \frac{\rho M \gamma^t}{2B} \right) \|\nabla F(x^t)\|^2 + \frac{\tau M \gamma^t}{2\rho B} \sum_{s=t-\tau}^{t-1} \|x^{s+1} - x^s\|^2 + \frac{M(\gamma^t)^2 K}{2B}. \tag{79} \]
Compute the expected value of both sides of (79) given the sigma-algebra F^{t−1} to obtain
\[ \mathbb{E}\big[F(x^{t+1}) \,\big|\, \mathcal{F}^{t-1}\big] \le \mathbb{E}\big[F(x^t) \,\big|\, \mathcal{F}^{t-1}\big] - \left( \frac{\gamma^t}{B} - \frac{\rho M \gamma^t}{2B} \right) \mathbb{E}\big[\|\nabla F(x^t)\|^2 \,\big|\, \mathcal{F}^{t-1}\big] + \frac{\tau M \gamma^t}{2\rho B}\, \mathbb{E}\Bigg[ \sum_{s=t-\tau}^{t-1} \|x^{s+1} - x^s\|^2 \,\Bigg|\, \mathcal{F}^{t-1} \Bigg] + \frac{M(\gamma^t)^2 K}{2B}, \tag{80} \]
which can be simplified as
\[ \mathbb{E}\big[F(x^{t+1}) \,\big|\, \mathcal{F}^{t-1}\big] \le \mathbb{E}\big[F(x^t) \,\big|\, \mathcal{F}^{t-1}\big] - \left( \frac{\gamma^t}{B} - \frac{\rho M \gamma^t}{2B} \right) \mathbb{E}\big[\|\nabla F(x^t)\|^2 \,\big|\, \mathcal{F}^{t-1}\big] + \frac{\tau M \gamma^t}{2\rho B}\, \mathbb{E}\Bigg[ \sum_{s=t-\tau}^{t-2} \|x^{s+1} - x^s\|^2 \,\Bigg|\, \mathcal{F}^{t-1} \Bigg] + \frac{\tau M \gamma^t (\gamma^{t-1})^2 K}{2\rho B^2} + \frac{M(\gamma^t)^2 K}{2B}. \tag{81} \]
Repeating the same argument up to time t − τ yields
\[ \mathbb{E}\big[F(x^{t+1}) \,\big|\, \mathcal{F}^{t-\tau}\big] \le \mathbb{E}\big[F(x^t) \,\big|\, \mathcal{F}^{t-\tau}\big] - \left( \frac{\gamma^t}{B} - \frac{\rho M \gamma^t}{2B} \right) \mathbb{E}\big[\|\nabla F(x^t)\|^2 \,\big|\, \mathcal{F}^{t-\tau}\big] + \frac{\tau M \gamma^t K}{2\rho B^2} \sum_{s=t-\tau}^{t-1} (\gamma^s)^2 + \frac{M(\gamma^t)^2 K}{2B}. \tag{82} \]
Notice that the sequence of stepsizes γ^t is decreasing, so the sum \(\sum_{s=t-\tau}^{t-1}(\gamma^s)^2\) in (82) can be bounded above by τ(γ^{t−τ})². Applying this substitution and subtracting the optimal objective function value F(x^*) from both sides of the resulting expression leads to
\[ \mathbb{E}\big[F(x^{t+1}) - F(x^*) \,\big|\, \mathcal{F}^{t-\tau}\big] \le \mathbb{E}\big[F(x^t) - F(x^*) \,\big|\, \mathcal{F}^{t-\tau}\big] - \left( \frac{\gamma^t}{B} - \frac{\rho M \gamma^t}{2B} \right) \mathbb{E}\big[\|\nabla F(x^t)\|^2 \,\big|\, \mathcal{F}^{t-\tau}\big] + \frac{\tau^2 M \gamma^t K (\gamma^{t-\tau})^2}{2\rho B^2} + \frac{M(\gamma^t)^2 K}{2B}. \tag{83} \]
We make use of the fact that the average function F(x) is m-strongly convex by applying the relation ‖∇F(x^t)‖² ≥ 2m(F(x^t) − F(x^*)) to the expression in (83). Therefore,
\[ \mathbb{E}\big[F(x^{t+1}) - F(x^*) \,\big|\, \mathcal{F}^{t-\tau}\big] \le \mathbb{E}\big[F(x^t) - F(x^*) \,\big|\, \mathcal{F}^{t-\tau}\big] - 2m\left( \frac{\gamma^t}{B} - \frac{\rho M \gamma^t}{2B} \right) \mathbb{E}\big[F(x^t) - F(x^*) \,\big|\, \mathcal{F}^{t-\tau}\big] + \frac{\tau^2 M \gamma^t K (\gamma^{t-\tau})^2}{2\rho B^2} + \frac{M(\gamma^t)^2 K}{2B}, \tag{84} \]
as stated in Proposition 7.
B.3 Proof of Theorem 8
Proof: We use the result in Proposition 7 to define a martingale difference sequence with delay. Begin by defining the non-negative stochastic processes α^t, β^t, and ζ^t for t ≥ 0 as
\[ \alpha^t := F(x^t) - F(x^*), \qquad \beta^t := \frac{2m\gamma^t}{B}\left(1 - \frac{\rho M}{2}\right) \big(F(x^t) - F(x^*)\big), \qquad \zeta^t := \frac{MK(\gamma^t)^2}{2B} + \frac{\tau^2 MK\gamma^t (\gamma^{t-\tau})^2}{2\rho B^2}. \tag{85} \]
According to the definitions in (85) and the inequality in (30) we can write
\[ \mathbb{E}\big[\alpha^{t+1} \,\big|\, \mathcal{F}^{t-\tau}\big] \le \mathbb{E}\big[\alpha^t \,\big|\, \mathcal{F}^{t-\tau}\big] - \mathbb{E}\big[\beta^t \,\big|\, \mathcal{F}^{t-\tau}\big] + \zeta^t. \tag{86} \]
Computing the expected value of both sides of (86) with respect to the initial sigma algebra, E[· | F⁰] = E[·], yields
\[ \mathbb{E}\big[\alpha^{t+1}\big] \le \mathbb{E}\big[\alpha^t\big] - \mathbb{E}\big[\beta^t\big] + \zeta^t. \tag{87} \]
Sum both sides of (87) from t = 0 to t = ∞ and consider the fact that ζ^t is summable and the sequence α^t is non-negative. Thus, we obtain that the series \(\sum_{t=0}^{\infty} \mathbb{E}[\beta^t] < \infty\) is finite. By the Monotone Convergence Theorem, we pull the expectation outside the summand to obtain that \(\mathbb{E}[\sum_{t=0}^{\infty} \beta^t] < \infty\). If we define \(Y^n := \sum_{t=0}^{n} \beta^t\), we obtain that Y^n ≥ 0 and Y^n ≤ Y^{n+1}. Thus, from the result \(\mathbb{E}[\sum_{t=0}^{\infty} \beta^t] < \infty\) we can conclude that \(\sum_{t=0}^{\infty} \beta^t < \infty\) with probability 1. Now considering the definition of β^t in (85) and the non-summability of the stepsizes, \(\sum_{t=0}^{\infty} \gamma^t = \infty\), we obtain that a subsequence of the sequence F(x^t) − F(x^*) almost surely converges to zero, i.e., the liminf of the sequence F(x^t) − F(x^*) is zero with probability 1,
\[ \liminf_{t\to\infty}\; F(x^t) - F(x^*) = 0 \quad \text{a.s.} \tag{88} \]
The next step is to study the convergence rate of asynchronous RAPSA in expectation. By setting γ^t = γ⁰T⁰/(t + T⁰) in (30) and computing the expected value given the initial sigma algebra F⁰ we obtain
\[ \mathbb{E}\big[F(x^{t+1}) - F(x^*)\big] \le \left(1 - \frac{2m\gamma^0 T^0}{B(t + T^0)}\left(1 - \frac{\rho M}{2}\right)\right) \mathbb{E}\big[F(x^t) - F(x^*)\big] + \frac{MK(\gamma^0 T^0)^2}{2B(t + T^0)^2} + \frac{\tau^2 MK(\gamma^0 T^0)^3}{2\rho B^2 (t + T^0)(t - \tau + T^0)^2}. \tag{89} \]
Observe that it is not hard to check that if t ≥ 2τ + 1, then the inequality (t − τ + T⁰)² > t + T⁰ holds, and we can substitute 1/((t − τ + T⁰)²) in (89) by the upper bound 1/(t + T⁰). Applying this substitution yields
\[ \mathbb{E}\big[F(x^{t+1}) - F(x^*)\big] \le \left(1 - \frac{2m\gamma^0 T^0}{B(t + T^0)}\left(1 - \frac{\rho M}{2}\right)\right) \mathbb{E}\big[F(x^t) - F(x^*)\big] + \frac{MK(\gamma^0 T^0)^2}{2B(t + T^0)^2} + \frac{\tau^2 MK(\gamma^0 T^0)^3}{2\rho B^2 (t + T^0)^2}. \tag{90} \]
We use the result in Lemma 9 to show sublinear convergence of the sequence of expected objective value errors E[F(x^t) − F(x^*)]. Lemma 9 shows that if a sequence u^t satisfies the condition in (54), then it converges to null at least at the rate O(1/t). By assigning the values t₀ = T⁰, u^t = E[F(x^t) − F(x^*)], c = (2mγ⁰T⁰/B)(1 − ρM/2), and b = MK(γ⁰T⁰)²/(2B) + τ²MK(γ⁰T⁰)³/(2ρB²), the relation in (90) implies that the inequality in (54) is satisfied for the case that c = (2mγ⁰T⁰/B)(1 − ρM/2) > 1. Therefore, the result in (55) holds and we can conclude that
\[ \mathbb{E}\big[F(x^t) - F(x^*)\big] \le \frac{C}{t + T^0}, \tag{91} \]
where the constant C is defined as
\[ C = \max\left\{ \frac{MK(\gamma^0 T^0)^2/(2B) + \tau^2 MK(\gamma^0 T^0)^3/(2\rho B^2)}{(2m\gamma^0 T^0/B)(1 - \rho M/2) - 1},\; T^0\big(F(x^0) - F(x^*)\big) \right\}, \]
as stated in (33).
Figure 1: Random parallel stochastic algorithm (RAPSA). At each iteration, processor P_i picks a random block from the set {x_1, . . . , x_B} and a random set of functions from the training set {f_1, . . . , f_N}. The functions drawn are used to evaluate a stochastic gradient component associated with the chosen block.

Figure 2: RAPSA on a linear regression (quadratic minimization) problem with signal dimension p = 1024 for N = 10³ iterations with mini-batch size L = 10 for different numbers of blocks B = {16, 32, 64, 128}, initialized as 10⁴ × 1, with constant step-size γ^t = γ = 10^{−2}. (a) Excess error F(x^t) − F(x^*) vs. iteration t; (b) excess error vs. features processed p̃^t. Convergence in terms of the number of iterations is best when the number of blocks updated per iteration equals the number of processors (B = 16, corresponding to parallelized SGD), but is comparable across the different cases in terms of the number of features processed. This shows that no price is paid in terms of convergence speed for reducing the computational complexity per iteration.

Figure 3: RAPSA on a linear regression problem with signal dimension p = 1024 for N = 10³ iterations with mini-batch size L = 10 for different numbers of blocks B = {16, 32, 64, 128}, using initialization x⁰ = 10⁴ × 1 and hybrid step-size γ^t = min(10^{−1.5}, 10^{−1.5} T̃₀/t) with annealing rate T̃₀ = 400. Convergence is faster with smaller B, which corresponds to a proportion of updated blocks r closer to 1, in terms of the number of iterations. Contrarily, in terms of the number of features processed, B = 128 has the best performance and B = 16 the worst, showing that updating fewer features/coordinates per iteration can lead to faster convergence in terms of the number of processed features.

Figure 4: ARAPSA on a linear regression problem with signal dimension p = 1024 for N = 10³ iterations with mini-batch size L = 10 for different numbers of blocks B = {16, 32, 64, 128}, using constant step-size γ^t = γ = 10^{−1} and initialization 10⁴ × 1. Convergence is comparable across the different cases in terms of the number of iterations, but in terms of the number of features processed, B = 128 has the best performance and B = 16 (corresponding to parallelized oL-BFGS) converges slowest.

Figure 5: ARAPSA on a linear regression problem with signal dimension p = 1024 for N = 10⁴ iterations with mini-batch size L = 10 for different numbers of blocks B = {16, 32, 64, 128}, using hybrid step-size γ^t = min(10^{−1.5}, 10^{−1.5} T̃₀/t) with annealing rate T̃₀ = 400. Convergence is comparable across the different cases in terms of the number of iterations, but in terms of the number of features processed, B = 128 has the best performance and B = 16 the worst. The rate of convergence for ARAPSA is empirically orders of magnitude higher than that of RAPSA.

Figure 6: A numerical comparison of RAPSA and ARAPSA on the linear estimation problem stated at the beginning of Section 6.1 for N = 10⁴ iterations with signal dimension p = 500 and constant step-size γ = 10^{−2}, when there are I = 16 processors and B = 64 blocks, meaning that one quarter of the elements of x are updated per iteration. (a) Excess error F(x^t) − F(x^*) vs. iteration t; (b) excess error vs. clock time (s).

Figure 8: RAPSA on MNIST data with constant step-size γ^t = γ = 10^{−0.5} and no mini-batching L = 1. Algorithm performance is best in terms of the number of iterations t when all blocks are used per step (parallelized SGD), but in terms of the number of features processed, the methods perform comparably. Thus RAPSA performs as well as SGD while breaking the complexity bottleneck in p, the dimension of the decision variable x.

Figure 12: A comparison of RAPSA and ARAPSA on the MNIST digit recognition problem for a binary training subset of size N = 10³ with mini-batch size L = 10 in the constant step-size scheme γ = 10^{−2}. The objective optimality gap F(x^t) − F(x^*) is shown with respect to the number of feature vectors processed tL (left) and actual elapsed time (right). While the performance difference between RAPSA and ARAPSA is not as large as in the linear estimation problem, we still observe that ARAPSA substantially accelerates the convergence of RAPSA for a standard machine learning problem.
References

Amir Beck and Luba Tetruashvili. On the convergence of block coordinate descent type methods. SIAM Journal on Optimization, 23(4):2037-2060, 2013.

Dimitri P. Bertsekas and John N. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods, volume 23. Prentice Hall, Englewood Cliffs, NJ, 1989.

Antoine Bordes, Léon Bottou, and Patrick Gallinari. SGD-QN: Careful quasi-Newton stochastic gradient descent. The Journal of Machine Learning Research, 10:1737-1754, 2009.

Charles G. Broyden, John E. Dennis, and Jorge J. Moré. On the local and superlinear convergence of quasi-Newton methods. IMA Journal of Applied Mathematics, 12(3):223-245, 1973.

Richard H. Byrd, Jorge Nocedal, and Ya-Xiang Yuan. Global convergence of a class of quasi-Newton methods on convex problems. SIAM Journal on Numerical Analysis, 24(5):1171-1190, 1987.

Aaron Defazio, Francis Bach, and Simon Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In Advances in Neural Information Processing Systems, pages 1646-1654, 2014.

John E. Dennis and Jorge J. Moré. A characterization of superlinear convergence and its application to quasi-Newton methods. Mathematics of Computation, 28(126):549-560, 1974.

Francisco Facchinei, Gesualdo Scutari, and Simone Sagratella. Parallel selective algorithms for nonconvex big data optimization. IEEE Transactions on Signal Processing, 63(7):1874-1889, 2015.

Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems, pages 315-323, 2013.

Yann LeCun and Corinna Cortes. The MNIST database of handwritten digits. URL http://yann.lecun.com/exdb/mnist/.

Dong-Hui Li and Masao Fukushima. A modified BFGS method and its global convergence in nonconvex minimization. Journal of Computational and Applied Mathematics, 129(1):15-35, 2001.

Dong C. Liu and Jorge Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(1-3):503-528, 1989.

Ji Liu, Stephen J. Wright, Christopher Ré, Victor Bittorf, and Srikrishna Sridhar. An asynchronous parallel stochastic coordinate descent algorithm. The Journal of Machine Learning Research, 16(1):285-322, 2015.

Zhaosong Lu and Lin Xiao. On the complexity analysis of randomized block-coordinate descent methods. Mathematical Programming, 152(1-2):615-642, 2015.

Zhi-Quan Luo and Paul Tseng. On the convergence of the coordinate descent method for convex differentiable minimization. Journal of Optimization Theory and Applications, 72(1):7-35, 1992.

Julien Mairal, Francis Bach, Jean Ponce, and Guillermo Sapiro. Online learning for matrix factorization and sparse coding. The Journal of Machine Learning Research, 11:19-60, 2010.

Aryan Mokhtari and Alejandro Ribeiro. RES: Regularized stochastic BFGS algorithm. IEEE Transactions on Signal Processing, 62(23):6089-6104, 2014.

Aryan Mokhtari and Alejandro Ribeiro. Global convergence of online limited memory BFGS. Journal of Machine Learning Research, 16:3151-3181, 2015. URL http://jmlr.org/papers/v16/mokhtari15a.html.

Arkadi Nemirovski, Anatoli Juditsky, Guanghui Lan, and Alexander Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574-1609, 2009.

Yu Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM Journal on Optimization, 22(2):341-362, 2012.

Benjamin Recht, Christopher Ré, Stephen Wright, and Feng Niu. Hogwild!: A lock-free approach to parallelizing stochastic gradient descent. In Advances in Neural Information Processing Systems, pages 693-701, 2011.

Peter Richtárik and Martin Takáč. Parallel coordinate descent methods for big data optimization. Mathematical Programming, pages 1-52, 2015.

Herbert Robbins and Sutton Monro. A stochastic approximation method. Annals of Mathematical Statistics, 22(3):400-407, 1951. doi: 10.1214/aoms/1177729586.

Geoffrey Sampson, Robin Haigh, and Eric Atwell. Natural language analysis by stochastic optimization: A progress report on Project APRIL. Journal of Experimental and Theoretical Artificial Intelligence, 1(4):271-287, 1990. doi: 10.1080/09528138908953710.

Chad Scherrer, Ambuj Tewari, Mahantesh Halappanavar, and David Haglin. Feature clustering for accelerating parallel coordinate descent. In Advances in Neural Information Processing Systems, pages 28-36, 2012.

Mark Schmidt, Nicolas Le Roux, and Francis Bach. Minimizing finite sums with the stochastic average gradient. arXiv preprint arXiv:1309.2388, 2013.

Nicol N. Schraudolph, Jin Yu, and Simon Günter. A stochastic quasi-Newton method for online convex optimization. In International Conference on Artificial Intelligence and Statistics, pages 436-443, 2007.

Shai Shalev-Shwartz and Tong Zhang. Accelerated mini-batch stochastic dual coordinate ascent. In Advances in Neural Information Processing Systems, pages 378-385, 2013.

Victor Solo and Xuan Kong. Adaptive Signal Processing Algorithms: Stability and Performance. Prentice-Hall, Inc., 1994.

Murat Taşan, Gabriel Musso, Tong Hao, Marc Vidal, Calum A. MacRae, and Frederick P. Roth. Selecting causal genes from genome-wide association studies via functionally coherent subnetworks. Nature Methods, 2014.

Paul Tseng. Convergence of a block coordinate descent method for nondifferentiable minimization. Journal of Optimization Theory and Applications, 109(3):475-494, 2001.

Yangyang Xu and Wotao Yin. A globally convergent algorithm for nonconvex optimization based on block coordinate update. arXiv preprint arXiv:1410.1386, 2014.

Yangyang Xu and Wotao Yin. Block stochastic gradient iteration for convex and nonconvex optimization. SIAM Journal on Optimization, 25(3):1686-1716, 2015.
Parallel stochastic decomposition algorithms for multi-agent systems. Yang Yang, Gesualdo Scutari, Daniel P Palomar, Signal Processing Advances in Wireless Communications (SPAWC), 2013 IEEE 14th Workshop on. IEEEYang Yang, Gesualdo Scutari, and Daniel P Palomar. Parallel stochastic decomposition algorithms for multi-agent systems. In Signal Processing Advances in Wireless Communications (SPAWC), 2013 IEEE 14th Workshop on, pages 180-184. IEEE, 2013.
| []
|
[
"Vacuum birefringence and dichroism in a strong plane-wave background",
"Vacuum birefringence and dichroism in a strong plane-wave background"
]
| [
"I A Aleksandrov \nDepartment of Physics\nSaint Petersburg State University\nUniversitetskaya Naberezhnaya 7/9199034Saint PetersburgRussia\n\nIoffe Institute\nPolitekhnicheskaya street 26194021Saint PetersburgRussia\n",
"V M Shabaev \nDepartment of Physics\nSaint Petersburg State University\nUniversitetskaya Naberezhnaya 7/9199034Saint PetersburgRussia\n\nNational Research Centre \"Kurchatov Institute\" B.P. Konstantinov Petersburg Nuclear Physics Institute\nGatchina, Leningrad district 188300Russia\n"
]
| [
"Department of Physics\nSaint Petersburg State University\nUniversitetskaya Naberezhnaya 7/9199034Saint PetersburgRussia",
"Ioffe Institute\nPolitekhnicheskaya street 26194021Saint PetersburgRussia",
"Department of Physics\nSaint Petersburg State University\nUniversitetskaya Naberezhnaya 7/9199034Saint PetersburgRussia",
"National Research Centre \"Kurchatov Institute\" B.P. Konstantinov Petersburg Nuclear Physics Institute\nGatchina, Leningrad district 188300Russia"
]
| []
| In the present study, we consider the effects of vacuum birefringence and dichroism in strong electromagnetic fields. According to quantum electrodynamics, the vacuum state exhibits different refractive properties depending on the probe photon polarization and one also obtains different probabilities of the photon decay via production of electron-positron pairs. Here we investigate these two phenomena by means of several different approaches to computing the polarization operator. The external field is assumed to be a linearly polarized plane electromagnetic wave of arbitrary amplitude and frequency. Varying the probe-photon energy and the field parameters, we thoroughly examine the validity of the locally-constant field approximation (LCFA) and techniques involving perturbative expansions in terms of the external-field amplitude. Within the latter approach, we develop a numerical method based on a direct evaluation of the weak-field Feynman diagrams, which can be employed for investigating more complex external backgrounds. It is demonstrated that the polarization operator depends on two parameters: classical nonlinearity parameter ξ and the product η = ωq0/m 2 of the laser field frequency ω and the photon energy q0 (m is the electron mass). The domains of validity of the approximate techniques in the ξη plane are explicitly identified. | null | [
"https://export.arxiv.org/pdf/2303.16273v1.pdf"
]
| 257,804,517 | 2303.16273 | 56bf4695d301893087bcaf81dfaa9b1a0b89b95b |
Vacuum birefringence and dichroism in a strong plane-wave background
I A Aleksandrov
Department of Physics
Saint Petersburg State University
Universitetskaya Naberezhnaya 7/9199034Saint PetersburgRussia
Ioffe Institute
Politekhnicheskaya street 26194021Saint PetersburgRussia
V M Shabaev
Department of Physics
Saint Petersburg State University
Universitetskaya Naberezhnaya 7/9199034Saint PetersburgRussia
National Research Centre "Kurchatov Institute" B.P. Konstantinov Petersburg Nuclear Physics Institute
Gatchina, Leningrad district 188300Russia
Vacuum birefringence and dichroism in a strong plane-wave background
In the present study, we consider the effects of vacuum birefringence and dichroism in strong electromagnetic fields. According to quantum electrodynamics, the vacuum state exhibits different refractive properties depending on the probe photon polarization and one also obtains different probabilities of the photon decay via production of electron-positron pairs. Here we investigate these two phenomena by means of several different approaches to computing the polarization operator. The external field is assumed to be a linearly polarized plane electromagnetic wave of arbitrary amplitude and frequency. Varying the probe-photon energy and the field parameters, we thoroughly examine the validity of the locally-constant field approximation (LCFA) and techniques involving perturbative expansions in terms of the external-field amplitude. Within the latter approach, we develop a numerical method based on a direct evaluation of the weak-field Feynman diagrams, which can be employed for investigating more complex external backgrounds. It is demonstrated that the polarization operator depends on two parameters: classical nonlinearity parameter ξ and the product η = ωq0/m 2 of the laser field frequency ω and the photon energy q0 (m is the electron mass). The domains of validity of the approximate techniques in the ξη plane are explicitly identified.
I. INTRODUCTION
According to quantum electrodynamics (QED), the physical vacuum state contains quantum fluctuations of the electromagnetic and electron-positron fields, which can be viewed as spontaneous creation and annihilation of electron-positron pairs interacting with each other via virtual photons. Although these virtual particles are not observable themselves, their existence can manifest itself while interacting with external fields and real particles giving rise to a number of remarkable nonlinear phenomena such as light-by-light scattering [1][2][3][4], Sauter-Schwinger effect [2,5,6], and so on (for review, see, e.g., Refs. [7][8][9]). In this investigation, we consider propagation of a probe photon in vacuum in the presence of a strong external background. The latter polarizes the physical vacuum, so the probe photon effectively interacts with a nonlinear medium, which leads to the phenomena of vacuum birefringence and dichroism [10][11][12][13][14] which are in the focus of the present study (we note that the nontrivial properties of the vacuum state in the presence of real photons give also rise to recently discussed stimulated photon emission [15]).
Observing these processes in the laboratory represents currently an intriguing and challenging task. There are mainly two different approaches to probing vacuum birefringence. First, one can rely on unprecedented accuracy of experimental measurements in the optical domain, i.e., in the regime of relatively low probe-photon energies (see, e.g., Refs. [16][17][18][19][20][21]). From the theoretical viewpoint, this domain allows one to employ local approximations, i.e., to treat the external (laser) field as a locally constant background. The corresponding locally-constant field approximation (LCFA) has basically two different implementations based either on employing the exact expressions for the Heisenberg-Euler effective Lagrangian [22] or on using the local values of the polarization operator derived in constant crossed fields [23,24]. The second approach to vacuum birefringence involves high-energy probe photons [24][25][26]. The advantage of this technique appears due to large probabilities of the corresponding quantum processes resulting in large values of the experimental signal. On the other hand, it is significantly more difficult to perform measurements in the high-energy domain as, e.g., the Heisenberg-Euler approximation is only valid in the lowenergy domain. To properly assess the feasibility of the corresponding scenarios, one has to obtain accurate and reliable theoretical predictions.
In order to avoid approximate local treatment of the external electromagnetic field, one can model it with a plane-wave background allowing one to deduce explicit analytical expressions for the polarization tensor [13,14,23]. On the other hand, this simplified setup may not properly reflect the properties of real experimental conditions.
In the present study, we have two primary aims. First, we will thoroughly examine the plane-wave scenario by means of analytical nonperturbative expressions derived in Refs. [13,14,23]. We will compute the polarization tensor in a wide range of physical parameters governing the process under consideration: laser-field amplitude, laser frequency, and probephoton energy. Expanding the nonperturbative result in powers of the external-field amplitude, we will assess the accuracy of the calculations based on perturbation theory (PT). Besides, we will quantitatively analyze the validity of the LCFA in the two forms described above. Second, the polarization tensor will be directly evaluated via the corresponding Feynman diagrams. This approach is very important since it can allow one to consider other field configurations, which differ from a simple plane-wave scenario. In what follows, we will benchmark our direct computational procedures and also provide an additional insight into the analytical properties of the integrands involved in the Feynman diagrams. For instance, it will be demonstrated that the overlap between the branch cuts that appears for sufficiently high photon energies is closely related to the decay of the probe photon via production of electron-positron pairs. We also mention that e + e − pairs can be produced directly by a classical strong field, i.e., via the Sauter-Schwinger mechanism. The validity of the LCFA in this context was recently examined in Refs. [27][28][29][30].
The paper has the following structure. In Sec. II we describe the setup under consideration involving a probe photon and external plane-wave background. In Sec. III we present nonperturbative expressions which we employ in our numerical computations. In Sec. IV we calculate the leading-order contribution with respect to the external-field amplitude. Section V is devoted to the description of the two possible implementations of the LCFA. In Sec. VI we discuss how one can directly evaluate the leading-order Feynman diagrams. Section VII contains our numerical results obtained by means of the various techniques. Finally, we conclude in Sec. VIII.
Throughout the text, we employ the units = c = 1, α = e 2 /(4π) ≈ 1/137.
II. SETUP AND NOTATION
We assume that the external plane wave is polarized along the x axis and propagates in the z direction, i.e., it depends on ϕ = ωn µ x µ = ω(t − z), where ω is the laser frequency. The null vector n obeys n 0 = 1, n 2 = 0. The corresponding vector potential has the following form:
A(x) = A(ω(t − z))e x ,(1)A(ϕ) = E 0 ω sin ϕ,(2)
where E 0 is the field strength amplitude. We also introduce a dimensionless parameter ξ = |eE 0 |/(mω). The initial photon momentum q points in the opposite direction to n = e z , q = −q 0 e z . Accordingly, the initial 4-momentum of the photon is q µ = q 0 (1, 0, 0, −1) t . The final momentum will be denoted by k µ . In what follows, we will also employ the light-cone components which for arbitrary 4-vector v µ read
v + = v 0 + nv 2 ,(3)v − = v 0 − nv,(4)v ⊥ = v − (nv)n.(5)
The scalar product of two vectors can be evaluated via
vw ≡ v µ w µ = v + w − + v − w + − v ⊥ w ⊥ .(6)
For instance, n + = 1, n − = 0, n ⊥ = 0, and ϕ = ωx − . The amplitude S(q, k) of the process described by the diagram in Fig. 1 involves two photon wavefunctions defined as
f µ q (x) = 1 √ 2q 0 e −iqx ε µ (q),(7)
where ε µ (q) is the polarization 4-vector. The amplitude can be represented in the form
S(q, k) = 1 √ 4q 0 k 0 ε µ (q)i Π µν 0 (q, k) + Π µν (q, k) ε * ν (k).(8)
q k 1 Figure 1. Feynman diagram describing the leading-order contribution to the photon polarization operator. The amplitude of the process is proportional to the fine-structure constant α and exactly takes into account the interaction with the classical external background (double lines represent the dressed Green's functions).
Here Π µν 0 (q, k) denotes the zero-field contribution to the polarization operator, which corresponds to the diagram with the free-electron Green's functions describing vacuum polarization in the absence of external fields. This contribution diverges and requires a usual renormalization procedure. Since this term does not affect the processes of vacuum birefringence and dichroism, our task is to compute the fielddependent part Π µν (q, k), which is finite.
In what follows, we will evaluate Π µν (q, k) by means of several different techniques mentioned above. As will be seen below, the polarization operator involving ξ, ω, and q 0 depends, in fact, only on ξ and the product ωq 0 . We will consider ξ and η ≡ ωq 0 /m 2 as two independent dimensionless parameters governing the processes of vacuum birefringence and dichroism. We will also introduce the so-called quantum nonlinearity parameter χ = 2ξη which will be considered as a derived quantity χ(ξ, η).
III. NONPERTURBATIVE ANALYTICAL FORMULAS
In the case of a plane-wave external background, it is possible to compute the polarization tensor analytically. In Ref. [13] it was done by means of the operator approach. In Ref. [14] the calculations were performed in the case of a monochromatic plane wave. Recently, in Ref. [23] the results of Refs. [13,14] were confirmed by direct computations of the Feynman diagram in Fig. 1 with the aid of the exact Green's functions, which can be constructed from the Volkov solutions.
Here we will first employ the general expressions presented in Refs. [13,14,23]. Due to the symmetry of the external plane-wave field, it can only change the q + component of the photon momentum, so the amplitude corresponding to the Feynman diagram in Fig. 1
contains δ(k − − q − )δ(k ⊥ − q ⊥ ).
It turns out that the cumbersome expressions for the amplitude derived in Refs. [13,14,23] become relatively simple in the particular case of a circularly polarized plane-wave background. Due to the helicity conservation, the momentum component q + can change only by ±2ω or remain the same. It is not the case if the external field has a linear polarization since such a plane wave does not possess a well-defined helicity quantum number. Accordingly, the q + momentum component of the photon may change by an arbitrary integer number of ω. The general expression for the setup described above has the following form:
Π µν (q, k) = − 4π 2 α ω δ(k − − q − )δ(k ⊥ − q ⊥ ) 1 −1 dv ∞ 0 dτ τ ∞ −∞ dϕ e iΦ c 0 0 0 0 b + ∆b 0 0 0 0 b 0 0 0 0 c ,(9)where b = i τ + 1 2 kq (1 − e iτ β ) + 2m 2 τ ξ 2 µ e iτ β sin 2 (µωq 0 ) cos 2 ϕ,(10)∆b = 2m 2 ξ 2 sinc 2 (µωq 0 ) sin 2 ϕ − 2 sinc(2µωq 0 ) sin 2 ϕ − sin 2 (µωq 0 ) + sin 2 ϕ e iτ β ,(11)c = k 0 q 0 µ τ (1 − e iτ β ),(12)µ = 1 2 τ (1 − v 2 ),(13)Φ = k + − q + ω ϕ + 1 2 µkq − m 2 τ,(14)β = m 2 ξ 2 sinc 2 (µωq 0 ) sin 2 ϕ − 1 2 + 1 2 sinc(2µωq 0 ) cos 2ϕ .(15)
In what follows, we will be interested only in the elastic process, where k + = q + as the other channels are significantly suppressed (actually, they rather represent reactions involving photon merging or splitting than the phenomenon of birefringence).
To extract the particular process of elastic scattering, one has to isolate the zeroth-order Fourier harmonics with respect to ϕ dependence in the functions b, ∆b, and c, so the integration of exp(iΦ) yields the necessary delta-function. This can be straightforwardly attained with the aid of the Jacobi-Anger identity. The result reads
Π µν elastic (q, k) = −(2π) 3 αδ(k − q) 1 −1 dv ∞ 0 dτ τ e −im 2 τ c 0 0 0 0b + ∆b 0 0 0 0b 0 0 0 0c ,(17)whereb = i τ [1 − ΞJ 0 (A)] + m 2 τ ξ 2 µ sin 2 (µωq 0 )Ξ[J 0 (A) + iJ 1 (A)],(18)∆b = m 2 ξ 2 Ξ − 2 sin 2 (µωq 0 )J 0 (A) + [sinc 2 (µωq 0 ) − 2 sinc(2µωq 0 ) + 1][J 0 (A) − iJ 1 (A)] ,(19)c = q 2 0 µ τ [1 − ΞJ 0 (A)],(20)Ξ = exp i 2 m 2 τ ξ 2 [sinc 2 (µωq 0 ) − 1] ,(21)A = 1 2 m 2 τ ξ 2 [sinc(2µωq 0 ) − sinc 2 (µωq 0 )].(22)
Here J n are the Bessel functions of the first kind. We will assume hereinafter k µ = q µ . We also note that the elements Π 00 and Π 33 are equal, which preserves the gauge invariance and the Ward-Takahashi identity [31]. These components will not be evaluated in our study as they do not affect the phenomena under consideration.
The birefringent and dichroic properties of the vacuum in the presence of strong fields manifest themselves in the dif-ference between Π 11 and Π 22 elements: photon polarizations along the x and y axes correspond to different refractive and absorption indexes. In what follows, we will compute these elements. As was stated above, these quantities involve the three parameters ξ, ω, and q 0 , but they depend, in fact, on ξ and η = ωq 0 /m 2 as becomes evident from Eqs. (17)- (22). Figure 2. Feynman diagrams corresponding to the leading-order contribution within the PT expansion in terms of the external field (the amplitudes are proportional to ξ 2 ). The interaction with the classical external field is denoted by the cross. Depending on the energymomentum transfer at the cross vertices, the process is either elastic (2-to-2 process) or corresponds to k = q.
q k q k q k q k 1
IV. PERTURBATION THEORY
Here we will consider the leading-order term of Eq. (17) with respect to the small-ξ expansion. This contribution is proportional to ξ 2 and corresponds to the three Feynman diagrams displayed in Fig. 2. Expanding the function Ξ and the Bessel functions in Taylor series, one obtains
b LO = m 2 ξ 2 1 2 [sinc 2 (µωq 0 ) − 1] + τ µ sin 2 (µωq 0 ) ,(23)∆b LO = m 2 ξ 2 [−2 sin 2 (µωq 0 ) + sinc 2 (µωq 0 ) − 2 sinc(2µωq 0 ) + 1],(24)c LO = − i 2 q 2 0 µm 2 ξ 2 [sinc 2 (µωq 0 ) − 1].(25)
Here "LO" stands for "low order". It turns out that one can replace µ with Eq. (13) and perform the τ integration analyt-ically. Let us first introduce the following general representation:
Π µν elastic (q, k) = −(2π) 3 αδ(k − q)m 2 ξ 2 M µν .(26)
Within PT we find
M 11 LO = 1 −1 dv 2v 2 1 − v 2 I 1 (v) + 1 2 I 2 (v) + I 3 (v) ,(27)M 22 LO = 1 −1 dv 2 1 − v 2 I 1 (v) + 1 2 I 2 (v) ,(28)
where
I 1 (v) = ∞ 0 dt t sin 2 (γt)e −it = 1 4 ln 1 − 4γ 2 − iπ 4 θ(γ − 1/2),(29)I 2 (v) = ∞ 0 dt t [sinc 2 (γt) − 1]e −it = 3 2 − 1 2 1 + 1 4γ 2 ln 1 − 4γ 2 − 1 2γ ln 1 + 2γ 1 − 2γ + iπ 2 1 − 1 γ + 1 4γ 2 θ(γ − 1/2),(30)I 3 (v) = ∞ 0 dt t [1 + sinc 2 (γt) − 2 sinc(2γt)]e −it = − 1 2 + 1 2 1 − 1 4γ 2 ln 1 − 4γ 2 − iπθ(γ − 1/2) ,(31)γ = γ(v) = ωq 0 2m 2 (1 − v 2 ) = 1 2 η(1 − v 2 ).(32)
The expressions (27) and (28)
M 11 LO = − 4 45 ε 2 − 17 3150 ε 4 + O(ε 6 ),(33)M 22 LO = − 7 45 ε 2 − 131 9450 ε 4 + O(ε 6 ).(34)
In the high-energy limit, we obtain [ε ≡ 1/(2η) = m 2 /(2ωq 0 ) 1]
M 11 LO = 1 2 ln 2 ε + 1 − ln 2 + iπ 2 ln ε + 5 2 − ln 2 + 1 2 ln 2 2 − π 2 4 + iπ 2 (1 − ln 2) + iπε ln ε + − π 2 2 + iπ 2 (3 − 2 ln 2) ε + O(ε 2 ln 2 ε),(35)M 22 LO = 1 2 ln 2 ε + 1 − ln 2 + iπ 2 ln ε + 7 2 − ln 2 + 1 2 ln 2 2 − π 2 4 + iπ 2 (1 − ln 2) + iπε ln ε + − π 2 2 + iπ 2 (1 − 2 ln 2) ε + O(ε 2 ln 2 ε).(36)
While the low-energy result (33), (34) is real, the expressions (35) and (36) possess imaginary parts, which describe the process of photon decay. The imaginary part of the difference δM LO ≡ M 11 LO − M 22 LO ≈ −1 + iπε governs the dichroic properties of the vacuum and appears once η > 1. In Sec. VI we will discuss how the imaginary part appears in a direct evaluation of the Feynman diagrams in Fig. 2.
V. LOCALLY-CONSTANT FIELD APPROXIMATION
Here we will employ relatively simple closed-form expressions treating the external background as locally constant. There are basically two different approaches. The first one is based on calculating the polarization tensor in constant crossed fields and the using the actual spatiotemporal dependence of the plane-wave field (1) when integrating over ϕ. The second method employs the Heisenberg-Euler effective Lagrangian computed in a constant electromagnetic field and takes into account the leading-order quantum correction with respect to the field amplitude E 0 . The first approach is generally more accurate as it incorporates the higher-order terms in E 0 and involves the expression for the polarization operator which is derived for arbitrary photon energies q 0 . The second technique based on the Heisenberg-Euler Lagrangian is only valid for sufficiently low photon energies, when there is only a small momentum transfer into the e + e − loop in the diagram in Fig. 1. Besides, the applicability of this method is limited since it involves the PT expansion with respect to the field amplitude. In what follows, we will describe the both approaches and then thoroughly analyze their validity.
A. Polarization operator in constant crossed fields
In the setup under consideration, the vector potential (1) is assumed to be a monochromatic plane wave (2). If one replaces sin ϕ in Eq. (2) with ϕ, the external background will obviously become a combination of constant crossed electric and magnetic fields, E x = B y = −E 0 . In this case, one can also perform nonperturbative calculations of the polarization tensor [32][33][34] and then locally approximate a generic external background by constant crossed fields [23]. Applying this technique to the field configuration (2), one obtains
M 11 LCFA = 1 3πξ 2 1 −1 dv χ w 2/3 w − 1 g(v),(37)M 22 LCFA = 1 3πξ 2 1 −1 dv χ w 2/3 w + 2 g(v),(38)
where χ = 2ξη, w = 4/(1 − v 2 ), and
g(v) = π −π dϕf (u)(cos ϕ) 2/3 ,(39)u = w χ cos ϕ 2/3 ,(40)f (u) = i ∞ 0 dτ e −i(uτ +τ 3 /3) = πGi(u) + iπAi(u).(41)
Here Gi and Ai are the Scorer and Airy functions, respectively.
Note that the integrals in Eqs. (37) and (38) depend only on χ, i.e. the product ξη, which simplifies the further analysis. This fact is a well-known property of the LCFA [35]. This approximation is well justified if the parameter ξ is sufficiently large, so one can expect that the predictions (37) and (38) significantly differ from the exact nonperturbative result given in Eq. (17) once ξ 1. This issue will be discussed in detail in Sec. VII.
Finally, we present the asymptotic forms of Eqs. (37) and (38) in the case χ 1. One obtains
Re M 11 LCFA = − 4χ 2 45ξ 2 1 + 1 4 χ 2 + O(χ 4 ) ,(42)Re M 22 LCFA = − 7χ 2 45ξ 2 1 + 13 49 χ 2 + O(χ 4 ) ,(43)Im M 11 LCFA = − 3χ 3/2 8ξ 2 π 2 e −8/(3χ) 1 + O(χ) ,(44)Im M 22 LCFA = − 3χ 3/2 4ξ 2 π 2 e −8/(3χ) 1 + O(χ) . (45)
For small χ the imaginary part is exponentially suppressed corresponding to tiny probabilities of the photon decay. Note that the ratio χ/ξ coincides with ε = 2η in Eqs. (33) and (34), so the leading-order contribution is reproduced by the LCFA. Nevertheless, the validity of the LCFA and that of the PT expansion correspond to substantially different domains of parameters. Whereas for given ξ they both are accurate for sufficiently small η < η max (ξ), with increasing ξ the bound η max (ξ) increases in the case of the LCFA and decreases in the case of PT. This will be quantitatively demonstrated in Sec. VII. Finally, we note that both the LCFA and PT capture the imaginary part of the polarization tensor.
B. Heisenberg-Euler approximation
Another approach is based on the PT expansion of the polarization operator derived from the one-loop effective Lagrangian in the presence of a constant electromagnetic background [22]. The approximate formula for the ξ 2 contribution to the polarization tensor has the following form:
Π µν LCFA-HE (q, k) = α 45π e 2 m 4 d 4 x e i(k−q)x 4(qF ) µ (kF ) ν + 7(qG) µ (kG) ν .(46)
Here (kF ) µ ≡ k ρ F ρµ . The electromagnetic tensor F µν = ∂ µ A ν − ∂ ν A µ and the dual tensor G µν = (1/2)ε µνρσ F ρσ are evaluated at the spacetime point x according to the local treatment of the external field. In the case of the plane-wave background (1)
This exactly corresponds to the leading low-energy terms in Eqs. (33) and (34) and to the leading-order terms in Eqs. (42) and (43). In what follows, they will be denoted by M 11 LCFA-LO and M 22 LCFA-LO , respectively. Note that the leading-order LCFA expressions completely disregard the imaginary part of the polarization tensor, i.e., fail to describe the process of dichroism.
VI. DIRECT EVALUATION OF THE FEYNMAN DIAGRAMS
Here we will directly compute the Feynman diagrams depicted in Fig. 2. The corresponding amplitudes and accordingly the contributions to the polarization tensor are proportional to E 2 0 , i.e. ξ 2 [cf. Eq. (26)]. Each interaction vertex involves the energy-momentum transfer with the four-vector ±K, where K µ ≡ ωn µ is the four-momentum of the pho-tons that constitute the external plane wave. As we are interested in studying the elastic contributions, the two vertices in each diagram should correspond to one emission and one absorption, so the diagram represents essentially a two-totwo scattering process. Since one has to evaluate three diagrams, the leading-order matrix M µν LO is a sum of three terms,
M µν LO = M µν 1 + M µν 2 + M µν 3 .
Considering, for instance, the first diagram and using the Feynman rules, we obtain the following expression for M µν 1 :
M µν 1 = − i 8π 2 s=±1 d 4 pTr γ ν S(p+q/2−sK/2)γ 1 S(p+q/2+sK/2)γ µ S(p−q/2+sK/2)γ 1 S(p−q/2−sK/2) . (48)
Here s indicates at which of the two vertices the external-field photon is emitted (absorbed). The integration variables p µ are shifted, so that the integrand has a more symmetric form (cf. Ref. [36]). The electron propagator is given by
S(p) = γ µ p µ + m m 2 − p 2 − iε ,(49)
where ε → 0.
One can explicitly verify that the total expression for M µν LO depends only on the product ωq 0 , i.e. η = ωq 0 /m 2 , in accordance with Eqs. (27) and (28). Therefore, we will assume that q 0 = ω = √ ηm, so K = −q. Then Eq. (48) takes the form
M µν 1 = − i 8π 2 ∞ −∞ dz d 3 p Tr γ ν S(z, p + q)γ 1 S(z + q 0 , p)γ µ S(z, p − q)γ 1 S(z − q 0 , p) + γ ν S(z + q 0 , p)γ 1 S(z, p + q)γ µ S(z − q 0 , p)γ 1 S(z, p − q) .(50)
The trace contains denominators that for each p turn to zero at complex points z with small nonzero imaginary parts for nonzero values of ε. After the p integration, the trace as a function of z possesses six branch cuts depicted in Fig. 3 for q 0 < m. The z integration over the real axis in Eq. (50) can be, in fact, performed over any contour like that displayed in Fig. 3, provided it does not intersect any of the branch cuts. In the case q 0 < m (η < 1), one can, for instance, rotate the contour, so that it coincides with the imaginary axis. Substituting then z = iw, where w ∈ R, one can explicitly demonstrate that the total contribution M µν LO = M µν 1 + M µν 2 + M µν 3 is real in accordance with Eqs. (27) and (28).
In order to address the high-energy case η > 1, we employ the following numerical procedure. We change the order of the z and p integrations and first integrate over z ∈ R. Accordingly, the z integrand has a number of isolated poles ξ j − iσε where σ = ±1 and the real parts ξ j depend on p. In each vicinity (ξ j − δ, ξ j + δ) we perform the integration semianalytically by means of the Sokhotski-Plemelj identity. This allows us to set ε = 0 while performing the rest integrations numerically and avoid computational singularities.
Our procedure was also generalized to compute the diagrams for arbitrary independent q 0 and ω. The main steps here are generally the same. After that, we confirmed the results obtained by means of the technique described above. Finally, we note that the expression (50) has a similar form to the amplitude of photon emission via the so-called tadpole diagram (see Ref. [37], where it was evaluated in the regime η < 1).
VII. NUMERICAL RESULTS
We will now perform numerical calculations of the difference δM ≡ M 11 − M 22 , whose real and imaginary parts govern the effects of vacuum birefringence and dichroism, respectively. First, we will evaluate δM within the leading order with respect to the field amplitude. In this case, the results do not depend on ξ. In Fig. 4 we present δM as a function of η. First, one observes that the Heisenberg-Euler approximation within the leading order of perturbation theory can be accurate only in the low-energy regime. If one takes into account the 1/η 4 terms according to Eqs. (33) and (34), the results become slightly more accurate although they completely fail to reproduce the full PT results for η > 1. Second, the more general expressions (27) and (28) yield a nonzero imaginary part for η > 1, so the PT approach may allow one to describe the effects of dichroism. Finally, we note that our approach based on direct computations of the Feynman diagrams as described in Sec. VI provides exactly the same results as Eqs. (27) and (28), which benchmarks the corresponding numerical procedures. To judge whether the leading-order approximation is justified, one has to perform nonperturbative calculations for various values of ξ, which will be done next.
In Fig. 5 we display the real and imaginary parts of δM as a function of η for three different values of ξ: 0.1, 1.0, and 10.0. We refer to Eq. (17) as the exact result. First, we observe that the η dependence very nontrivially changes as a function of ξ, which cannot be taken into account by means of the PT approach. Whereas for ξ 1, this approximation provides indeed very accurate results within a broad range of η, for ξ 1, it fails to reproduce the exact values unless η 1. Second, as was mentioned above, the LCFA predictions have the form δM LCFA (ξ, η) = (1/ξ 2 )δM LCFA (1, ξη), so the (27) and (28)], by means of the Heisenberg-Euler approximation (47) and according to the low-energy expansions (33) and (34). The latter two approaches yield zero imaginary part.
different LCFA curves can be obtained by simply rescaling the plot axes. This approach does not allow one to describe the nontrivial structure that takes place for ξ 1, although it is accurate for very small η, where the expansions (42) and (43) are valid.
Let us now quantitatively identify the domains of validity of various approximations for describing the vacuum birefringence effects. In Fig. 6 we identify the values of ξ and η for which the approximate predictions match the exact results with a relative uncertainty on the level of 10%. First, let us discuss the PT approach, which yields the leading-order estimates (27) and (28). In the regime ξ 1, it is only valid for η 1. It turns out that in the corresponding domain of parameters χ 0.5. Since for large values of ξ one can employ the LCFA, it is possible to estimate the exact result for the real part of M µν by means of Eqs. (42) and (43). Comparing these with the low-energy asymptotic expansions (33) and (34), one can obtain the threshold value of χ. For instance, requiring that the relative uncertainty of PT be less than 10%, one obtains χ < (7/2)0.1 ≈ 0.59. According to our numerical analysis, this condition, in fact, reads χ < 0.55. In the regime ξ 1, the validity of the LCFA (37), (38) is very limited, so one has to directly compare the leading-order PT results with the nonperturbative predictions. In this domain, the applicability of perturbation theory is not solely governed by χ as can be seen in Fig. 6, where the domain of the PT applicability is no longer bounded by a straight line. Finally, we note that in the region ξ 1, even if the PT approach fails to reproduce the exact results for η ∼ 1, it may provide quite accurate predictions for sufficiently large values of η, where Re δM becomes close to −1 [see Fig. 5 (middle)]. Moreover, in this region the nonzero imaginary part of the polarization operator can also be obtained by means of perturbation theory.
In order to identify the validity domain of the leading-order Heisenberg-Euler approximation (47), it is sufficient to compare its predictions with the leading-order PT result (27), (28).
Since within these approaches the matrix M µν is independent of ξ, one should only determine the threshold value of η. For the 10% uncertainty level, it amounts to η max ≈ 0.44. The validity domain of the Heisenberg-Euler approximation is then the intersection of the region η < 0.44 and the validity domain of the PT approach.
The applicability of the LCFA (37), (38) corresponds to a much larger region than that where the Heisenberg-Euler approximation is justified. It not only describes the effect of birefringence in the low-energy domain but is also valid in the case of high-energy probe photons (η 1), provided ξ 1.
As was indicated above, the imaginary part of the polarization tensor, which is responsible for dichroic properties of the vacuum, cannot be estimated by means of the leading-order Heisenberg-Euler approximation (47). Nevertheless, both the PT approach and the LCFA (37), (38) are very useful herethey can be employed within the corresponding regions indicated in Fig. 6.
According to our results, the validity domain of the Heisenberg-Euler approximation is the smallest. The corresponding results can always be additionally confirmed by either perturbation theory or the LCFA based on the calculation of the polarization operator in constant crossed fields. The advantage of the latter approach is the possibility to consider larger values of η once ξ 1. Note also that a considerable part of the plot in Fig. 6 relates to large values of the parameter χ, which are not realistic at present. Nevertheless, given the logarithmic scale in the graph, the LCFA covers a domain of parameters which is substantially broader than the validity region of the Heisenberg-Euler approximation. The PT approach is always accurate once the LCFA-HE technique is justified. In addition, the leading-order predictions coincide with the exact results for any values of η if ξ is sufficiently small.
VIII. CONCLUSION
In the present investigation, we examined the effects of vacuum birefringence and dichroism in strong plane-wave backgrounds by means of several theoretical methods allowing one to evaluate the leading one-loop contribution to the polarization operator. First, we employed closed-form expressions exactly incorporating the interaction between the electronpositron field and classical external background depending on the spatiotemporal coordinates. Second, we performed calculations within the leading order with respect to the field amplitude, i.e., by means of perturbation theory. This was done by expanding the nonperturbative result and by means of our numerical method based on a direct evaluation of the leadingorder Feynman diagrams. It was found that these two approaches yield identical quantitative predictions both for real and imaginary parts of the polarization tensor. Varying the field parameters and the probe-photon energy, we examined the validity of the perturbative methods. Third, we utilized the locally-constant field approximation (LCFA) in two different forms: Heisenberg-Euler approximation and the technique involving exact expressions for the polarization operator in constant crossed fields. By comparing the approximate predictions with the exact results, we evidently identified the field and probe-photon parameters for which each of the approximate techniques is justified.
An important prospect for future studies is the analogous analysis beyond the plane-wave scenario, where the exact analytical expressions are unknown. In this case, for instance, the applicability of the LCFA may be additionally limited if the external electric and magnetic fields are not crossed in contrast to the field configuration examined in the present investigation.
, the integrals in Eq. (46) lead to the conservation laws which may change the photon momentum by ±2ω or keep it the same. We are interested in the latter contribution governing the elastic process. The explicit form of Eq. (46) then reads Π µν LCFA-HE, elastic (q,
Figure 3 .
3Branch cuts (red) of the electron propagators in the case q0 < m before the z integration in Eq. (50) and a possible integration contour (blue).
Figure 4 .
4Real and imaginary parts of the difference δM ≡ M 11 − M 22 calculated within the leading-order of perturbation theory [Eqs.
Figure 5 .Figure 6 .
56Real and imaginary parts of the difference δM ≡ M 11 − M 22 evaluated within the leading-order of perturbation theory (LO), by means of the LCFA [Eqs. (37) and (38)] and exact nonperturbative expression (17) for ξ = 0.1 (top), ξ = 1.0 (middle), ξ = 10.0 (bottom). For ξ = 0.1 the "LO" and exact curves coincide. Domains of the validity of the various approximate methods: LCFA based on using the polarization tensor in constant crossed fields (LCFA), Heisenberg-Euler approximation (LCFA-HE), and PT calculations within the leading order in terms of the external-field amplitude (LO).
depend only on η = ωq 0 /m 2 , while the nonperturbative values of M µν [see Eq.(26)] also involve ξ. Below we will compare the leading-order terms with the nonperturbative results. Let us now present the low-and high-energy asymptotic expressions for M 11LO and M 22
LO . In the low-energy case ε ≡ 2η = 2ωq 0 /m 2
1,
ACKNOWLEDGMENTSThe study was funded by RFBR and ROSATOM, project No. 20-21-00098. I.A.A. also acknowledges the support from the Foundation for the advancement of theoretical physics and mathematics "BASIS".
. H Euler, B Kockel, Naturwiss. 23246H. Euler and B. Kockel, Naturwiss. 23, 246 (1935).
. W Heisenberg, H Euler, Z. Phys. 98714W. Heisenberg and H. Euler, Z. Phys. 98, 714 (1936).
. V Weisskopf, Kong, Vid. Selsk., Mat.-fys. Medd. XIV. 6V. Weisskopf, Kong. Dans. Vid. Selsk., Mat.-fys. Medd. XIV, 6 (1936).
. R Karplus, M Neuman, Phys. Rev. 80776R. Karplus and M. Neuman, Phys. Rev. 80, 380 (1950); 83, 776 (1951).
. F Sauter, Z. Phys. 69742F. Sauter, Z. Phys. 69, 742 (1931).
. J Schwinger, Phys. Rev. 82664J. Schwinger, Phys. Rev. 82, 664 (1951).
. A Di Piazza, C Müller, K Z Hatsagortsyan, C H Keitel, Rev. Mod. Phys. 841177A. Di Piazza, C. Müller, K. Z. Hatsagortsyan, and C. H. Keitel, Rev. Mod. Phys. 84, 1177 (2012).
. B S Xie, Z L Li, S Tang, Matter Radiat. Extremes. 2225B. S. Xie, Z. L. Li, and S. Tang, Matter Radiat. Extremes 2, 225 (2017).
. A Fedotov, A Ilderton, F Karbstein, B King, D Seipt, H Taya, G Torgrimsson, Phys. Rep. 10101A. Fedotov, A. Ilderton, F. Karbstein, B. King, D. Seipt, H. Taya, and G. Torgrimsson, Phys. Rep. 1010, 1 (2023).
. J S Toll, 1952Princeton Univ.Ph.D. thesis. unpublishedJ. S. Toll, Ph.D. thesis, Princeton Univ., 1952 (unpublished).
. R Baier, P Breitenlohner, Acta Phys. Austriaca. 25212R. Baier and P. Breitenlohner, Acta Phys. Austriaca 25, 212 (1967).
. R Baier, P Breitenlohner, Nuovo Cimento B. 47117R. Baier and P. Breitenlohner, Nuovo Cimento B 47, 117 (1967).
. V N Baier, A I Milstein, V M Strakhovenko, Zh. Eksp. Teor. Fiz. 69961Sov. Phys. JETPV. N. Baier, A. I. Milstein, and V. M. Strakhovenko, Zh. Eksp. Teor. Fiz. 69, 1893 (1975) [Sov. Phys. JETP 42, 961 (1976)].
. W Becker, H Mitter, J. Phys. A. 81638W. Becker and H. Mitter, J. Phys. A 8, 1638 (1975).
. I A Aleksandrov, A Di Piazza, G Plunien, V M Shabaev, Phys. Rev. D. 105116005I. A. Aleksandrov, A. Di Piazza, G. Plunien, and V. M. Shabaev, Phys. Rev. D 105, 116005 (2022).
. A Di Piazza, K Z Hatsagortsyan, C H Keitel, Phys. Rev. Lett. 9783603A. Di Piazza, K. Z. Hatsagortsyan, and C. H. Keitel, Phys. Rev. Lett. 97, 083603 (2006).
. T Heinzl, B Liesfeld, K U Amthor, H Schwoerer, R Sauerbrey, A Wipf, Opt. Commun. 267318T. Heinzl, B. Liesfeld, K. U. Amthor, H. Schwoerer, R. Sauer- brey, and A. Wipf, Opt. Commun. 267, 318 (2006).
. V Dinu, T Heinzl, A Ilderton, M Marklund, G Torgrimsson, Phys. Rev. D. 89125003V. Dinu, T. Heinzl, A. Ilderton, M. Marklund, and G. Torgrims- son, Phys. Rev. D. 89, 125003 (2014).
. F Karbstein, H Gies, M Reuter, M Zepf, Phys. Rev. D. 9271301F. Karbstein, H. Gies, M. Reuter, and M. Zepf, Phys. Rev. D. 92, 071301 (2015).
. F Karbstein, E A Mosman, Phys. Rev. D. 101113002F. Karbstein and E. A. Mosman, Phys. Rev. D 101, 113002 (2020).
. F Karbstein, D Ullmann, E A Mosman, M Zepf, Phys. Rev. Lett. 12961802F. Karbstein, D. Ullmann, E. A. Mosman, and M. Zepf, Phys. Rev. Lett. 129, 061802 (2022).
. F Karbstein, R Shaisultanov, Phys. Rev. D. 9185027F. Karbstein and R. Shaisultanov, Phys. Rev. D 91, 085027 (2015).
. S Meuren, C H Keitel, A Di Piazza, Phys. Rev. D. 8813007S. Meuren, C. H. Keitel, and A. Di Piazza, Phys. Rev. D 88, 013007 (2013).
. S Bragin, S Meuren, C H Keitel, A Di Piazza, Phys. Rev. Lett. 119250403S. Bragin, S. Meuren, C. H. Keitel, and A. Di Piazza, Phys. Rev. Lett. 119, 250403 (2017).
. B King, N Elkina, Phys. Rev. A. 9462102B. King and N. Elkina, Phys. Rev. A 94, 062102 (2016).
. Y Nakamiya, K Homma, Phys. Rev. D. 9653002Y. Nakamiya and K. Homma, Phys. Rev. D 96, 053002 (2017).
. I A Aleksandrov, G Plunien, V M Shabaev, Phys. Rev. D. 9916020I. A. Aleksandrov, G. Plunien, and V. M. Shabaev, Phys. Rev. D 99, 016020 (2019).
. D G Sevostyanov, I A Aleksandrov, G Plunien, V M Shabaev, Phys. Rev. D. 10476014D. G. Sevostyanov, I. A. Aleksandrov, G. Plunien, and V. M. Shabaev, Phys. Rev. D 104, 076014 (2021).
. I A Aleksandrov, D G Sevostyanov, V M Shabaev, Symmetry. 142444I. A. Aleksandrov, D. G. Sevostyanov, and V. M. Shabaev, Sym- metry 14, 2444 (2022).
. I A Aleksandrov, D G Sevostyanov, V M Shabaev, arXiv:2210.15626I. A. Aleksandrov, D. G. Sevostyanov, and V. M. Shabaev, arXiv:2210.15626.
V B Berestetskii, E M Lifshitz, L P Pitaevskii, Quantum Electrodynamics. Butterworth-Heinemann, OxfordElsevierV. B. Berestetskii, E. M. Lifshitz, and L. P. Pitaevskii, Quan- tum Electrodynamics (Elsevier Butterworth-Heinemann, Ox- ford, 1982).
. I A Batalin, A E Shabad, FIAN Preprint. 166I. A. Batalin and A. E. Shabad. FIAN Preprint 166, (1968).
. N B Narozhnyi, Sov. Phys. JETP. 28371N. B. Narozhnyi, Sov. Phys. JETP 28, 371 (1969).
. V I Ritus, Ann. Phys. 69555V. I. Ritus, Ann. Phys. 69, 555 (1972).
. V I Ritus, J. Sov. Laser Res. 6497V. I. Ritus, J. Sov. Laser Res. 6, 497 (1985).
. I A Aleksandrov, V M Shabaev, Optics and Spectroscopy. 129890I. A. Aleksandrov and V. M. Shabaev, Optics and Spectroscopy 129, 890 (2021).
. I A Aleksandrov, G Plunien, V M Shabaev, Phys. Rev. D. 100116003I. A. Aleksandrov, G. Plunien, and V. M. Shabaev, Phys. Rev. D 100, 116003 (2019).
| []
|
[
"Distillation of optical Fock-states using atom-cavity systems",
"Distillation of optical Fock-states using atom-cavity systems"
]
| [
"G P Teja \nNanyang Technological University 21\n637371Nanyang LinkSingapore\n\nDepartment of Physical Sciences\nIndian Institute of Science Education and Research\n140306MohaliPunjabIndia\n",
"Chanchal "
]
| [
"Nanyang Technological University 21\n637371Nanyang LinkSingapore",
"Department of Physical Sciences\nIndian Institute of Science Education and Research\n140306MohaliPunjabIndia"
]
| []
| Fock states are quantized states of electromagnetic waves with diverse applications in quantum optics and quantum communication. However, generation of arbitrary optical Fock states still remains elusive. Majority of Fock state generation proposals rely on precisely controlling the atomcavity interactions and are experimentally challenging. We propose a scheme to distill an optical Fock state from a coherent state. A conditional phase flip (CPF) with arbitrary phase is implemented between the atom and light. The CPF along with the unitary rotations and measurements on the atoms enables us to distil required Fock-state. As an example, we show the distillation of Fock-sate |100 . | null | [
"https://export.arxiv.org/pdf/2303.16667v1.pdf"
]
| 257,804,580 | 2303.16667 | cd2a191a45d38e2884903fab21bd1fcdc5184fde |
Distillation of optical Fock-states using atom-cavity systems
G P Teja
Nanyang Technological University 21
637371Nanyang LinkSingapore
Department of Physical Sciences
Indian Institute of Science Education and Research
140306MohaliPunjabIndia
Chanchal
Distillation of optical Fock-states using atom-cavity systems
Fock states are quantized states of electromagnetic waves with diverse applications in quantum optics and quantum communication. However, generation of arbitrary optical Fock states still remains elusive. Majority of Fock state generation proposals rely on precisely controlling the atomcavity interactions and are experimentally challenging. We propose a scheme to distill an optical Fock state from a coherent state. A conditional phase flip (CPF) with arbitrary phase is implemented between the atom and light. The CPF along with the unitary rotations and measurements on the atoms enables us to distil required Fock-state. As an example, we show the distillation of Fock-sate |100 .
I. INTRODUCTION
The second quantization of electromagnetic fields reveals the equivalence of light modes with harmonic oscillators [1]. The eigenstates of these oscillators are called the Fock states and they describe the number of photons present in a specific mode of a light field. Due to their highly non-classical nature, they are extremely useful in various areas such as quantum metrology, quantum key distribution (QKD) protocols, and quantum computation [2,3].
However, their quantum nature also makes it difficult to produce them. In existing experiments, optical Fock states are produced using parametric down-conversion and photon number detectors. These techniques have been shown to produce Fock states up to 5 photons [4][5][6][7][8], however, it is still a challenging task to produce higher Fock states in the optical regime.
Atom-cavity systems have been widely studied for deterministic generation of optical Fock states, Fock states are created inside the cavity by controlling the interaction between atoms and cavities [9][10][11]. However, this requires precise control which is limited by the coherence time of the system. As the number of photons increases, control becomes harder and the fidelity of generated Fock states decreases [12], making it difficult to generate high photon number states.
In addition to controlling the dynamics of atom-cavity systems, schemes based on feedback mechanisms have also been proposed for the deterministic generation of Fock states [13,14] . These schemes involve using a controller to probe the cavity mode with weak measurements, which provide information about the cavity field. Based on the information obtained from these measurements, an actuator applies feedback to the cavity. By repeatedly measuring and applying feedback, the cavity state can be prepared in any desired state. This method has been experimentally verified using Rydberg atoms and superconducting cavities [15]. * [email protected] † [email protected] Further, another method for Fock-state sorting has been demonstrated using chiral coupling of a two-level system to a waveguide. In this method, the light acquires a Fock-number-dependent delay, and the incident pulse is sorted temporally by its Fock numbers [16,17].
In this article, we propose a protocol to distil Fock states from a coherent state. It is based on repeated reflection of light from the atom-cavity system. Unlike the existing protocols, this doesn't require precise control or feedback mechanisms. Also the Fock states in this scheme are generated outside the cavity avoiding further extraction which can effect the statistics of the light.
In Sec. II we discuss the conditional phase flip (CPF) between the atom and the light mode and study effects of cavity-light detuning on the CPF. Then in Sec. III the atom-photon phase gate along with unitary rotations and measurements are used to distil a Fock-state from a coherent state. Finally we conclude by discussing possible implementations.
II. ATOM-PHOTON GATE
Atom-photon gates aim to create a CPF between an atom and a flying photon. Typically this is confined only to a phase of π as this is sufficient to implement a general unitary operation on atom-photon and photonphoton systems. To obtain a conditional phase shift, we consider a three level atom with ground states |g and |s , and an excited state |e as shown in with two dipole allowed transitions (|e ↔ |g and |e ↔ |s ), see Fig. 1. Here, only one of them (|e ↔ |g ) is coupled to the cavity mode. The Hamiltonian of such system can be written as
H =hω c a † a +hω eg |e e| +hg(σ egâ + σ geâ † ),(1)
ω eg (ω c ) is the transition frequency for the atom (cavity), g is the atom cavity coupling strength. σ eg = |e g| is the atomic operator andâ is the cavity mode operator. Although we are considering a single two-level atom interacting with the cavity mode, following results can be extend to dark states in N two-level systems and N atoms under Rydberg blockade conditions [18,19].
On solving the Eq. (1) for the dynamics yields the following reflection coefficients r (1,0) =â in /â out (see Appendix. A for details)
r 1 = 1 − 2 (1 + 4C) − i∆ , r 0 = 1 − 2 1 − i∆ ,(2)
whereâ in (â out ) is the operator for the input (output) mode. r 1 is the reflection coefficient for the coupled transition. r 0 is the reflection coefficient for the decoupled transition (empty cavity) [20]. C = g 2 /κγ is the cooperativity. ∆ = 2∆ c /κ, ∆ c = ω c − ω L with ω L being the mean frequency of the input pulse. Now by assuming C 1 gives r 1 1 and r 0 can be any phase e iφ . For example, numerically solving r 0 for φ = {π/2, π/4, π/8, π/16} gives ∆ = On using Eq. (2) under strong atom-cavity coupling, the operation for the atom-cavity reflection can be written as (see Appendix. A)
D(φ) = |g g| ⊗ I + |s s| ⊗ exp{iφn}.(3)
Note that cavity-light resonance ∆ = 0 gives r 0 = −1 (D(π)). This resonance condition has been exploited to generate atom-photon gates between atom and a single photon [21]. Further Eq. (3) transforms a coherent state intoD
(π)[ψ a ⊗ |α ] = 1 √ 2 (|g |α + |s |−α ),(4)
where ψ a = 1 √ 2 (|g + |s ). Performing a π/2-rotation and a measurement on the atom, projects the light to a catstate. This has been experimentally verified by trapping 87 Rb to generate cat states with strength α = 1.4 [22].
The reflection coefficients are derived under the assumption that the atom-cavity system quickly attains a steady state. Though this is true on resonance [23], detuning can effect the dynamics. In order to verify that steady states are attained even with detuning, we use input-output theory with quantum pulses to solve for the
Fidelity φ = π/5 φ = π/3 φ = π/2 φ = π FIG.
2.D(φ) operation attained from the steady state dynamics of atom-cavity system. We set (g, κ, γ) = 2π × (16, 5, 0.05) GHz for the numerical simulations.
complete dynamics [23,24]. Upon reflection of light from the atom-cavity system, the following transformation is expected:
ψ a |ψ in |0 out reflec − −− → tion |0 in |g |ψ out + |s e iφn |ψ out √ 2(5)
and |ψ in,out = n C n |n represents the quantum state of the input and output light modes. In Fig. 2 we plot the fidelity for various phases. Here an input light is reflected from the atom-cavity system and the quantum state of the atom and the output light is numerically obtained. Then the fidelity is evaluated between the obtained and the expected output state Eq. (5) (see Appendix. B for more details). From the plot it is clear that reflection from atom-cavity yields the operationD(φ) on the output light mode and a steady is obtained even in the presence of cavity-light detuning.
III. DISTILLATION OF FOCK STATES
We now use the phase shift operationD(φ), local unitary and measurement on atom to distil a Fock state. The protocol consists of the following steps:
1. To prepare the Fock state |A , we start with a coherent state of amplitude α = √ A.
Atom is prepared in the state
ψ a = 1 √ 2 (|g + |s )
3. The light is reflected from the atom-cavity setup to performD(φ).
A unitary rotation
U a is performed on the atom [25].
U a [θ] = 1 √ 2 e iθ 1 1 −e −iθ , |g → 1 √ 2 (e iθ |g + |s ) |s → 1 √ 2 (|g − e −iθ |s ) 5.
Finally a measurement is performed on the atom [22].
Choosing α = √ A is not a stringent condition, this choice is based on the fact that coherent state has Poisson distribution with peak value at α 2 . The state of the "atom+output light" after step-3 can be written as
Ψ = 1 √ 2 [ψ l |g + exp{iφn}ψ l |s ],(6)
where ψ l = n C n |n , is the state of light. Applying an appropriate U a [θ], the atom-cavity state can always be cast in the form
Ψ = k C k |k |g + l C l |l |s ,(7)
where k = l and k |C k | 2 + l |C l | 2 = 1 and measurement on the atom gives the photonic state
ψ l = k C k |k or ψ l = l C l |l ,(8)
where k |C k | 2 and l |C l | 2 are the probabilities for the atomic measurements |g and |s .
By repeating the steps 2 to 5, it is possible to distil a general Fock state. We establish this by explicitly showing the distillation of the Fock state |100 . The atom is initialized in the state ψ a and the input light in the coherent state |α = 10 . The initial state of atom and light can be written as (see Appendix. C)
Ψ = 130 k=70 C k |k ⊗ 1 √ 2 (|g + |s ),(9)
where the coherent state is approximated as
130 k=70 C k |k .
The explicit form C k is irrelevant to the distillation protocol and is only used to calculate measurement probabilities. PerformingD(π) on Ψ yields
Ψ π = 130 k=70 C k e iπn |k |s + 130 n=70 C k |k |g ,(10)
where Ψ l represents the state of output light after applyingD(l) operation. Note thatD(π) is obtained on resonance i.e. ∆ = 0. Performing the atomic unitary U a [0] and measuring the atom in the state |g removes the odd photons to give the even cat state [22]
Ψ π = 64 k=36 C 2k |2k ,(11)
here the normalization is absorbed into the C 2k i.e. 64 k=36 |C 2k | 2 = 1 . Preparing the atom in ψ a and reflecting the state Ψ π withD(π/2) yields
Ψ π 2 = 64 k=36 C 2k (−1) k |s + |g |2k ,(12)
performing U a [0] and a measurement in |g projects the photonic state to Ψ π/2 = 32 k=18 C 4k |4k . Now reflecting the state Ψ π/2 withD(π/4) yields
Ψ π 4 = 32 k=18 C 4k (−1) k |s + |g |4k ,(13)
applying U a [0] and a measurement in |s , projects the light state to Ψ π/4 = 15 k=9 C 8k+4 |8k + 4 . Reflecting Ψ π/4 withD(π/8) yields
Ψ π 8 = 15 k=9 C 8k+4 (−1) k e iπ/2 |s + |g |8k + 4 .(14)
To cancel the extra factor of π/2, we perform U a [π/2]. Now, measuring the atom in |g projects the light state to
Ψ π 8 = 7 k=5 C 16k+4 |16k + 4 , =C 84 |84 + C 100 |100 + C 116 |116 .(15)
Again reflecting Ψ π/8 withD(π/16) gives The distillation protocol discussed above is summarized in Table. III and it is clear that an arbitrary Fock state can be distilled by multiple iterations. We started with a coherent state with mean ( n ) and variance ( n 2 − n 2 ) of |α| 2 . For high A, the probability distribution can be approximated with a Gaussian and ±3 √ A covers the 0.997 of the total area of a Gaussian distribution (see Appendix. C). Also from the Table. III, we notice that in each iteration, the total Fock numbers reduce by half. Thus the total number of iterations (Q) required to distil a Fock state form the coherent state can be written as
Ψ π 16 = 7 k=5 C 16k+4 (−1) k e iπ 4 |s + |g |16k + 4 ,(16)6 √ A = 2 Q+1 ⇒ Q = log 2 (6 ∆n ) − 1,(17)
where . is the ceiling function and ∆n = [ n 2 − n 2 ] 1/2 is the standard deviation of the coherent state. Although Eq. (17) is obtained assuming high coherent state amplitude, it works even for small amplitudes.
Further squeezed coherent states with optimized squeezing [26] can be used to reduce the numbers of iterations to Q − 1 (see Appendix. C).
It is also interesting to note that a general CPF is not required and only phase shifts of the form φ = π/2 n are sufficient for the distillation protocol. Since every iteration requires an atomic measurement, the probability of success is ∼ 1/2 M . Even though the probability of success decreases by increasing reflections, every iteration produces a highly non-classical state irrespective of the measurement outcome. Also the probability of success to generate a Fock state in the range [A − 3 √ A, A + 3 √ A] is unity. Another special case of the distillation protocol is the deletion of prime numbered Fock states. Prime numbers by definition cannot be factored by any other number, this can be exploited to delete a Fock number from a given state. For example, reflecting the coherent state Ψ (Eq. (9)) with phaseD(π/101), performing U a [0] and measuring the atom in |g gives the photonic state Ψ = 100 n=70 C n |n + 130 n=102 C n |n .
IV. IMPLEMENTATIONS AND DISCUSSION
Atom-cavity systems are used to implement a variety of protocols [21]. To implement this protocol, the cavity resonant frequency can be tuned to adjust the detuning ∆. This can be achieved by attaching the cavity mirror to a peizo electric actuator [27]. Besides tuning the cavity frequency, multiple atom-cavity systems with different detunings can also be considered. Waveguide QED is an effective approach to study such systems, where an array of atoms are trapped above waveguides to realize many atom-cavity systems on a single chip [28]. Further, the amount of non-classicality of the output light increases after each iteration, which in-turn can make the quantum states strongly susceptible to transmission and photon losses [29]. This discussion is beyond the scope of this article and will be studied separately.
Also, the coherent state pulses used in the distillation protocol typically have a narrow temporal widths and this can be used to realized highly non-classical bath.
When the temporal width of the pulse is much narrower than the cavity linewidth, the pulse can effectively act as a bath for an atom-cavity system. This means that the first Markov approximation is satisfied, which assumes that the system-reservoir coupling strength is frequency independent [30]. Appendix A: Reflection coefficients for the atom-cavity system
Here we derive the reflection coefficients in the main text. The Hamiltonian of the atom cavity system is written as
H =hω c a † a +hω a |e e| +hg(σ eg a + σ ge a † ),(A1)
The dynamics of the system are governed by the Langevin equation and the input-output relation [20] dx
dt = − i[x, H] − x,â † (− κ 2â + √ κâ out )− (− κ 2â † + √ κâ out )[x,â] − − γ 2 [x, σ eg ]σ ge + γ 2 σ eg [x, σ ge ] (A2)
wherex is any operator of the atom-cavity system. γ and κ are the decay rates of atom and cavity. Note that the noise term for atomic decay are omitted because ω eg is in the optical regime, hence the noise can be assumed to be vacuum. The dynamical equations are obtained as
dâ dt = − iω câ − igσ ge − √ κâ out + κ 2â ,(A3)dσ ge dt = − iω a σ ge − ig(σ gg − σ ee )â + γ 2 σ ge ,(A4)
Transforming â, σ ge ,â out → â, σ ge ,â out e −iω L t gives
dâ dt = − i∆ câ − igσ ge − √ κâ out + κ 2â ,(A5)dσ ge dt = − i∆ a σ ge − ig(σ gg − σ ee )â + γ 2 σ ge ,(A6)
where ∆ c = ω c −ω L and ∆ a = ω a −ω L . The input light is on resonance with the atomic transition (∆ a = 0). Also atom is assumed to be weakly excited hence σ gg = 1. By setting ∂x ∂t = 0, the steady state solutions are obtained asâ
= − √ κ i∆ c − κ/2 − g 2 γ/2â out (A7)
On using the input-output relationâ out −â in = √ κâ yields
r 1 = 1 − 2 1 + 4C − i∆ , r 0 = 1 − 2 1 − i∆ ,(A8)
where ∆ = 2∆ c /κ and the cooperativity C = g 2 /κγ. The reflection coefficient for the non-interacting transition (r 0 ) is obtained using C = 0. On using Eq. (A8) an input state
Ψ in = (c g |g + c s |s ) ⊗ n c n (â † in ) n √ n! |0 (A9)
upon reflection is transformed as follows:
Ψ out = c g |g rn 1 + c s |s rn 0 ⊗ n c n |n ,(A10)
here |c g | 2 + |c s | 2 = 1 and |c n | 2 = 1. Form the Eq. (A10) the transformation for general input state can be written asD
(φ) = |g g| ⊗ (r 1 )n + |s s| ⊗ (r 0 )n,(A11)
with r 1 1 and r 0 = e iφ gives the Eq. (3). TABLE II. Numerical solutions for reflection coefficients, here the cooperativity C = 250. ∆ represents the detuning required for r0 = e iφ . We notice that r1 1 .
Appendix B: Verification of general phase
Here we discuss the verification of the general phase as described in Eq. (3). For this, we use the general inputoutput theory with quantum pulses [23,24]. It is based on density matrix formalism where the input and output pulses are replaced by virtual cavities coupled to the quantum system. This formalism explicitly incorporates the information of the pulse shapes and quantum states of the input and output pulses. The Hamiltonian governing the dynamics of the virtual cavities and the quantum system are given bŷ
H eff =Ĥ s + i 2 √ κ g u (t)â † uâ + g * v (t)â †â v + g u (t)g * v (t)â † uâv − h.c. ,(B1)
where H s denotes the system Hamiltonian given in Eq. (1) andâ is the system cavity operator.â u andâ v represent the input and output virtual cavity field operators with the corresponding time-dependent coupling strengths g u (t) and g v (t), respectively.
The time-dependent coupling strengths of these virtual cavities are chosen such that the input virtual cavity releases an input field with the required pulse shape u(t), while the output virtual cavity acquires the output field mode with pulse shape v(t). The relation between time profiles and coupling strengths is given by
g u (t) = u * (t) 1 − t 0 |u (t)| 2 dt , g v (t) = − v * (t) t 0 |v (t)| 2 dt . (B2)
The dynamics of the system are obtained by the solving the following Lindblad master equation
dρ usv dt = 1 ih Ĥ , ρ usv + D[L eff ]ρ us ,(B3)
where ρ usv is the density matrix of the full system including the input-virtual cavity, atom-cavity system and output-virtual cavity and D[L eff ] represents the time-dependent Lindblad dissipator witĥ
L eff (t) = √ κĉ + g * u (t)â u + g * v (t)â v .(B4)
The output field mode can be obtained by considering only the input virtual cavity attached with the system with the Hamiltonian given by [24]
H 0 =Ĥ s + i 2 √ κg u (t)â † uâ − √ κg * u (t)â uâ † ,(B5)
along with the damping term given by the Lindblad operatorL
0 (t) = √ κĉ + g * u (t)â u (B6)
The prominent output field modes along with the amount of excitation carried by them can be obtained by calculating the eigenmode decompostion of the following two-time correlation function [23] g (1)
(t, t ) = [L 0 (t)] †L 0 (t ) ≡ i n i v i (t)(B7)
Using this, we can solve the master equation for the full system given in Eq. (B3) and calculate the fidelity with time for state given in Eq. (3). In the main draft, we demonstrated distillation utilizing coherent states.
It was evident that the number of iterations required depends on the Fock distribution. However, using squeezed coherent states (SCS) with squeezing can narrow the Fock distribution, potentially leading to a reduction in the required number of iterations. Here, we provide two examples using SCS and compare them with coherent states. These techniques can be applied for a general SCS. First we define the mean and variance of a quantum light â †â ≡ n , (∆n) 2 ≡ n 2 − n 2 , (C1)
Q = (∆n) 2 − n n ,(C2)
When Q < 0 (Q > 0), the light mode is said to obey sub (super) Poissonian statistics. The coherent state is written as
|α = e − |α| 2 2 ∞ k=0 α k √ k! |k , P k = e − n n k k! ,(C3)
where P (k) is the probability for the k th -Fock state and n = (∆n) 2 = |α| 2 . For higher amplitudes P (k) can be approximated with a Gaussian, further three standards are expected to include all the statistics ∞ n=0 P k ≈ n +3 ∆n n −3 ∆n P k = 0.997.
(C4)
A general SCS is written as [26] |α, ξ =D(α )Ŝ(ξ) |0
where ξ = re iθ , α = αe iφ and φ = θ/2 .D(α ) is the displacement operator andŜ(r) is the squeezing operator. The probability distribution, variance and the mean are obtained as [26] | n|α , ξ | 2 ≡ P sq k = ( 1 2 tanh r) k k! cosh r e −α 2 (1+tanh r) (C6) H k αe r √ sinh 2r 2 , (∆n) 2 = α 2 e −2r + 2 sinh 2 r cosh 2 r, (C7) n = α 2 + sinh 2 r,
where H k represents the k th order Hermite polynomial. with large coherent part (α sinh r) and sub-Poissonian statistics (∆n) 2 < n , a SCS can give raise to the nonclassical effect of number squeezing (see Fig. 3 Using Eq. (17) and Eq. (C7), we can determine that four iterations are required to distill Fock states {|51 , |100 } form |10, 0.75 , √ 51, 0.65 , while using coherent states |10 , √ 51 requires five iterations. We performed the distillation protocol to verify this, see Table. III. By optimizing over the squeezing parameter, we can reduce the number of iterations by one. However, increasing the squeezing beyond a certain level may cause oscillations in the photon number distribution and require more iterations [26], as shown in Fig. 3(c).
FIG. 1 .
1Atom cavity system for the CPF. A Λ-type atom with only |e ↔ |g transition interacts with the cavity mode. g and κ are the coupling strength and the cavity decay rate of the cavity mode. ∆ is detuning of the cavity mode with input light and the atom. The second mirror is attached to a Piezo actuator to tune the cavity resonant frequency.
like to thank Dr Sandeep K. Goyal and Dr. G. Krishna Teja for their helpful discussions. Chanchal acknowledges the Council of Scientific and Industrial Research (CSIR), Government of India, for financial support through a research fellowship [Award No. 09/947(0106)/2019-EMR-I].
.15317 e iπ/16 0.998 + 2.02 × 10 −5 i -20.35547 e iπ/32 0.998 + 4.06 × 10 −5 i
and finally performing U a [π/4] and measurement in |s distils the Fock state |100 . sequence of distilled Fock numbers after each iteration. Second column shows the atom-cavity phase,D(φ). Other columns show the unitary operation Ua[θ] and the measurement (M) on the atom. P is the probability of the outcome after each iteration. Note that atomic is prepared in ψa after each iteration.Distilled Fock Nos.
φ
θ
M P
(k) 130
k=70
70, 71. . . 100. . . 129, 130 π
0
|g
(2k) 64
k=36
70, 72. . . 100. . . 126, 128 π/2
0
|g 0.5
(4k) 32
k=18
72, 76. . . 100. . . 124, 128 π/4
0
|s 0.5
(8k + 4) 15
k=9 76, 84. . . 100. . . 116, 124 π/8 π/2 |g 0.5
(16k + 4) 7
k=5
84, 100, 116
π/16 π/4 |g 0.5
(16k + 4) k=6
100
-
-
-0.64
TABLE I. Distillation of Fock states.
First column
shows the
FIG. 3. (a) and (b): Number squeezing for the SCS. |α, r and |α represents the SCS and the coherent state. An optimal squeezing can result in quantum states which are similar to a coherent state but with reduced variance. Due to the reduced variance SCS can be useful in minimizing iterations. (c) The effect of photon number oscillations for higher squeezing. TABLE III. Distillation of the Fock states |51 and |100 . First column shows the distilled Fock numbers after each iteration. Second column shows the atom-cavity phase, D(φ). Other columns show the unitary operation Ua[θ] and the measurement (M) on the atom.Appendix C: Distillation using squeezed coherent states (SCS)
60 70 80 90 100 110 120 130 140
n
0.00
0.02
0.04
0.06
0.08
0.10
P
k
|10, 0.75
|10
(a)
21
31
41
51
61
71
81
n
0.00
0.02
0.04
0.06
0.08
0.10
0.12
P
k
|
√
51, 0.65
|
√
51
(b)
20
40
60
80
100 120 140
n
0.00
0.02
0.04
0.06
0.08
0.10
P
k
|
√
51, 2
(c)
Distilled Fock No.
φ
θ
M
36, 37. . . 51 . . . 65, 66
π
0
|s
37, 39. . . 51 . . . 63, 65
π/2
π/2
|s
39, 43. . . 51 . . . 59, 63
π/4
3π/4
|g
43, 51, 59
π/8
3π/8
|g
51
-
-
Distilled Fock Nos.
φ
θ
M
85, 86. . . 100. . . 114, 115
π
0
|g
86, 88. . . 100. . . 112, 114
π/2
0
|g
88, 92. . . 100. . . 108, 118
π/4
0
|s
92, 100, 108
π/8
π/2
|g
100
-
-
-
R Loudon, The Quantum Theory of Light, Oxford science publications. Oxford University PressR. Loudon, The Quantum Theory of Light, Oxford science publications (Oxford University Press, 2003).
. P Kok, W J Munro, K Nemoto, T C Ralph, J P Dowling, G J Milburn, 10.1103/RevModPhys.79.135Rev. Mod. Phys. 79135P. Kok, W. J. Munro, K. Nemoto, T. C. Ralph, J. P. Dowling, and G. J. Milburn, Rev. Mod. Phys. 79, 135 (2007).
. T Ralph, Reports on Progress in Physics. 69853T. Ralph, Reports on Progress in Physics 69, 853 (2006).
. E Waks, E Diamanti, Y Yamamoto, New Journal of Physics. 84E. Waks, E. Diamanti, and Y. Yamamoto, New Journal of Physics 8, 4 (2006).
. J Tiedau, T J Bartley, G Harder, A E Lita, S W Nam, T Gerrits, C Silberhorn, 10.1103/PhysRevA.100.041802Phys. Rev. A. 10041802J. Tiedau, T. J. Bartley, G. Harder, A. E. Lita, S. W. Nam, T. Gerrits, and C. Silberhorn, Phys. Rev. A 100, 041802 (2019).
. M Cooper, L J Wright, C Söller, B J Smith, Optics express. 215309M. Cooper, L. J. Wright, C. Söller, and B. J. Smith, Optics express 21, 5309 (2013).
. A M Brańczyk, T Ralph, W Helwig, C Silberhorn, New Journal of Physics. 1263001A. M. Brańczyk, T. Ralph, W. Helwig, and C. Silberhorn, New Journal of Physics 12, 063001 (2010).
. A Lingenfelter, D Roberts, A A Clerk, 10.1126/sciadv.abj1916Science Advances. 7A. Lingenfelter, D. Roberts, and A. A. Clerk, Science Advances 7, 10.1126/sciadv.abj1916 (2021).
. K R Brown, K M Dani, D M Stamper-Kurn, K B Whaley, 10.1103/PhysRevA.67.043818Phys. Rev. A. 6743818K. R. Brown, K. M. Dani, D. M. Stamper-Kurn, and K. B. Whaley, Phys. Rev. A 67, 043818 (2003).
. K Xia, G K Brennen, D Ellinas, J Twamley, Optics Express. 2027198K. Xia, G. K. Brennen, D. Ellinas, and J. Twamley, Optics Express 20, 27198 (2012).
. C K Law, J H Eberly, 10.1103/PhysRevLett.76.1055Phys. Rev. Lett. 761055C. K. Law and J. H. Eberly, Phys. Rev. Lett. 76, 1055 (1996).
. M Uria, P Solano, C Hermann-Avigliano, 10.1103/PhysRevLett.125.093603Phys. Rev. Lett. 12593603M. Uria, P. Solano, and C. Hermann-Avigliano, Phys. Rev. Lett. 125, 093603 (2020).
. I Dotsenko, M Mirrahimi, M Brune, S Haroche, J.-M Raimond, P Rouchon, 10.1103/PhysRevA.80.013805Phys. Rev. A. 8013805I. Dotsenko, M. Mirrahimi, M. Brune, S. Haroche, J.- M. Raimond, and P. Rouchon, Phys. Rev. A 80, 013805 (2009).
. J Geremia, 10.1103/PhysRevLett.97.073601Phys. Rev. Lett. 9773601J. Geremia, Phys. Rev. Lett. 97, 073601 (2006).
. C Sayrin, I Dotsenko, X Zhou, B Peaudecerf, T Rybarczyk, S Gleyzes, P Rouchon, M Mirrahimi, H Amini, M Brune, Nature. 47773C. Sayrin, I. Dotsenko, X. Zhou, B. Peaudecerf, T. Rybarczyk, S. Gleyzes, P. Rouchon, M. Mirrahimi, H. Amini, M. Brune, et al., Nature 477, 73 (2011).
. S Mahmoodian, G Calajó, D E Chang, K Hammerer, A S Sørensen, 10.1103/PhysRevX.10.031011Phys. Rev. X. 1031011S. Mahmoodian, G. Calajó, D. E. Chang, K. Hammerer, and A. S. Sørensen, Phys. Rev. X 10, 031011 (2020).
. F Yang, M M Lund, T Pohl, P Lodahl, K Mølmer, 10.1103/PhysRevLett.128.213603Phys. Rev. Lett. 128213603F. Yang, M. M. Lund, T. Pohl, P. Lodahl, and K. Mølmer, Phys. Rev. Lett. 128, 213603 (2022).
. Y Sun, P.-X Chen, 10.1364/optica.5.001492Optica. 51492Y. Sun and P.-X. Chen, Optica 5, 1492 (2018).
. Y Hao, G Lin, X Lin, Y Niu, S Gong, Scientific reports. 94723Y. Hao, G. Lin, X. Lin, Y. Niu, and S. Gong, Scientific reports 9, 4723 (2019).
. M J Collett, C W Gardiner, 10.1103/PhysRevA.30.1386Phys. Rev. A. 301386M. J. Collett and C. W. Gardiner, Phys. Rev. A 30, 1386 (1984).
. A Reiserer, G Rempe, 10.1103/RevModPhys.87.1379Rev. Mod. Phys. 871379A. Reiserer and G. Rempe, Rev. Mod. Phys. 87, 1379 (2015).
. B Hacker, S Welte, S Daiss, A Shaukat, S Ritter, L Li, G Rempe, Nature Photonics. 13110B. Hacker, S. Welte, S. Daiss, A. Shaukat, S. Ritter, L. Li, and G. Rempe, Nature Photonics 13, 110 (2019).
. A H Kiilerich, K Mølmer, 10.1103/PhysRevA.102.023717Phys. Rev. A. 10223717A. H. Kiilerich and K. Mølmer, Phys. Rev. A 102, 023717 (2020).
. A H Kiilerich, K Mølmer, 10.1103/PhysRevLett.123.123604Phys. Rev. Lett. 123123604A. H. Kiilerich and K. Mølmer, Phys. Rev. Lett. 123, 123604 (2019).
. N V Vitanov, A A Rangelov, B W Shore, K Bergmann, 10.1103/RevModPhys.89.015006Rev. Mod. Phys. 8915006N. V. Vitanov, A. A. Rangelov, B. W. Shore, and K. Bergmann, Rev. Mod. Phys. 89, 015006 (2017).
C Gerry, P Knight, P L Knight, Introductory quantum optics. Cambridge university pressC. Gerry, P. Knight, and P. L. Knight, Introductory quantum optics (Cambridge university press, 2005) Chap. 7.
. K Möhle, E V Kovalchuk, K Döringshoff, M Nagel, A Peters, Applied Physics B. 111223K. Möhle, E. V. Kovalchuk, K. Döringshoff, M. Nagel, and A. Peters, Applied Physics B 111, 223 (2013).
. D E Chang, J S Douglas, A González-Tudela, C.-L Hung, H J Kimble, 10.1103/RevModPhys.90.031002Rev. Mod. Phys. 9031002D. E. Chang, J. S. Douglas, A. González-Tudela, C.-L. Hung, and H. J. Kimble, Rev. Mod. Phys. 90, 031002 (2018).
. I L Chuang, D W Leung, Y Yamamoto, 10.1103/PhysRevA.56.1114Phys. Rev. A. 561114I. L. Chuang, D. W. Leung, and Y. Yamamoto, Phys. Rev. A 56, 1114 (1997).
. C W Gardiner, M J Collett, 10.1103/PhysRevA.31.3761Phys. Rev. A. 313761C. W. Gardiner and M. J. Collett, Phys. Rev. A 31, 3761 (1985).
| []
|
[
"Propagation and Fluxes of Ultra High Energy Cosmic Rays in f (R) Gravity Theory",
"Propagation and Fluxes of Ultra High Energy Cosmic Rays in f (R) Gravity Theory"
]
| [
"Swaraj Pratim Sarmah \nDepartment of Physics\nDibrugarh University\n786004Dibrugarh, AssamIndia\n",
"Umananda Dev Goswami \nDepartment of Physics\nDibrugarh University\n786004Dibrugarh, AssamIndia\n"
]
| [
"Department of Physics\nDibrugarh University\n786004Dibrugarh, AssamIndia",
"Department of Physics\nDibrugarh University\n786004Dibrugarh, AssamIndia"
]
| []
| Even though the sources of the ultra high energy cosmic rays (UHECRs) are yet to be known clearly, the high-quality data collected by the most recent CRs observatories signal that the sources of these CRs should be of the extragalactic origin. As the intergalactic mediums are thought to be filled with the turbulent magnetic fields (TMFs), these intergalactic magnetic fields may have a significant impact on how UHECRs travel across the Universe, which is currently expanding with acceleration. Thus the inclusion of these points in the theory is crucial for understanding the experimental findings on UHECRs. Accordingly, in this work we study the effect of diffusion of UHE particles in presence of TMFs in the light of f (R) theory of gravity. The f (R) theory of gravity is a successful modified theory of gravity in explaining the various aspects of the observable Universe including its current state of expansion. For this work we consider two most studied f (R) gravity models, viz., the power-law model and the Starobinsky model. With these two models we study the diffusive character of propagation of UHECR protons in terms of their density enhancement. The Greisen-Zatsepin-Kuzmin (GZK) cutoff, the dip and the bump are all spectrum characteristics that UHE extragalactic protons acquire when they propagate through the cosmic microwave background (CMB) radiation in presence of TMFs. We analyse all these characteristics through the diffusive flux as well as its modification factor. Model dependence of the modification factor is minimal compared to the diffusive flux. We compare the UHECR protons spectra that are calculated for the considered f (R) gravity models with the available data of the AKENO-AGASA, HiRes, AUGER and YAKUTSK experiments of UHECRs. We see that both the models of f (R) gravity provide the energy spectra of UHECRs with all experimentally observed features, which lay well within the range of combine data of all experiments throughout the energy range of concern, in contrast to the case of the ΛCDM model. | null | [
"https://export.arxiv.org/pdf/2303.16678v1.pdf"
]
| 257,804,709 | 2303.16678 | 5f8d50d5007aa89c27b26e6260ce95561d199f55 |
Propagation and Fluxes of Ultra High Energy Cosmic Rays in f (R) Gravity Theory
Swaraj Pratim Sarmah
Department of Physics
Dibrugarh University
786004Dibrugarh, AssamIndia
Umananda Dev Goswami
Department of Physics
Dibrugarh University
786004Dibrugarh, AssamIndia
Propagation and Fluxes of Ultra High Energy Cosmic Rays in f (R) Gravity Theory
Ultra High Energy Cosmic RaysPropagationEnhancement FactorFlux
Even though the sources of the ultra high energy cosmic rays (UHECRs) are yet to be known clearly, the high-quality data collected by the most recent CRs observatories signal that the sources of these CRs should be of the extragalactic origin. As the intergalactic mediums are thought to be filled with the turbulent magnetic fields (TMFs), these intergalactic magnetic fields may have a significant impact on how UHECRs travel across the Universe, which is currently expanding with acceleration. Thus the inclusion of these points in the theory is crucial for understanding the experimental findings on UHECRs. Accordingly, in this work we study the effect of diffusion of UHE particles in presence of TMFs in the light of f (R) theory of gravity. The f (R) theory of gravity is a successful modified theory of gravity in explaining the various aspects of the observable Universe including its current state of expansion. For this work we consider two most studied f (R) gravity models, viz., the power-law model and the Starobinsky model. With these two models we study the diffusive character of propagation of UHECR protons in terms of their density enhancement. The Greisen-Zatsepin-Kuzmin (GZK) cutoff, the dip and the bump are all spectrum characteristics that UHE extragalactic protons acquire when they propagate through the cosmic microwave background (CMB) radiation in presence of TMFs. We analyse all these characteristics through the diffusive flux as well as its modification factor. Model dependence of the modification factor is minimal compared to the diffusive flux. We compare the UHECR protons spectra that are calculated for the considered f (R) gravity models with the available data of the AKENO-AGASA, HiRes, AUGER and YAKUTSK experiments of UHECRs. We see that both the models of f (R) gravity provide the energy spectra of UHECRs with all experimentally observed features, which lay well within the range of combine data of all experiments throughout the energy range of concern, in contrast to the case of the ΛCDM model.
I. INTRODUCTION
The discovery of cosmic rays (CRs) by V. F. Hess in 1912 [1] is one of the most significant milestones in the history of modern physics. CRs are charged ionizing particles, mostly consisting of protons, helium, carbon and other heavy ions upto iron emanating from outer space. Although the discovery of CRs has passed more than a hundred and ten years now, the origin, acceleration and propagation mechanisms of CRs are still not clearly known [2][3][4], especially in the higher energy range i.e. the energy range E ≥ 0.1 EeV (1 EeV = 10 18 eV). The sources of such usually referred ultra high energy CRs (UHECRs) are not established yet [5][6][7][8]. However, in the energy range E ≤ 0.1 EeV, it is assumed that the sources are of galactic origin and they are accelerated with supernova explosion [9], while that of the well above this range (∼ 1 EeV and above) are of most probably the extragalactic in origin and plausibly to accelerate in gamma ray (γ-ray) bursts or in active galaxies [2].
The energy spectrum of CRs has an extraordinary range of energies. It extends from about many orders of GeV energies upto 100 EeV and it exhibits the power-law spectrum. There is a small spectral break known as the knee at about 4 PeV (1 PeV = 10 15 eV) and a flattening at the ankle at about 5 EeV. In this spectrum, a strong cutoff near 50 EeV, which is called the GZK (Greisen, Zatsepin and Kuzmin) cutoff [10,11] is appeared due to the interaction with cosmic microwave background (CMB) photons. Besides this, there are two other signatures also in the spectrum, viz. dip and bump [12][13][14][15]. The first one is due to the pair production (P + γ CMB → e + + e − + P ) and second one is due to the photopion production (P + γ CMB → π + N ) with the CMB photons.
The intergalactic medium (IGM) contains turbulent magnetic fields (TMFs), which impact significantly on the propagation of extragalactic UHECRs. In the presence of any random magnetic field, the propagation of a charged particle depends on how much distance is travelled by that particle compared with the scattering length λ = 3D/c in the medium, where D denotes the diffusion coefficient and c is the speed of light in free space [16]. If the travelled distance of the charge particle is very smaller than the scattering length, then the propagation is ballistic in nature while that is diffusive if the distance is very larger than the scattering length. Consideration of an extragalactic TMF and also taking into account the finite density of sources in the study of propagation of UHECRs may result in a low-energy magnetic horizon effect, which may allow the observations to be consistent with a higher spectral index [9,17,18], closer to the values anticipated from diffusive shock acceleration. Other hypothesis rely on the assumption of acceleration of heavy nuclei by extragalactic sources, which then interact with the infrared radiation present in those environments to photodisintegration, producing a significant number of secondary nucleons that might explain the light composition seen below the ankle [19,20]. In the presence of an intergalactic magnetic field, the propagation of UHECRs can be studied from the Boltzmann transport equation or by using some simulation methods. In Ref. [16], the author presents a system of partial differential equations to describe the propagation of UHCRs in presence of a random magnetic field. In this paper, the author considered the Boltzmann transport equation and obtained the partial differential equations for the number density as well as for the flux of particles. A diffusive character of propagation of CRs is also obtained in this paper. In Ref. [21] (see also Ref. [22]), an astrophysical simulation framework is proposed for studying the propagating extraterrestrial UHE particles. In their work, authors presented a new and upper version of publicly available code CRPropa 3. It is a code for the efficient development of astrophysical predictions for UHE particles. Ref. [23] presented an analytical solution of the diffusion equation for high energy CRs in the expanding Universe. A fitting to the diffusion coefficient D(E) obtained from numerical integration was presented in Ref. [2] for the both Kolmogorov and Kraichnan turbulence. Authors of Ref. [3] studied the effects of diffusion of CRs in the magnetic field of the local supercluster on the UHECRs from a nearby extragalactic source. In this study authors found that a strong enhancement at certain energy ranges of the flux can help to explain the features of CR spectrum and the composition in detail. In Ref. [5], the authors demonstrated the energy spectra of UHECRs as observed by Fly's Eye [24], HiRes [25], Yakutsk and AGASA [26] from the idea of the UHE proton's interaction with CMB photons. A detailed analytical study of the propagation of UHE particles in extragalactic magnetic fields has been performed in Ref. [27] by solving the diffusion equation analytically with the energy losses that are to be taken into account. In another study [28], the authors obtained the dip, bump and GZK cutoff in terms of the modification factor, which arise due to various energy losses suffered by CR particles while propagting through the complex galactic or intergalactic space [4]. Similarly, in Ref. 
[29], authors obtained four features in the CR proton spectrum, viz. the bump, dip, second dip and the GZK cutoff taking into consideration of extragalactic proton's interaction with CMB and assuming of resulting power-law spectrum.
The General Relativity (GR) developed by Albert Einstein in 1915 to describe the ubiquitous gravitational interaction is the most beautiful, well tested and successful theory in this regard. The discovery of Gravitational Waves (GWs) by LIGO detectors in 2015 [30] after almost hundred years of their prediction by Einstein himself and the release of the first image of supermassive black hole at the centre of the elliptical supergiant galaxy M87 by the Event Horizon Telescope (EHT) in 2019 [31][32][33][34][35][36] are the robust supports amongst others in a century to GR. Even though the GR has been suffering from many drawbacks from the theoretical as well as observational fronts. For example, the complete quantum theory of GR has remained elusive till now. The most important limitations of GR from the observational point of view are that it can not explain the observed current accelerated expansion [37][38][39][40] of the Universe, and the rotational dynamics of galaxies indicating the missing mass [41] in the Universe. Consequently, the Modified Theories of Gravity (MTGs) have been developed as one of the ways to explain these observed cosmic phenomena, wherein these phenomena are looked at as some special geometrical manifestations of spacetime, which were remained to be taken into account in GR. The most simplest but remarkable and widely used MTG is the f (R) [42] theory of gravity, where the Ricci scalar R in the Einstein-Hilbert (E-H) action is replaced by a function f (R) of R. Various models of f (R) gravity theory have been developed so far from different perspectives. Some of the viable as well as famous or popular models of f (R) gravity are the Starobinsky model [43], Hu-Sawicki model [44], Tsujikawa model [45], power-law model [46] etc.
Till now a number of authors have studied the propagation of CRs in the domain of GR [2-9, 13-16, 27-29, 47]. The enhancement of the flux of CRs is obtained in the framework of the ΛCDM model by a variety of authors [3,48]. Besides these, differential flux as well as the modification factor have also been studied [4,5,13,16,[27][28][29]. Since MTGs have made significant contributions in the understanding the cosmological [49,50] and astrophysical [51] issues in recent times, it would be wise to apply the MTGs in the field of CRs to study the existing issues in this field. Keeping this point in mind, in this work, we study for the very first time the propagation of UHECRs and their consequent flux in the light of a MTG, the f (R) theory of gravity. For this purpose, we consider two f (R) gravity models, viz. the power-law model [46] and the Starobinsky model [67]. Considering these two models, we have calculated the expression for the number density of particles. From the number density, we have calculated the enhancement factor as well as differential flux and modification factor for the UHECRs.
The remaining part of the paper is arranged as follows. In section II, we discuss the turbulent magnetic field and diffusive propagation mechanism. The basic cosmological equation that is used to calculate the cosmological parameters is introduced in the section III. In section IV, we define f (R) gravity models of our interest and calculate the required model parameters for those models. A fit to these models is also shown in this section in comparison with the observational data. In section V, we calculate the number density of particles and hence the enhancement factor. Here the differential flux and modification factor for the both two models are calculated and are compared the results with AKENO-AGASA [52,53], HiRes [54], AUGER [55,56] and YAKUTSK [57] arrays data. Finally, we compare the results for the ΛCDM, power-law and Starobinsky models and then conclude our paper with a fruitful discussion in section VI.
II. PROPAGATION OF COSMIC RAYS IN TURBULENT MAGNETIC FIELDS
It is a challenging task to build a model for the extragalactic magnetic fields since there are few observable constraints on them [58]. Their exact amplitude values are unknown, and they probably change depending on the region of space being considered.
In the cluster centre regions, the large-scale magnetic fields have recorded amplitudes that vary from a few to tens of µG [59]. Smaller strengths are anticipated in the vacuum regions, with the typical boundaries in unclustered regions being 1 to 10 nG. This means that considerable large-scale magnetic fields should also be present in the filaments and sheets of cosmic structures. The evolution of primordial seeds impacted by the process of structure building may result in TMFs in the Universe [2]. As a result, magnetic fields are often connected with the matter density and are therefore stronger in dense areas like superclusters and weaker in voids. In the local supercluster region, a pragmatic estimation place the coherence length of the magnetic fields between 10 kpc and 1 Mpc, while their root mean square (rms) strength lies in the range of 1 to 100 nG [59][60][61]. The regular component of the galactic magnetic field (GMF), which typically has a strength of only a few µG, may have an impact on the CRs' arrival directions, but due to its much lesser spatial extent, it is anticipated to have a subdominant impact on the CRs spectrum.
In the local supercluster region, the rotation measure of polarised background sources has suggested the presence of a strong magnetic field, with a potential strength of 0.3 to 2 µG [60]. It is the magnetic field within the local supercluster that is most relevant since the impacts of the magnetic horizon become noticeable when the CRs from the closest sources reach the Earth. Thus we will not consider here the larger scale inhomogeneities from filaments and voids. The propagation of CRs in an isotropic, homogenous, turbulent extragalactic magnetic field will then be simplified. The rms amplitude of magnetic fields B and the coherence length l c , which depicts the maximum distance between any two points upto which the magnetic fields correlate with each other, can be used to characterise such magnetic fields. The rms strength of the magnetic fields can be defined as B = B 2 (x) , which can take values from 1 nG upto 100 nG and that of the coherence length l c can take the values from 0.01 Mpc to 1 Mpc.
An effective Larmor radius for charged particles of charge Ze moving with energy E through a TMF of strength B may be defined as
r L = E ZeB 1.1 E/EeV ZB/nG Mpc.(1)
A pertinent quantity in the study of diffusion of charged particles in magnetic fields is the critical energy of the particles. This energy can be defined as the energy at which the coherence length of a particle with charge Ze is equal to its Larmor radius i.e., r L (E c ) = l c and it is given by
E c = ZeBl c 0.9Z B nG l c Mpc EeV.(2)
This energy distinguishes between the regime of resonant diffusion that occurs at low energies (< E c ) and the non-resonant regime at higher energies (> E c ). In the resonant diffusion regime the particles suffer large deflections due the interaction with magnetic field B with scales that are comparable to l c , whereas in the latter scenario, deflections are small and can only take place across the travel lengths that are greater than l c . Extensive numerical simulations of proton's propagation yielded a fit to the diffusion coefficient D as a function of energy [2], which is given by
D(E) c l c 3 4 E E c 2 + a I E E c + a L E E c 2−m ,(3)
where m is the index parameter, a I and a L two coefficients. For the case of TMF with Kolmogorov spectrum m = 5/3 and the coefficients are a L ≈ 0.23 and a I ≈ 0.9, while that for Kraichnan spectrum one will have m = 3/2, a L ≈ 0.42 and a I ≈ 0.65. The diffusion length l D relates to the distance after which the particles' overall deflection is nearly one radian and is given by
l D = 3D/c. From Eq. (3), it is seen that for E/E c 0.1 the diffusion length, l D a L l c (E/E c ) 2−m while that for E/E c 0.2, the diffusion length will be l D 4 l c (E/E c ) 2 .
III. BASIC COSMOLOGICAL EQUATIONS
On a large scale, the Universe appears to be isotropic and homogeneous everywhere. In light of this, the simplest model to be considered is a spatially flat Universe, which is described by the Friedmann-Lemaître-Robertson-Walker (FLRW) metric and it is defined as
ds 2 = − dt 2 + a 2 (t)δ ij dx i dx j ,(4)
where a(t) is the scale factor, δ ij is the Kronecker delta function with i, j = {1, 2, 3} and x µ = {x 0 , x 1 , x 2 , x 3 } are comoving coordinates with x 0 = t. Moreover, as a source of curvature we consider the perfect fluid model of the Universe with energy density ρ and pressure p which is specified by the energy-momentum tensor T µ ν = diag(− ρ, p, p, p). Here we are firstly interested in the basic cosmological evolution equation to be used in our study and this equation is the Friedmann equation. The Friedmann equation in f (R) gravity theory we used is derived by following the Palatini variational approach of the theory. In this approach both the metric g µν and the torsion-free connection Γ λ µν are considered as independent variables. In our present case the metric is g µν = diag(− 1, a 2 , a 2 , a 2 ) and the connection can be obtained from the f (R) gravity field equations in the Palatini formalism [42]. Following the Palatini formalism the generalized Friedmann equation for our Universe in terms of redshift in f (R) gravity theory can be expressed as [62]
H 2 H 2 0 = 3 Ω m0 (1 + z) 3 + 6 Ω r0 (1 + z) 4 + f (R) H 2 0 6f R ζ 2 ,(5)
where
ζ = 1 + 9f RR 2f R H 2 0 Ω m0 (1 + z) 3 Rf RR − f R .(6)
In Eqs. (5) and (6), H 0 ≈ 67.4 kms −1 Mpc −1 [63] is the Hubble constant, Ω m0 ≈ 0.315 [63] is the present value of the matter density parameter and Ω r0 ≈ 5.373 × 10 −5 [64] is the present value of the radiation density parameter. f R and f RR are the first and second order derivatives of the function f (R) with respect to R. It is seen that Eqs. (5) and (6) are f (R) gravity model dependent.
Secondly, in our study it is important to know how the cosmological redshift is related with the cosmological time evolution. This can be studied from the connection between the redshift and cosmological time evolution, which is given by
dt dz = 1 (1 + z)H .(7)
The expression of the Hubble parameter H(z) for different models of f (R) gravity will be derived using Eqs. (5) and (6) in the following section IV.
IV. f (R) GRAVITY MODELS AND COSMOLOGICAL EVOLUTION
In this section, we will introduce the power-law model [46] and Starobinsky model [67] of f (R) theory of gravity, and then will derive the expressions for the Hubble parameter and evolution Eq. (7) for these two models. The least square fit to the derived Hubble parameter from the both models with the recent observational data to constrain the model parameters will also be done here. Moreover, the likelihood fit method will be used here to further constrain different model parameters with the observed cosmological data.
A. Power law model and cosmological equations
The general f (R) gravity power-law model is given by [65,66]
f (R) = λ R n ,(8)
where λ and n are two model parameters. Here the parameter n is apparently a constant quantity, but the parameter λ depends on the value of n as well as on the cosmological parameters H 0 , Ω m0 and R 0 as given by [66]
λ = − 3H 2 0 Ω m0 (n − 2)R n 0 .(9)
This expression of the parameter λ implies that the power-law model has effectively only one unknown parameter, which is the n. For this model the expression of the present value of the Ricci scalar R 0 can be obtained as [66]
R 0 = − 3(3 − n) 2 H 2 0 Ω m0 2n [(n − 3)Ω m0 + 2(n − 2)Ω r0 ] .(10)
The expression of the Hubble parameter H(z) for the power-law model can be obtained from Eq. (5) together with Eq. (6) as [66]
H(z) = − 2nR 0 3(3 − n) 2 Ω m0 (n − 3)Ω m0 (1 + z) 3 n + 2(n − 2) Ω r0 (1 + z) n+3 n 1 2 .(11)
In our study for the model parameter n, we have used its value from the Ref. [66] where the values of n = 1.25, 1.4 and 1.9 have been taken into account. Although the best fitted value of n is 1.4 according to the Ref. [66], here, we employ a corner plot for parameters H 0 , Ω m0 , Ω r0 and n in the Python library using emcee code along with the likelihood function in association with the Markov Chain Monte-Carlo approach to find a most credible value of n. As shown in Fig. 1, from this analysis we have and n = 1.399762 +0.000341 −0.000342 . It is seen that the most likely value of n is very close to 1.4 and hence we will use it in the rest of our analysis.
The relation of cosmological evolution time t and redshift z for the power-law model can be obtained by substituting Eq. (11) for H(z) in Eq. (7) as given by
dt dz = (1 + z) −1 − 2nR 0 3(3 − n) 2 Ω m0 (n − 3)Ω m0 (1 + z) 3 n + 2(n − 2)Ω r0 (1 + z) n+3 n − 1 2 .(12)
In Fig. 2, we have plotted the differential variation of cosmological time t with respect to redshift z i.e. the variation of dt/dz with the redshift z for different values for model parameter n along with the that for the ΛCDM model. It is seen from Fig. 2 that the difference of variation of dt/dz for the power-law and the ΛCDM model is very less significant for smaller values of z ≤ 0.2, while that for higher value of z it has shown a notable variation. Therefore for the rest of the paper, a possible higher value of redshift will have to be taken into account. Moreover, it should be mentioned that although n = 1.
B. Starobinsky Model and cosmological equations
The Starobinsky model of f (R) gravity considered here is of the form [67]:
f (R) = αR + βR 2 ,(13)
where α and β are two free model parameters which are to be constrained by using observational data associated with a particular problem of study. Similar to the previous case the expression of the Hubble parameter H(z) for the Starobinsky model can be obtained from Eq. (5) along with Eq. (7) as
H(z) = H 0 3 Ω m0 (1 + z) 3 + 6 Ω r0 (1 + z) 4 + αR + βR 2 H −2 0 6(α + 2βR) 1 − 9 βH 2 0 Ωm0(1+z) 3 α(α+2βR) 2 1 2 .(14)
To use this expression of H(z) for further study we have to constraint the values of the model parameters α and β within Table I their realistic values as the behaviour of H(z) depends significantly on these two model parameters. For this we have used the currently available observational Hubble parameter (H obs (z)) data set [68] as shown in Table I. Here we have taken into account the combination of 43 observational Hubble parameter data against 43 distinct values of redshift in order to obtain the precise values of the aforementioned free model parameters, which should be consistent with the ΛCDM model at least around the current epoch i.e. z = 0. Using the least square fitting technique in ROOT software [69], we have plotted the best fitted curve to this set of Hubble parameter data with respect to the redshift as shown in Fig 3. From this best fitted curve we have inferred values of α and β as 1.07 and 0.00086 respectively. Further, like the power-law model, we have plotted a corner plot for the Starobinsky model as well for the model parameters α and β, which is shown in Fig. 4. Here also we get the model parameter as α = 1.070131 +0.000062 −0.000062 and β = 0.000860. Now, we are in a position to write the expression for dt/dz for this model and it can be expressed as
dt dz = (1 + z)H 0 −1 3 Ω m0 (1 + z) 3 + 6 Ω r0 (1 + z) 4 + αR+βR 2 H 2 0 6(α + 2βR) 1 − 9βH 2 0 Ωm0(1+z) 3 α(α+2βR) 2 − 1 2 ,(15)
In In the next section, we will employ the power law model and the Starobinsky model to calculate the cosmic ray density and differential flux using the results of this section. The first thing that piques our curiosity is how the density of CRs is being modulated at a certain distance from the originating source in a TMF. For this it is necessary to calculate the density enhancement of CRs at a certain distance r s from the originating source while being surrounded by a TBF. We specifically wish to investigate its reliance on the energy of particles and take into account the transition from the diffusive propagation.
In the diffusive regime, the diffusion equation for the UHE particles propagating in an expanding Universe from a source which is located at a position x s can be expressed as [23] ∂n ∂t
+ 3H(t) n − b(E, t) ∂n ∂E − n ∂n ∂E − D(E, t) a 2 (t) ∇ 2 n = Q s (E, t) a 3 (t) δ 3 (x − x s ),(16)
where H(t) =ȧ(t)/a(t) is the Hubble parameter as a function of cosmological time t,ȧ(t) is time derivative of the scale factor a(t), x denotes the comoving coordinates, n is the density of particle at time t and position x, Q s (E) is the source function that depicts the number of emitted particles with energy E per unit time. Thus, at time t, which corresponds to redshift z, r s = x − x s . The energy losses due to the expansion of the Universe and interaction with CMB are described by
dE dt = − b(E, t), b(E, t) = H(t)E + b int (E).(17)
Here H(t)E represents the adiabatic energy losses due to expansion and b int (E) denotes the interaction energy losses. The interaction energy losses with CMB includes energy losses due to pair production and photopion production (for details see [2]). The general solution of Eq. (16) was obtained in [23] considering the particles as protons and it is given as
n(E, r s ) = zi 0 dz dt dz Q(E g , z) exp −r 2 s /4λ 2 (4πλ 2 ) 3/2 dE g dE ,(18)
where z i is the redshift of initial time when a particle was just emitted by source and E g is the generation energy at redshift z of a particle whose energy is E at z = 0, i.e. at present time. The source function Q(E g , z) is considered to follow a power-law spectrum Q ∝ E −γg g with γ g as the spectral index of generation at the source. λ is the Syrovatsky variable [17,83] and is given by
λ 2 (E, z) = z 0 dz dt dz (1 + z) 2 D(E g , z).(19)
Here λ(E, z) refers to the usual distance that CRs travel from the location of their production at redshift z with energy E g , to the present time at which they are degraded to energy E. The expression of the rate of dregadation of the energy at source of particles with respect to their energy at z = 0, i.e. dE g /dE is given by [23]
dE g dE = (1 + z) exp z 0 dz dt dz ∂ b int ∂E .(20)
It is clear that using Eqs. (12) and (15) in Eqs. (19) and (20) the density of UHE protons in the diffusive medium at any cosmological time t with energy E and at a distance r s from the source as given by Eq. (18) can be obtained as predicted by the f (R) gravity power-law model and Starobinsky model respectively. So, in the following we will implement the power-law and Starobinsky model results from section IV to obtain the CR's protons density enhancement factor, and subsequently the CR's protons flux and energy spectrum as predicted by these two f (R) gravity models.
A. Projections of f (R) power-law model
To calculate the CR protons' density (18) and hence its enhancement factor in the TMF of extragalactic space projected by the power-law model of f (R) gravity, as a prerequisite we calculate first the Syrovatsky variable λ for this model from Eq. (19) using Eq. (12) with different values of the model parameters n and taking the feasible field parameter values as l c = 0.1 Mpc, B = 50 nG and the corresponding critical energy of proton as E c = 4.5 EeV, and then study the behaviour of the variable λ for the both Kolmogorov spectrum and Kraichnan spectrum. We also calculate this variable for the ΛCDM model for both the spectra for the comparison. In these calculations and rest ones we use the values of z = 0 − 5 keeping in view of possible source locations of CRs as well as the present and probable future cosmological observable range. The results of these calculations are shown in Fig. 6 with respect to energy E for the Kolmogorov spectrum (m = 5/3, a L ≈ 0.23 and a I ≈ 0.9) (left panel) and the Kraichnan spectrum (m = 3/2, a L ≈ 0.42 and a I ≈ 0.65) (middle panel). It is seen from the figure that the value of λ 2 increases with increasing energy of the particle. The power-law model predicts higher λ 2 values for all values of n in comparison to that of the ΛCDM models for both the spectra and this difference increases substantially with the increasing energy E. Similarly higher values of the parameter n give increasingly higher values λ 2 in comparison to the smaller values of n. Apparently no difference can be observed between the values of λ 2 obtained for the Kolmogorov spectrum and the Kraichnan spectrum from the respective plots. So, to quantify the difference of values of λ 2 for these two spectra to a visible one we calculate the percentage difference between λ 2 values obtained for the Kolmogorov spectrum and Kraichnan spectrum per average bin values of λ 2 for each energy bin of both the spectra (∆λ 2 kk (%)) for the power-law model with n = 1.4, which is shown in the right panel of the figure. A peculiar behaviour of the variation of ∆λ 2 kk (%) with energy is seen from the plot. The ∆λ 2 kk (%) is energy dependent, it decreases rapidly with E upto ∼ 0.4 EeV, after which it shows oscillatory behaviour with the lowest minimum at ∼ 1.55 EeV. At energies above 0.1 EeV the values of ∆λ 2 kk (%) are seen to be mostly below the 1%. Thus at these ultra high energies differences of λ 2 values for the Kolmogorov spectrum and the Kraichnan spectrum are not so significant.
In the diffusive regime the density of the particles has been enhanced by a factor depending on the energy, distance of the particles from the source and TMF properties. The density enhancement factor can be defined as the ratio of actual density to the density of particles that would lead for their rectilinear propagation, which is given by [3] ξ(E, r s ) = 4πr 2 s c n(E, r s )
L(E)(21)
where L(E) is the spectral emissivity of the source, which has power-law dependency on the energy of the particles. The results of the enhancement of the density for a proton source and various parameter values obtained by numerically integrating Eq. (18), are displayed in Fig. 7. The distance to the source r s , the magnetic field amplitude B, and its coherence length l c are the major factors that determine the lower-energy suppression of the density enhancement factor. In the lower panels, the enhancement energy range is less as compared to the upper panels, which is lowest in the case of the lower right panel. As the distance from the source is far away, the enhancement of density is limited in a smaller range of energies, but shifted towards the higher energy side. The final verdict from the Fig.7 is that as the distance from the source r s increases, the enhancement becomes gradually model independent. Also one can appreciate that the f (R) gravity power-law has done a perfect job by enhancing density in a wider range of energies as compared to the ΛCDM model. For a given source distance of 25 Mpc and coherence length of 0.1 Mpc, we depict the enhancement factor ξ as a function of E/E c in Fig. 8 to better highlight the fact that for E/E c < 0.01 the Kolmogorov spectrum (left panel) and Kraichnan spectrum (right panel) have shown different behaviours while for E/E c > 0.01 the both Kolmogorov and Kraichnan spectra have shown similar patterns. In this case, the f (R) power-law model is more suitable as it gives the enhancement in the higher as well as lower values of E/E c , while in the case of the ΛCDM the range it gives the enhancement is less wider than the power-law model. From this Fig. 8 it is clearly seen that the kolmogorov spectrum has given a better range of E/E c than Kraichnan spectrum for the both ΛCDM and f (R) power-law model.
The diffusive character of propagation UHE protons is shown in Fig. 9. Here we plot the density enhancement ξ as a function of source distance r s . In this case, we fix the coherence length l c = 0.1 Mpc, while E/E c = 6 and E/E c = 12 in the upper left and the lower left panel respectively. From these two panels, we can say that the lower E/E c value results in a higher peak of the density enhancement with the peak position towards the smaller values of r s . Again, the ΛCDM model shows the highest peak in the CRs density enhancement, while the f (R) gravity power-law model depicts a better distribution of enhancement with the source distance. For the model parameters n = 1.25 and n = 1.4, it results in a similar distribution while for n = 1.9, it shows a larger distribution. From the lower left panel and the upper right panel, it is clearly seen that the enhancement peak height and position depend on the coherence length l c also. For the higher value of l c the peak height decreases, but it shifts away from the source. In the lower right panel, we have considered a larger value of E/E c = 24 which results in a very poor peak in the both ΛCDM and the f (R) power-law models. So from these results, we can finally say that for suitable values of l c and E/E c , the ΛCDM model depicts a better peak, while f (R) power-law model depicts the enhancement in a much wider distribution.
For reckoning the diffuse spectrum of UHE particles the separation between sources play a crucial role. If the sources are distributed uniformly with separations, which are very smaller than the propagation and the interaction lengths, then the diffuse spectrum of UHE particles has a universal form, regardless of the mode of propagation of such particles [27]. To this end the explicit form of the source function Q(E, z) for the power-law generation of the particles can be written as [29] Q(E, z)
= L 0 (1 + z) m Kq gen (E g ),(22)
where L 0 = L(E) dE is the total emissivity, (1 + z) m represents the probable cosmological evolution of the sources, K is a normalisation constant with K = γ g − 2 for γ g > 2 and for γ g = 2, K = (ln E max /E min ) −1 and q gen = E −γg g (see Appendix A for E g ). Utilizing the formalisation of Refs. [13,23], it is possible to determine the spectrum of UHE protons in the model with uniform source distribution and hence one can obtain the diffuse flux of UHE protons as
J p (E) = c 4π L 0 K zmax 0 dz dt dz (1 + z) m q gen (E g ) dE g dE .(23)
Following Eq. (12) one can rewrite this diffuse flux Eq. (23) as
J p (E) = c 4π L 0 K zmax 0 dz (1 + z) −1 − 2 nR 0 3(3 − n) 2 Ω m0 (n − 3)Ω m0 (1 + z) 3 n + 2(n − 2) Ω r0 (1 + z) n+3 n − 1 2 (1 + z) m q gen (E g ) dE g dE .(24)
The spectrum given by Eq. (23) is known as the universal spectrum as it is independent of the mode of propagation of particles which is the consequence of the small separation of sources as mentioned earlier. The shape of the universal spectrum may theoretically be changed by variety of effects, which include the fluctuations in interaction, discreteness in source distribution, large-scale inhomogeneous source distribution, local source overdensity or deficit, and discrete source distribution. However, the aforementioned effects only slightly change the form of the universal spectrum, with the exception of energies below 1 EeV and above the GZK cutoff. Numerical simulations demonstrate that the energy spectrum is changed by the propagation of UHE protons in the strong magnetic fields depending on the separation of sources. For small separation of sources with their uniform distribution the spectrum becomes the universal one as mentioned already [84,85]. In Fig. 10, we plot the diffusive flux with no cosmological evolution (m = 0) [27,29]. The emissivity L 0 is taken to fit the curve with the available observational data [29]. It is clear that the f (R) gravity power-law model curve with n = 1.4 has satisfied the majority of AKENO, AGASA and HiRes I data, while the ΛCDM model satisfies AUGER and HiRes II. Moreover, the f (R) gravity power-law model curve passes through within the error range of most of the experimental data for the whole UHECRs range. Further, throughout this energy range f (R) gravity power-law model gives higher proton flux than that of the ΛCDM model. A dip is seen at the energy range of 1 − 10 EeV, while at about 30 EeV a bump is observed. These two signatures, the dip and bump are also observed in the modification factor of the energy spectrum plot as shown in Fig. 11. The modification factor of the energy spectrum is a convenient parameter for doing the analysis of the energy spectrum of UHECRs. This parameter corresponds to the enhencement factor of density of UHECR particles discussed earlier. The modification factor of energy spectrum η(E) is calculated as the ratio of the universal spectrum J p (E) after accounting for all energy losses to the unmodified spectrum J unm p (E), in which only adiabatic energy losses due to the red shift are taken into consideration [29], i.e.
η(E) = J p (E) J unm p (E)(25)
Without any cosmological evolution the unmodified spectrum can be written as
J unm p (E) = c 4π L 0 (γ g − 2)E −γg zmax 0 dz dt dz (1 + z) (1−γg) .(26)
The modification factor as a function of energy for the spectral index γ g = 2.7 is shown in Fig. 11 for the f (R) gravity powerlaw model and the ΛCDM model. At about 1 EeV, a dip is seen in the spectrum as predicted by both the models in agreement with the observation of AKENO, HiRes II and YAKUTSK arrays. Also a good agreement with AGASA, HiRes I and AUGER FIG. 11. Spectra of modification factor with γg = 2.7 in comparison with different experimental data such as AKENO-AGASA [52,53], HiRes [54], YAKUTSK [57] and AUGER [55] for the ΛCDM and f (R) gravity power-law model. data for the bump in the spectrum is seen. One can observe that for energy E < 1 EeV, the modification factor becomes higher. The modification factor η > 1 depicts the presence of other components of CRs which are of mainly galactic origin. From Fig. 11, it can also be said that the modification factor of the energy spectrum is less model dependent parameter.
B. Projections of Starobinsky f (R) gravity model
For this model of f (R) gravity also we will follow the same procedure as we have already done in the case of the power-law model. So here also we have to calculate the Syrovatsky variable λ 2 and for this purpose we express λ 2 (E, z) from Eq. (19) using Eq. (15) for the Starobinsky model as
λ 2 (E, z) = H −1 0 z 0 dz (1 + z) 3 Ω m0 (1 + z) 3 + 6 Ω r0 (1 + z) 4 + αR+βR 2 H 2 0 6(α + 2βR) 1 − 9βH 2 0 Ωm0(1+z) 3 α(α+2βR) 2 − 1 2 D(E g , z).(27)
In Fig. 12 we plot the variation of λ 2 with respect to energy for the f (R) gravity Starobinsky model and power-law model in comparison with the ΛCDM model. For this we consider the source distance r s = 50 Mpc, the coherence length l c = 0.1 Mpc and the strength of the TMF, B = 50 nG, and use only the Kolmogorov spectrum of the diffusion coefficient. A noticeable variation with respect to the energy is observed in λ 2 values for all of the mentioned gravity models. Moreover, the f (R) gravity Starobinsky model gives the lowest value of λ 2 although its pattern of variation with respect to energy is similar for all the three models.
Similarly, using Eqs. (18), (20) and (27) in Eq. (21) we calculate the UHE particle density enhancement factor ξ(E, r s ) for the Starobinsky model. The expression for ξ(E, r s ) for this model can be wrriten as
ξ(E, r s ) = 4πr 2 s H −1 0 zi 0 dz (1 + z) −1 3 Ω m0 (1 + z) 3 + 6 Ω r0 (1 + z) 4 + αR+βR 2 H 2 0 6(α + 2βR) 1 − 9βH 2 0 Ωm0(1+z) 3 α(α+2βR) 2 − 1 2 exp[−r 2 s /4λ 2 ] (4πλ 2 ) 3/2 dE g dE .(28)
Considering the source distances r s = 25 Mpc and 50 Mpc, coherence lengths l c = 0.5 Mpc and 0.1 Mpc, and field strengths B = 10 nG and 50 nG, we plot the density enhancement factors as a function of energy E for both the Starobinsky model and ΛCDM model in the left panel of Fig. 13 and that for r s = 75 Mpc and 100 Mpc, l c = 0.025 Mpc and 0.05 Mpc, and B = 40 nG and 80 nG in the right panel of Fig. 13. Note that in the figure, we constrain the critical energy i.e., E c = 4.5 EeV and E c = 1.8 EeV for the left and the right panel respectively. One can see that the enhancement of density precisely relies on the parameters we consider and for the different parameters we find a very dintinct result in each of the cases. The distinction between enhancement factors for the Starobinsky model and ΛCDM model is clearly visible. The Starobinsky model gives a higher peak and wider range of the enhancement factor than that given by the ΛCDM model. Moreover, for smaller to medium values of r s the difference between two models on the higher energy side is very small, while for higher values of r s the it is very small on the lower energy side of the enhancement factor plots. For more distinct observation of the density enhancement features, we plot the density enhancement as a function of E/E c in Fig. 14 the same sets of parameters, a Kraichnan spectrum is also plotted in the right panel of Fig.14. The peaks of the both spectrum are almost in the same energy range but the variation in lower E/E c is quite different.
Similar to the case of f (R) gravity power-law model, here also we plot the density enhancement factor ξ with respect to the source distance r s by keeping fixed the coherence length l c = 0.1 Mpc for E/E c = 6 (black line) and 12 (red line) in the left panel of Fig. 15 [52,53], HiRes [54] and AUGER [56] along with the flux for the ΛCDM model.
The diffuse UHECR proton flux for the f (R) gravity Starobinsky model can be expressed as
J p (E) = c H 0 4π L 0 K zmax 0 dz (1 + z) m−1 3 Ω m0 (1 + z) 3 + 6 Ω r0 (1 + z) 4 + αR+βR 2 H 2 0 6(α + 2βR) 1 − 9βH 2 0 Ωm0(1+z) 3 α(α+2βR) 2 − 1 2 q gen (E g ) dE g dE .(29)
In Fig. 16, we plot this flux (29) as a function of energy by considering the Starobinsky model parameters as we have discussed in section IV. From the figure, we can see that the Starobinsky model's spectrum has a very good agreement with AUGER data within the energy range ∼ 0.8 − 1.1 EeV depicting the dip and bump. In the low energy range, the AKENO and HiRes I, and in the higher energy range both HiRes II and AUGER give a reasonably good agreement with the Starobinsky model one. Similar to the f (R) gravity power-law model the Starobinsky model also gives high flux in comparison to the ΛCDM model over the almost entire energy range considered. A detailed comparison of the diffuse fluxes for all the models considered in this work will be discussed in the next section. [52,53], HiRes [54], YAKUTSK [57] and AUGER [55] experiments.
Finally, for the calculation of the modification factor η of the energy spectrum, the unmodified flux of UHECR protons for the Starobinsky model is given by Fig. 17 shows the behaviour of the modification factor for the Starobinsky model along with that of the ΛCDM model, which are compared with experimental data as in the previous cases. One can observe that for E < 0.9 EeV the modification factor η > 1, which signifies the presence of other components for the galactic origin of CRs like the f (R) gravity power-law model. The observational data have given a good agreement with the calculated modification factor spectra with dip as well as bump for both the Starobinsky model and ΛCDM model. It is also clear that the modification factor is very weakly model dependent as seen in the case of the power-law model also.
J unm p (E) = c H 0 4π L 0 (γ g − 2)E −γg zmax 0 dz (1 + z) −γg 3 Ω m0 (1 + z) 3 + 6 Ω r0 (1 + z) 4 + αR+βR 2 H 2 0 6(α + 2βR) 1 − 9βH 2 0 Ωm0(1+z) 3 α(α+2βR) 2 − 1 2 .(30)
VI. DISCUSSIONS AND CONCLUSIONS
The believable sources of UHECRs are of extragalatic in origin [2,86]. Accordingly the propagation mechanisms of UHECRs through the extragalactic space is one of prime issues of study since the past several decades. It can be inferred that in the propagation of UHECRs across the extragalactic space, the TMFs that exist in such spaces and the current accelerated expansion of the Universe might play their crucial roles. Thus this idea led us to study the propagation of UHECRs in the TMFs in the extragalactic space in the light of f (R) theory of gravity and to compare the final outcomes with the experimental data of various world class experiments on UHECRs. The f (R) theory of gravity is the simplest and one of the successful MTGs that could explain the current accelerated expansion of the Universe. To this end, we have considered two f (R) gravity models, viz., the power-law model and the Starobinsky models. The Starobinsky model of f (R) gravity is the widely used most viable model of the theory [50,66,67]. Similarly the power-law model is also found to be suitable in various cosmological and astrophysical perspectives [66]. The basic cosmological equations for these two f (R) gravity models, which are required for this study are taken from the Ref. [66]. Independent parameters of the models are first constrained by using the recent observational data. A corner plot along with the confidence level plot is used to further constraint the said model parameters as well. The relation between the redshift z and the evolution time t is calculated for both the models. The UHECRs density n(E, r s ) and hence the enhancement factor of the density ξ(E, r s ) are obtained and they are calculated numerically for both the models of f (R) gravity.
A comparative analysis has been performed between the predictions of the power-law model and Starobinsky model of f (R) gravity along with the same of the ΛCDM model for the density enhancement factor ξ as a function of source distance r in Fig. 18. In this analysis we consider the coherence length l c = 0.1 Mpc and the fraction of energy and critical energy, E/E c = 6. One can observe that at r < 1000 Mpc, the variation of ξ for the Starobinsky model and the ΛCDM is not very different but at the far distance from the source, the behaviour of these two model is quite different in terms of the peak position of the enhancement and the range of the source distance where the enhancement takes place. In the case of the f (R) power-law model, the enhancement is less than the Starobinsky model and the ΛCDM model, but it gives the density enhancement in a much wider range than the ΛCDM model. In fact it gives the same range of source distance distribution in the enhancement and gives the peak of enhancement at the same distance as that of the Starobinsky model although the enhancement is comparatively low. Another comparative analysis has been done in Fig. 19 has given the best results as compared with other two models. From the left panel of Fig. 19, we see that at lower energies i.e., below 1 Eev, the enhancement is different for different energy values including the peaks for the all three models. But if we take a look at the higher values of energy, the all three models depict almost similar results in the enhancement. One can say that the maximum value of enhancement for the power-law model and the ΛCDM model is approximately the same but the power-law model has covered a wider range of energy values than the ΛCDM model. While the Starobinsky model gives the highest enhancement value as well as the enhancement in a much wider range of energy values. The right panel of Fig. 19 is plotted to show the variation of density enhancement as a function of E/E c . In this panel, we consider the coherence length l c = 0.05 Mpc and source distance r s = 25 Mpc to demonstrate the behaviour of enhancement with the per unit increase of energy with respect to the critical energy. It is seen that at E/E c = 10 −4 , the value of enhancement for the Starobinsky model and ΛCDM is approximately same, while the f (R) power-law model has shown a higher value of enhancement at this point. But as the fraction of energy is increased, the Starobinsky model has given a better result of enhancement as compared to the other two models. [52,53], HiRes [54] and AUGER [56] detectors' data (left panel), and the modification factors of the same in comparison with the AKENO-AGASA [52,53], HiRes [54], AUGER [55] and YAKUTSK [57] detectors' data.
We calculate the E 3 magnified flux numerically for the both f (R) gravity power-law and Starobinsky models and plot them along with that for the ΛCDM model in Fig. 20 (left panel). We also compare our calculations with the available observational data of AKENO-AGASA [52,53], HiRes [54] and AUGER [56] detectors. In the low energy case, both the f (R) models have shown very similar results, while at the higher energy a different result is obtained. All of these models have shown some agreement with the observational detectors' data, but the f (R) Starobinsky model has depicted a very good result in the lower as well as higher energy range for predicting dip and bump. Starting from AKENO as well as HiRes I data, it gives a good agreement with AUGER and HiRes II for the dip and bump respectively. For the power-law and ΛCDM models, the dip is found at around 3 EeV while that for the Starobinsky model is at around 4 EeV. The power-law model gives the higher flux than that of the Starobinsky model for the energy range around 0.2 − 30 EeV and that of the ΛCDM model for the whole range of energy considered. Moreover, the power-law model favours the GZK cutoff more significantly than the Starobinsky model. The prediction of the Starobinsky model is matching very well with the ΛCDM model at around 10 EeV energies, otherwise it shows higher flux than the ΛCDM model. Further, it is interesting note that the fluxes given by the both f (R) gravity models lay well within the combined data range of all the UHECRs experiments considered here, whereas that of the ΛCDM model remains almost outside this range for the energies below 1.1 EeV. Since the analysis of the bump and dip is more convenient in respect of the modification factor. We do not include in this analysis here the ΛCDM model because this factor is less model dependent and in the previous section we have already done the analysis with the ΛCDM model. From the right panel of Fig. 20, we see that in the lower energy range the origin of CRs appears to be of the galactic origin. We observe the dip in the spectrum at around 4 EeV and it agrees with the observational data. Thus it can be concluded that the f (R) gravity models considered here are found to be noteworthy with some limitations depending upon the range of energies in explaining the propagations of UHECRs and hence the observed data of their fluxes. Consequently, it is worth mentioning that by extending the work with these models it would be interesting to study the localised low scale anisotropies CRs that arise at their highest energies. So, we keep this as one of the future prospects of study. red-shift (expansion of the Universe), the second term the energy loss due to the pair production process with the CMB and the third term the photopion reaction with the CMB that dominates at higher energies [87]. For the f (R) gravity power-law model we estimate the E g as E g F (1.2, 0.035, −20, 1.4),
and we estimate that for the Starobinsky model as
In Fig. 21, a variation of E g with respect to E is plotted for the power-law model and the Starobinsky model. For E < 1 EeV, the variation is linear and above this energy E g is increasing non-linearly with the energy E. The difference between the estimated E g 's by the power-law model and the Starobinsky model is noticeable at higher energies above 1 EeV.
FIG. 1 .
1Likelihood contours plot for the model parameter n of the power-law model along with cosmological parameters H0, Ωm0 and Ωr0. obtained most likely values of the Hubble constant H 0 , matter density parameter Ω m0 , radiation density parameter Ω r0 and the power-law model parameter n respectively as H 0 = 67.399131 +0.017907 −0.017964 , Ω m0 = 0.303626 +0.006936 −0.006653 , Ω r0 = 0.000011 +0.000032 −0.000033
FIG. 2 .
24 is found as most suitable value of the parameter of the power-law model, we used other two values of n in this plot to see how the model prediction varies from that of the ΛCDM model with different values of n. It is clear that the higher values of n obviously show Variation of dt/dz with the redshift z for different values for the model parameter n = 1.25, 1.4 and 1.9 along with the variation of the same for the ΛCDM model. more deviation from the ΛCDM model prediction for all appropriate values of z and hence the most favorable value n = 1.4 shows appreciable behavior in this regard.
FIG. 3 .
3Least square fitting to the observational Hubble data set shown in
and the best fitted curve for the model parameters values α = 1.07 and β = 0.00086. Also a fit to the power-law model for the model parameter n = 1.4 is shown here.
Fig. 5 ,FIG. 4 .
54the variation of dt/dz with the redshift z is shown for the both f (R) gravity power-law model and the Starobinsky model in the comparison with the prediction of the ΛCDM model. It can be observed that the power-law model shows very close behaviour with the ΛCDM for very small values of z, but their difference continuously increases with the increasing values of z as mentioned already. Whereas the Starobinsky model shows consistently higher deviation from the ΛCDM model although Likelihood contours plot for the f (R) gravity Starobinsky model parameters α and β.
FIG. 5 .
5Variation of dt/dz with respect to the redshift z for the both f (R) gravity power-law model and the Starobinsky model. there is gradually a slight inclination towards the ΛCDM model for higher z values. Moreover, the Starobinsky model gives higher values of dt/dz till z ∼ 3 and after this value of z the values of dt/dz for the Starobinsky model decreases continuously with increasing z in comparison to that of the power-law model. Further, both f (R) gravity models show consistently higher dt/dz values than that of the ΛCDM model depending on the values of z within the range of our interest.
FIG. 6 .
6Variations of λ 2 with respect to energy E for the Kolmogorov spectrum (left panel) and Kraichnan spectrum (middle panel) according to the f (R) gravity power-law model and the standard ΛCDM model obtained by considering lc = 0.1 Mpc, B = 50 nG and Ec = 4.5 EeV. Right panel shows the percentage difference between λ 2 values for the Kolmogorov and Kraichnan spectra per average bin values of it for each energy bin of both the spectra for the power-law model with n = 1.4. Here and rest of the corresponding calculations we use z = 0 − 5.
For r s = 25 Mpc, l c = 0.5 Mpc, B = 10 nG and E c = 4.5 EeV (upper left panel), the enhancement has become noticable for different gravity models in in the energy range E < 1 EeV. For the energy range 0.01 < E < 10 EeV, r s = 50 Mpc, l c = 0.1 Mpc, B = 50 nG and E c = 4.5 EeV (upper right panel) are taken into account. In this case, below 1 EeV the variation of enhancement for different gravity models is more distinguished compared to E > 1 EeV. In the lower left panel, r = 75 Mpc, l c = 0.05 Mpc, B = 40 nG and E c = 1.8 EeV are used to plot the enhancement factor for the ΛCDM and f (R) power-law models, while this is done for r = 100 Mpc, l c = 0.025 Mpc, B = 80 nG and E c = 1.8 EeV in the lower right panel.
FIG. 7 .
7Variation of density enhancement factor ξ with energy E for the f (R) gravity power-law model and the ΛCDM model obtained by considering r = 25 − 100 Mpc, lc = 0.025 − 0.5 Mpc, B = 10 − 80 nG and Ec = 1.8 − 4.5 EeV.
FIG. 8 .
8Variation of density enhancement ξ with E Ec . The left panel is for the Kolmogorov spectrum while the right panel is for the Kraichnan spectrum obtained by considering the ΛCDM and f (R) gravity power-law models with lc = 0.1 Mpc and rs = 25 Mpc.
FIG. 9 .
9Variation of ξ with source distance rs for the ΛCDM model and f (R) power-law model obtained by considering lc = 0.1 (upper and lower left panel), lc = 0.05 (upper and lower right panel)and E/Ec = 6 (upper left panel), E/Ec = 12 (lower left and upper right panel), E/Ec = 24 (lower right panel).
FIG. 10. UHECR proton flux is shown for the ΛCDM model and the f (R) gravity power-law model in comparison with various experimental data such as AKENO-AGASA[52,53], HiRes[54] and AUGER[56] experiments.
FIG. 12 .
12Variation of λ 2 with energy E for the ΛCDM, f (R) gravity power-law and Starobinsky models by considering rs = 50 Mpc, lc = 0.1 Mpc, B = 50 nG and Ec = 4.5 EeV.
FIG. 14 .
14for the Starobinsky model as well as for the ΛCDM model. Using l c = 0.05 Mpc, r s = 25 Mpc (solid line) and l c = 0.1 Mpc, r s = 50 Mpc (dotted line), a Kolmogorov spectrum is shown in the left panel for both the models. A remarkable variation is observed for E/E c < 0.1 in both the sets of values, although for E/E c > 0.1 a quite similar results we have obtained. Using 13. Variation of density enhancement ξ with respect to energy E for the f (R) gravity Starobinsky model in comparison with the ΛCDM model obtained by using different sets of parameter as r = 25 − 100 Mpc, lc = 0.025 − 0.5 Mpc, B = 10 − 80 nG and Ec = 1.8 − 4.5 EeV. Variation of density enhancement ξ with E Ec for the Starobinsky model in comparison with the ΛCDM model. The left panel is for the Kolmogorov spectrum while the right panel is for the Kraichnan spectrum obtained by considering different sets of coherence length lc and source distance rs.
and that same for l c = 0.05 Mpc with E/E c = 12 (black line) and 24 (red line) in the right panel of this figure to understand the propagation of UHECR protons in the light of the Starobinsky model in comparison with the ΛCDM model. From this figure, one can see that similar to the power-law model the peak of the enhancement is higher for smaller values of E/E c , whereas the range of the distribution of the enhancement is wider for smaller values of l c . 15. Variation of ξ with source distance rs obtained by considering the parameter as lc = 0.1, E Ec = 6, 12 (left panel) and lc = 0.05, E Ec = 12, 24 (right panel) for both the Starobinsky model and the ΛCDM model. FIG. 16. UHECR protons flux is shown for the f (R) gravity Starobinsky model and compare with various experimental data such as AKENO-AGASA
FIG. 17 .
17Spectra of modification factor for the f (R) gravity Starobinsky model along with that for the ΛCDM model with γg = 2.7, which are in comparison with different experimental data such as AKENO-AGASA
18. Density enhancement factor ξ as a function of source distance is shown for the power-law and the Starobinsky model of f (R) gravity in comparison with that for the ΛCDM model by considering lc = 0.1 Mpc and E/Ec = 6.
(left panel) for the CRs density enhancement with energy. For this purpose we take the parameters as r s = 50 Mpc, l c = 0.1 Mpc, B = 50 nG and E c = 4.5 EeV. The Starobinsky model 2 −
19. Density enhancement factor ξ as a function of energy E of UHECR protons obtained by considering rs = 50 Mpc, lc = 0.1 Mpc, B = 50 nG and Ec = 4.5 EeV (left panel), and also as a function of E/Ec of the same particles for r = 25 Mpc and lc = 0.05 Mpc (right panel). Both panels are shown for the power-law model, Starobinsky model and the ΛCDM model.
FIG. 20 .
20Calculated E 3 magnified spectra of UHECR protons for the ΛCDM model (black dotted line), the f (R) gravity power-law model (blue line) and the Starobinsky model (red line) in comparison with the AKENO-AGASA
FIG. 21 .
21Variation of generation energy Eg with respect to energy E for the f (R) gravity power-law model and Starobinsky model.
E g F ( 1 .
147, 0.02, −18.5, 1.66).
TABLE I .
ICurrently available observational Hubble parameter data H obs (z) [km s −1 Mpc −1 ]z
H obs (z)
Reference
z
H obs (z)
Reference
0.0708 69.0 ± 19.68
[70]
0.48
97.0 ± 62.0
[78]
0.09
69.0 ± 12.0
[71]
0.51
90.8 ± 1.9
[75]
0.12
68.6 ± 26.2
[70]
0.57
92.4 ± 4.5
[79]
0.17
83.0 ± 8.0
[71]
0.593 104.0 ± 13.0
[72]
0.179
75.0 ± 4.0
[72]
0.60
87.9 ± 6.1
[77]
0.199
75.0 ± 5.0
[72]
0.61
97.8 ± 2.1
[75]
0.2
72.9 ± 29.6
[70]
0.68
92.0 ± 8.0
[72]
0.24
79.69 ± 2.65
[73]
0.73
97.3 ± 7.0
[77]
0.27
77.0 ± 14.0
[71]
0.781 105.0 ± 12.0
[72]
0.28
88.8 ± 36.6
[70]
0.875 125.0 ± 17.0
[72]
0.35
84.4 ± 7.0
[74]
0.88
90.0 ± 40.0
[78]
0.352
83.0 ± 14.0
[72]
0.9
117.0 ± 23.0
[71]
0.38
81.9 ± 1.9
[75]
1.037 154.0 ± 20.0
[72]
0.3802 83.0 ± 13.5
[76]
1.3
168.0 ± 17.0
[71]
0.40
95.0 ± 17.0
[71]
1.363 160.0 ± 33.6
[80]
0.4004 77.0 ± 10.2
[76]
1.43
177.0 ± 18.0
[71]
0.4247 87.1 ± 11.2
[76]
1.53
140.0 ± 14.0
[71]
0.43
86.45 ± 3.68
[73]
1.75
202.0 ± 40.0
[71]
0.44
82.6 ± 7.8
[77]
1.965 186.5 ± 50.4
[80]
0.4497 92.8 ± 12.9
[76]
2.34
223.0 ± 7.0
[81]
0.47
89.0 ± 50.0
[78]
2.36
227.0 ± 8.0
[82]
0.4783
80.9 ± 9.0
[76]
ACKNOWLEDGEMENTSUDG is thankful to the Inter-University Centre for Astronomy and Astrophysics (IUCAA), Pune, India for the Visiting Associateship of the institute.Appendix A: Parametric function for the generation energy EgFor the complex nature of the dependence of the generation energy E g on the energy E of UHECR particles, we considered a parametric function for the generation energy in this work as given bywhere c 1 , c 2 , c 3 and c 4 are constant parameters to be determined. In the function the first term represents the energy loss due to
Uber Beobachtungen der durchdringenden Strahlung bei sieben Freiballonfahrten. V F Hess, Phys. Z. 131084V. F. Hess, Uber Beobachtungen der durchdringenden Strahlung bei sieben Freiballonfahrten, Phys. Z. 13, 1084 (1912).
Anisotropies of ultrahigh energy cosmic rays diffusing from extragalactic sources. D Harari, S Mollerach, E Roulet, 10.1103/PhysRevD.89.123001arXiv:1312.1366Phys. Rev. D. 89123001D. Harari, S. Mollerach, E. Roulet, Anisotropies of ultrahigh energy cosmic rays diffusing from extragalactic sources, Phys. Rev. D 89, 123001 (2014) [arXiv:1312.1366].
Ultrahigh energy cosmic rays from a nearby extragalactic source in the diffusive regime. S Mollerach, E Roulet, 10.1103/PhysRevD.99.103010arXiv:1903.05722Phys. Rev. D. 99103010S. Mollerach, E. Roulet, Ultrahigh energy cosmic rays from a nearby extragalactic source in the diffusive regime, Phys. Rev. D 99, 103010 (2019) [arXiv:1903.05722].
Cascade photons as test of protons in UHECR. V Berezinsky, A Z Gazizov, O Kalashev, 10.1016/j.astropartphys.2016.08.007arXiv:1606.09293Astropart. Phys. 84V. Berezinsky, A. Z. Gazizov, O. Kalashev, Cascade photons as test of protons in UHECR, Astropart. Phys., 84, 52 (2016) [arXiv:1606.09293 ].
V Berezinsky, A Z Gazizov, S I Grigorieva, 10.48550/arXiv.astro-ph/0210095arXiv:astro-ph/0210095Signatures of AGN model for UHECR. V. Berezinsky, A. Z. Gazizov, S. I. Grigorieva, Signatures of AGN model for UHECR [arXiv:astro-ph/0210095].
Observations and implications of the ultrahigh-energy cosmic rays. M Nagano, A A Watson, 10.1103/RevModPhys.72.689Rev. Mod. Phys. 72689M. Nagano, A. A. Watson, Observations and implications of the ultrahigh-energy cosmic rays, Rev. Mod. Phys. 72, 689 (2000).
Origin and propagation of extremely high-energy cosmic rays. P Bhattacharjee, G Sigl, 10.1016/S0370-1573%2899%2900101-5arXiv:astro-ph/9811011Phys. Rept. 327P. Bhattacharjee, G. Sigl, Origin and propagation of extremely high-energy cosmic rays, Phys. Rept. 327 (2000) [arXiv:astro-ph/9811011].
Ultra High Energy Cosmic Rays: The theoretical challenge. A V Olinto, 10.48550/arXiv.astro-ph/0002006arXiv:astro-ph/0002006Phys. Rept. 333A. V. Olinto, Ultra High Energy Cosmic Rays: The theoretical challenge, Phys. Rept. 333 (2000) [arXiv:astro-ph/0002006].
Anisotropies of ultrahigh-energy cosmic rays in a scenario with nearby sources. S Mollerach, E Roulet, 10.1103/PhysRevD.105.063001arXiv:2111.00560Phys. Rev. D. 10563001S. Mollerach, E. Roulet, Anisotropies of ultrahigh-energy cosmic rays in a scenario with nearby sources, Phys. Rev. D 105 063001 (2022) [arXiv:2111.00560].
End to the Cosmic-Ray Spectrum?. K Greisen, 10.1103/PhysRevLett.16.748Phys. Rev. Lett. 16748K. Greisen, End to the Cosmic-Ray Spectrum?, Phys. Rev. Lett. 16, 748 (1966).
Upper Limit of the Spectrum of Cosmic Rays. G T Zatsepin, V A Kuzmin, JETP. Lett. 478G. T. Zatsepin, V. A. Kuzmin, Upper Limit of the Spectrum of Cosmic Rays, JETP. Lett. 4, 78 (1966).
Ultrahigh-energy cosmic-ray spectrum. C T Hill, D N Schramm, 10.1103/PhysRevD.31.564Phys. Rev. D. 31564C. T. Hill, D. N. Schramm, Ultrahigh-energy cosmic-ray spectrum, Phys. Rev. D 31, 564 (1985).
A bump in the ultra-high energy cosmic ray spectrum. V Berezinsky, S I Grigorieva, Astron. Astroph. 1991V. Berezinsky, S. I. Grigorieva, A bump in the ultra-high energy cosmic ray spectrum , Astron. Astroph. 199, 1 (1988)
Energy Spectrum of Ultra-High Energy Cosmic Rays with Extra-Galactic Origin. S Yoshida, M Teshima, 10.1143/ptp/89.4.833Prog. Theor. Phys. 89833S. Yoshida, M. Teshima, Energy Spectrum of Ultra-High Energy Cosmic Rays with Extra-Galactic Origin, Prog. Theor. Phys., 89, 833 (1993).
Propagation of ultrahigh energy protons in the nearby universe. T Stanev, 10.1103/PhysRevD.62.093005Phys. Rev. D. 6293005T. Stanev et al., Propagation of ultrahigh energy protons in the nearby universe, Phys. Rev. D 62, 093005 (2000) .
Cosmic ray propagation in the Universe in presence of a random magnetic field. A D Supanitsky, 10.48550/arXiv.2007.09063arXiv:2007.09063JCAP. 0446A. D. Supanitsky, Cosmic ray propagation in the Universe in presence of a random magnetic field, JCAP 04, 046 (2021) [arXiv:2007.09063].
Magnetic diffusion effects on the Ultra-High Energy Cosmic Ray spectrum and composition. S Mollerach, E Roulet, 10.48550/arXiv.1305.6519arXiv:1305.6519JCAP. 1013S. Mollerach, E. Roulet, Magnetic diffusion effects on the Ultra-High Energy Cosmic Ray spectrum and composition, JCAP 10, 013 (2013) [arXiv:1305.6519].
Reconstructed properties of the sources of UHECR and their dependence on the extragalactic magnetic field. D Wittkowski For The, Auger CollaborationPierre, Auger CollaborationPoS. 563D. Wittkowski for The Pierre Auger Collaboration, Reconstructed properties of the sources of UHECR and their dependence on the extragalactic magnetic field, PoS 563 (ICRC2017).
Origin of the ankle in the ultra-high energy cosmic ray spectrum and of the extragalactic protons below it. M Unger, G R Farrar, L A Anchordoqui, https:/journals.aps.org/prd/abstract/10.1103/PhysRevD.92.123001arXiv:1505.02153Phys. Rev. D. 92123001M. Unger, G. R. Farrar, L. A. Anchordoqui, Origin of the ankle in the ultra-high energy cosmic ray spectrum and of the extragalactic protons below it, Phys. Rev. D 92, 123001 (2015) [arXiv:1505.02153].
A complete model of the CR spectrum and composition across the Galactic to Extragalactic transition. N Globus, D Allard, E Parizot, https:/journals.aps.org/prd/abstract/10.1103/PhysRevD.92.021302arXiv:1505.01377Phys. Rev. D. 9221302N. Globus, D. Allard, E. Parizot, A complete model of the CR spectrum and composition across the Galactic to Extragalactic transition, Phys. Rev. D 92, 021302 (2015) [arXiv:1505.01377].
CRPropa 3-a public astrophysical simulation framework for propagating extraterrestrial ultra-high energy particles. R A Batista, https:/iopscience.iop.org/article/10.1088/1475-7516/2016/05/038arXiv:1603.07142JCAP. 0538R. A. batista et.al., CRPropa 3-a public astrophysical simulation framework for propagating extraterrestrial ultra-high energy particles, JCAP 05, 038 (2016) [arXiv:1603.07142].
Simulations of ultra-high-energy cosmic rays propagation. O E Kalashev, E Kido, 10.48550/arXiv.1406.0735arXiv:1406.0735J. Exp. Theor. Phys. 120O. E. Kalashev and E. Kido, Simulations of ultra-high-energy cosmic rays propagation, J. Exp. Theor. Phys. 120, 790 (2015) [arXiv:1406.0735].
Diffusion of Cosmic Rays in the Expanding Universe. I. V Berezinsky, A Z Gazizov, 10.1086/502626arXiv:astro-ph/0512090Astrophys. J. 643V. Berezinsky, A. Z. Gazizov, Diffusion of Cosmic Rays in the Expanding Universe. I., Astrophys. J. 643, 8 (2006) [arXiv:astro- ph/0512090].
Detection of a Cosmic Ray with Measured Energy Well beyond the Expected Spectral Cutoff due to Cosmic Microwave Radiation. D J Bird, 10.48550/arXiv.astro-ph/9410067arXiv:astro-ph/9410067Astrophys. J. 441D. J. Bird et al., Detection of a Cosmic Ray with Measured Energy Well beyond the Expected Spectral Cutoff due to Cosmic Microwave Radiation, Astrophys. J. 441 (1995) [arXiv:astro-ph/9410067].
Monocular Measurement of the Spectrum of UHE Cosmic Rays by the FADC Detector of the HiRes Experiment. 10.48550/arXiv.astro-ph/0208301arXiv:astro-ph/0208301The High Resolution Fly's. 23The High Resolution Fly's Eye Collaboration, Monocular Measurement of the Spectrum of UHE Cosmic Rays by the FADC Detector of the HiRes Experiment, Astropart. Phys. 23, 157 (2005) [arXiv:astro-ph/0208301].
Energy determination in the Akeno Giant Air Shower Array experiment. M Takeda, 10.48550/arXiv.astro-ph/0209422arXiv:astro-ph/0209422Astropart. Phys. 19447M. Takeda et al., Energy determination in the Akeno Giant Air Shower Array experiment, Astropart. Phys. 19, 447 (2003) [arXiv:astro- ph/0209422].
Diffusive Propagation of Ultra-High-Energy Cosmic Rays and the Propagation Theorem. R Aloisio, V Berezinsky, 10.48550/arXiv.astro-ph/0403095arXiv:astro-ph/0403095Astrophys. J. 612R. Aloisio, V. Berezinsky, Diffusive Propagation of Ultra-High-Energy Cosmic Rays and the Propagation Theorem, Astrophys. J. 612, 900 (2004) [arXiv:astro-ph/0403095].
Dip in UHECR spectrum as signature of proton interaction with CMB. V Berezinsky, A Z Gazizov, S I Grigorieva, 10.48550/arXiv.astro-ph/0502550arXiv:astro-ph/0502550Phys. Lett. B. 612147V. Berezinsky, A. Z. Gazizov, S. I. Grigorieva, Dip in UHECR spectrum as signature of proton interaction with CMB, Phys. Lett. B 612, 147 (2005) [arXiv:astro-ph/0502550].
On astrophysical solution to ultrahigh energy cosmic rays. V Berezinsky, A Z Gazizov, S I Grigorieva, 10.48550/arXiv.hep-ph/0204357arXiv:hep-ph/0204357Phys. Rev. D. 7443005V. Berezinsky, A. Z. Gazizov, S. I. Grigorieva, On astrophysical solution to ultrahigh energy cosmic rays, Phys. Rev. D 74, 043005 (2006) [arXiv:hep-ph/0204357].
Observation of Gravitational Waves from a Binary Black Hole Merger. B P Abbott, LIGO Scientific Collaboration ; Virgo Collaborationhttps:/journals.aps.org/prl/abstract/10.1103/PhysRevLett.116.061102arXiv:1602.03837Phys. Rev. Lett. 11661102B. P. Abbott et al. (LIGO Scientific Collaboration and Virgo Collaboration), Observation of Gravitational Waves from a Binary Black Hole Merger, Phys. Rev. Lett. 116, 061102 (2016) [arXiv:1602.03837].
The Event Horizon Telescope Collaboration et al., First M87 Event Horizon Telescope Results. I. The Shadow of the Supermassive Black Hole. https:/iopscience.iop.org/article/10.3847/2041-8213/ab0ec7Astrophys. J. Lett. 8711The Event Horizon Telescope Collaboration et al., First M87 Event Horizon Telescope Results. I. The Shadow of the Supermassive Black Hole, Astrophys. J. Lett. 871, L1 (2019).
The Event Horizon Telescope Collaboration et al., First M87 Event Horizon Telescope Results. II. Array and Instrumentation. https:/iopscience.iop.org/article/10.3847/2041-8213/ab0c96Astrophys. J. Lett. 8752The Event Horizon Telescope Collaboration et al., First M87 Event Horizon Telescope Results. II. Array and Instrumentation, Astrophys. J. Lett. 875, L2 (2019).
The Event Horizon Telescope Collaboration et al., First M87 Event Horizon Telescope Results. III. Data Processing and Calibration. https:/iopscience.iop.org/article/10.3847/2041-8213/ab0c57Astrophys. J. Lett. 8753The Event Horizon Telescope Collaboration et al., First M87 Event Horizon Telescope Results. III. Data Processing and Calibration, Astrophys. J. Lett. 875, L3 (2019).
The Event Horizon Telescope Collaboration et al., First M87 Event Horizon Telescope Results. IV. Imaging the Central Supermassive Black Hole. https:/iopscience.iop.org/article/10.3847/2041-8213/ab0e85Astrophys. J. Lett. 8754The Event Horizon Telescope Collaboration et al., First M87 Event Horizon Telescope Results. IV. Imaging the Central Supermassive Black Hole, Astrophys. J. Lett. 875, L4 (2019).
The Event Horizon Telescope Collaboration et al., First M87 Event Horizon Telescope Results. V. Physical Origin of the Asymmetric Ring. https:/iopscience.iop.org/article/10.3847/2041-8213/ab0f43Astrophys. J. Lett. 8755The Event Horizon Telescope Collaboration et al., First M87 Event Horizon Telescope Results. V. Physical Origin of the Asymmetric Ring, Astrophys. J. Lett. 875, L5 (2019).
The Event Horizon Telescope Collaboration et al., First M87 Event Horizon Telescope Results. VI. The Shadow and Mass of the Central Black Hole. https:/iopscience.iop.org/article/10.3847/2041-8213/ab1141Astrophys. J. Lett. 8756The Event Horizon Telescope Collaboration et al., First M87 Event Horizon Telescope Results. VI. The Shadow and Mass of the Central Black Hole, Astrophys. J. Lett. 875, L6 (2019).
Observational Evidence from Supernovae for an Accelerating Universe and a Cosmological Constant. A G Reiss, https:/iopscience.iop.org/article/10.1086/300499arXiv:astro-ph/9805201Astron. J. 116A. G. Reiss et al., Observational Evidence from Supernovae for an Accelerating Universe and a Cosmological Constant, Astron. J. 116, 1009 (1998) [arXiv:astro-ph/9805201]
Measurements of Ω and Λ from 42 High-Redshift Supernovae. S Perlmutter, https:/iopscience.iop.org/article/10.1086/307221arXiv:astro-ph/9812133Astrophys. J. 517S. Perlmutter et al., Measurements of Ω and Λ from 42 High-Redshift Supernovae, Astrophys. J. 517, 565 (1999) [arXiv:astro- ph/9812133].
Three-Year Wilkinson Microwave Anisotropy Probe ( WMAP ) Observations: Implications for Cosmology. D N Spergel, https:/iopscience.iop.org/article/10.1086/513700arXiv:astro-ph/0603449Astrophys. J. Suppl. S. 170D. N. Spergel et. al., Three-Year Wilkinson Microwave Anisotropy Probe ( WMAP ) Observations: Implications for Cosmology, Astro- phys. J. Suppl. S 170, 377 (2007) [arXiv:astro-ph/0603449].
The Supernova Legacy Survey: Measurement of ΩM , ΩΛ and ω from the First Year Data Set, A & A 447. P Astier, arXiv:astro-ph/051044731P. Astier et. al., The Supernova Legacy Survey: Measurement of ΩM , ΩΛ and ω from the First Year Data Set, A & A 447, 31 (2006) [arXiv:astro-ph/0510447].
Candidate Missing Mass Carriers in an Inflationary Universe. P D Naselskii, A G Polnarev, Soviet Astro. 29487P. D. Naselskii, A. G. Polnarev, Candidate Missing Mass Carriers in an Inflationary Universe, Soviet Astro. 29, 487 (1985).
f (R) theories of gravity. P Sotiriou, V Faraoni, https:/journals.aps.org/rmp/abstract/10.1103/RevModPhys.82.451arXiv:0805.1726Rev. Mod. Phys. 82451P. Sotiriou, V. Faraoni, f (R) theories of gravity, Rev. Mod. Phys. 82, 451 (2010) [arXiv:0805.1726].
Disappearing cosmological constant in f (R) gravity. A A Starobinsky, https:/link.springer.com/article/10.1134/S0021364007150027arXiv:0706.2041JETP. Lett. 86157A. A. Starobinsky, Disappearing cosmological constant in f (R) gravity, JETP. Lett. 86, 157 (2007) [arXiv:0706.2041].
Models of f (R) cosmic acceleration that evade solar system tests. W Hu, I Sawicki, arXiv:0705.1158v1Phys. Rev. D. 7664004W. Hu and I. Sawicki, Models of f (R) cosmic acceleration that evade solar system tests, Phys. Rev. D 76, 064004 (2007) [arXiv:0705.1158v1].
Observational signatures of f (R) dark energy models that satisfy cosmological and local gravity constraints. S Tsujikawa, https:/journals.aps.org/prd/abstract/10.1103/PhysRevD.77.023507arXiv:0709.1391v2Phys. Rev. D. 7723507S. Tsujikawa, Observational signatures of f (R) dark energy models that satisfy cosmological and local gravity constraints, Phys. Rev. D 77, 023507 (2008) [arXiv:0709.1391v2].
Gravitational Waves in f (R) Gravity Power Law Model. D J Gogoi, U D Goswami, https:/link.springer.com/article/10.1007/s12648-020-01998-8arXiv:1901.11277v3Indian J. Phys. 96D. J. Gogoi, U. D. Goswami, Gravitational Waves in f (R) Gravity Power Law Model, Indian J. Phys. 96, 637 (2022) [arXiv:1901.11277v3].
Transition of propagation of relativistic particles from the ballistic to the diffusion regime. A Y Prosekin, S R Kelner, F A Aharonian, 10.48550/arXiv.1506.06594arXiv:1506.06594Phys. Rev. D. 9283003A. Y. Prosekin, S. R. Kelner, F. A. Aharonian, Transition of propagation of relativistic particles from the ballistic to the diffusion regime, Phys. Rev. D 92, 083003 (2015) [arXiv:1506.06594].
Cosmic ray anisotropies from transient extragalactic sources. D Harari, S Mollerach, E Roulet, https:/journals.aps.org/prd/abstract/10.1103/PhysRevD.103.023012arXiv:2010.10629v2Phys. Rev. D. 10323012D. Harari, S. Mollerach, E. Roulet, Cosmic ray anisotropies from transient extragalactic sources, Phys. Rev. D 103, 023012 (2021) [arXiv:2010.10629v2]
Anisotropic LRS-BI Universe with f (Q) gravity theory, Phys. Dark Universe. P Sarmah, A De, U D Goswami, arXiv:2303.0590540P. Sarmah, A. De, U. D. Goswami, Anisotropic LRS-BI Universe with f (Q) gravity theory, Phys. Dark Universe 40, 101209 (2023) [arXiv:2303.05905]
A new f(R) Gravity Model and properties of Gravitational Waves in it. D J Gogoi, U D Goswami, https:/link.springer.com/article/10.1140/epjc/s10052-020-08684-3arXiv:2006.04011Eur. Phys. J. C. 801101D. J. Gogoi and U. D. Goswami, A new f(R) Gravity Model and properties of Gravitational Waves in it, Eur. Phys. J. C 80, 1101 (2020) [arXiv:2006.04011].
Strange stars in f(R) gravity Palatini formalism and gravitational wave echoes from them. J Bora, D J Gogoi, U D Goswami, https:/iopscience.iop.org/article/10.1088/1475-7516/2022/09/057arXiv:2204.05473v2JCAP. 0957J. Bora, D. J. Gogoi, U. D. Goswami, Strange stars in f(R) gravity Palatini formalism and gravitational wave echoes from them, JCAP 09, 057 (2022) [arXiv:2204.05473v2]
Inelastic cross section for p-air collisions from air shower experiments and total cross section for p-p collisions up to √ s = 24 TeV. M Honda, Akeno Collaboration10.1103/PhysRevLett.70.525Phys. Rev. Lett. 70525M. Honda et al. (Akeno Collaboration), Inelastic cross section for p-air collisions from air shower experiments and total cross section for p-p collisions up to √ s = 24 TeV, Phys. Rev. Lett. 70, 525 (1993).
Small-Scale Anisotropy of Cosmic Rays above 10 19 eV Observed with the Akeno Giant Air Shower Array. M Takeda, AGASA Collaborationhttps:/iopscience.iop.org/article/10.1086/307646Astrophys. J. 522225M. Takeda (AGASA Collaboration), Small-Scale Anisotropy of Cosmic Rays above 10 19 eV Observed with the Akeno Giant Air Shower Array, Astrophys. J. 522, 225 (1999).
A Study of the Composition of Ultra-High-Energy Cosmic Rays Using the High-Resolution Fly's Eye. R U Abbasi, HiRes Collaborationhttps:/iopscience.iop.org/article/10.1086/427931Astrophys. J. 622910R. U. Abbasi et al. (HiRes Collaboration), A Study of the Composition of Ultra-High-Energy Cosmic Rays Using the High-Resolution Fly's Eye, Astrophys. J. 622, 910 (2005).
First Estimate of the Primary Cosmic Ray Energy Spectrum above 3 EeV from the Pierre Auger Observatory. 10.48550/arXiv.astro-ph/0507150astro-ph/0507150Proc. 29th ICRC. 29th ICRCPune, IndiaThe Pierre Auger Collaboration, First Estimate of the Primary Cosmic Ray Energy Spectrum above 3 EeV from the Pierre Auger Obser- vatory , Proc. 29th ICRC, August 3-10, 2005, Pune, India [astro-ph/0507150].
Testing effects of Lorentz invariance violation in the propagation of astroparticles with the Pierre Auger Observatory. 10.48550/arXiv.2112.06773arXiv:2112.06773JCAP. 0123The Pierre Auger Collaboration et al., Testing effects of Lorentz invariance violation in the propagation of astroparticles with the Pierre Auger Observatory, JCAP 01, 023 (2022) [arXiv:2112.06773].
Muons in extensive air showers of energies E0 = 10 16.6 -10 19. A V Glushkov, Yakutsk Collaborationhttps:/link.springer.com/article/10.1134/1.568289JETP. Lett. 897A. V. Glushkov et al., Muons in extensive air showers of energies E0 = 10 16.6 -10 19.8 eV (Yakutsk Collaboration), JETP. Lett. 71, 97 (2000).
Observing Interstellar and Intergalactic Magnetic Fields. J L Han, 10.1146/annurev-astro-091916-055221Annu. Rev. Astron. 255111J. L. Han, Observing Interstellar and Intergalactic Magnetic Fields, Annu. Rev. Astron 255, 111 (2017).
Clusters of galaxies: observational properties of the diffuse radio emission. L Feretti, 10.1007/s00159-012-0054-zAstron. Astrophys. Rev. 2054L. Feretti et al., Clusters of galaxies: observational properties of the diffuse radio emission, Astron. Astrophys. Rev. 20, 54 (2012).
A Synthesis of Fundamental Parameters of Spiral Arms, Based on Recent Observations in the Milky Way. J P Vallée, New Astro. Rev. 5591J. P. Vallée, A Synthesis of Fundamental Parameters of Spiral Arms, Based on Recent Observations in the Milky Way, New Astro. Rev. 55, 91 (2011).
Simulations of extragalactic magnetic fields and of their observables. F Vazza, https:/iopscience.iop.org/article/10.1088/1361-6382/aa8e60/metaClass. Quantum Grav. 34234001F. Vazza et al., Simulations of extragalactic magnetic fields and of their observables, Class. Quantum Grav. 34, 234001 (2017).
B Santos, M Campista, J Santos, J S Alcaniz, 10.48550/arXiv.1207.2478arXiv.1207.2478Cosmology with Hu-Sawicki gravity in the Palatini formalism, A & A 548. 31B. Santos, M. Campista, J. Santos, J. S. Alcaniz, Cosmology with Hu-Sawicki gravity in the Palatini formalism, A & A 548, A31 (2012) [arXiv.1207.2478] .
N Aghanim, Planck Collaboration10.48550/arXiv.1807.06209arXiv:1807.06209A & A 641, A6 (2020). Planck 2018 resultsN. Aghanim et al., Planck 2018 results (Planck Collaboration), A & A 641, A6 (2020) [arXiv:1807.06209].
Review of Particle Physics. K Nakamura, Particle Data Group, https:/iopscience.iop.org/article/10.1088/0954-3899/37/7A/075021J. Phys. G: Nucl. Part. Phys. 3775021K. Nakamura and Particle Data Group, Review of Particle Physics, J. Phys. G: Nucl. Part. Phys. 37, 075021(2010).
Cosmological Dynamics of f(R) Gravity Scalar Degree of Freedom in Einstein Frame. U D Goswami, Deka, 10.48550/arXiv.1303.5868arXiv.1303.5868IJMP D. 221350083U. D. Goswami, K Deka, Cosmological Dynamics of f(R) Gravity Scalar Degree of Freedom in Einstein Frame, IJMP D 22, 13 (2013) 1350083 [arXiv.1303.5868].
Cosmology with a new f (R) gravity model in Palatini formalism. D J Gogoi, U D Goswami, 10.48550/arXiv.2108.01409arXiv:2108.01409IJMP D. 312250048D. J. Gogoi and U. D. Goswami, Cosmology with a new f (R) gravity model in Palatini formalism, IJMP D 31, 2250048 (2022) [arXiv:2108.01409].
A New Type of Isotropic Cosmological Models without Singularity. A A Starobinsky, Phys. Lett. B. 9199A. A. Starobinsky, A New Type of Isotropic Cosmological Models without Singularity, Phys. Lett. B 91, 99 (1980).
Bianchi Type I model of universe with customized scale factors. P Sarmah, U D Goswami, 10.48550/arXiv.2203.00385arXiv.2203.00385MPLA. 37P. Sarmah and U. D. Goswami, Bianchi Type I model of universe with customized scale factors, MPLA 37, 21 (2022) [arXiv.2203.00385].
ROOT: analyzing petabytes of data, scientifically. ROOT: analyzing petabytes of data, scientifically, https://root.cern.ch/.
Four New Observational H(z) Data From Luminous Red Galaxies of Sloan Digital Sky Survey Data Release Seven, Res. C Zhang, arXiv:1207.4541Astron. Astrophys. 141221C. Zhang et al., Four New Observational H(z) Data From Luminous Red Galaxies of Sloan Digital Sky Survey Data Release Seven, Res. Astron. Astrophys. 14, 1221 (2014) [arXiv:1207.4541].
Constraints on the redshift dependence of the dark energy potential. J Simon, L Verde, R Jimenez, 10.1103/PhysRevD.71.123001arXiv:astro-ph/0412269Phys. Rev. D. 71123001J. Simon, L. Verde, and R. Jimenez, Constraints on the redshift dependence of the dark energy potential, Phys. Rev. D 71, 123001 (2005) [arXiv:astro-ph/0412269].
Improved constraints on the expansion rate of the Universe up to z ∼ 1.1 from the spectroscopic evolution of cosmic chronometers. M Moresco, https:/iopscience.iop.org/article/10.1088/1475-7516/2012/08/006arXiv:1201.3609JCAP. 086M. Moresco et al., Improved constraints on the expansion rate of the Universe up to z ∼ 1.1 from the spectroscopic evolution of cosmic chronometers, JCAP 08, 006 (2012) [arXiv:1201.3609].
Clustering of luminous red galaxies -IV . Baryon acoustic peak in the line-of-sight direction and a direct measurement of H(z). E Gaztañaga, A Cabré, L Hui, 10.48550/arXiv.0807.3551arXiv.0807.3551MNRAS. 3991663E. Gaztañaga, A. Cabré and L. Hui, Clustering of luminous red galaxies -IV . Baryon acoustic peak in the line-of-sight direction and a direct measurement of H(z), MNRAS 399, 1663 (2009) [arXiv.0807.3551].
Measuring DA and H at z = 0.35 from the SDSS DR7 LRGs using baryon acoustic oscillations. X Xu, 10.48550/arXiv.1206.6732arXiv:1206.6732MNRAS. 4312834X. Xu et al., Measuring DA and H at z = 0.35 from the SDSS DR7 LRGs using baryon acoustic oscillations, MNRAS 431, 2834 (2013) [arXiv:1206.6732].
The clustering of galaxies in the completed SDSS-III Baryon Oscillation Spectroscopic Survey: cosmological analysis of the DR12 galaxy sample. S Alam, 10.1093/mnras/stx721MNRAS. 4702617S. Alam et al., The clustering of galaxies in the completed SDSS-III Baryon Oscillation Spectroscopic Survey: cosmological analysis of the DR12 galaxy sample, MNRAS 470, 2617 (2017).
A 6% measurement of the Hubble parameter at z ∼ 0.45: direct evidence of the epoch of cosmic re-acceleration JCAP 05. M Moresco, https:/iopscience.iop.org/article/10.1088/1475-7516/2016/05/014arXiv:1601.0170114M. Moresco et al., A 6% measurement of the Hubble parameter at z ∼ 0.45: direct evidence of the epoch of cosmic re-acceleration JCAP 05, 014 (2016) [arXiv:1601.01701].
The WiggleZ Dark Energy Survey: joint measurements of the expansion and growth history at z < 1. C Blake, 10.48550/arXiv.1204.3674arXiv:1204.3674MNRAS. 425405C. Blake et al., The WiggleZ Dark Energy Survey: joint measurements of the expansion and growth history at z < 1, MNRAS 425, 405 (2012) [arXiv:1204.3674].
Age-dating luminous red galaxies observed with the Southern African Large Telescope. A L Ratsimbazafy, 10.1093/mnras/stx301MNRAS. 4673239A. L. Ratsimbazafy et al., Age-dating luminous red galaxies observed with the Southern African Large Telescope, MNRAS 467, 3239 (2017).
The clustering of galaxies in the SDSS-III DR9 Baryon Oscillation Spectroscopic Survey: testing deviations from Λ and general relativity using anisotropic clustering of galaxies. L Samushia, 10.48550/arXiv.1206.5309arXiv:1206.5309v2MNRAS. 4291514L. Samushia et al., The clustering of galaxies in the SDSS-III DR9 Baryon Oscillation Spectroscopic Survey: testing deviations from Λ and general relativity using anisotropic clustering of galaxies, MNRAS 429, 1514 (2013) [arXiv:1206.5309v2].
Raising the bar: new constraints on the Hubble parameter with cosmic chronometers at z ∼ 2. M Moresco, 10.1093/mnrasl/slv037MNRAS. 45016M. Moresco, Raising the bar: new constraints on the Hubble parameter with cosmic chronometers at z ∼ 2, MNRAS 450, 16 (2015).
Baryon acoustic oscillations in the Lyα forest of BOSS DR11 quasars. T Delubac, 10.1051/0004-6361/201423969arXiv:1404.1801A & A. 57459T. Delubac et al., Baryon acoustic oscillations in the Lyα forest of BOSS DR11 quasars, A & A 574, A59 (2015) [arXiv:1404.1801].
Quasar-Lyman α forest cross-correlation from BOSS DR11: Baryon Acoustic Oscillations. A Font-Ribera, https:/iopscience.iop.org/article/10.1088/1475-7516/2014/05/027arXiv:1311.1767JCAP. 0527A. Font-Ribera et al., Quasar-Lyman α forest cross-correlation from BOSS DR11: Baryon Acoustic Oscillations , JCAP 05 027 (2014) [arXiv:1311.1767].
The Distribution of Relativistic Electrons in the Galaxy and the Spectrum of Synchrotron Radio Emission. S I Syrovatskii, Soviet Astro. 322S. I. Syrovatskii, The Distribution of Relativistic Electrons in the Galaxy and the Spectrum of Synchrotron Radio Emission, Soviet Astro. 3, 22 (1959).
Ultra-high energy cosmic ray propagation in the local supercluster. G Sigl, M Lemoine, P Biermann, Astropart. Phys. 10141G. Sigl, M. Lemoine, and P. Biermann, Ultra-high energy cosmic ray propagation in the local supercluster, Astropart. Phys.10, 141 (1999).
Small-Scale Clustering in the Isotropic Arrival Distribution of Ultra-High-Energy Cosmic Rays and Implications for Their Source Candidates. H Yoshiguchi, https:/iopscience.iop.org/article/10.1086/367931Astrophys. J. 5861211H. Yoshiguchi et al., Small-Scale Clustering in the Isotropic Arrival Distribution of Ultra-High-Energy Cosmic Rays and Implications for Their Source Candidates, Astrophys. J. 586, 1211 (2003).
The Extragalactic Source of Cosmic Rays with Energies above the Knee. P L Biermann, V Souza, A Centaurus, https:/iopscience.iop.org/article/10.1088/0004-637X/746/1/72Astrophys. J. 74672P. L. Biermann, V. Souza, Centaurus A: The Extragalactic Source of Cosmic Rays with Energies above the Knee, Astrophys. J. 746, 72 (2012).
. V Berezinsky, Astrophysics of Cosmic rays. Elsevier Science Publishers B. VV. Berezinsky et al., Astrophysics of Cosmic rays, Elsevier Science Publishers B. V., North-Holland (1990).
| []
|
[
"SOME OBSERVATIONS ON LOCAL AND PROJECTIVE HYPERSURFACES",
"SOME OBSERVATIONS ON LOCAL AND PROJECTIVE HYPERSURFACES"
]
| [
"Hailong Dao "
]
| []
| []
| Let R be a hypersurface in an equicharacteristic or unramified regular local ring. For a pair of modules (M, N ) over R we study applications of rigidity of Tor R (M, N ), based on ideas by Huneke, Wiegand and Jothilingam. We then focus on the hypersurfaces with isolated singularity and even dimension, and show that modules over such rings behave very much like those over regular local rings. Connections and applications to projective hypersurfaces such as intersections of subvarieties and cohomological criteria for splitting of vector bundles are discussed. | 10.4310/mrl.2008.v15.n2.a1 | [
"https://export.arxiv.org/pdf/math/0701881v2.pdf"
]
| 7,341,169 | math/0701881 | 67e0a9c2ebc398ec2643e7a3d4f471cf226c18b6 |
SOME OBSERVATIONS ON LOCAL AND PROJECTIVE HYPERSURFACES
8 Sep 2007
Hailong Dao
SOME OBSERVATIONS ON LOCAL AND PROJECTIVE HYPERSURFACES
8 Sep 2007
Let R be a hypersurface in an equicharacteristic or unramified regular local ring. For a pair of modules (M, N ) over R we study applications of rigidity of Tor R (M, N ), based on ideas by Huneke, Wiegand and Jothilingam. We then focus on the hypersurfaces with isolated singularity and even dimension, and show that modules over such rings behave very much like those over regular local rings. Connections and applications to projective hypersurfaces such as intersections of subvarieties and cohomological criteria for splitting of vector bundles are discussed.
Introduction
The purpose of this note is to continue our investigation of the rigidity and decent intersection of modules over a local hypersurface R done in [Da1]. There we showed that the vanishing of θ R (M, N ), a function introduced by Hochster in [Ho1], implies the rigidity of Tor for (M, N ). We will apply this to give various new results on modules over hypersurfaces. We also give supporting evidence for the following Conjecture made in [Da1]:
Conjecture 1.1. Let R be an hypersurface with an isolated singularity. Assume that dim R is even. Then θ R (M, N ) always vanishes.
This conjecture can be viewed as a consequence of a local version of the following conjecture by Hartshorne on Chow groups of smooth projective hypersurfaces : Conjecture 1.2. (R. Hartshorne,[Ha1], page 142) Let X be a smooth projective hypersurface in P n C . Then CH i (X) = Z for i < dim X/2. Note that the original form of Hartshorne's question states that in codimension less than dim X/2, a cycle is homologically equivalent to 0 if and only if it is rationally equivalent to 0. It is not hard to see that Hartshorne's original statement is equivalent to the version stated above. For a K-theoretic discussion of 1.2, see [Pa] (Conjecture 1.5 and Section 6). For more discussions on how 1.2 is related to 1.1, see [Da1], Section 3. We remark that this connection actually motivated this project, as well as the proof of 4.10 below.
In Section 2 we review the basic notations and preliminary results. In Section 3 we use the procedure invented by Huneke and Wiegand in [HW2] to show that Hom R (M, M ) rarely has good depth. The use of θ R (M, N ) allows us to simplify and strengthen some results in [HW2] (see 3.2 and 3.3, also 4.4 and 4.5).
Section 4 is concerned with two consequences of Conjecture 1.1:
Conjecture 1.3. Let R be an admissible hypersurface (meaningR is a quotient of an unramified or equicharacteristic regular local ring by a nonzero element) with an isolated singularity . Assume that dim R is even. Let M, N be R-modules such that Tor R i (M, N ) = 0 for some i. Then Tor R j (M, N ) = 0 for all j ≥ i.
Conjecture 1.4. Let R be an admissible hypersurface with an isolated singularity. Assume that dim R is even. Let M, N be R-modules such that l(M ⊗ R N ) < ∞.
Then dim M + dim N ≤ dim R.
We will prove 1.1 when M is free on the punctured spectrum of R and N = M * (see 4.1). We also prove 1.4 when R is a standard graded local hypersurface and M, N are graded modules (see 4.11). Many applications follow, such as generalizations of results on Hom R (M, M ) over regular local rings by . In that section, the connection with geometry of projective hypersurfaces (which goes both ways) will be exploited. For example, 4.11 will be proved using l-adic cohomology, and from 4.1 we obtain the following (see 4.7):
Corollary 1.5. Let k be a field and n an even integer ≥ 4. Let X ⊂ P n k be a nonsingular hypersurface. Let E be a vector bundle on X. If H 1 (X, (E⊗E * )(l)) = 0 for all l ∈ Z, then E splits.
In Section 5, we give some further applications of rigidity. The first is a simple characterization of maximal Cohen-Macaulay modules (see 5.1). The second is a connection between vanishing of Ext n R (M, M ) and pd R M , generalizing a result by Jothilingam (see 5.4).
Notation and preliminary results
Unless otherwise specified, all rings are Noetherian, commutative and local, and all modules are finitely generated. A local ring (R, m, k) is a hypersurface if its completionR has the form T /(f ), where T is a regular local ring and f is in the maximal ideal of T . We say that R is admissible (as a hypersurface) if T is a power series ring over a field or over a discrete valuation ring.
For a ring R and a non-negative integer i, we set X^i(R) := {p ∈ Spec(R) | dim(R_p) ≤ i}. We denote by Y(R) the set X^{dim(R)−1}(R), the punctured spectrum of R. We say that R satisfies the condition (R_i) if R_p is regular for any p ∈ X^i(R). We denote by G(R) the Grothendieck group of finitely generated modules over R and by Ḡ(R) := G(R)/[R] the reduced Grothendieck group. Also, we let Sing(R) := {p ∈ Spec(R) | R_p is not regular} be the singular locus of R. For an abelian group G, we let G_Q := G ⊗_Z Q. Let M* := Hom(M, R) be the dual of M. The module M is called reflexive provided the natural map M → M** is an isomorphism. The module M is called maximal Cohen-Macaulay (sometimes abbreviated as MCM) if depth_R M = dim R.
For a non-negative integer n, M is said to satisfy (S_n) if:
depth_{R_p} M_p ≥ min{n, dim(R_p)} for all p ∈ Spec(R).
(The depth of the 0 module is set to be ∞.) A pair of R-modules (M, N) is called rigid if for any integer i ≥ 0, Tor^R_i(M, N) = 0 implies Tor^R_j(M, N) = 0 for all j ≥ i. Moreover, M is rigid if for all N, the pair (M, N) is rigid. One defines the finite length index of the pair (M, N) as:
f_R(M, N) := min{i | l(Tor^R_j(M, N)) < ∞ for j ≥ i}.

The function θ^R(M, N). Let R = T/(f) be an admissible local hypersurface. The function θ^R(M, N) was introduced by Hochster ([Ho1]) for any pair of finitely generated modules M, N such that f_R(M, N) < ∞ as:
θ^R(M, N) = l(Tor^R_{2e+2}(M, N)) − l(Tor^R_{2e+1}(M, N)),
where e is any integer ≥ d/2. It is well known (see [Ei]) that Tor^R(M, N) is periodic of period at most 2 after d + 1 spots, so this function is well-defined. The theta function satisfies the following properties (see [Ho1]). First, if M ⊗_R N has finite length, then:
θ^R(M, N) = χ^T(M, N).
Secondly, θ^R(M, N) is biadditive on short exact sequences, assuming it is defined. Specifically, for any short exact sequence
0 → N_1 → N_2 → N_3 → 0
and any module M such that f_R(M, N_i) < ∞ for i = 1, 2, 3, we have θ^R(M, N_2) = θ^R(M, N_1) + θ^R(M, N_3). Similarly, θ^R(M, N) is additive in the first variable.
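To make the definition concrete, here is a small worked computation of our own (not taken from [Ho1] or [Da1]): over the one-dimensional admissible hypersurface R = k[[x, y]]/(xy), take M = R/(x) and N = R/(y).

% Worked example of θ^R(M, N) over R = k[[x, y]]/(xy), M = R/(x), N = R/(y).
% Since ann(x) = (y), the minimal free resolution of M is 2-periodic,
% alternating multiplication by x and by y:
\[
\cdots \xrightarrow{\;x\;} R \xrightarrow{\;y\;} R \xrightarrow{\;x\;} R \longrightarrow R/(x) \longrightarrow 0 .
\]
% Tensoring with N = R/(y) \cong k[[x]], multiplication by y becomes zero and
% multiplication by x is injective with cokernel k, hence
\[
\operatorname{Tor}^R_{2i+1}(M, N) = 0 \quad (i \ge 0), \qquad
\operatorname{Tor}^R_{2i}(M, N) \cong k \quad (i \ge 1),
\]
\[
\theta^R(M, N) = \ell\big(\operatorname{Tor}^R_{2e+2}(M, N)\big) - \ell\big(\operatorname{Tor}^R_{2e+1}(M, N)\big) = 1 - 0 = 1 .
\]

This is in line with Proposition 2.1 below: here θ^R(M, N) ≠ 0, and indeed the pair fails to be rigid, as Tor_1 vanishes while Tor_2 does not.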
In [Da1], we show that when θ^R(M, N) can be defined and vanishes, then (M, N) is rigid:

Proposition 2.1. Let R be an admissible hypersurface and M, N be R-modules such that f_R(M, N) < ∞ (so that θ^R(M, N) can be defined). Assume θ^R(M, N) = 0. Then (M, N) is rigid.

The following corollary will be used frequently in this note:

Corollary 2.2. Let R be an admissible hypersurface and M, N be R-modules such that:
(1) pd_{R_p} M_p < ∞ for any p ∈ Y(R), the punctured spectrum of R (in particular, this is always true if R has only an isolated singularity);
(2) [N] = 0 in Ḡ(R)_Q.
Then θ^R(M, N) can be defined and equals 0. Consequently, (M, N) is rigid.

Proof. The condition on M ensures that θ^R(M, N) can be defined for all R-modules N. In other words, θ^R(M, −) gives a Z-linear map from Ḡ(R) to Z. The conclusions are now obvious (note that θ^R(M, R) = 0).
The Pushforward
Let R be a Gorenstein ring and M a torsion-free (equivalently, (S_1)) R-module. Consider a short exact sequence:
0 → W → R^λ → M* → 0
Here λ is the minimal number of generators of M*. Dualizing this short exact sequence and noting that M embeds into M**, we get an exact sequence:
0 → M → R^λ → M_1 → 0
This exact sequence is called the pushforward of M. The following proposition is taken from [HJW].
Proposition 2.3. ([HJW], 1.6) Let R, M, M_1 be as above. Then for any p ∈ Spec(R):
(1) M_p is free if and only if (M_1)_p is free.
(2) If M_p is a maximal Cohen-Macaulay R_p-module, then so is (M_1)_p.
(3) depth_{R_p}(M_1)_p ≥ depth_{R_p} M_p − 1.
(4) If M satisfies (S_k), then M_1 satisfies (S_{k−1}).
The depth formula
A result by Huneke and Wiegand showed that when all the high Tor modules vanish, the depths of the modules satisfy a remarkable equation:
Proposition 2.4. ([HW1], 2.5) Let R be a local complete intersection. Let M, N be non-zero finitely generated modules over R such that Tor^R_i(M, N) = 0 for all i ≥ 1. Then:
depth(M) + depth(N) = depth(R) + depth(M ⊗_R N)
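As a quick sanity check of the formula, consider the following example of ours (not from [HW1]).

% Depth formula check over the regular local ring R = k[[x, y]] (in
% particular a local complete intersection), with M = R/(x) and N = R/(y).
% The resolution 0 -> R --x--> R -> M -> 0, tensored with N = k[[y]], stays
% exact because x acts injectively on k[[y]]; hence Tor^R_i(M, N) = 0 for
% all i >= 1, and
\[
\operatorname{depth}(M) + \operatorname{depth}(N) = 1 + 1 = 2 + 0
 = \operatorname{depth}(R) + \operatorname{depth}(M \otimes_R N),
\]
% since M \otimes_R N = R/(x, y) = k has depth 0.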
3. On depth of Hom_R(M, M)

Throughout this section R is a local hypersurface. We will call an R-module M that is locally free on the punctured spectrum of R a vector bundle over Y(R) (or just a vector bundle). We observe that for a vector bundle M and any R-module N, θ^R(M, N) is always defined. In this section we will show that for certain modules over admissible hypersurfaces, Hom_R(M, M) and M ⊗_R M* rarely have good depth. We will follow the same procedure as in [HW2], but with two essential additions: we focus on vector bundles from the beginning, and we exploit the function θ^R(M, N) rather heavily. These will allow us to simplify and strengthen some results in [HW2].
Proposition 3.1. Let R be an admissible hypersurface. Let M be a vector bundle over Y(R) such that depth(M) ≥ 1. Let N be an R-module such that θ^R(M, N) = 0 and depth(M ⊗_R N) ≥ 1. Then Tor^R_i(M, N) = 0 for all i > 0.

Proof. The assumptions ensure that M is (S_1). Hence we have the pushforward of M:
0 → M → R^λ → M_1 → 0
We can tensor with N to get:
0 → Tor^R_1(M_1, N) → M ⊗_R N → N^λ → M_1 ⊗_R N → 0
By 2.3, M_1 is also a vector bundle, so l(Tor^R_1(M_1, N)) < ∞. Since it embeds into a module of positive depth, Tor^R_1(M_1, N) must be 0. Clearly θ^R(M_1, N) is defined and equal to
θ^R(R^λ, N) − θ^R(M, N) = 0.
So by 2.1, Tor^R_i(M_1, N) = 0 for all i > 0. Since M ≅ syz^R_1(M_1), we are done.
The next result is an analogue of Theorem 2.4 in [HW2].

Theorem 3.2. Let (R, m) be an admissible hypersurface of dimension d. Let r be an integer such that 0 ≤ r < d. Let M be a vector bundle over Y(R) such that depth(M) ≥ r. Let N be an R-module satisfying (S_r), and assume θ^R(M, N) = 0 and H^r_m(M ⊗_R N) = 0. Then depth(M ⊗_R N) ≥ r + 1, and if r > 0 we have Tor^R_i(M, N) = 0 for all i > 0.
Proof. We will use induction on r. If r = 0 the conclusion is trivial. Now we assume r > 0. Then N satisfies (S_1) and we have the pushforward:
0 → N → R^λ → N_1 → 0
Tensoring with M we get:
0 → Tor^R_1(N_1, M) → N ⊗_R M → M^λ → N_1 ⊗_R M → 0
which we break into two short exact sequences:
0 → Tor^R_1(N_1, M) → N ⊗_R M → C → 0
and
0 → C → M^λ → N_1 ⊗_R M → 0.
Since M is a vector bundle, T = Tor^R_1(N_1, M) has finite length. In particular H^{r+1}_m(T) = 0. Applying local cohomology H^•_m(−) to the first exact sequence we get H^r_m(C) = 0. Now applying H^•_m(−) to the second exact sequence and using H^{r−1}_m(M) = 0 (because depth(M) ≥ r) and H^r_m(M ⊗_R N) = 0 we get H^{r−1}_m(N_1 ⊗_R M) = 0.
We need to check the other inductive assumptions for N_1. Clearly, N_1 is (S_{r−1}) by 2.3. Also, θ^R(M, N_1) is defined and equal to:
θ^R(M, R^λ) − θ^R(M, N) = 0
So by induction we have depth(M ⊗_R N_1) ≥ r. Then by 3.1, Tor^R_i(N_1, M) = 0 for all i > 0, so we have the last assertion and an exact sequence:
0 → N ⊗_R M → M^λ → N_1 ⊗_R M → 0
Therefore depth(M ⊗_R N) ≥ r, and since H^r_m(M ⊗_R N) = 0 it follows that depth(M ⊗_R N) ≥ r + 1.

Proposition 3.3. Let R be a local hypersurface and M be an R-module. Assume that depth(M ⊗_R M*) ≥ 2 and Tor^R_i(M, M*) = 0 for all i > 0. Then M is free.

Proof. By the depth formula we get:
depth(M) + depth(M*) ≥ dim R + 2
so by ([Va], 3.3.16) we must have depth(M) = depth(M*) = dim R. On the other hand, the vanishing of all Tor^R_i(M, M*) forces one of the modules to have finite projective dimension (see [HW2], 1.9). Either way, M must be free (since M, M* are maximal Cohen-Macaulay).

Theorem 3.4. Let R be an admissible hypersurface satisfying condition (R_2). Let M be a reflexive R-module such that one of the following is satisfied:
1) θ^R(M, M*) is defined and equals 0.
1') [M] = 0 in Ḡ(R)_Q.
If Hom_R(M, M) satisfies (S_3) then M is free.

Proof. We use induction on d = dim R. If d ≤ 2 then R is regular, and M, being reflexive, must be free. Assume d ≥ 3 and (by induction hypothesis, since all the conditions localize) that M is free on the punctured spectrum. In other words, M is a vector bundle. Consider the natural map φ : M* ⊗_R M → Hom(M, M). Since M is a vector bundle, the kernel and cokernel of φ both have finite length. By considering the long exact sequences of local cohomology coming from the two short exact sequences
0 → ker(φ) → M* ⊗_R M → im(φ) → 0
0 → im(φ) → Hom(M, M) → coker(φ) → 0
and using that H^i_m(Hom(M, M)) = 0 for i < 3 (because Hom(M, M) is (S_3) and d ≥ 3), we can deduce that H^2_m(M* ⊗_R M) = 0. Now M* is also a vector bundle, so θ^R(−, M*) is always defined. Therefore if [M] = 0 in Ḡ(R)_Q then θ^R(M, M*) = 0. So in both cases 1) and 1'), θ^R(M, M*) = 0 and Theorem 3.2 implies depth(M ⊗_R M*) ≥ 3 as well as Tor^R_i(M, M*) = 0 for all i > 0. The result now follows from Proposition 3.3.

Example 3.5. This will be our main example throughout this note. Let R = k[[x, y, u, v]]/(xu − yv) with m = (x, y, u, v) and M = (x, y). We claim that M* ≅ (x, v). Any R-linear map φ from M to R is determined by φ(x). Hence M* is isomorphic to {a ∈ R | ya ∈ xR}, which is easily seen to be (x, v). So M ⊗_R M* ≅ (x², xy, xv, yv) ≅ (x², xy, xv, xu) ≅ m (see Example 1.8 of [HW2]). Using the long exact sequence of local cohomology for the sequence 0 → m → R → k → 0 we get H^1_m(M ⊗_R M*) = H^1_m(m) = k and H^2_m(M ⊗_R M*) = H^2_m(m) = 0. Clearly depth(M ⊗_R M*) = depth(m) = 1. Obviously, M is not free.

Note that both M and M* are maximal Cohen-Macaulay modules over R. Since R has an isolated singularity, they are also vector bundles over Y(R). It can be easily computed that Tor^R_1(M, M*) = k, Tor^R_2(M, M*) = 0 and θ^R(M, M*) = −1. Consider Proposition 3.1 and Theorem 3.2. Let N = M*. Then the example shows that the condition θ^R(M, N) = 0 can not be dropped. Consider Theorem 3.4. Note that Ḡ(R)_Q = Q·[M] ≅ Q. The example shows that the condition [M] = 0 in Ḡ(R)_Q can not be dropped either.
4. Isolated hypersurface singularities of even dimensions
In this section we will show some supporting evidence for Conjecture 1.1. Our results indicate that modules over isolated hypersurface singularities of even dimensions behave very similarly to those over regular local rings. We first prove:
Theorem 4.1. Let R be a hypersurface with isolated singularity. Assume that dim R is even. Then for any vector bundle M on Y (R), θ R (M, M * ) = 0.
We need to review the concept of stable (co)homology (for more details, see [Bu] or [AB]). Let R be a Noetherian ring. A complete resolution of an R-module M is a complex T such that H_n(T) = H_n(T*) = 0 for all n ∈ Z and T_{≥r} = P_{≥r} for some projective resolution P of M and some integer r. It is known that the modules H^i(Hom_R(T, N)) and H_i(T ⊗_R N) are independent of the resolution, and one calls them \widehat{Ext}^i_R(M, N) and \widehat{Tor}^R_i(M, N), respectively.

Before moving on we recall the Local Duality Theorem in our context and some consequences that will be used. Let d = dim R. Let ∨ denote Hom_R(−, E(k)), the Matlis dual. Then for any module M we have an isomorphism:
Ext^i_R(M, R) ≅ H^{d−i}_m(M)^∨
In particular, if M is maximal Cohen-Macaulay, then Ext^i_R(M, R) = 0 for i > 0, and if l(M) < ∞ then Ext^i_R(M, R) = 0 for i ≠ d.
Also, if M is maximal Cohen-Macaulay then so is M * , since by dualizing a free resolution of M one can see that M * is a syzygy of infinitely high order.
We will need the standard and easy results below, reproved for the reader's convenience (some of these could be found in [AB], but we could not find a full reference):
Lemma 4.2. Let R be a local hypersurface. Let M, N be R-modules such that M is maximal Cohen-Macaulay (MCM). Then we have the following isomorphisms:
(1) \widehat{Tor}^R_i(M, N) ≅ Tor^R_i(M, N) for all i > 0.
(2) \widehat{Ext}^i_R(M, N) ≅ Ext^i_R(M, N) for all i > 0.
(3) \widehat{Tor}^R_i(M, N) ≅ \widehat{Tor}^R_{i+2}(M, N) for all i.
(4) \widehat{Ext}^i_R(M, N) ≅ \widehat{Ext}^{i+2}_R(M, N) for all i.
(5) \widehat{Tor}^R_i(M, N) ≅ \widehat{Ext}^{−i−1}_R(M*, N) for all i.
(6) \widehat{Tor}^R_i(M, N) ≅ Ext^{i+1}_R(M*, N) for all i > 0.

Proof. Let F : ··· → F_1 → F_0 → M* → 0 be a free resolution of M*. Since M is MCM we have M** = M and Ext^n(M*, R) = 0 for all n > 0. So dualizing F we get an exact sequence:
F* : 0 → M → F*_0 → F*_1 → ···
Now let G : ··· → G_1 → G_0 → M → 0 be a minimal free resolution of M. We can splice G and F* together to get an exact sequence:
T : ··· → G_1 → G_0 → F*_0 → F*_1 → ···
It is obvious that T is a complete resolution of M. That proves (1) and (2). Since M, M* are MCM and R is a hypersurface, G, F are periodic of period at most 2. In particular, syz^R_{2n} M* ≅ M* for any n > 0. Let M_i = im(F*_{i−1} → F*_i). Then M_{2n} ≅ (syz^R_{2n} M*)* ≅ M** ≅ M. Thus for any n > 0:
··· → G_1 → G_0 → F*_0 → F*_1 → ··· → F*_{2n−1} → 0
is a free resolution of M. Hence (3) and (4) follow. From the construction, T*[−1] is a complete resolution of M*, and the canonical isomorphism Hom_R(T*, N) ≅ (T ⊗_R N) gives (5). Finally, (6) follows from combining (1), (5) and (4).
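For a concrete instance of this splicing, consider the following small example of ours: R = k[[x, y]]/(xy) and M = R/(x), which is MCM with M* ≅ (y) ≅ R/(x).

% A complete resolution for M = R/(x) over the hypersurface R = k[[x,y]]/(xy).
% Here both the minimal free resolution G of M and the dualized complex F^*
% alternate multiplication by x and by y, and splicing them yields the doubly
% infinite complex
\[
T:\quad \cdots \xrightarrow{\;y\;} R \xrightarrow{\;x\;} R \xrightarrow{\;y\;} R \xrightarrow{\;x\;} R \xrightarrow{\;y\;} \cdots ,
\]
% which is exact because ker(x) = (y) = im(y) and ker(y) = (x) = im(x) in R.
% M = R/(x) is the cokernel of each map "multiplication by x", and T^* has the
% same shape, so H_n(T) = H_n(T^*) = 0 for all n, as a complete resolution requires.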
We will also need the following result by Buchweitz:

Proposition 4.3. ([Bu], 10.3.3) Let R be a local hypersurface with isolated singularity such that d = dim R is even. Then for any two MCM R-modules M, N, and integers i, j such that i − j is odd:
l(\widehat{Ext}^i_R(M, N)) = l(\widehat{Ext}^j_R(M*, N*))

Proof. Since [Bu] is not available publicly, and in any case the assertion was derived there from very general theory, we will summarize a self-contained proof. Let T be the complete resolution of M constructed in the proof of 4.2 and let I be a (finite) injective resolution of R over R. One can see that the total complexes associated to the double complexes Hom_R(Hom_R(T, N), I) and Hom_R(T*, Hom(N, I)) are isomorphic. Note that T*[−1] is a complete resolution of M*. We get two spectral sequences converging to the same limit:
¹E^{i,j}_2 = Ext^i_R(\widehat{Ext}^{−j}_R(M, N), R) ⇒ H^{i+j} and ²E^{i,j}_2 = \widehat{Ext}^{i−1}_R(M*, Ext^j_R(N, R)) ⇒ H^{i+j}
As N is MCM, Ext^j_R(N, R) = 0 for j > 0, so the second sequence collapses, leaving H^{i+j} ≅ \widehat{Ext}^{i+j−1}_R(M*, N*). On the other hand, M is free on the punctured spectrum, so l(\widehat{Ext}^{−j}_R(M, N)) is finite. Thus ¹E^{i,j}_2 = 0 unless i = d, which gives H^{d+j} ≅ Ext^d_R(\widehat{Ext}^{−j}_R(M, N), R) ≅ \widehat{Ext}^{−j}_R(M, N)^∨.
Putting everything together we get:
\widehat{Ext}^{−j}_R(M, N)^∨ ≅ \widehat{Ext}^{d+j−1}_R(M*, N*)
Since d is even and taking Matlis dual preserves length, part (4) of 4.2 finishes our proof.
Now we can prove 4.1:
Proof (of Theorem 4.1). First, we will prove the theorem for the case when M is MCM. In this situation, for any integer i > 0, by (6) of 4.2 we have:
\widehat{Tor}^R_i(M, M*) ≅ Ext^{i+1}_R(M*, M*) and \widehat{Tor}^R_{i+1}(M*, M) ≅ Ext^{i+2}_R(M, M)
Hence 4.3 (applied for N = M) shows that θ^R(M, M*) = 0.

Now, assume M is a vector bundle. Let K = syz^R_1(M). We want to prove that θ^R(M, M*) = θ^R(K, K*). Dualizing the short exact sequence 0 → K → F → M → 0 we get an exact sequence:
0 → M* → F* → K* → Ext^1_R(M, R) → 0
So [M*] + [K*] = [Ext^1_R(M, R)] in Ḡ(R). Note that Ext^1_R(M, R) has finite length because M is free on the punctured spectrum of R. We claim that any module of finite length is equal to 0 in Ḡ(R)_Q. Since any module of finite length is a multiple of [k], it is enough to prove the claim for one finite length module. If dim R = 0 then there is nothing to prove. If dim R > 0 then pick a prime p such that dim R/p = 1 and a non-unit x ∉ p. The exact sequence
0 → R/p → R/p → R/(p + x) → 0
(the first map being multiplication by x) shows that [R/(p + x)] = 0, which is all we need. So [M*] = −[K*] and θ^R(M*, −) = −θ^R(K*, −). Since θ^R(M, −) = −θ^R(K, −), we have θ^R(M, M*) = θ^R(K, K*). Repeating the equality above we get θ^R(M, M*) = θ^R(K, K*) when K = syz^R_n(M) for any n > 0. But for n ≫ 0, K is an MCM R-module, so θ^R(K, K*) = 0.
Corollary 4.4. Let R be an admissible hypersurface such that R has an isolated singularity and dim R is an even number greater than 2. Let M be a reflexive R-module. If Hom R (M, M ) satisfies (S 3 ) then M is free.
Proof. By localizing on p ∈ Y (R) and using Theorem 3.4 (note that a regular local ring can also be considered a hypersurface) we may assume that M is a vector bundle. The result then follows from 3.4 and 4.1.
Corollary 4.5. Let R be an admissible hypersurface such that R has isolated singularity and dim R > 2 and is even. Let M be a vector bundle over Y (R). Assume that M is reflexive and H 2 m (M ⊗ R M * ) = 0. Then M is free. Proof. By 3.2 and 4.1 we have Tor R i (M, M * ) = 0 for all i > 0. Thus Proposition 3.3 shows that M is free.
Example 4.6. Let R, M as in Example 3.5. Then H 2 m (M ⊗ R M * ) = 0 but M * is not free. Note that dim R = 3.
Corollary 4.7. Let k be a field and n an even integer ≥ 4. Let X ⊂ P n k be a nonsingular hypersurface. Let E be a vector bundle on X. If H 1 (X, (E⊗E * )(l)) = 0 for all l ∈ Z, then E splits.
Proof. This is a straightforward consequence of standard connections between a projective variety and its affine cone (see Section 5 in [HW2]). Let A be the homogeneous coordinate ring of X, m be the irrelevant ideal and R = A_m. Let N = ⊕_{i∈Z} H^0(X, E(i)) and M = N_m. Note that the cohomology condition on E translates to: H^2_m(M ⊗_R M*) = 0. Now we can apply 4.5 to conclude that M is a free R-module, which means E splits.
It is worth noting two obvious consequences of Corollary 4.4 which are generalizations of well-known results by Auslander and Goldman ( [AG], Theorem 4.4) and Auslander ([Au], Theorem 1.3) for modules over regular local rings:
Corollary 4.8. Let R be an admissible hypersurface such that R has isolated singularity and dim R > 2 and is even. Let M be a reflexive R-module. If Hom R (M, M ) ∼ = R n for some n then M is free.
Corollary 4.9. Let R be an admissible hypersurface such that R has isolated singularity and dim R > 2 and is even. Let M be an R-module satisfying (S 3 ). If Hom R (M, M ) ∼ = M n for some n then M is free.
We now show that Conjecture 1.4 is true in the standard graded case:
Theorem 4.10. Let k be an algebraically closed field and n an odd integer. Let X ⊂ P^{n+1}_k be a smooth projective hypersurface. Let U, V be subvarieties of X such that U ∩ V = ∅. Then dim U + dim V < dim X.
Proof. We are going to use l-adic cohomology (for basic properties and notations, we refer to [Mi] or [Sha]). Let l be a prime number such that l ≠ char(k). There is a class map:
cl : CH^r(X) → H^{2r}(X, Q_l(r))
This map gives a graded ring homomorphism CH^*(X) → ⊕_r H^{2r}(X, Q_l(r)) (with the intersection product on the left hand side and the cup product on the right hand side, see [Mi], VI, 10.7 and 10.8). Let a = dim U and b = dim V, and we may assume a ≥ b. Suppose a + b = dim X = n (if a + b > n, we can always choose some subvariety of smaller dimension inside U or V such that equality occurs). Then 2a ≥ n, but n is odd, so 2a > n. Let h ∈ CH^1(X) represent the hyperplane section. By the Weak Lefschetz Theorem (see, for example, [Sha], 7.7, page 112) and the fact that 2(n − a) < n, we have:
H^{2(n−a)}(X, Q_l(n − a)) ≅ H^{2(n−a)}(P^{n+1}_k, Q_l(n − a))
The latter is generated by a power of the class of the hyperplane section. Thus cl(U) = cl(h)^{n−a} in H^{2(n−a)}(X, Q_l(n − a)). We then have:
cl(U · V) = cl(U) ∪ cl(V) = cl(h)^{n−a} ∪ cl(V) = cl(h^{n−a} · V)
The last term is equal to the degree of h^{n−a} · V ∈ CH^n(X), so it is nonzero. But the first term has to be 0 by assumption, since U and V are disjoint. This contradiction proves the Theorem.
Corollary 4.11. Let k be a perfect field and n an even integer. Let A = k[x_0, ..., x_n]/(F) be a homogeneous hypersurface. Let (R, m, k) be the local ring at the origin of A. Suppose that R has an isolated singularity. Let M, N be graded R-modules such that l(M ⊗_R N) < ∞. Then dim M + dim N ≤ dim R.

Proof. Without affecting matters we may assume k is algebraically closed (this does not affect the isolated singularity condition: because k is perfect, we can compute the singular locus of A by the Jacobian ideal, which is not changed by extending k to its algebraic closure). Since the minimal primes of a graded module are homogeneous, we may replace M and N by R/P and R/Q, where P, Q are homogeneous primes in R. Now let X = Proj(A), U = Proj(A/P), V = Proj(A/Q) and apply the previous Theorem.
5. Some other applications
In this section we discuss some further applications of rigidity. The theme is that some strong conditions on a module can be detected by the vanishing of a single Ext or Tor module. We first note a simple characterization of maximal Cohen-Macaulay modules, due to the fact that over an admissible hypersurface, a module of finite length is rigid (see 2.4 in [HW1] or 4.10 in [Da1]):

Proposition 5.1. Let R be an admissible hypersurface and M an R-module. The following are equivalent:
(1) There is a nonzero finite length module N such that Tor^R_1(M, N) = 0.
(2) There is a nonzero finite length module N such that Ext^1_R(M, N) = 0.
(3) M is a maximal Cohen-Macaulay R-module.

Proof. For the equivalence of (1) and (2): we may assume R is complete. Then we have a well-known isomorphism (see, for example, page 7 of [EG]) Tor^R_1(M, N)^∨ ≅ Ext^1_R(M, N^∨), so (1) and (2) are equivalent. Assume (1). Then since N is rigid, we have Tor^R_i(M, N) = 0 for all i ≥ 1. The depth formula shows:
depth(M) + depth(N) = depth(R) + depth(M ⊗_R N)
hence depth(M) = depth(R) and (3) follows. Finally, assume (3). Let n = dim R; we can choose a regular sequence x_1, ..., x_n on both M and R (by induction on n). Now, just take N = R/(x_1, ..., x_n). Then N has finite length and Tor^R_1(M, N) = 0.
It would be very interesting to know whether the previous Proposition is true for all local complete intersections. It is not hard to see that this question has an intimate connection to the rigidity of modules of finite length over such rings:
Lemma 5.2. Let C be the category of all local complete intersections. Consider the following properties:
(1) For any R ∈ C, any R-module of finite length is rigid.
(2) For any R ∈ C and any R-module M, if there is a nonzero finite length R-module N such that Tor^R_1(M, N) = 0 then M is maximal Cohen-Macaulay.
(3) For any R ∈ C such that dim R > 0 and any R-module M, if there is a nonzero finite length R-module N such that Tor^R_1(M, N) = 0 then depth(M) > 0.
(4) For any R ∈ C, any R-module of finite length and finite projective dimension is rigid.
We have (1) ⇒ (2) ⇔ (3) ⇒ (4).
Proof. (1) ⇒ (2) is the proof of 5.1.
(2) ⇔ (3): (2) clearly implies (3). Suppose (3) holds, and let M be as in (2). We may assume dim R > 1. Then since depth(M), depth(R) > 0 we can find x ∈ Ann_R N such that x is regular on both R and M. Then Tor^R_1(M/xM, N) ≅ Tor^R_1(M, N) = 0. So we can replace R, M by R/(x), M/xM and repeat the process until dim R = 1, at which point M is MCM by (3).
(2) ⇒ (4): Suppose M, N are nonzero R-modules such that l(N) < ∞, pd_R N < ∞ and Tor^R_i(M, N) = 0 for some i > 0. Then let M′ = syz^R_{i−1} M and we get Tor^R_1(M′, N) = 0. By (2), M′ is MCM. Let q be the largest integer such that Tor^R_q(M′, N) ≠ 0. Then by Lemma 2.2 in [HW2] we have (since depth(Tor^R_q(M′, N)) = 0):
depth(M′) + depth(N) = depth(R) − q
Since depth(M′) = depth(R) and depth(N) = 0 we must have q = 0.
Remark. In [JS] a module M is constructed over a 0-dimensional Gorenstein ring such that M is not rigid. So property (1) fails for Gorenstein rings.
Finally, we want to use our approach to rigidity to prove a connection between vanishing of Ext and projective dimension, generalizing a result by Jothilingam. In [Jot], the following was proved:
Theorem 5.3. ([Jot], Theorem) Let R be a regular local ring and M an R-module. Then Ext^n_R(M, M) = 0 if and only if pd_R(M) < n.

In [Jo], Jorgensen observed that Jothilingam's result is true for any local ring R if we assume that M is rigid. We will modify Jothilingam's proof to show:
Proposition 5.4. Let R be an admissible hypersurface and M be an R-module such that [M] = 0 in Ḡ(R)_Q. Then Ext^n_R(M, M) = 0 if and only if pd_R(M) < n.

Remark. Over a hypersurface, the classes of rigid modules and of modules whose class is 0 in Ḡ(R)_Q are incomparable (see [Da1]).
Proof. We first need to review some notation in [Jot]. Let N be an R-module and let F = ... → F 2 → F 1 → F 0 → N → 0 be a minimal free resolution of N . Then one can define for i ≥ 0:
D i (N ) = coker(F * i → F * i+1 )
Note that D_i(N) can be computed, up to free summands, from any resolution of N. We will use induction on dim R. The proof of the Theorem in [Jot] shows that to prove that Ext^n_R(M, M) = 0 implies pd_R(M) < n, one only needs that the pair (D_n(M), M) is rigid. Let L = D_n(M). We will show that θ^R(L, M) = 0. If dim R = 0 then θ^R(−, −) is always defined and equals 0, since all modules have finite length and Ḡ(R)_Q = 0. Take any prime p ∈ Y(R), the punctured spectrum. By the induction hypothesis (syz^R_{n−1} M)_p is free, so L_p is a free R_p-module. Thus θ^R(L, −) is always defined, and since [M] = 0 in Ḡ(R)_Q we must have θ^R(L, M) = 0. So (L, M) is rigid by 2.1 and we are done.

Example 5.5. Let R = k[[x, y]]/(xy) and M = R/(x). Then Ext^{2i+1}(M, M) = 0 for all i > 0 but pd_R M = ∞. Note that Ḡ(R) ≅ Z·[M] ≅ Z.
Acknowledgements. We would like to thank Ragnar-Olaf Buchweitz, David Jorgensen, Craig Huneke, Mircea Mustaţǎ, Paul Roberts, Vasudevan Srinivas and Claire Voisin for patiently explaining many useful facts and ideas to us. We also thank the anonymous referee, whose reports have tremendously improved the presentation and mathematical content of the note.
References

[Au] M. Auslander, On the Purity of the Branch Locus, Amer. J. Math. 84 (1962), 116-125.
[AB] L. L. Avramov, R.-O. Buchweitz, Support varieties and cohomology over complete intersections, Invent. Math. 142 (2000), 285-318.
[AG] M. Auslander, O. Goldman, Maximal Orders, Trans. Amer. Math. Soc. 97 (1960), 1-24.
[BH] W. Bruns, J. Herzog, Cohen-Macaulay rings, Cambridge Univ. Press, Cambridge (1996).
[Bu] R.-O. Buchweitz, Maximal Cohen-Macaulay modules and Tate cohomology over Gorenstein rings, Preprint, Univ. Hannover (1986).
[Da1] H. Dao, Decency and rigidity over hypersurfaces, arXiv:math.AC/0611568, preprint.
[EG] E. G. Evans, P. Griffith, Syzygies, London Math. Soc. Lect. Notes 106 (1985).
[Ei] D. Eisenbud, Homological algebra on a complete intersection, with an application to group representations, Trans. Amer. Math. Soc. 260 (1980), 35-64.
[Fu] W. Fulton, Intersection Theory, Springer-Verlag, Berlin (1998).
[Gr] A. Grothendieck, Éléments de géométrie algébrique, Chapitre IV, Publ. Math. I.H.E.S. 24 (1965).
[Ha1] R. Hartshorne, Equivalence relations on algebraic cycles and subvarieties of small codimension, Proc. Symp. Pure Math. 29 (1975), 129-164.
[Ha2] R. Hartshorne, Algebraic Geometry, Graduate Texts in Mathematics, Springer-Verlag, New York (1977).
[Ho1] M. Hochster, The dimension of an intersection in an ambient hypersurface, Proceedings of the First Midwest Algebraic Geometry Seminar (Chicago Circle, 1980), Lecture Notes in Mathematics 862, Springer-Verlag (1981), 93-106.
[HW1] C. Huneke, R. Wiegand, Tensor products of modules and the rigidity of Tor, Math. Ann. 299 (1994), 449-476.
[HW2] C. Huneke, R. Wiegand, Tensor products of modules, rigidity and local cohomology, Math. Scand. 81 (1997), 161-183.
[HJW] C. Huneke, R. Wiegand, D. Jorgensen, Vanishing theorems for complete intersections, J. Algebra 238 (2001), 684-702.
[Jo] D. Jorgensen, On the obstruction group to lifting, preprint.
[Jot] P. Jothilingam, A note on grade, Nagoya Math. J. 59 (1975), 149-152.
[JS] D. Jorgensen, L. Sega, Nonvanishing cohomology and classes of Gorenstein rings, Adv. Math. 188 (2004), 470-490.
[Mi] J. S. Milne, Étale cohomology, Princeton Univ. Press (1980).
[Pa] K. H. Paranjape, Cohomological and cycle-theoretic connectivity, Ann. of Math. 140 (1994), 641-660.
[PS] C. Peskine, L. Szpiro, Dimension projective finie et cohomologie locale. Applications à la démonstration de conjectures de M. Auslander, H. Bass et A. Grothendieck, Inst. Hautes Études Sci. Publ. Math. 42 (1973), 47-119.
[Ro] P. Roberts, Multiplicities and Chern classes in Local Algebra, Cambridge Univ. Press, Cambridge (1998).
[Sha] I. R. Shafarevich (Ed.), Algebraic Geometry II. Cohomology of algebraic varieties. Algebraic surfaces, Encyclopaedia of Mathematical Sciences 35, Springer, Berlin (1995).
[Va] W. Vasconcelos, Arithmetic of Blowup Algebras, London Math. Soc. Lecture Notes 195 (1994).
Framework for global stability analysis of dynamical systems

George Datseris (Department of Mathematics and Statistics, University of Exeter, Exeter, United Kingdom), Kalel Luiz Rossi (Theoretical Physics/Complex Systems, ICBM, Carl von Ossietzky University Oldenburg, Oldenburg, Lower Saxony, Germany), and Alexandre Wagemakers (Nonlinear Dynamics, Chaos and Complex Systems Group, Departamento de Física, Universidad Rey Juan Carlos, Madrid, Spain)

This manuscript was compiled on April 26, 2023.

Abstract. Dynamical systems, that are used to model power grids, the brain, and other physical systems, can exhibit coexisting stable states known as attractors. A powerful tool to understand such systems, as well as to better predict when they may "tip" from one stable state to the other, is global stability analysis. It involves identifying the initial conditions that converge to each attractor, known as the basins of attraction, measuring the relative volume of these basins in state space, and quantifying how these fractions change as a system parameter evolves. By improving existing approaches, we present a comprehensive framework that allows for global stability analysis on any dynamical system. Notably, our framework enables the analysis to be made efficiently and conveniently over a parameter range. As such, it becomes an essential complement to traditional continuation techniques, that only allow for linear stability analysis. We demonstrate the effectiveness of our approach on a variety of models, including climate, power grids, ecosystems, and more. Our framework is available as simple-to-use open-source code as part of the DynamicalSystems.jl library.

Keywords: critical transitions | multistability | tipping | continuation | global stability
Multistable dynamical systems exhibit two or more coexisting stable states, formally called attractors. Multistability is ubiquitous in nature and in mathematical models (1), with examples ranging from power grids (2)(3)(4), the climate (5,6), ecosystems like the Amazon rain forest (7,8), the brain and neuronal circuits therein (9)(10)(11), or metabolic systems (12)(13)(14). Some attractors of these systems can be desirable, such as synchronized oscillations in power grids, crucial for their proper functioning (15). But they can also be undesirable, as for example the extinction of a certain species in ecological models or the collapse of circulation in climate models (16). In a multistable system, the attractor at which the system ends up depends on the initial conditions, but perturbations of the state may enforce switching between attractors, a phenomenon called "tipping" (1,8,17). Alterations in the parameters of a dynamical system can trigger tipping. Hence, it becomes important to evaluate how "resilient" attractors are to perturbations, either to parameters or to the system's variables. This is a crucial problem of practical importance in several areas of research (8,18).
A traditional solution to this problem is continuation-based bifurcation analysis (CBA). It identifies fixed points, limit cycles (under some requirements), and describes their linear stability dependence on a system parameter. One major downside is that, by definition, it cannot be applied to chaotic attractors (19). In the majority of cases, this analysis must be done numerically via one of several software packages, e.g., AUTO (20), MATCONT (21), CoCo (22), or BifurcationKit.jl (23). The information provided by these frameworks is useful, but incomplete: rigorously speaking, linear stability only conveys information about the system's response to infinitesimally local perturbations. It cannot yield insight on the response to finite perturbations in the state space, which are predominant in practice.
For such responses, it is necessary to study the global stability (24) of the system's attractors, which involves the nonlinear dynamics over the full state space of the system * . A proxy for global stability of an attractor is the portion of all possible initial conditions ending up at this attractor, i.e., the fraction of the state space that is in the basin of said attractor. When the state space is infinite, the concept of the state space fraction becomes a pragmatic one: we need to define a finite-volume box of physically plausible initial conditions for the system under study, and we are concerned about the fractions of these plausible initial conditions. In this analysis, attractors with larger basins fractions are globally more stable because stronger perturbations are typically needed to switch the system to another attractor (24). Frequently, this measure is also a much better indicator of the loss of stability as a system parameter is varied, when compared to the linear analysis of the system (see e.g., (24) or (19,Chap. 12)).
Analyzing global stability as a function of a parameter demands extensive effort from researchers, as it requires the creation of algorithms that can find system attractors and * The term "global stability" is also used when a dynamical system has a single global attractor, which is different from our use of the term here.
Significance Statement
Dynamical systems are ubiquitous to model natural phenomena ranging from power grids, climate, ecosystems, and more. Global stability analysis is the study of their attracting states over the full state space. It goes beyond (linear) bifurcation analysis and provides insight on the resilience of the states, i.e., how close they are to "tip" to a different, perhaps undesirable stable state such as population death in ecosystems or circulation shutdown in climate. In this work we develop an accurate and flexible framework for global stability analysis of arbitrary dynamical systems, that we believe will accelerate multistability and tipping point research in many fields. We also provide a software implementation that is fast and easy to use in the open source DynamicalSystems.jl library.
their global stability, "continue" them across a parameter, and also perform the expensive numerical simulations required for such algorithms. In the literature, the only framework so far that can aid this analysis is the featurizing and grouping approach, proposed first by Gelbrecht et al (25) as MCBB (Monte Carlo Basin Bifurcation Analysis) and then later very similarly by Stender et al (26) as bSTAB (basin stability). The method integrates randomly sampled initial conditions (ICs) of a dynamical system for a preset time span. The trajectories of these ICs are then mapped onto features, numbers describing the trajectories, such as the mean or standard deviation of some of the system variables. All the feature vectors are clustered using the DBSCAN algorithm (27) so that ideally each cluster corresponds to an attractor of the system. The fractions of ICs in each cluster approximate the basin fractions and hence the global stability of each attractor. More details on the method in the Materials and Methods (MatMeth) §A.
This method works well in a variety of circumstances and can also be applied across a parameter range. However, it comes with significant downsides. One is that it is not clear a-priori which features should be chosen to correctly separate the attractors into clusters, requiring a lot of trial and error. Another downside is that the method cannot guarantee that the clusters of features really correspond to unique attractors, and that it is not mixing two or more attractors together. An alternative method for finding attractors and their basins of attraction is the recurrence-based approach recently proposed in Ref. (28). The method locates attractors by finding recurrences in the system's state space, assuming the Poincaré recurrence theorem holds for the system attractors. The input to this method is a state space box, and its tessellation, defining a grid to search for recurrences. Hence, the method will only find attractors within the given box, although the box can initially be arbitrarily large. We describe the method in more detail in MatMeth §B and provide a comparison between the two techniques in §2B. The main advantage of the recurrences method is that it locates the actual system attractors and only requires as an input a state space box that may contain the attractors. So far however it has not been clear how to "continue" attractors across a parameter range with this method.
In this work, in §1A, we present a novel global stability analysis and continuation algorithm that utilizes the recurrences-based method for finding attractors (28). In §1C we apply it to exemplary models of climate, ecosystem dynamics, and more. As detailed in §2, this novel continuation algorithm is the most accurate in finding the actual attractors of a dynamical system, the most transparent in matching attractors across parameter values, and requires the least amount of guesswork from the researcher. We believe that this novel continuation of global stability, much like global stability analysis itself (24), is a crucial new tool for the analysis of dynamical systems. In some cases it supersedes CBA, and in others it can be complemented by CBA, as we discuss in §2A.
This continuation method is part of a novel automated framework that performs global stability analysis and continuation, which we present in §1B. Our framework significantly advances existing methodology, including the featurizing methods, thereby including all upsides of current literature while addressing most downsides (MatMeth §C and D describe the improvements in detail). Its design is based on modular components that can be configured or extended independently. This allows researchers to simply compose the methodology that is best suited to their problem, and then let an automated algorithm execute the process. The framework is accompanied by an extensively tested, well documented, and highly optimized open source software implementation, part of the DynamicalSystems.jl (29) general purpose library for nonlinear dynamics (see MatMeth for code example and documentation).
Results
A. Novel global stability continuation algorithm. A major contribution of our framework is the novel algorithm for global stability analysis and continuation that we name recurrences-based attractor find-and-match continuation, RAFM for short. As illustrated in Fig. 1A, it works as follows.
Step 0: the starting point of the algorithm. Attractors and their basins, or basins fractions, are already known at a parameter p = p1 using the recurrences-based algorithm of Ref. (28).
Step 1: new initial conditions are seeded from the existing attractors. Then, we set the system parameter to p = p2.
Step 2: evolve the seeded initial conditions according to the dynamic rule of the system. The seeds are evolved until they converge to an attractor using the recurrences-based method (the grid reflects the tessellation of the state space that is decided by the user; the finer the grid, the more accurate the results (28)). The main performance bottleneck of the recurrences-based method is finding the attractors. Once found, convergence of other initial conditions is generally much faster (28). To address this in the continuation, we use the observation that, unless a bifurcation is occurring, attractor size, shape, and position, typically depend smoothly on p. Hence, the seeded initial conditions at each new parameter will most likely converge the fastest to the new attractors.
Step 3: match attractors in current parameter p2 to those in parameter p1. Matching is arguably the most sophisticated part of the algorithm. It is flexible and attractors can be matched by their "distance" in state space, see §1D for more details. In this illustration the "red" attractor is matched to the "purple" one of Step 0, while the "yellow" attractor doesn't map to the previous "teal" one, because their state space distance is beyond a pre-defined threshold.
Step 4: with the main bottleneck of the algorithm (finding the attractors) being taken care of, now compute the basins fractions by formally applying the recurrence-based algorithm of Ref. (28). Importantly, the algorithm may still find new attractors (here the "dark green" one) that didn't exist before. The end result is the system attractors, and their basins, as functions of the parameter. The attractors and basins are labelled with the positive integers (enumerating the different attractors), and the basins always sum to 1. Note that because RAFM works on a parameter-by-parameter basis, it can be used to perform continuation across any number of parameters, not just one (we present one here as the simplest conceptual example).
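To make the seed-evolve-match loop concrete, the following is a deliberately minimal, self-contained toy sketch of ours (not the library implementation; all names are illustrative) for a 1D discrete dynamics whose attractors are fixed points:

# Toy seed-evolve-match loop for x -> x + dt*(p*x - x^3); for p > 0 the
# attractors are the fixed points ±sqrt(p). Attractors found at one parameter
# value seed the search at the next, while fresh random initial conditions
# may still reveal new attractors. Illustrative code, not library API.
flow_step(x, p; dt = 0.05) = x + dt * (p * x - x^3)

function converge(x, p; tol = 1e-8, maxiter = 10^6)
    for _ in 1:maxiter
        xn = flow_step(x, p)
        abs(xn - x) < tol && return xn   # "recurrence": the state stopped moving
        x = xn
    end
    return x
end

function toy_continuation(ps; nics = 100)
    attractors = Float64[]               # attractor positions at previous parameter
    results = Vector{Vector{Float64}}()
    for p in ps
        seeds = vcat(attractors, [4rand() - 2 for _ in 1:nics])  # Steps 1 and 4
        finals = [converge(x, p) for x in seeds]                 # Step 2
        current = Float64[]              # deduplicate converged states
        for x in finals
            any(a -> abs(a - x) < 1e-3, current) || push!(current, x)
        end
        # Step 3 (matching by state-space distance) would relabel `current`
        # against `attractors`; for 1D fixed points, nearest distance suffices.
        attractors = current
        push!(results, current)
    end
    return results
end

toy_continuation(0.1:0.1:1.0)  # two attractor branches at ±sqrt(p)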
B. Global stability continuation framework.
To perform global stability analysis, several tasks need to be performed in sequence, see Fig. 1B for an overview. We have abstracted and generalized the tasks to allow researchers different possibilities of how to achieve them.
The first task is the creation of a dynamical system for the global stability analysis. For our framework, this is achieved for free simply by making the implementation part of the DynamicalSystems.jl library (29) (see MatMeth E).

Fig. 1. A: The recurrences-based seed-and-match algorithm for global stability continuation described in §1A. B: Schematic illustration of the modular framework for global stability analysis and continuation described in §1B.
The second task is the creation of a mechanism to find attractors and map initial conditions to them. Possibilities for this mechanism are: (1) featurizing and grouping initial conditions into attractors (as discussed in the introduction), (2) finding attractors using the recurrences algorithm (28), or (3) mapping initial conditions to previously known attractors by proximity: once the evolution of an initial condition comes close enough to a pre-determined attractor, the initial condition is mapped to that attractor. Note that in (1) or (2) attractors are found via random sampling in the state space, and the probability to find an attractor with basin fraction f after n samples is 1 − (1 − f)^n.
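Inverting this formula gives a quick estimate of how many random samples guarantee, with probability p, that such an attractor is sampled at least once (a back-of-the-envelope sketch of ours, not a library function):

# Minimal number n of random initial conditions such that an attractor with
# basin fraction f is hit at least once with probability p:
# p = 1 - (1 - f)^n  =>  n = log(1 - p) / log(1 - f).
required_samples(f, p) = ceil(Int, log(1 - p) / log(1 - f))

required_samples(0.05, 0.99)  # 90 samples for a 5% basin at 99% confidence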
Mechanism (1) is paired with instructions on how to group features. Currently, the possibilities are: (a) clustering features into groups using DBSCAN (as done in MCBB or bSTAB), (b) grouping features by histograms in the feature space, so that features that end up in the same histogram bin belong to the same group (novel grouping approach), or (c) mapping features to their nearest feature in feature space, from a set of pre-defined features (as done in bSTAB). Having more grouping possibilities than only clustering can be useful, as discussed in §1D.
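To illustrate option (b) with scalar features, here is a sketch of ours (the library's implementation is more general, e.g., multi-dimensional bins):

# Group 1D features by histogram binning: features falling into the same bin
# of width `binwidth` receive the same group label.
function group_by_histogram(features::Vector{<:Real}, binwidth::Real)
    labels = zeros(Int, length(features))
    bins = Dict{Int,Int}()              # bin index => group label
    next = 1
    for (i, f) in enumerate(features)
        b = floor(Int, f / binwidth)
        if !haskey(bins, b)
            bins[b] = next
            next += 1
        end
        labels[i] = bins[b]
    end
    return labels
end

group_by_histogram([0.01, 0.02, 0.99, 1.01], 0.1)  # -> [1, 1, 2, 3]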
At this point one can analyze global stability at a given parameter, and also analyze the basin boundaries of attractors for fractal properties. From here, the third task is to "continue" the attractors and basins across a parameter range. Our framework has two continuation methods. The first is what has been employed so far by the MCBB or bSTAB algorithms, but with significantly increased accuracy (see MatMeth), and extended to allow any kind of instructions for how to group features. In this approach, trajectories from the dynamical system are generated by sampling random ICs across all parameter values of interest. All these trajectories are mapped to features, which are then grouped, and each group represents an attractor. The grouped ICs are then re-distributed into the parameter slices they came from, providing the fractions of each group at each parameter value. The alternative is the RAFM algorithm that we described in §1A.

C. Application on exemplary systems. In Fig. 2 we apply RAFM on some exemplary systems. We stress that we could characterize the different attractors in accurate detail because RAFM finds the actual system attractors, not some incomplete representation of them (i.e., features used in the featurizing-and-grouping approach).
D. Matching and grouping attractors.
Traditional CBA has a rigid "matching" procedure: it always matches the next point found along a "continuation curve" to the previous point. This is often correct for infinitesimal perturbations of fixed points, but becomes problematic for global stability analysis, which attempts to find all attractors in a state space box and then continue them. In this case, matching attractors from one parameter to the next becomes a crucial part of the algorithm. For instance, the analysis presented in Fig. 2 is only coherent because of the powerful matching procedure implemented in our framework. Without it, the colors would alternate arbitrarily at each parameter value.
Fig. 2. Basins fraction continuation for exemplary dynamical systems using the novel recurrences-based continuation algorithm. The fractions of the basins of attraction are plotted as stacked band plots (hence, summing to 1). Each color corresponds to a unique attractor that is found and continued (but not plotted here). Each simulation scanned 101 parameter values, and in each it sampled randomly 100 initial conditions. The fractions fluctuate strongly versus parameter not due to lack of convergence, but because the basin boundaries are fractal in all systems considered. The systems used are: a) 3-dimensional paradigmatic chaotic model by Lorenz (Lorenz84 (30)) with a co-existence of a fixed point, limit cycle, and chaotic attractor, undergoing a crisis with the chaotic attractor merging into the limit cycle; b) 33-dimensional climate toy model (31) featuring bistability of chaotic attractors; c) 3-dimensional multistable cell-division model (32) where each cell type is considered to be a distinct attractor in the gene activity state space; d) 9-dimensional model for turbulent shear flow in which the fluid between two walls experiences sinusoidal body forces (33); e) 8-dimensional ecosystem competition dynamics model (34) featuring extreme multistability (due to the number of attractors, we made no effort to label them further).

In the featurize-and-group algorithm, matching and grouping are the same process. In RAFM, matching operates on a parameter-by-parameter basis. Each time the parameter is incremented, and the new attractors are found, a matching sub-routine is launched. The distance between attractors before and after the parameter change is estimated, with "distance" being any symmetric positive-definite function defined on the space of state space sets. By default the Euclidean distance of the attractor centroids is used, but the Hausdorff distance (35) is also provided out of the box. After the distance is computed between all new-old attractor pairs, the new attractor labels are matched to the previous attractor labels that have the smallest distance to them, prioritizing pairs with smallest distance. The matching respects uniqueness, so that an attractor from one parameter cannot be matched with more than one attractor from another parameter. Additionally, a distance threshold value can be provided, so that attractors whose distance is larger than this threshold are guaranteed to get assigned different IDs. Note that in principle finding the attractors and matching them are two completely independent processes. If after the continuation process is finished the user decides that the chosen matching procedure was unsuitable, they can launch a "re-matching" algorithm with a different matching "distance" function, without having to re-do any computations for finding the attractors or their fractions (matching only renames attractor labels).
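A minimal, self-contained sketch of this greedy matching (our illustration; the library's version operates on state space sets and supports arbitrary distance functions):

using LinearAlgebra: norm

# Greedy matching of attractor IDs across a parameter step.
# `prev` and `curr` map attractor IDs to state-space centroids. Pairs are
# matched in order of increasing centroid distance; each ID is used at most
# once, and pairs farther apart than `threshold` are never matched.
function match_attractor_ids(prev::Dict{Int,Vector{Float64}},
                             curr::Dict{Int,Vector{Float64}};
                             threshold::Float64 = Inf)
    pairs = sort!([(norm(c - q), cid, pid) for (cid, c) in curr for (pid, q) in prev])
    replacement = Dict{Int,Int}()        # current ID => matched previous ID
    used_prev = Set{Int}(); used_curr = Set{Int}()
    for (d, cid, pid) in pairs
        d > threshold && break           # remaining pairs are even farther apart
        (cid in used_curr || pid in used_prev) && continue
        replacement[cid] = pid           # uniqueness respected
        push!(used_curr, cid); push!(used_prev, pid)
    end
    next = isempty(prev) ? 1 : maximum(keys(prev)) + 1
    for cid in keys(curr)                # unmatched attractors get fresh IDs
        haskey(replacement, cid) || (replacement[cid] = next; next += 1)
    end
    return replacement
end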
The last thing to highlight in this section is the desirable post-processing of grouping similar enough attractors. This happens automatically if one uses the featurize-and-group continuation method. However, taking as an example Fig. 2(e), the RAFM method finds countless individual attractors. For the researcher, the individual attractors may be useful for careful analysis, but it is sometimes desirable to group similar enough attractors. In our framework it is possible to use exactly the same grouping infrastructure utilized by the featurizing-and-grouping continuation, but now applied to the outcome of RAFM as a post-processing step. In Fig. 3 we highlight examples that utilize the powerful matching and/or grouping components offered by our framework.

Fig. 3. […] Fig. 2(e) so that attractors are grouped into those whose 3rd species has population less than 0.01, or more. (c) A replication of the MCBB (25) results for a 2nd-order Kuramoto oscillator network representing a power grid, using the featurize-and-group continuation implementation from our framework. Features extracted from sampled trajectories are the means of the frequencies. (d) Same system as (c), but using the recurrence continuation and matching attractors by their centroid distance (i.e., as in Fig. 2). The only extra step was to post-process the results so that all attractors with basins fractions less than 4% are aggregated (as was done in Ref. (25) and in panel (c)). (e) Attractor basin fractions for a network of 1st order Kuramoto oscillators; the attractors here are found and matched using the recurrences continuation, and then grouped via a histogram of their synchronization order parameter R (19, Chap. 9). Attractors whose order parameter R falls in the same histogram bin are aggregated.
Discussion
A. Comparison with traditional continuation-based bifurcation analysis. In Table 1 and Fig. 4 we provide a careful comparison between CBA and RAFM. A direct comparison of the two approaches is difficult, since their operational basis, and main output, are fundamentally different. On one hand, CBA finds and continues the curves of individual fixed points or limit cycles across the joint state-parameter space on the basis of Newton's method. On the other hand, RAFM first finds attractors at all parameter values using the original system equations, and then matches appropriately similar attractors across different parameters, giving the illusion of continuing them individually. Additionally, the curves of stable fixed points in the joint parameter space (Fig. 4) are only a small part of the information our framework provides. Important provided information are the basin fractions and how they change, which is completely absent in CBA. Based on this comparison, we argue that the RAFM algorithm and the global stability framework we provide is an essential tool for stability analysis of dynamical systems. We further believe that in some application scenarios RAFM will supersede CBA, especially given the difference in required user expertise, required interventions, and steepness of the learning curve that CBA has over RAFM. In other scenarios, we envision that RAFM can be used as the default analysis method, providing the majority of information, and CBA then becomes a more in-depth analysis of fixed points, limit cycles, and bifurcations, if such analysis is required.

Table 1. Comparison of traditional continuation-based bifurcation analysis (CBA) and recurrences-based attractor find-and-match continuation (RAFM).

# | Traditional continuation-based bifurcation analysis (CBA) | Recurrences-based attractor find-and-match continuation (RAFM)
1 | provides curves of unstable fixed points / limit cycles | only finds attracting, not repelling, sets
2 | several possibilities for how to continue bifurcation curves | does not explicitly detect bifurcation points
3 | does not put limits on state space extent | needs as an input a state space box that may contain attractors
4 | likely to find fixed points / limit cycles with small or even zero basins | probability to find attractor is proportional to its basins fraction
5 | detects and classifies local bifurcation points | does not compute Jacobian eigenvalues at all
6 | finds and continues fixed points and periodic orbits | finds and continues any kind of attractors, including quasiperiodic or chaotic
7 | requires a computable Jacobian of the dynamic rule | works for any dynamical system including Poincaré maps or projections
8 ¶ | user must manually search for multistability | different attractors are automatically detected (via random sampling)
9 | does not compute the basins of attraction or their fractions | computes the fractions and, if computationally feasible, also the full basins
10 | uses the local, linearized dynamics | uses the full nonlinear dynamics
11 ‡ | limited use in indicating loss of stability | more likely to indicate loss of stability as the basin fraction approaches 0
12 * | parameter change may not affect linear stability of all fixed points | parameter change is more likely to affect global stability of all attractors
13 | no sophistication on matching fixed points | sophisticated, user-configurable matching of attractors
14 † | requires expertise and constant interventions | conceptually straightforward even in advanced use-cases
B. Comparison between attractor-finding methods.
Our framework provides two radically different methods for finding attractors: the recurrence-based and the featurize-and-group approach. Generally speaking, the recurrence-based method should be preferred when possible, because of its accuracy (it finds the actual attractors) and the possibility of follow-up analysis of the found attractors. However, the featurize-and-group method should be preferred when the recurrence-based method fails, e.g., because the provided state space tessellation is ill-defined, or because the computational demands exceed what is available. In Table 2 we provide a comprehensive comparison between the two methods.
C. Future extensions. The modularity of our framework allows it to be easily extended by researchers for their specific use cases. For instance, the default information about the attractors that is "tracked" during the RAFM continuation is their state space location. In the future, we plan to further enhance the framework to also (optionally) track dynamical characteristics of the attractors, such as the Lyapunov spectra or fractal dimensions (19) (computing these quantities after the fact is trivial, as illustrated in our example code of Listing 1). Utilizing this information will allow us to better label the attractors (instead of using integers as we currently do), better match attractors during the continuation, or even automatically detect bifurcations or changes in the nature of the attractor, such as transitioning from a periodic attractor (no positive Lyapunov exponents) to a chaotic one (at least one positive Lyapunov exponent).
Materials and Methods
A. Featurizing methods for finding attractors. We group together two similar methods that have been recently proposed in the literature for finding attractors. One is called Monte Carlo basin bifurcation analysis (MCBB) (25), the other is basin stability analysis (bSTAB) (26). Both methods work by identifying attractors as clusters of user-defined features of trajectories. Their first step is to integrate N randomly chosen initial conditions inside a certain box in state space. The integration is done for some time T, after a transient Ttr. T needs to be sufficiently large so that the trajectories correspond to the system's long-term behavior, and Ttr needs to be large enough to avoid the transient regime. Each trajectory x(t) is then transformed into a vector of K features F, specified by some featurizer function f, defined by the user, such that f(x(t)) = F. Each vector of features F describes a point in the K-dimensional feature space f1 × f2 × · · · × fK. The key idea is that features belonging to the same attractor cluster together, so that each attractor forms a distinct cluster (a cloud of points) in feature space. The final step of the method is therefore to cluster the features. The clustering algorithm chosen for this is Density-Based Spatial Clustering of Applications with Noise (DBSCAN) (27). It first classifies two points as neighbors if their distance is smaller than a radius ε. Then, it clusters together points with many neighbors (equal to or more than a parameter minPts), and leaves as outliers points with too few neighbors (less than minPts). The radius ε is a crucial parameter of the algorithm and often needs fine tuning for proper clustering. The methods of (25) and (26) use two different ways to identify a value for ε. The authors of (25) use a method that looks at the ordered distances of the k nearest neighbors of each point in the dataset and finds the first knee (point of high derivative) (27, 42). The authors of (26) iteratively search for the ε that maximizes a criterion of clustering quality. To calculate this criterion, they evaluate the silhouette of each cluster, which measures how similar each point is to the cluster it currently belongs to, compared to the other clusters, and ranges from −1 (worst matching) to +1 (ideal matching). This leads to one silhouette value per feature; the authors then take the minimum value as the representative of the clustering quality for each radius. The chosen radius is thus the value that leads to the highest minimum silhouette. In both methods, the clusters found by DBSCAN are then considered as attractors.

Fig. 4. Stability analysis of a 3-dimensional neural dynamics model (37), plotting as information the maximum of the first variable of the dynamical system. For the parameters used, the model undergoes saddle-node and Hopf bifurcations and features bistability of a limit cycle and a fixed point. Left: analysis using the BifurcationKit.jl (38) Julia package for CBA. The analysis must happen in two steps; first the branch of a fixed point is found and continued, and then, using a different algorithm, the branch of the limit cycle is continued from the Hopf bifurcation. We scatter-plot special points found and labelled by the process. The computation required ∼19 seconds on an average laptop. Right: analysis of the same model using our framework. The analysis happens in one fully automated step, after deciding the state space box and other meta-parameters. For the analysis, we purposefully demanded unnecessarily high accuracy, using a 9th-order ODE solver (39, 40) with tolerances of 10^-9, and requiring 1000 recurrences before claiming convergence in the recurrence algorithm (28). The process integrated in total 101,000 initial conditions, yet required ∼16 seconds on the same laptop. Attractor matching utilized a threshold: attractors whose distance in terms of the maximum value of their first variable (i.e., the same information plotted in the figure) exceeds 3.0 are not matched.
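Returning to the featurize-and-group pipeline described above: the following is a minimal, self-contained Julia sketch of the idea (our own illustrative code, not the MCBB/bSTAB implementations; it uses histogram grouping instead of DBSCAN to stay dependency-free). The system is the overdamped Duffing flow ẋ = x − x³, crudely integrated by Euler steps; it has two point attractors at x = ±1.

# featurize-and-group sketch on a 1D bistable system (illustrative names)
flow_step(x; dt = 0.01) = x + dt*(x - x^3)   # one Euler step of ẋ = x - x³

function featurize(x0; Ttr = 500, T = 500)
    x = x0
    for _ in 1:Ttr
        x = flow_step(x)   # discard the transient part
    end
    s = 0.0
    for _ in 1:T
        x = flow_step(x)
        s += x             # accumulate for the mean
    end
    return s / T           # feature: mean position of the trajectory
end

# grouping via a histogram in feature space: features are assigned to
# the nearest bin of width h, and bins play the role of attractor labels
group_label(f; h = 0.1) = round(Int, f / h)

N = 1000
features = [featurize(4rand() - 2) for _ in 1:N]  # N random initial conditions in [-2, 2]
labels = group_label.(features)
fractions = Dict(l => count(==(l), labels) / N for l in unique(labels))
@show fractions  # ≈ the basin fractions of the attractors at x = ±1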
B. Recurrences-based method for finding attractors. The inputs to this method are a dynamical system, a state space box that may contain the attractors (although initially it may be arbitrarily large), and a tessellation of the given box into cells. An initial condition of the system is evolved step-by-step with time step Δt. At each step, the location of the trajectory in state space is mapped to its cell, and that cell is labelled as visited. If the dynamical system has attractors, and they satisfy the Poincaré recurrence theorem (19, Chap. 9), the trajectory is guaranteed to revisit cells it has visited before. Once a pre-decided number n_f of recurrences has been accumulated consecutively (i.e., enough previously-visited cells are visited again), the method claims to have found an attractor. It then proceeds to locate the attractor accurately, by collecting a pre-decided number n_l of points on the attractor (Figure 1 of Ref. (28) and panel (3) of Fig. 1). A finite state machine formulation keeps track of coexisting attractors, so that each attractor is unique. It also keeps track of divergence to infinity by counting steps n_d outside the box, ensures algorithm termination by setting a maximum total number n_m of Δt iterations, and makes convergence faster by utilizing information already encoded in the grid: if the trajectory consecutively visits a relatively small number n_r of cells already labelled as an attractor, convergence is eagerly decided, i.e., converging to an already found attractor is much faster than finding that attractor for the first time. Hence, Δt, n_f, n_l, n_d, n_m, n_r are the meta-parameters of the algorithm, and they have sensible default values that work in most cases. More information on the method can be found in Ref. (28). Notice that the recurrence method is different at a fundamental level from GAIO (43) and other cell mapping techniques (44); we expand more on this in the Supplementary Information. Also note that the method is not perfect; it may identify two attractors when only one exists, due to e.g. choosing too low convergence criteria n_l, n_f, or due to a commensurate period of attractor and integrator time step. However, once again the importance of finding the "actual" attractors becomes apparent: further analysis, e.g. plotting the attractors, immediately highlights such a failure and how to deal with it, and we also provide several tips in the documentation of our method in the code implementation (45).
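The recurrence-counting core of the method is simple enough to sketch directly. The following is a minimal, self-contained Julia illustration (our own illustrative names, not the Attractors.jl implementation), using the Hénon map and a coarse tessellation:

# recurrence-counting sketch on the Hénon map (illustrative, not the package code)
henon(u) = (1.0 - 1.4*u[1]^2 + u[2], 0.3*u[1])

# map a state to its grid cell; `eps` plays the role of the tessellation size
cell(u; eps = 0.05) = (floor(Int, u[1]/eps), floor(Int, u[2]/eps))

function find_attractor(u0; n_f = 100, n_max = 100_000)
    visited = Set{Tuple{Int,Int}}()   # sparse storage of visited cells
    u, recurrences = u0, 0
    for _ in 1:n_max
        u = henon(u)
        c = cell(u)
        # consecutive visits to already-visited cells count as recurrences
        recurrences = c in visited ? recurrences + 1 : 0
        push!(visited, c)
        recurrences ≥ n_f && return u  # converged: return a point on the attractor
    end
    return nothing  # no convergence within n_max steps
end

@show find_attractor((0.0, 0.0))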
C. Improvements to the recurrences method. A large drawback of the recurrences method was that it scaled poorly with the dimension D of the dynamical system. If an ε-sized tessellation of the state space is chosen, then the memory allocated scales as (1/ε)^D. We now use sparse arrays to store accessed grid locations. This changes the memory scaling to (1/ε)^Δ, with Δ the capacity dimension (19) of the attractor, which is typically much smaller than D (and is only 1 for limit cycles).
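A toy illustration of this storage change (ours, not the package code): a dense array must reserve (1/ε)^D cells up front, whereas a dictionary keyed by cell indices only stores the cells a trajectory actually visits:

# dense vs sparse bookkeeping of visited cells for a D-dimensional grid
cellsize = 0.01; D = 3
dense_cells = round(Int, (1/cellsize)^D)   # 10^6 entries reserved even if unused

visited = Dict{NTuple{3,Int},Int}()        # sparse: labels only for visited cells
c = (12, 5, 99)
visited[c] = 1                             # label this cell as belonging to attractor 1
@show dense_cells length(visited)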
D. Improvements to the featurizing methods. First, we have changed the criterion used for finding the optimal radius ε in the clustering method. We have found the knee method consistently more unreliable than the iterative search. We have also found that using the mean, instead of the minimum, silhouette value as the measure of clustering quality leads to better clustering. For instance, this led to correct clustering in the Lorenz86 system, whereas the minimum value criterion did not. Furthermore, our method searches for the optimal radius ε with a bisection method, instead of the linear search used by the authors of (26), which significantly speeds up the code. Another simple modification we introduced is to rescale the features in each dimension (f1, f2, · · ·, fK) into the same interval, for instance [0, 1]. We noticed that the clustering method performs poorly if the features span different ranges of values, and this simple modification proved to be a very powerful solution. Third, we allow the integration of all initial conditions to be done in parallel, using several computer cores, which speeds up the solution. Lastly, grouping in our framework can also happen based on a histogram in feature space.
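The rescaling step is simple enough to sketch directly; this is a minimal illustration (our own helper function, not the package one):

# map each feature dimension into [0, 1] before clustering, so that no
# single feature dominates the distance metric
function rescale!(F::Matrix{Float64})      # features: one column per trajectory
    for i in axes(F, 1)                    # loop over feature dimensions
        lo, hi = extrema(@view F[i, :])
        hi > lo && (F[i, :] .= (F[i, :] .- lo) ./ (hi - lo))
    end
    return F
end

F = [1.0 2.0 3.0; 100.0 500.0 900.0]       # two features with very different ranges
@show rescale!(F)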
Table 2 (entries; see caption below):

Accuracy
- Recurrences method: Highly accurate: finds actual attractors using the unique property of their state space location.
- Featurizing method: Less accurate, as trajectories are transformed into features, and attractors correspond to groups of features. The correspondence is not guaranteed to be unique or reversible.

Info stored
- Recurrences method: Stores samples of points on the found attractors.
- Featurizing method: Stores a user-provided function of the group of features (by default: the centroid of the group).

Speed
- Recurrences method: Very fast for low dimensional systems and for systems whose attractors are fixed points or periodic orbits. Becomes slow for attractors with long recurrence times (high dimensional chaotic attractors or very fine state space tessellations).
- Featurizing method: Performance is independent of system attractors. It scales linearly with the amount of initial conditions, the transient integration time, and the total integration time. Parallelizable. In addition is the cost of the grouping process, which is huge for clustering but trivial for histograms or nearest-feature. See also the benchmark comparison in the Supplementary Information §3.

Memory
- Recurrences method: Memory allocation scales as (1/ε)^Δ, with ε the state space tessellation size and Δ the capacity dimension of the attractor, which is often much lower than the state space dimension (19).
- Featurizing method: Memory allocation of the trajectories scales linearly with integration time and sampling rate. Additionally, specifically for clustering used as the grouping mechanism, the total memory allocated is proportional to the square of [initial conditions × parameter values] which, if one attempts to obtain accurate results, is often beyond the available memory on a typical computer.

Necessary input (guesswork)
- Recurrences method: A state space box that may contain the attractors, and a state space tessellation that is fine enough to differentiate the location of attractors.
- Featurizing method: A state space box that may contain the attractors; a function mapping attractors to features, such that different attractors produce different features; how much transient time to discard from time evolution; how much time to evolve and record the trajectory for, after the transient.

Metaparameters
- Recurrences method: The parameters of the finite state machine of the recurrences algorithm discussed in Ref. (28), and the time stepping Δt. All are crucial, but all are conceptually straightforward.
- Featurizing method: The integration time and sampling rate, and all parameters of the grouping procedure. Integration parameters are straightforward, but optimal parameters related to the grouping are much harder to guess.

Troubleshooting
- Recurrences method: Easy to troubleshoot. At any point the actual trajectories and attractors are accessible, and why a failure occurs is typically easy to find out. Matching of attractors happens parameter-by-parameter; hence, individual parameter slices can be isolated and analyzed to identify where matching has failed and why (distances between attractors are also provided information).
- Featurizing method: Difficult to troubleshoot. It does not find the actual system attractors, so the user must always reason in terms of features. When grouping using clustering (DBSCAN), the grouping process essentially operates as a black box after the features have been computed, making it harder to comprehend failures. Matching of attractors happens at the same time as grouping, making it nearly impossible to understand why an expected matching failed during the continuation process.

Failures
- Recurrences method: The algorithm may fail if the state space tessellation is not fine enough and a grid cell contains points from different attractors. Chaotic saddles and other sticky sets generate very long transients (41) and the algorithm can interpret them as attractors. Additionally, the algorithm is sensitive to the time step Δt and the used integrator. For limit cycles it is often the case that a non-adaptive integrator needs to be used.
- Featurizing method: Sticky sets that are formally not attractors will be interpreted as attractors. Additionally, for ill-defined features, any group of trajectories could be interpreted as an attractor, which is not desirable in the context of this paper. Clustering via DBSCAN may fail unexpectedly, or finding the optimal radius for the clustering may yield incorrect results.
E. Code implementation. The code implementation of our framework is part of the DynamicalSystems.jl library (29) as the Attractors.jl package (45). The code is open-source code for the Julia language, has been developed following best practices in scientific code (46), is tested extensively, and is accompanied by high-quality documentation. An example code snippet is shown in Listing 1.
Besides the quality of the implementation, three more features of the code are noteworthy. First, it is part of DynamicalSystems.jl instead of an isolated piece of code. This integration makes the simplicity and high-levelness of Listing 1 possible, and makes the input for the code easy to set up. Moreover, the direct output of the code can be used with the rest of the library to further analyze attractors in terms of, e.g., Lyapunov exponents or fractal dimensions. Indeed, in the provided code example of Listing 1 we compute the Lyapunov spectra of all found attractors, across all parameter values, in only two additional lines of code. Second, utilizing the Julia language's multiple dispatch system (47), the code is extendable. It establishes one interface for how to map initial conditions to attractors, and one for how to group features, both of which can be extended, and yet readily be usable by the rest of the library such as the continuation methods; a sketch of this extension pattern follows below. Third, a lot of attention has been put into user experience, by establishing a short learning curve via a minimal user interface, by carefully considering how to provide the output in an intuitive format, as well as providing easy-to-use plotting functions that utilize the code output. More overview and information on the code or its design can be found in its online documentation or source code (45).
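To illustrate the extension pattern enabled by multiple dispatch, here is a small self-contained sketch (the type and function names are ours, not the package's actual interface): a user defines a new subtype and one method, and any generic driver written against the abstract type works with it.

# illustrative sketch of a dispatch-based, user-extendable grouping interface
abstract type GroupingConfig end

struct GroupByHistogram <: GroupingConfig
    binwidth::Float64
end

struct GroupByThreshold <: GroupingConfig
    threshold::Float64
end

group_feature(c::GroupByHistogram, f::Real) = round(Int, f / c.binwidth)
group_feature(c::GroupByThreshold, f::Real) = f < c.threshold ? 1 : 2

# generic driver: works for any GroupingConfig with a `group_feature` method
group_all(c::GroupingConfig, features) = map(f -> group_feature(c, f), features)

@show group_all(GroupByHistogram(0.5), [0.1, 0.9, 1.4])
@show group_all(GroupByThreshold(1.0), [0.1, 0.9, 1.4])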
F. Code for this article. The code we used to create the figures of this article is fully reproducible and available online (48).

Listing 1. Julia code snippet showcasing the usage of the DynamicalSystems.jl implementation of our framework. The code produces panel (a) of Fig. 2. The main output of the code are two vectors, containing the basin fractions and attractors at each parameter value, respectively. The fractions and attractors are formulated as dictionaries, mapping attractor labels (the integers) to basin fractions and sets of points on the attractor, respectively. At its end, the code snippet computes the Lyapunov spectra of all found attractors, by using the first point on each attractor as the initial condition for the computation of the Lyapunov spectrum.
using DynamicalSystems # our framework implementation
using OrdinaryDiffEq   # high-accuracy ODE solvers

# create Lorenz84 within DynamicalSystems.jl
function lorenz84_rule(u, p, t)
    F, G, a, b = p
    x, y, z = u
    dx = -y^2 - z^2 - a*x + a*F
    dy = x*y - y - b*x*z + G
    dz = b*x*y + x*z - z
    return SVector(dx, dy, dz)
end
u0 = ones(3)                    # init. state
p0 = [6.886, 1.347, 0.255, 4.0] # init. parameters
# ODE solver:
diffeq = (alg = Vern9(), reltol = 1e-9, abstol = 1e-9)
# Main object of the library: a `DynamicalSystem`
ds = CoupledODEs(lorenz84_rule, u0, p0; diffeq)

# Provide state space box tessellation to search in
xg = yg = zg = range(-3, 3; length = 600)
grid = (xg, yg, zg)
# initialize recurrences-based algorithm
# and choose its metaparameters
mapper = AttractorsViaRecurrences(ds, grid)
# find and continue attractors across a given
# parameter range for the `pidx`-th parameter
prange = range(1.34, 1.37; length = 101)
pidx = 2 # index of parameter
sampler = statespace_sampler(grid)[1]
rsc = RAFM(mapper)
# main output:
fractions_curves, attractors_info = continuation(rsc, prange, pidx, sampler)
# Estimate Lyapunov spectra for all attractors
# by looping over the parameter range
lyapunovs_curves = map(eachindex(prange)) do index
    set_parameter!(ds, pidx, prange[index])
    attractor_dict = attractors_info[index]
    exponents = Dict(
        k => lyapunovspectrum(ds, 10000; u0 = A[1]) for (k, A) in attractor_dict
    )
end
Fig. 3. Highlights of the matching or grouping components of the framework. a) Matching attractors of the Hénon map based on their period. In the chosen parameter range, an attractor is transformed from chaotic, to period 3, 7, and 14. The attractor stays in approximately the same state space location, so whether we consider the centroid distance or the Hausdorff distance, the attractor would be matched to itself at all parameter values due to the very small distance evaluation. Here, however, we use as distance f(A, B) = |log2(len(A)) − log2(len(B))|, with len measuring the amount of cells (of the state space tessellation) the attractor covers, and threshold t = 0.99. This effectively means that matched attractors must have periods with ratio less than 2. b) Grouping attractors of Fig. 2(e) so that attractors are grouped into those whose 3rd species has population less than 0.01, or more. c) A replication of the MCBB (25) results for a 2nd-order Kuramoto oscillator network representing a power grid, using the featurize-and-group continuation implementation from our framework. Features extracted from sampled trajectories are the means of the frequencies. d) Same system as (c), but using the recurrences continuation and matching attractors by their centroid distance (i.e., as in Fig. 2). The only extra step was to post-process the results so that all attractors with basin fractions less than 4% are aggregated (as was done in Ref. (25) and in panel (c)). e) Attractor basin fractions for a network of 1st-order Kuramoto oscillators; the attractors here are found and matched using the recurrences continuation, and then grouped via a histogram of their synchronization order parameter R (19, Chap. 9). Attractors whose order parameter R falls in the same histogram bin are aggregated.
Footnotes to Table 1:
Newton's method transforms the dynamical system into a discrete time system with different basins, making attractors with very small or zero basin sizes have much larger ones instead.
¶ In RAFM, attractors that are not being continued from a previously found one are found via random sampling of initial conditions in the given state space box. The probability to find an attractor is equal to 1 − (1 − f)^n, with f the basin fraction of the attractor and n the amount of sampled initial conditions.
‡ Changing a parameter often does not meaningfully increase the unstable eigenvalues of the Jacobian matrix, which would indicate loss of stability (19, Chap. 12). On the other hand, basin fractions typically decrease smoothly towards zero as an attractor loses stability (24), although this is not guaranteed to be the case (36), in which scenario neither method indicates loss of stability.
* Change of a parameter may affect the linear stability of a single fixed point, not all, providing flat lines in the bifurcation diagram for the unaffected fixed points. On the contrary, loss of global stability of any attractor affects (typically increases) the global stability of all other attractors.
† Advanced applications of traditional bifurcation analysis software require several manual interventions during the process, and the tuning of several configuration options, many of which do not have an immediately transparent role, requiring an expert user to make several decisions. The simplicity of our approach comes in part from the brute-force nature of mapping individual initial conditions to attractors to collect the fractions, the intuitive nature of how attractors are matched (which is also user configurable), and the lack of necessity of interventions: after the configuration is decided, the framework runs automatically.
Table 1. A comparison between CBA and RAFM as tools for analyzing the stability of a dynamical system versus a parameter. The table is colored blue or green for when to prefer CBA or RAFM, respectively.
Table 2. Comparison of the two main methods of finding and continuing attractors.
ACKNOWLEDGMENTS. The authors would like to thank Ulrike Feudel, Peter Ashwin, Ulrich Parlitz, and Harry Dankowicz for helpful discussions. G.D. was supported by the Royal Society International Newton Fellowship. K.L.R. was supported by the German Academic Exchange Service (DAAD). A.W. was supported by the Spanish State Research Agency (AEI) and the European Regional Development Fund.
References

1. U Feudel, AN Pisarchik, K Showalter, Multistability and tipping: From mathematics and physics to climate and brain - Minireview and preface to the focus issue. Chaos 28, 33501 (2018).
2. F Hellmann, et al., Network-induced multistability through lossy coupling and exotic solitary states. Nat. Commun. 11, 592 (2020).
3. H Kim, SH Lee, J Davidsen, SW Son, Multistability and variations in basin of attraction in power-grid systems. New J. Phys. 20, 113006 (2018).
4. L Halekotte, A Vanselow, U Feudel, Transient chaos enforces uncertainty in the British power grid. J. Physics: Complex. 2, 035015 (2021).
5. J Marotzke, P Welander, J Willebrand, Instability and multiple steady states in a meridional-plane model of the thermohaline circulation. Tellus A 40, 162-172 (2016).
6. TM Lenton, Environmental Tipping Points. Annu. Rev. Environ. Resour. 38, 1-29 (2013).
7. M Hirota, M Holmgren, EHV Nes, M Scheffer, Global Resilience of Tropical Forest and Savanna to Critical Transitions. Science 334, 232-235 (2011).
8. V Dakos, et al., Ecosystem tipping points in an evolving world. Nat. Ecol. & Evol. 3, 355-362 (2019).
9. JL Schwartz, N Grimault, JM Hupé, BCJ Moore, D Pressnitzer, Multistability in perception: binding sensory modalities, an overview. Philos. Transactions Royal Soc. B: Biol. Sci. 367, 896-905 (2012).
10. JAS Kelso, Multistability and metastability: Understanding dynamic coordination in the brain. Philos. Transactions Royal Soc. B: Biol. Sci. 367, 906-918 (2012).
11. A Kleinschmidt, P Sterzer, G Rees, Variability of perceptual multistability: from brain state to individual trait. Philos. Transactions Royal Soc. B: Biol. Sci. 367, 988-1000 (2012).
12. R Zhu, JMd Rio-Salgado, J Garcia-Ojalvo, MB Elowitz, Synthetic multistability in mammalian cells. Science 375, eabg9765 (2022).
13. T Khazaei, et al., Metabolic multistability and hysteresis in a model aerobe-anaerobe microbiome community. Sci. Adv. 6, eaba0353 (2020).
14. C Geiß, E Salas, J Guevara-Coto, A Régnier-Vigouroux, RA Mora-Rodríguez, Multistability in Macrophage Activation Pathways and Metabolic Implications. Cells 11, 404 (2022).
15. K Padiyar, Power System Dynamics: Stability and Control. (Wiley), (1999).
16. J Lohmann, PD Ditlevsen, Risk of tipping the overturning circulation due to increasing rates of ice melt. Proc. Natl. Acad. Sci. U. S. A. 118, e2017989118 (2021).
17. P Ashwin, S Wieczorek, R Vitolo, P Cox, Tipping points in open systems: Bifurcation, noise-induced and rate-dependent examples in the climate system. Philos. Transactions Royal Soc. A: Math. Phys. Eng. Sci. 370, 1166-1184 (2012).
18. L Halekotte, U Feudel, Minimal fatal shocks in multistable complex networks. Sci. Reports 10, 11783 (2020).
19. G Datseris, U Parlitz, Nonlinear dynamics, Undergraduate Lecture Notes in Physics. (Springer Nature, Cham, Switzerland), 1st edition, (2022).
20. EJ Doedel, Auto: A program for the automatic bifurcation analysis of autonomous systems. Congr. Numer. 30, 25-93 (1981).
21. A Dhooge, W Govaerts, YA Kuznetsov, Matcont: a matlab package for numerical bifurcation analysis of odes. ACM Transactions on Math. Softw. (TOMS) 29, 141-164 (2003).
22. H Dankowicz, Recipes for continuation, Computational science & engineering. (Society for Industrial and Applied Mathematics, Philadelphia, Pa.), (2013).
23. R Veltz, BifurcationKit.jl (2020).
24. PJ Menck, J Heitzig, N Marwan, J Kurths, How basin stability complements the linear-stability paradigm. Nat. Phys. 9, 89-92 (2013).
25. M Gelbrecht, J Kurths, F Hellmann, Monte carlo basin bifurcation analysis. New J. Phys. 22, 033032 (2020).
26. M Stender, N Hoffmann, bSTAB: an open-source software for computing the basin stability of multi-stable dynamical systems. Nonlinear Dyn. (2021).
27. M Ester, HP Kriegel, J Sander, X Xiaowei, A density-based algorithm for discovering clusters in large spatial databases with noise. Proc. Second Int. Conf. on Knowl. Discov. Data Min. (1996).
28. G Datseris, A Wagemakers, Effortless estimation of basins of attraction. Chaos 32 (2022).
29. G Datseris, Dynamicalsystems.jl: A julia software library for chaos and nonlinear dynamics. J. Open Source Softw. 3, 598 (2018).
30. EN Lorenz, Irregularity: A fundamental property of the atmosphere. Tellus A: Dyn. Meteorol. Oceanogr. 36, 98-110 (1984).
31. M Gelbrecht, V Lucarini, N Boers, J Kurths, Analysis of a bistable climate toy model with physics-based machine learning methods. Eur. Phys. Journal: Special Top. 123 (2021).
32. S Huang, Multistability and multicellularity: cell fates as high-dimensional attractors of gene regulatory networks, in Computational Systems Biology. (Elsevier), pp. 293-326 (2006).
33. J Moehlis, H Faisst, B Eckhardt, A low-dimensional model for turbulent shear flows. New J. Phys. 6, 56 (2004).
34. J Huisman, FJ Weissing, Fundamental Unpredictability in Multispecies Competition. The Am. Nat. 157, 488-494 (2001).
35. F Hausdorff, Grundzuge Der Mengenlehre, AMS Chelsea Publishing. (American Mathematical Society, Providence, RI), (1949).
36. P Schultz, PJ Menck, J Heitzig, J Kurths, Potentials and limits to basin stability estimation. New J. Phys. 19 (2017).
37. JM Cortes, et al., Short-term synaptic plasticity in the deterministic tsodyks-markram model leads to unpredictable network dynamics. Proc. Natl. Acad. Sci. 110, 16610-16615 (2013).
38. R Veltz, BifurcationKit.jl (2020).
39. JH Verner, Numerically optimal runge-kutta pairs with interpolants. Numer. Algorithms 53, 383-396 (2010).
40. C Rackauckas, Q Nie, Differentialequations.jl - a performant and feature-rich ecosystem for solving differential equations in julia. The J. Open Res. Softw. 5 (2017).
41. YC Lai, T Tél, Transient Chaos: Complex Dynamics on Finite-Time Scales, Applied Mathematical Sciences. (Springer New York, NY), (2009).
42. M Hahsler, M Piekenbrock, D Doran, dbscan: Fast density-based clustering with r. J. Stat. Softw. 91, 1-30 (2019).
43. R Gerlach, A Ziessler, B Eckhardt, M Dellnitz, A set-oriented path following method for the approximation of parameter dependent attractors. SIAM J. on Appl. Dyn. Syst. 19, 705-723 (2020).
44. JQ Sun, FR Xiong, O Schütze, C Hernández, Cell mapping methods. (Springer), (2018).
45. G Datseris, KL Rossi, A Wagemakers, Juliadynamics/attractors.jl: v1.2.5 (2023).
46. G Datseris, Good scientific code workshop (2023).
47. J Bezanson, A Edelman, S Karpinski, VB Shah, Julia: A fresh approach to numerical computing. SIAM Review 59, 65-98 (2017).
48. G Datseris, Datseris/frameworkglobalstability: v0.1 (2023).
| []
|
[
"Growth and decay of isolated turbulent band in Plane-Couette flow",
"Growth and decay of isolated turbulent band in Plane-Couette flow"
]
| [
"Jianzhou Lu \nDepartment of Mechanics and Engineering Science\nCollege of Engineering\nSKLTCS\nPeking University\n100871BeijingChina\n",
"Jianjun Tao \nDepartment of Mechanics and Engineering Science\nCollege of Engineering\nSKLTCS\nPeking University\n100871BeijingChina\n",
"Weitao Zhou \nDepartment of Mechanics and Engineering Science\nCollege of Engineering\nSKLTCS\nPeking University\n100871BeijingChina\n"
]
| [
"Department of Mechanics and Engineering Science\nCollege of Engineering\nSKLTCS\nPeking University\n100871BeijingChina",
"Department of Mechanics and Engineering Science\nCollege of Engineering\nSKLTCS\nPeking University\n100871BeijingChina",
"Department of Mechanics and Engineering Science\nCollege of Engineering\nSKLTCS\nPeking University\n100871BeijingChina"
]
| []
| The transition of plane Couette flow is numerically investigated in a large computational domain. It is found that the averaged period of the transient growth reduces slowly with the decrease of the Reynolds number (Re) except when Re is close to a threshold value of 286. During the decay process, the band contracts from its both ends with a statistical constant velocity, but keeps its center, width, and tilt angle statistically unchanged. For self-sustained turbulent band, three growth styles are observed. At moderate Re, the isolated band extends obliquely as what happens in plane Poiseuille flow. With the increase of Re, transverse split occurs and the band breaks into several attached segments, forming a shape of 'F'. Further increasing Re leads to a longitudinal split, i.e. the isolated band becomes wider at first and then splits into two parallel bands. | null | [
"https://export.arxiv.org/pdf/2304.12409v1.pdf"
]
| 258,309,466 | 2304.12409 | 1f85e8519988c6242c82908e43bd231cd3b2398f |
Growth and decay of isolated turbulent band in Plane-Couette flow
Jianzhou Lu
Department of Mechanics and Engineering Science
College of Engineering
SKLTCS
Peking University
100871BeijingChina
Jianjun Tao
Department of Mechanics and Engineering Science
College of Engineering
SKLTCS
Peking University
100871BeijingChina
Weitao Zhou
Department of Mechanics and Engineering Science
College of Engineering
SKLTCS
Peking University
100871BeijingChina
Growth and decay of isolated turbulent band in Plane-Couette flow
(presented at 25th ICTAM 2020+1. Abstract book, P. 2657-2658.)
The transition of plane Couette flow is numerically investigated in a large computational domain. It is found that the averaged period of the transient growth reduces slowly with the decrease of the Reynolds number (Re) except when Re is close to a threshold value of 286. During the decay process, the band contracts from its both ends with a statistical constant velocity, but keeps its center, width, and tilt angle statistically unchanged. For self-sustained turbulent band, three growth styles are observed. At moderate Re, the isolated band extends obliquely as what happens in plane Poiseuille flow. With the increase of Re, transverse split occurs and the band breaks into several attached segments, forming a shape of 'F'. Further increasing Re leads to a longitudinal split, i.e. the isolated band becomes wider at first and then splits into two parallel bands.
Introduction
It is known that plane Couette flow (PCF) is always linearly stable, but it may become turbulent at moderate Reynolds numbers via a subcritical transition. Leutheusser and Chu [1] studied the flow between a moving water surface and an upper flat stationary plate, and determined the transitional Reynolds number as 280. It should be noted that the spanwise aspect ratio of the experimental channel was small and the water flow was turbulent, suggesting that the water surface was rough. Since then, substantial progress has been made, and the localized turbulent band or stripe is found to be a key structure for the transition in channel flows [2]. The band growth and decay processes have been simulated in large domains with under-resolved simulations [3,4], and it was mentioned that the low resolution lowered the threshold for sustained turbulence from 325 to 210 [3] and impeded reliable predictions for fully resolved cases. Therefore, the spatio-temporal evolution of isolated turbulent bands still requires fully-resolved numerical investigations.
Methods and results
We conduct numerical simulations of the PCF in a large domain of size 800h × 2h × 712h, and a spectral code [5] is used to solve the incompressible Navier-Stokes equations, where the velocity field is expanded in a basis of Fourier modes (in the streamwise x- and spanwise z-directions) and Chebyshev polynomials (in the wall-normal direction y). The numerical resolution is 2048 spectral modes in x, 33 in y, and 2048 in z. The boundary conditions are periodic in the x- and z-directions, and there is no slip at the walls (y = ±h). Half of the velocity difference between the boundaries, U, and the half channel gap, h, are chosen as the characteristic velocity and length scales, respectively. In the following simulations, an isolated turbulent band, i.e., a unique straight band whose length is several times smaller than the domain size, is used as the initial perturbation.
As shown in Fig. 1, there are three different growth styles for localized turbulence. For moderate Re, e.g. Re = 326, the isolated band extends obliquely with time, as happens in plane Poiseuille flow but with a much lower extension velocity. When Re increases to 330, transverse split occurs and the band breaks into attached segments, forming a shape of 'F'. It seems that the transverse split may lead to the lateral branching observed in previous under-resolved simulations [4]. A longitudinal split happens when Re is further increased, e.g. Re = 335 as shown in Fig. 1: the isolated band becomes wider at first and then splits into two parallel bands. In fact, each later growth style includes the former one(s), and hence the turbulence spreads more and more efficiently with the increase of Re.
Since the disturbance kinetic energy of the main body of the isolated band is statistically uniform and the band is straight with a finite width, its shape may be simplified as a tilted rectangle. The center, width, length, and tilt angle of the rectangle can be determined based on the disturbance kinetic energy of the band in the midplane, with the method proposed for plane Poiseuille flow [6]. According to the simulations for Re = 314, 317, and 318, the position of the band center varies stochastically but does not drift much during a period of more than 3000 time units, reflecting that the convective velocity of such a localized structure is nearly zero. After an initial period of adjustment, the band contracts longitudinally, i.e., its length decreases generally while its width and tilt angle remain statistically constant. The temporal variation of the band length may be fitted linearly, suggesting a constant contraction velocity. A similar contraction phenomenon was observed before in an under-resolved simulation, where the position of the band center and the shape parameters (e.g., the tilt angle) were not analyzed quantitatively [3]. For the present decaying isolated band, the tilt angle is (27 ± 3)° when Re is between 314 and 318.
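One plausible way to compute such shape parameters is sketched below in Julia, under our own assumptions (energy-weighted first and second moments of the midplane field; this is an illustration, not necessarily the method of [6]). The field Ek is assumed to be sampled on the grid x × z, with size (length(x), length(z)):

# band center and tilt angle from energy-weighted moments (illustrative)
function band_geometry(Ek::Matrix{Float64}, x::AbstractVector, z::AbstractVector)
    W = sum(Ek)
    xc = sum(Ek[i, j]*x[i] for i in eachindex(x), j in eachindex(z)) / W
    zc = sum(Ek[i, j]*z[j] for i in eachindex(x), j in eachindex(z)) / W
    # central second moments of the energy distribution
    Ixx = sum(Ek[i, j]*(x[i]-xc)^2 for i in eachindex(x), j in eachindex(z)) / W
    Izz = sum(Ek[i, j]*(z[j]-zc)^2 for i in eachindex(x), j in eachindex(z)) / W
    Ixz = sum(Ek[i, j]*(x[i]-xc)*(z[j]-zc) for i in eachindex(x), j in eachindex(z)) / W
    tilt = 0.5*atan(2Ixz, Ixx - Izz)   # principal-axis angle of the energy patch
    return (; xc, zc, tilt)
end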
In order to study the statistical properties of the decay process (Fig. 2a), ten samples are calculated for each Reynolds number, and the initial flow fields are composed of the same band and different random disturbances over the whole field. We define a parameter, the transient-growth time T_grow, which is the ensemble average of the mean periods obtained from different samples when the growth rate of E_k is positive, and plot it in Fig. 2(b). It is shown that T_grow decreases monotonically with decreasing Re, and there is a drastic decline of T_grow as Re reduces to 286, indicating a threshold for the transient growth of the turbulent band. When Re < 286, the probability of transient growth is very small, implying that the turbulent band can hardly be formed from the transient growth of any initial disturbances.
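A sketch of how such a transient-growth period could be extracted from a sampled E_k(t) series follows (our own illustrative post-processing, not the authors' code): growth intervals are the maximal runs where successive E_k differences are positive, and their mean duration approximates the per-sample mean period.

# mean duration of intervals with positive growth rate of Ek (illustrative)
function mean_growth_period(t::AbstractVector, Ek::AbstractVector)
    growing = diff(Ek) .> 0                  # positive growth rate between samples?
    periods = Float64[]; start = nothing
    for i in eachindex(growing)
        if growing[i] && start === nothing
            start = t[i]                     # a growth interval begins
        elseif !growing[i] && start !== nothing
            push!(periods, t[i] - start)     # a growth interval ends
            start = nothing
        end
    end
    # an interval still open at the end of the series is discarded
    return isempty(periods) ? 0.0 : sum(periods)/length(periods)
end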
Conclusions
Direct numerical simulations are carried out to study the subcritical transition of plane Couette flow in a large computational domain. It is found that the isolated turbulent band decays in a style of longitudinal contraction at low Reynolds numbers, and the lower-bound Reynolds number of the transient-growth regime is determined as 286. In addition, three different growth styles of the isolated turbulent band are illustrated, i.e., the oblique extension, the transverse split, and the longitudinal split. It is believed that their joint action leads to the formation of the labyrinthine patterns observed in simulations and experiments.
Figure 1: Iso-contours of the spanwise velocity in the mid-plane. The initial fields are the same for all three cases.
Figure 2: (a) Time series of the volume-averaged disturbance kinetic energy E_k at different Reynolds numbers obtained with the same initial turbulent band; (b) transient-growth time T_grow as a function of Re.
References

[1] H. J. Leutheusser, V. H. Chu, Experiments on plane Couette flow. Journal of the Hydraulics Division, ASCE, 97, 1269-1284 (1971).
[2] L. S. Tuckerman, M. Chantry, D. Barkley, Patterns in wall-bounded shear flows. Annual Review of Fluid Mechanics, 52, 343-367 (2020).
[3] P. Manneville, On the decay of turbulence in plane Couette flow. Fluid Dyn. Res., 43, 065501 (2011).
[4] P. Manneville, On the growth of laminar-turbulent patterns in plane Couette flow. Fluid Dyn. Res., 44, 031412 (2012).
[5] M. Chevalier, P. Schlatter, A. Lundbladh, D. S. Henningson, SIMSON: a pseudo-spectral solver for incompressible boundary layer flows. Technical Report TRITA-MEK 2007:07 (2007).
[6] J. J. Tao, B. Eckhardt, X. M. Xiong, Extended localized structures and the onset of turbulence in channel flow. Phys. Rev. Fluids, 3, 011902(R) (2018).
| []
|
[
"Homotopy Classification of Super Vector Bundles and Universality",
"Homotopy Classification of Super Vector Bundles and Universality"
]
| [
"Mohammad Javad Afshari \nDepartment of Mathematics\nInstitute for Advanced Studies in Basic Sciences (IASBS)\n45137-66731ZanjanIran\n",
"Saad Varsaie \nDepartment of Mathematics\nInstitute for Advanced Studies in Basic Sciences (IASBS)\n45137-66731ZanjanIran\n"
]
| [
"Department of Mathematics\nInstitute for Advanced Studies in Basic Sciences (IASBS)\n45137-66731ZanjanIran",
"Department of Mathematics\nInstitute for Advanced Studies in Basic Sciences (IASBS)\n45137-66731ZanjanIran"
]
| []
| This study first provides a brief overview of the structure of typical Grassmann manifolds.Then a new type of supergrassmannians is construced using an odd involution in a super ringed space and by gluing superdomains together. Next, constructing a Gauss morphism of a super vector bundle, some properties of this morphism is discussed. By this, we generalize one of the main theorems of homotopy classification for vector bundles in supergeometry. Afterwards, a similar structure is introduced in the state of infinite-dimensional. Here our tools mainly include multilinear algebra of Grassmann algebras, the direct limit of the base spaces and the inverse limit of the structure sheaf of ringed spaces. We show that the resulting super vector bundle is a universal member of its category. | null | [
"https://export.arxiv.org/pdf/2304.12808v1.pdf"
]
| 258,309,469 | 2304.12808 | d6d72cb8acf673d38ae215e5164ac72fda6035b1 |
Homotopy Classification of Super Vector Bundles and Universality
25 Apr 2023
Mohammad Javad Afshari
Department of Mathematics
Institute for Advanced Studies in Basic Sciences (IASBS)
45137-66731ZanjanIran
Saad Varsaie
Department of Mathematics
Institute for Advanced Studies in Basic Sciences (IASBS)
45137-66731ZanjanIran
Homotopy Classification of Super Vector Bundles and Universality
25 Apr 2023Super vector bundleGauss morphismν-GrassmannianPullback AMS 2020 subject classifications: Primary58A5055R15; Secondary54B4055P10
This study first provides a brief overview of the structure of typical Grassmann manifolds. Then a new type of supergrassmannians is constructed using an odd involution in a super ringed space and by gluing superdomains together. Next, constructing a Gauss morphism of a super vector bundle, some properties of this morphism are discussed. By this, we generalize one of the main theorems of homotopy classification for vector bundles in supergeometry. Afterwards, a similar structure is introduced in the infinite-dimensional setting. Here our tools mainly include the multilinear algebra of Grassmann algebras, the direct limit of the base spaces, and the inverse limit of the structure sheaves of ringed spaces. We show that the resulting super vector bundle is a universal member of its category.
Introduction
This paper aims at extending a homotopy classification for super vector bundles. In the category of vector bundles, it is shown that the canonical vector bundles γ^n_k on the Grassmannians Gr(n, k) are universal. Equivalently, associated to each vector bundle E on M there exists, up to homotopy, a unique map f : M → Gr(n, k), for sufficiently large n, such that E is isomorphic to the induced bundle of γ^n_k under f. Altogether, a universal vector bundle, γ_k, is a canonical vector bundle on an infinite-dimensional Grassmannian, Gr_k. It is shown that any vector bundle of rank k on a compact manifold, B, is isomorphic to the pullback of the canonical vector bundle γ_k along a suitable mapping B → Gr_k, which is
proven to be unique up to homotopic approximation. As a result, there is an isomorphism,
V ect k (B) ∼ = [B, Gr k ],
between the set of isomorphism classes of rank k with base B and the set of homotopic classes of mappings B → Gr k . In this way, the category of vector bundles can be classified [6]. For this reason, the infinite-dimensional Grassmann manifold is called the classification space, and the canonical vector bundle on it is called the universal vector bundle. The class of these maps corresponds to the invariants called cohomology classes on B, known as characteristic classes. All characteristic classes are derived from these maps and universal Chern classes which are characteristic classes of Grassmann spaces Gr k [3]. In fact, Chern classes of a vector bundle may be described as the pullback of Chern classes of the universal bundle. In general, this is true for vector bundles with paracompact base spaces and so for any vector bundle whose base space is a manifold (because the compact local and second countable spaces are paracompact). In [10], some cohomology elements called ν-classes, as a supergeneralization of universal Chern classes, are introduced for canonical super vector bundles over ν-Grassmannians.
Our result (corollary 3.1) in this work make it possible to extend the concept of Chern classes for all super vector bundles. In physics, Chern classes are related to special sort of quantum numbers called topological charges [11]. Therefore, the appropriate generalization of these classes in supergeometry can be a mathematical framework for describing similar charges in supersymmetry.
The supergrassmannians introduced in [8] and the Grassmannians, in some sense, are homotopy equivalent (cf. subsection 1.2). So, the cohomology group associated to supergrassmannian is equal to that of Grassmannian. In other words, the former group contains no information about the superstructure.
Hence, from classifying space viewpoint, supergrassmannians are not good generalization of Grassmannians. Therefore, new generalizations entitled ν-Grassmannians have been invented.
The study is organized in three sections. In the preliminaries section, following [1], we introduce ν-Grassmannians denoted by ν Gr(m|n), as a new supergeneralization of Grassmannians. In the second section, we show the existence of Γ, a canonical super vector bundle over ν Gr(m|n). After introducing Gauss morphisms for super vector bundles, some properties of this morphisms are studied. Then, we extend one of the main theorems on homotopy classification for vetor bundles to supergeometry. In the last section, we study infinite dimensional Grassmann manifolds as ringed spaces. For this end, we obtain the Grassmannian base space using the direct limit and its sheaf structure using the inverse limit. Then, the sheaf structure of infinite dimensional ν-Grassmannians (ringed spaces) is defined by the use of the direct limit [5]. After construcing the canonical super vector bundle over ν Gr ∞ k|l , we show that this super bundle is universal.
There are different approaches to generalize Chern classes in supergeometry, such as homotopy or analytic approach. In this paper our approach is homotopic. Although there are not many articles with homotopy approach, but one may refer to [13] as a good example for such papers. Nevertheless, much more efforts have been made for generalizing Chern classes in supergeometry by analytic approach.
One may refer to [2], [3], [4], [7], [9], [14]. But, in all these works, the classes obtained in this way are nothing but the Chern classes of the reduced vector bundle(s) and they do not have any information about the superstructure.
Preliminaries
In this section, first, we recall some basic definitions of supergeometry. Then, we introduce a supergeneralization of Grassmannian called ν-Grassmannian.
Supermanifolds
A super ringed space is a pair (X, O) where X is a topological space and O is a sheaf of commutative Z_2-graded rings with units on X. Let O(U) = O_ev(U) ⊕ O_odd(U) for any open subset U of X. An element a of O(U) is called a homogeneous element of parity p(a) = 0 if a ∈ O_ev(U), and a homogeneous element of parity p(a) = 1 if a ∈ O_odd(U). A morphism between two super ringed spaces (X, O_X) and (Y, O_Y) is a pair (ψ̃, ψ*) such that ψ̃ : X → Y is a continuous map and
ψ* : O_Y → ψ̃_*(O_X)
is a homomorphism between the sheaves of Z_2-graded rings.
Definition 1.1. A superdomain is a super ringed space R^{p|q} := (R^p, O), where O = C^∞_{R^p} ⊗_R ∧R^q and p, q ∈ N.
By C^∞_{R^p} we mean the sheaf of smooth functions on R^p. A super ringed space which is locally isomorphic to R^{p|q} is called a supermanifold of dimension p|q.
Note that a morphism (ψ̃, ψ*) between two supermanifolds (X, O_X) and (Y, O_Y) is just a morphism between the super ringed spaces such that, for any x ∈ X, the induced map on stalks ψ*_x : O_{Y, ψ̃(x)} → O_{X, x} is local, i.e., ψ*_x(m_{ψ̃(x)}) ⊆ m_x, where m_x is the unique maximal ideal in O_{X, x}.
ν-Grassmannian
Supergrassmannians are not a good generalization of Grassmannians. Indeed, these two, in some sense, are homotopy equivalent. This equivalence may be shown easily in the case of projective superspaces.
To this end, let P^{m|n} = (RP^m, O_{P^{m|n}}) be the real projective superspace. By a deformation retraction from P^{m|n} to P^m, we mean that there is a morphism H : P^{m|n} × R^{1|0} → P^{m|n} such that the diagrams expressing H ∘ j_0 = id_{P^{m|n}} and H ∘ j_1 = j ∘ r commute, where j_0 and j_1 are morphisms from P^{m|n} to P^{m|n} × R^{1|0} corresponding to the pairs (id, 0) and (id, 1) w.r.t. the bijection Hom(P^{m|n}, P^{m|n} × R^{1|0}) ≅ Hom(P^{m|n}, P^{m|n}) × Hom(P^{m|n}, R^{1|0}), which results from the categorical product in the category of supermanifolds. In addition, j : P^m → P^{m|n} is the embedding of the reduced manifold P^m into the supermanifold P^{m|n}. For more details on the categorical product and reduced manifolds see [12], pages 137 and 133, respectively. Moreover, r : P^{m|n} → P^m is a morphism such that r̃ = id_{P^m} and r* : O_{P^m} → O_{P^{m|n}} is the sheaf homomorphism induced by the local homomorphisms
r*_α : O_{P^m}(U_α) → O_{P^{m|n}}(U_α) with x^α_i ↦ x^α_i.
On local coordinates, H is given by x_k ↦ x_k and e_l ↦ (1 − t)e_l.
Then one may easily show that, for this morphism, all the above diagrams commute.
This proposition shows that, in the construction of projective superspaces, the odd variables do not play a principal role. Solving this problem is our motivation for defining ν-projective spaces or, more generally, ν-Grassmannians. Before that, it is necessary to recall some basic concepts.
A ν-domain of dimension p|q is a super ringed space
R^{p|q} := (R^p, O), O = C^∞_{R^p} ⊗_R ∧R^q, p, q ∈ N,
with an odd involution ν, i.e.,
ν : O → O, ν(O_ev) ⊆ O_odd, ν(O_odd) ⊆ O_ev, ν² = id.
In addition, ν is a homomorphism of C^∞-modules.
Let k, l, m and n be non-negative integers with k < m and l < n. For convenience, from now on we write Gr(k, m) as Gr^m_k, and set p = k(m − k) + l(n − l) and q = k(n − l) + l(m − k). A real ν-Grassmannian, νGr_{k|l}(m|n), or shortly νGr = (Gr^m_k × Gr^n_l, G), is a real superspace obtained by gluing ν-domains (R^p, O) of dimension p|q.
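As a quick check of these dimension counts (our own worked example): for νGr_{2|2}(3|3), one gets p = 2(3 − 2) + 2(3 − 2) = 4 and q = 2(3 − 2) + 2(3 − 2) = 4, matching the four even coordinates x_1, …, x_4 and the four odd coordinates e_1, …, e_4 that appear in the example matrix below.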
Here, we need to set some notations that are useful later.
Let I be a k|l multi-index, i.e., an increasing finite sequence in {1, · · ·, m + n} with k + l elements. So one may set
I := {i_a}_{a=1}^{k+l}.
A standard (k|l) × (m|n) supermatrix, say T , may be decomposed into four blocks as follows:
[ A_{k×m} | B_{k×n} ]
[ C_{l×m} | D_{l×n} ].
The upper left and lower right blocks are filled by even elements. These two blocks together form the even part of T . The upper right and lower left blocks are filled by odd elements and form the odd part of T .
The columns with indices in I together form a minor denoted by M_I(T).
A pseudo-unit matrix id_I corresponding to a k|l multi-index I is a (k|l) × (k|l) matrix whose entries are all zero except on its main diagonal, where they are 1 or 1ν, with 1ν a formal expression used as an odd unit. For each open subset U of R^p and each z ∈ O(U), we also need the following rules:
z·(1ν) := ν(z), (1ν)(1ν) = 1.
So for each I, one has
id_I · id_I = id.
As a result, for each I and each (k|l) × (k|l) supermatrix T, we can see that
T = (T·id_I)·id_I.
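As a quick sanity check of these rules (our own worked example): for the (1|1) × (1|1) pseudo-unit matrix id_I = diag(1, 1ν), the rules give id_I · id_I = diag(1·1, (1ν)(1ν)) = diag(1, 1) = id, in agreement with the identity above.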
The following steps may be taken in order to construct the structure sheaf of νGr:
Step 1: For each k|l multi-index I, consider the ν-domain (V_I, G_I).
Step 2: To each k|l multi-index I, associate a (k|l) × (m|n) supermatrix A_I whose minor M_I(A_I) is the pseudo-unit matrix id_I and whose remaining entries are the coordinates of the ν-domain; each coordinate entry, say a, that lies in a block with the opposite parity is replaced by ν(a).
As an example, consider νGr_{2|2}(3|3) with I = {1, 2, 3, 6}. Then one has

A_I = [ 1   0   0  | νx_1  e_3  0 ]
      [ 0   1   0  | νx_2  e_4  0 ]
      [ 0   0   1ν | νe_1  x_3  0 ]
      [ 0   0   0  | νe_2  x_4  1 ].
The columns of A I with indices in I together form the following supermatrix:
M I (A I ) := id I =
1 0 0  ; 0
0 1 0  ; 0
-----------
0 0 1ν ; 0
0 0 0  ; 1 .
For each pair of multi-indices I and J, define V IJ to be the largest subset of V I on which M J (A I ).id J is invertible.
Step 3: On V IJ , the equality
( M J (A I ).id J ) −1 .A I = A J ,
defines a correspondence between the even and odd coordinates of V J and the rational expressions in G I that appear as corresponding entries of the matrices on the two sides of the equality. By ([12], Th 4.3.1), one has a unique homomorphism
ϕ * IJ : G J|V JI −→ G I|V IJ .
Step 4: The homomorphisms ϕ * IJ satisfy the gluing conditions, i.e., for each I, J and K, we have
1) ϕ * II = id. 2) ϕ * IJ • ϕ * JI = id. 3) ϕ * IK • ϕ * KJ • ϕ * JI = id.
In the first condition, ϕ * II is defined by the following equality:
( M I (A I ).id I ) −1 .A I = ( id I .id I ) −1 .A I . Since id I .id I = id, we have ϕ * II = id. For the last condition, note that ϕ * KJ • ϕ * JI is obtained from the equality
( M I ( ( M J (A K ).id J ) −1 .A K ).id I ) −1 . ( M J (A K ).id J ) −1 .A K = A I .
For the left-hand side of this equality, one has
( ( M J (A K ).id J ) −1 .M I (A K ).id I ) −1 . ( M J (A K ).id J ) −1 .A K
= ( M I (A K ).id I ) −1 . ( M J (A K ).id J ) . ( M J (A K ).id J ) −1 .A K
= ( M I (A K ).id I ) −1 .A K = A I .
Thus the third condition is established.
The second condition follows from the other two conditions:
ϕ * IJ • ϕ * JI = ϕ * II , ϕ * II = id.
Super vector bundles
Here we recall the definition of super vector bundles and their homomorphisms. Then we introduce a canonical super vector bundle over the ν-Grassmannian.
Definition 2.1. By a super vector bundle E of rank k|l over a supermanifold (M, O), we mean a sheaf of Z 2 -graded O-modules on M which locally is a free k|l module. In other words, there exists an open cover {U α } α of M such that
E(U α ) ≃ O(U α ) k ⊕ πO(U α ) l ,
or equivalently,
E(U α ) ≃ O(U α ) ⊗ R (R k ⊕ π(R l )).
For example, let O k|l M := (⊕ k i=1 O) ⊕ (⊕ l j=1 πO), where πO is an O-module which satisfies (πO) ev = O odd , (πO) odd = O ev . The right multiplication is the same as in O, and the left multiplication is as follows:
z(πw) := (−1) p(z) π(zw),
where πw is an element of πO. Then O k|l M is a super vector bundle over the supermanifold M .
Let E and E ′ be two super vector bundles over a supermanifold (M, O). By a homomorphism from E to E ′ , we mean an even sheaf homomorphism τ : E −→ E ′ . Each super vector bundle over M isomorphic to O k|l M is called a trivial super vector bundle of rank k|l.
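As a small illustration of the parity sign rule (our own example, not from the original text): if z ∈ O is odd, then p(z) = 1 and
z(πw) = (−1) 1 π(zw) = −π(zw),
whereas an even z acts without a sign change; this is consistent with the parity swap (πO) ev = O odd and (πO) odd = O ev .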
Canonical super vector bundle over ν-Grassmannian
Let I be a k|l multi-index and let (V I , G I ) be a ν-domain. Consider the trivial super vector bundle
Γ I := G I ⊗ R (R k ⊕ π(R l )) = G I ⊗ R R k|l .
By gluing these super vector bundles through suitable homomorphisms, one may construct a super vector bundle Γ over ν-Grassmannian ν Gr. For this, consider a basis {e 1 , · · · , e k , f 1 , · · · , f l } for R k|l and set
m := ( M J (A I ).id J ) −1 ,
where M J (A I ).id J is introduced in subsection 1.2. The gluing morphisms are defined as follows:
ψ * IJ : Γ J|V JI −→ Γ I|V IJ , a ⊗ e i −→ ϕ * IJ|V JI (a) t≤k m it ⊗ e t + t>k m it ⊗ f t ,
where the elements m it are the entries of the i-th column of the supermatrix m. The morphisms ψ * IJ satisfy the gluing conditions, so the Γ I 's may be glued together to form a super vector bundle denoted by Γ.
Gauss morphisms
In common geometry, a Gauss map is defined as a map from the total space of a vector bundle, say ξ, to a Euclidean space such that its restriction to any fiber is a monomorphism. Equivalently, one may consider a 1−1 strong bundle map from ξ to a trivial vector bundle. The Gauss map induces a homomorphism between the vector bundle and the canonical vector bundle on a Grassmannian Gr n k with a sufficiently large value of n. A simple method for constructing such a map is the use of a coordinate representation for ξ. In this section, for constructing a Gauss morphism of a super vector bundle, one may use the same method.
Definition 2.2. A super vector bundle E over a supermanifold (M, O) is of finite type if there is a finite open cover {U α } t α=1 of M such that for each α, the restriction of E to U α is trivial, i.e., there exist isomorphisms ψ * α : E| Uα ≃ O| Uα ⊗ R R k|l .
Definition 2.3. A Gauss morphism of E is a homomorphism from E to a trivial super vector bundle over (M, O) whose kernel is trivial.
Let {e 1 , · · · , e k , f 1 , · · · , f l } be a basis for R k|l so that {e i } and {f j } are respectively bases for R k and π(R l ). Then B := {1 ⊗ e 1 , · · · , 1 ⊗ e k , 1 ⊗ f 1 , · · · , 1 ⊗ f l } is a generator for the O(U α )-module O(U α ) ⊗ R R k|l . Set
s α i := ψ * −1 α (1 ⊗ e i ), t α j := ψ * −1 α (1 ⊗ f j ). (1)
So (ψ * α ) −1 (B) is a generator for E(U α ) as an O(U α )-module. Choose a partition of unity {ρ α } t α=1 subordinate to the covering {U α } t α=1 . Considering s as a global section of E(M ), we can write
s = Σ_{α=1}^{t} ρ α r α (s),
where r α is the restriction morphism. In addition, one has
r α (s) = Σ_{i=1}^{k} λ α i s α i + Σ_{j=1}^{l} δ α j t α j , λ α i , δ α j ∈ O(U α ).
By the last two equalities, we have
s = Σ_{α=1}^{t} ρ α ( Σ_{i=1}^{k} λ α i s α i + Σ_{j=1}^{l} δ α j t α j ) = Σ_{α=1}^{t} Σ_{i=1}^{k} (√ρ α λ α i ).(√ρ α s α i ) + Σ_{α=1}^{t} Σ_{j=1}^{l} (√ρ α δ α j ).(√ρ α t α j ),
where √ ρ α s α i and √ ρ α t α j are even and odd sections of E(M ) respectively, and √ ρ α λ α i and
√ ρ α δ α j are sections of O(M ). So A := { √ ρ α s α i } α,i ∪ { √ ρ α t α j } α,j is a generating set of E(M )
. Now, for each α, consider the following monomorphism between O(U α )-modules:
i α : O(U α ) ⊗ R R k|l −→ O(U α ) ⊗ R R tk|tl , 1 ⊗ e i −→ 1 ⊗ e (α−1)k+i , 1 ⊗ f j −→ 1 ⊗ f (α−1)l+j .
Set
g(s) := Σ_{α=1}^{t} ρ α . i α • ψ * α • r α (s). (2)
It is easy to see that g is a Gauss morphism of E(M ).
Gauss supermatrix
Now, we are going to obtain the matrix of the Gauss supermap g.
Definition 2.4. By a Gauss supermatrix associated to the super vector bundle E, we mean a supermatrix, say G, which is obtained as follows with respect to the generating set A:
g(√ρ β s β j ) = Σ_{α=1}^{t} ρ α . i α • ψ * α • r α (√ρ β s β j ),
where g is a Gauss morphism of E.
By (1), we have
g(√ρ β s β j ) = Σ_{α=1}^{t} ρ α . i α • ψ * α • ψ * −1 β (√ρ β e j ) = Σ_{i=1}^{k} Σ_{α=1}^{t} ρ α √ρ β a αβ ij e (α−1)k+i + Σ_{i=1}^{l} Σ_{α=1}^{t} ρ α √ρ β a αβ (k+i)j f (α−1)l+i , (3)
where [a αβ ij ] is a matrix of ψ * α • ψ * −1 β relative to the generator B. The natural ordering on {i α e i } α,i and {i α f s } α,s induces an ordering on their coefficients in (3). Let G be a tk|tl × tk|tl standard supermatrix.
Fill the even and odd top blocks of G by these coefficients according to their parity from left to right
along the (β − 1)k + j -th row, 1 ≤ j ≤ k, 1 ≤ β ≤ t.
Similarly, by coefficients in the decomposition of g √ ρ β t β r , one may fill the odd and even down blocks of G along the (β − 1)k + r -th row, 1 ≤ r ≤ l, 1 ≤ β ≤ t.
Example 2.1. For k|l = 2|1 and a suitable covering with two elements, i.e., t = 2, we have
g(√ρ 2 s 2 2 ) = ρ 1 √ρ 2 a 12 12 e 1 + ρ 1 √ρ 2 a 12 22 e 2 + ρ 2 √ρ 2 a 22 12 e 3 + ρ 2 √ρ 2 a 22 22 e 4 + ρ 1 √ρ 2 a 12 32 f 1 + ρ 2 √ρ 2 a 22 32 f 2 .
Then the 4-th row of the associated supermatrix G is as below:
. . . ; ρ 1 √ρ 2 a 12 12   ρ 1 √ρ 2 a 12 22   ρ 2 √ρ 2 a 22 12   ρ 2 √ρ 2 a 22 22 ; ρ 1 √ρ 2 a 12 32   ρ 2 √ρ 2 a 22 32
------------------------------------------------------------------
. . . ; .
Theorem 2.1. Let E be a super vector bundle over a supermanifold (M, O) and let G be a Gauss supermatrix associated to E. Then the Gauss supermatrix induces a homomorphism from G, the structure sheaf of ν Gr, to O.
Proof. One may consider a covering {U α } α so that for each α, we have an isomorphism
O(U α ) ≃ C ∞ (R m ) ⊗ R ∧R n . (4)
Let h be an element of G(Gr m k × Gr n l ), where m = tk and n = tl, and let {ρ ′ I } be a partition of unity subordinate to the covering {V I } I⊆{1,··· ,t(k+l)} ; then one has
h = Σ_{I⊆{1,··· ,t(k+l)}, |I|=k+l} ρ ′ I . h| VI . (5)
Consider the rows of G with indices in I as a (k + l) × t(k + l) supermatrix and name it G(I). Then multiply it by id I from the left, i.e., id I .G(I), and delete the columns with indices in I. Note that all entries of this supermatrix are sections of O(M ). Let A I be the matrix introduced in subsection 1.2 and let x I ij be its entry out of M I (A I ) = id I . Then the correspondence x I ij −→ y I ij defines a homomorphism ϕ * I : G I (V I ) −→ O(M ). Now, for each global section h of G, one may define
h̄ = Σ_{I⊆{1,··· ,t(k+l)}, |I|=k+l} ϕ * I (ρ ′ I . h| VI ).
Let ν be an odd involution on C ∞ R (k 2 +l 2 )(t−1) ⊗ R ∧R 2kl(t−1) preserving C ∞ (R m ) ⊗ R ∧R n as a subalgebra. It thus induces an odd involution on O(U α ) through the isomorphism (4), denoted by the same notation ν. Then the correspondence
σ * : h −→ h̄ (6)
is a well-defined homomorphism from G(Gr tk k × Gr tl l ) to O(M ), and so it induces a smooth map σ̃ from the reduced manifold of M to Gr tk k × Gr tl l [10]. The homomorphism σ = (σ̃, σ * ) is called the morphism associated with the Gauss morphism g.
Pullback of the canonical super vector bundle
Definition 2.5. Let σ = (σ̃, σ * ) be a morphism from a supermanifold (M, O) to ν Gr m|n k|l . One can define a G-module structure on O(M ) as follows:
a * b := σ * (a).b, a ∈ G(Gr m k × Gr n l ), b ∈ O(M ).
The sheaf O ⊗ σ G Γ is called the structure sheaf of the pullback of γ m|n k|l along σ.
Theorem 2.2. Let σ be the associated morphism introduced above (6). Then the super vector bundle E and the pullback of Γ (the canonical super vector bundle over ν Gr) along σ are isomorphic.
Proof. We show that the sheaf O ⊗ σ G Γ is isomorphic to E. Let s ′ be a global section of Γ. One has
s ′ = I⊆{1,··· ,t(k+l)}, |I|=k+l ρ ′ I .r ′ I (s ′ ),
where {ρ ′ I } is the partition of unity of ν Gr subordinate to the open cover {V I }, and r ′ I is the restriction morphism giving sections over V I . On the other hand, one may write each section r ′ I (s ′ ) as below:
r ′ I (s ′ ) = k+l j=1 h I j s ′ I j ,
where s ′ I j are generators of Γ(V I ) and the coefficients h I j are the sections of G I . Therefore, we can write
s ′ = I⊆{1,··· ,t(k+l)}, |I|=k+l k+l j=1 (ρ ′ I h I j )s ′ I j .
Note that each row of G is in correspondence with a section in the generator set A. So there is a morphism from the pullback of Γ to E as
O ⊗ σ G Γ −→ E, u ⊗ s ′ −→ u.δ(s ′ ),(7)
where δ(s ′ ) is Σ_{I⊆{1,··· ,t(k+l)}, |I|=k+l} Σ_{j=1}^{k+l} σ * (ρ ′ I h I j ).s I j and s I j is the section corresponding to the j-th row of G(I) (cf. subsection 2.3). One may show that the morphism in (7) is an isomorphism. To this end, first note that every local isomorphism between two sheaves of O-modules of the same rank is a global isomorphism. Also, for the super vector bundle Γ of rank k|l over G, one can write a local isomorphism
O ⊗ σ G Γ ≃ −→ O ⊗ R R k|l ,
because for each sufficiently small open set V in Gr tk k × Gr tl l one can write
Γ(V ) ≃ G(V ) ⊗ R R k|l and then O σ −1 (V ) ⊗ σ G Γ(V ) ≃ O σ −1 (V ) ⊗ σ G G(V ) ⊗ R R k|l .
This shows that the morphism in (7) may be represented locally by the following isomorphism:
O ⊗ G ⊗ R k|l → O ⊗ R k|l .(8)
Thus (7) defines a global isomorphism.
Homotopy properties of Gauss supermaps and their associated morphisms
Let O m|n M and O m ′ |n ′ M be two trivial super vector bundles, where m ′ = 2m − k and n ′ = 2n − l; then one can write the inclusion homomorphisms
J e , J o , J : O(M ) ⊗ R R m|n −→ O(M ) ⊗ R R 2m|2n
by the conditions
J e : 1 ⊗ e i −→ 1 ⊗ e 2i , 1 ⊗ f j −→ 1 ⊗ f 2j ;
J o : 1 ⊗ e i −→ 1 ⊗ e 2i−1 , 1 ⊗ f j −→ 1 ⊗ f 2j−1 ;
J : 1 ⊗ e i −→ 1 ⊗ e i , 1 ⊗ f j −→ 1 ⊗ f j .
Now, let (V I , G I ) be the ν-domains introduced in subsection 1.2, and let (W J , G J ) be ν-domains of dimension 2p|2q. To each k|l multi-index I = {i 1 , · · · , i k+l } ⊂ {1, ..., m + n}, one can associate the following multi-indices:
I e := {2i 1 , · · · , 2i k+l }, I o := {2i 1 − 1, · · · , 2i k+l − 1}, I := {i 1 , · · · , i b , i b+1 + m − k, · · · , i k+l + m − k},
where i a ∈ I, 1 ≤ a ≤ k + l, and i b is the element of I for which i b ≤ m ≤ i b+1 .
So the maps J e , J o and J induce homomorphisms J̄ e , J̄ o , J̄ : ν Gr(m|n) −→ ν Gr(m ′ |n ′ ). In fact, (J̄ e ) * | W I e is obtained by
G I e (W I e ) −→ G I (V I ),
y i(2j−1) −→ x ij , i = 1, · · · , k + l, j = 1, · · · , m + n − k − l, other generators −→ 0.
Theorem 2.3. Let f, f 1 : (M, O) −→ ν Gr(m|n) be induced by the Gauss supermaps g and g 1 . Then J̄f and J̄f 1 , induced by Jg and Jg 1 , are homotopic.
Proof. Consider the homomorphisms J e g and J o g 1 , with the induced homomorphisms J̄ e f and J̄ o f 1 . One can define a family of homomorphisms
F t : E(M ) −→ O ⊗ R R 2m|2n , ϕ −→ (1 − t).(J e g)(ϕ) + t.(J o g 1 )(ϕ),
where F 0 = J e g and F 1 = J o g 1 . By subsection 2.3, a family of morphisms F̄ t from (M, O) to ν Gr(m ′ |n ′ ) is induced. Obviously F̄ 0 = J̄ e f and F̄ 1 = J̄ o f 1 ; thus F̄ t is a homotopy from J̄f to J̄f 1 .
Universality
First, we introduce the infinite Grassmannian Gr ∞ k , which is the direct limit of the finite-dimensional Grassmannians Gr m k .
Grassmann Manifold Gr ∞ k
The Grassmann manifold Gr n k is a compact space containing all the k-dimensional vector subspaces of R n . Each of these vector subspaces can be represented by a linearly independent k-tuple basis v 1 , ..., v k , where each v i is an ordered n-tuple. With the help of the inclusion mapping ι : R n ֒→ R n+1 , these vectors can be embedded as (n + 1)-tuples ι(v i ) = (v i1 , ..., v in , 0) in R n+1 . Thus a mapping ι : Gr n k −→ Gr n+1 k is induced, which is represented by ι for the sake of simplicity. Under this mapping, Gr n k can be considered as a subspace of Gr n+1 k . By a standard coordinate map of a Grassmannian we mean the coordinate maps introduced in ([8], page 9). In this case, if (V ′ I , x 1 , ..., x p+k ), for p = k(n − k), is a standard coordinate map of Gr n+1 k corresponding to the multi-index I = {i 1 , ..., i k } ⊆ {1, ..., n}, then (V I , x 1 , ..., x p ), with ι −1 (V ′ I ) = V I , is a standard coordinate map of Gr n k . From the sheaf-theoretic point of view, there is a sheaf (U I , O I ) := (R p , C ∞ R p ) for each coordinate map (V I , x 1 , ..., x p ), and these can be glued together through homomorphisms φ IJ : (U IJ , O J |) −→ (U IJ , O I |) to construct Gr n k . In both points of view, by means of some ι, one may consider Gr n k as a subset of Gr n+1 k .
The union of all Gr n k for a constant k constitutes the infinite dimensional Grassmann manifold to which an inductive topology is described by means of a direct limit:
Gr ∞ k = k≤n Gr n k .
Thus, each subset of Gr ∞ k is said to be open if its intersection with any Gr n k is an open set. Equivalently, in order to determine the topology of this set, a sequence of open sets like
U i ⊆ Gr k+i k where (U i ) i∈N∪{0}
can be used such that for each i, we have
ι(U i ) ⊆ U i+1 .
In this case, on the union of a separated family of open sets U i s, consider the equivalence ∼ with the following criterion
a ∼ b ↔ ι(a) = b,
where a ∈ U i and b ∈ U i+1 . We define U ∞ (U i ) := (⊔ i U i )/∼ as an open set in Gr ∞ k and abbreviate it by U ∞ . These sets form a basis for the topology of Gr ∞ k . Thus, the following embedding can be defined for each i:
p i :Gr k+i k → Gr ∞ k , a i → [a i ].
For the sake of simplicity, we represent the structure sheaf of Gr k+i k by O i . Now, using the inverse limit, a smooth sheaf structure can be defined over Gr ∞ k , which is denoted by O ← . In fact, a section f on the open set U ∞ is defined by the following rule:
f ∈ O ← (U ∞ ) ⇐⇒ f = (f i ) i , f i ∈ O i (U i ), ι * i (f i+1 ) = f i ,
where the homomorphisms ι * i : O i+1 → O i are induced by the mappings ι i . Under this mapping, the structure sheaf O i enjoys an O i+1 -module structure.
Theorem 3.1. O ← is a sheaf.
Proof.
For each two open sets U ∞ and V ∞ , we have:
U ∞ ⊆ V ∞ ⇐⇒ U i ⊆ V i ∀i ∈ N ∪ {0}.
In this case, the restriction homomorphism can be defined as follows:
r (Ui)(Vi) (f ) := (r UiVi (f i )) i .
With this definition, it is easy to see that for all three open sets U ∞ , V ∞ , and W ∞ , if U ∞ ⊆ V ∞ ⊆ W ∞ , then the following relationships are established: to U αi such that r αi (f i ) = f αi . Therefore, f = (f i ) i is a section for which for each α we have
r U ∞ V ∞ • r V ∞ W ∞ = r U ∞ W ∞ , r U ∞ U ∞ = id O ← U ∞ .r α (f ) = f α ,
indicating that the global property for the pre-sheaf O ← is satisfied.
Considering the structure defined, naturally a homomorphism from a finite-dimensional Grassmannian to an infinite-dimensional Grassmannian is induced by the embedding p i :
p * i : O ← → O i , p * i ((f j )) = f i .
Under this mapping, the structure sheaf O i also has a O ← -module structure.
ν-Grassmannian ν Gr ∞ k|l
To construct the ν-Grassmannian ν Gr ∞ k|l := (Gr ∞ k × Gr ∞ l , G ← ), similar to the previous subsection, first its base space, i.e., Gr ∞ k × Gr ∞ l , is determined using the direct limit, and then its sheaf structure is determined using the inverse limit. This requires that the natural homomorphisms between two ν-Grassmannians ν Gr m|n k|l and ν Gr m ′ |n ′ k|l for m ≤ m ′ and n ≤ n ′ be examined.
Firstly, since the structure of the base spaces is multiplicative, the mapping ι : Gr m k × Gr n l → Gr m ′ k × Gr n ′ l is established between them.
Secondly, note that the ν-Grassmannian ν Gr m ′ |n ′ k|l is constructed by gluing together standard ν-domains of dimension p ′ |q ′ where p ′ = p + k(m ′ − m) + l(n ′ − n) and q ′ = q + k(n ′ − n) + l(m ′ − m). Similar to the normal case, if V ′ I ∩ ι(Gr m k × Gr n l ) is nonempty for an arbitrary multi-index I, then ι −1 (V ′ I ) will be equal to V I . In this case the correspondence
G ′ I (V ′ I ) −→ G I (V I ):
x i −→ x i , i = 1, · · · , p,
e i −→ e i , i = 1, · · · , q,
other generators −→ 0, for I ⊆ {1, ..., m} ∪ {m ′ + 1, ..., m ′ + n};
G ′ I (V ′ I ) −→ 0, for I ⊄ {1, ..., m} ∪ {m ′ + 1, ..., m ′ + n},
induces a natural homomorphism as
ι * I : G ′ I |(V ′ IJ ) −→ G I |(V IJ ).(10)
Under this mapping, the structure sheaf G I has a G ′ I -module structure as well. It can be seen that the following diagram consisting of this homomorphism and the gluing maps between the ν-domains for arbitrary and permissible multi-indices I and J is commutative;
G ′ J |(V ′ JI ) --ι * J --> G J |(V JI )
     | ϕ ′ * IJ                 | ϕ * IJ
     v                          v
G ′ I |(V ′ IJ ) --ι * I --> G I |(V IJ ),
because the gluing maps ϕ * IJ and ϕ ′ * IJ are constructed using the invertible square supermatrices M J (A I ).id J and M ′ J (A ′I ).id J of dimension (k|l) × (k|l), which are equal to each other; on the other hand, the orders of the even generators x 1 , ..., x p and the odd generators e 1 , ..., e q do not change in the corresponding columns of the two supermatrices. Actually, in the representation supermatrix of the ν-domain G ′ J | V ′ JI , there are columns like [[a ′ ], [c ′ ]] and [[b ′ ], [d ′ ]] that are not present in the representation supermatrix of G J | V JI , while the other columns are the same:
A J :=
[A] k×m  [B] k×n
----------------
[C] l×m  [D] l×n ,
A ′J :=
[A] k×m  [a ′ ] k×(m ′ −m)  [B] k×n  [b ′ ] k×(n ′ −n)
---------------------------------------------------
[C] l×m  [c ′ ] l×(m ′ −m)  [D] l×n  [d ′ ] l×(n ′ −n) .
These columns have no effect on the gluing mappings in the above diagram, because these mappings are derived from minors whose columns are the same in both supermatrices. As a result, the above diagram commutes.
The homomorphisms ι * I induce, via the equivalences ϕ * , the natural homomorphisms
ι * ii ′ |jj ′ : G ′ (Gr m ′ k × Gr n ′ l ) −→ G(Gr m k × Gr n l ),
where i := m − k, i ′ := m ′ − k, j := n − l, j ′ := n ′ − l. The collection of all these structure sheaves with the natural homomorphisms, (G ′ (Gr m ′ k × Gr n ′ l ), ι * ii ′ |jj ′ ), satisfies the inverse limit conditions and determines a sheaf on Gr ∞ k × Gr ∞ l , represented by G ← . In fact, every section f on an arbitrary open set U ∞ := (U ij ) i,j in the base space is given by a sequence of sections (f ij ) i,j satisfying the corresponding compatibility relationships. According to the structure defined, and similar to the ordinary case, a homomorphism p ij from a finite-dimensional ν-Grassmannian to the infinite-dimensional ν-Grassmannian can be defined. Under this mapping, the structure sheaf G ij has a G ← -module structure as well.
Super Vector Bundle γ ∞ k|l
It is reminded that ν-Grassmannians are constructed by gluing together the ν-domains (V I , G I ) := (R p , C ∞ R p ⊗ R ∧R q ) in the direction of the homomorphisms ϕ IJ . Also, the canonical super vector bundle γ m|n k|l is constructed by gluing the free G I -module sheaves G I ⊗ < A I 1 , ..., A I k+l >, where A I t is the t-th row of the supermatrix A I and < A I 1 , ..., A I k+l > is the real vector space generated by the A I t 's. It can be seen that each of these free sheaves is equivalent to (V I , G I ⊗ R k|l ). On the other hand, the canonical homomorphism in (10) induces a canonical homomorphism between the G ′ I -module sheaves, and it can be shown that the diagram consisting of these homomorphisms commutes with the gluing homomorphisms. In this case, if U ∞ is an open set in Gr ∞ k × Gr ∞ l , then a canonical homomorphism can be defined using the equivalency induced by these diagrams. The sequence of all these modules and the canonical homomorphisms between them induce the homomorphism p̄ ij from γ m|n k|l to γ ∞ k|l .
Pullback of Super Vector Bundles
In this section, we show that the super vector bundle γ ∞ k|l is a universal member of the category of super vector bundles.
Theorem 3.2. The super vector bundle γ tk|tl k|l and the pullback of γ ∞ k|l along p ij are isomorphic.
Proof. By definition, the pullback of γ ∞ k|l along p ij is in the form of G ⊗ pij G ← Γ ← and it suffices to show that the following mapping is an isomorphism:
T : G ⊗ pij G ← Γ ← → γ tk|tl k|l , u ⊗ (s i ′ j ′ ) → u.s ij .
Consider an arbitrary section s = Σ_{c=1}^{k+l} (a c i ′ j ′ ) ⊗ (s c i ′ j ′ ) with a basis {(s c i ′ j ′ )} c=1,...,k+l on a small enough open set U ∞ in Gr ∞ k × Gr ∞ l . The image of this section under T in Γ(U ) is of the form Σ_{c=1}^{k+l} a c ij . s c ij with the basis {s c ij } c=1,...,k+l , for i = tk − k and j = tl − l. As a result, if T (s) is equal to zero, then all the coefficients a c ij are equal to zero and, according to the relation
Σ_{c=1}^{k+l} (a c i ′ j ′ ) ⊗ (s c i ′ j ′ ) = Σ_{c=1}^{k+l} 1.(a c i ′ j ′ ) ⊗ (s c i ′ j ′ ) = Σ_{c=1}^{k+l} 1 ⊗ (a c i ′ j ′ )(s c i ′ j ′ ),
the section s is zero too; i.e., the kernel of T is zero and T is locally an isomorphism. Since the ranks of the two super vector bundles on a common base space are equal, T is an isomorphism globally.
The following result is directly derived from the last two theorems.
Corollary 3.1. Each super vector bundle E with rank k|l is isomorphic to the pullback of γ ∞ k|l along p ij • σ t , for a big enough t and i = tk − k and j = tl − l.
There are other morphisms that can replace p ij • σ t in Corollary 3.1; we call such morphisms "desirable morphisms w.r.t. the super vector bundle E". In the next subsection, we introduce such a morphism explicitly for a specific example.
An example of desirable morphisms
Consider the infinite-dimensional Grassmann manifold Gr ∞ k = (Gr ∞ k , O ← ) introduced in subsection 3.1. We show that the embedding of the reduced manifold Gr ∞ k into the supermanifold ν Gr ∞ k|0 , denoted by
ι ← : Gr ∞ k → ν Gr ∞ k|0
is a desirable morphism w.r.t. canonical vector bundle over Gr ∞ k . In other words one has the following proposition:
Proposition 3.1. The pullback of γ ∞ k|0 along ι ← is isomorphic to the vector bundle γ ∞ k .
Proof. The canonical super vector bundle γ m|n k|0 := (Gr m k × { * }, Γ) over ν Gr m|n k|0 is constructed by gluing the free G I -module sheaves G I ⊗ < A I > := G I ⊗ < A I 1 , ..., A I k > on V I through G-module homomorphisms, where ϕ IJ = (id, ϕ * IJ ) is the gluing morphism defined in Step 3 of the construction of the ν-Grassmannians above, A I t is the t-th row of the supermatrix A I , and < A I 1 , ..., A I k > is the real vector space generated by the A I t 's. It can be seen that each of these free sheaves is equivalent to (V I , G I ⊗ R k|0 ). The homomorphism ι I m,n * induces a canonical homomorphism between sheaves of G I -modules, and the collection of all these homomorphisms determines a homomorphism between the corresponding canonical bundles. By definition, the pullback of γ ∞ k|0 along ι ← is obtained from these data, and comparing the two constructions identifies it with the vector bundle γ ∞ k . This completes the proof.
F. Bahadorykhalily, M. Mohammadi, S. Varsaie, A Class of Homogeneous Superspaces Associated to Odd Involutions, Periodica Mathematica Hungarica, 82, 153-172 (2021).
C. Bartocci, U. Bruzzo, Cohomology of the Structure Sheaf of Real and Complex Supermanifolds, Journal of Mathematical Physics 29, 1789 (1988).
C. Bartocci, U. Bruzzo, D. Hernandez Ruiperez, The Geometry of Supermanifolds, Kluwer Academic Publishers (1991).
U. Bruzzo, D. Hernandez Ruiperez, Characteristic Classes of Super Vector Bundles, Journal of Mathematical Physics 30, 1233 (1989).
J. Dugundji, Topology, Allyn and Bacon, Boston (1966).
D. Husemoller, Fibre Bundles, Third edition, Springer (1994).
G. Landi, Projective Modules of Finite Type over the Supersphere S 2,2 , Differential Geometry and its Applications 14, 95-111 (2001).
Yu. I. Manin, Gauge Field Theory and Complex Geometry, Springer, New York (1988).
Yu. I. Manin, I. B. Penkov, The Formalism of Left and Right Connections on Supermanifolds, Lectures on Supermanifolds, Geometrical Methods of Conformal Groups, World Scientific (1989).
M. Roshandelbana, S. Varsaie, Analytic Approach to ν-Classes, Asian-European Journal of Mathematics (AEJM), accepted Nov 2021.
D. J. Thouless, Topological Quantum Numbers in Nonrelativistic Physics, World Scientific (1998).
V. S. Varadarajan, Supersymmetry for Mathematicians: An Introduction, American Mathematical Society (2004).
A. A. Voronov, Yu. I. Manin, Schubert Supercells, Funct. Anal. Appl., 18:4 (1984), 329-330.
A. A. Voronov, Yu. I. Manin, I. B. Penkov, Elements of Supergeometry, J. Math. Sci. 51, 2069 (1990).
| []
|
[
"Learning Affinity-Aware Upsampling for Deep Image Matting *",
"Learning Affinity-Aware Upsampling for Deep Image Matting *"
]
| [
"Yutong Dai \nThe University of Adelaide\nAustralia\n",
"Hao Lu \nHuazhong University of Science and Technology\nChina\n",
"Chunhua Shen \nThe University of Adelaide\nAustralia\n"
]
| [
"The University of Adelaide\nAustralia",
"Huazhong University of Science and Technology\nChina",
"The University of Adelaide\nAustralia"
]
| []
| We show that learning affinity in upsampling provides an effective and efficient approach to exploit pairwise interactions in deep networks. Second-order features are commonly used in dense prediction to build adjacent relations with a learnable module after upsampling such as non-local blocks. Since upsampling is essential, learning affinity in upsampling can avoid additional propagation layers, offering the potential for building compact models. By looking at existing upsampling operators from a unified mathematical perspective, we generalize them into a second-order form and introduce Affinity-Aware Upsampling (A 2 U) where upsampling kernels are generated using a light-weight lowrank bilinear model and are conditioned on second-order features. Our upsampling operator can also be extended to downsampling. We discuss alternative implementations of A 2 U and verify their effectiveness on two detail-sensitive tasks: image reconstruction on a toy dataset; and a largescale image matting task where affinity-based ideas constitute mainstream matting approaches. In particular, results on the Composition-1k matting dataset show that A 2 U achieves a 14% relative improvement in the SAD metric against a strong baseline with negligible increase of parameters (< 0.5%). Compared with the state-of-the-art matting network, we achieve 8% higher performance with only 40% model complexity. | 10.1109/cvpr46437.2021.00677 | [
"https://arxiv.org/pdf/2011.14288v1.pdf"
]
| 227,227,773 | 2011.14288 | 865ffe014782a565d70eeb1a362479d22eed6508 |
Learning Affinity-Aware Upsampling for Deep Image Matting *
Yutong Dai
The University of Adelaide
Australia
Hao Lu
Huazhong University of Science and Technology
China
Chunhua Shen
The University of Adelaide
Australia
Learning Affinity-Aware Upsampling for Deep Image Matting *
We show that learning affinity in upsampling provides an effective and efficient approach to exploit pairwise interactions in deep networks. Second-order features are commonly used in dense prediction to build adjacent relations with a learnable module after upsampling such as non-local blocks. Since upsampling is essential, learning affinity in upsampling can avoid additional propagation layers, offering the potential for building compact models. By looking at existing upsampling operators from a unified mathematical perspective, we generalize them into a second-order form and introduce Affinity-Aware Upsampling (A 2 U) where upsampling kernels are generated using a light-weight lowrank bilinear model and are conditioned on second-order features. Our upsampling operator can also be extended to downsampling. We discuss alternative implementations of A 2 U and verify their effectiveness on two detail-sensitive tasks: image reconstruction on a toy dataset; and a largescale image matting task where affinity-based ideas constitute mainstream matting approaches. In particular, results on the Composition-1k matting dataset show that A 2 U achieves a 14% relative improvement in the SAD metric against a strong baseline with negligible increase of parameters (< 0.5%). Compared with the state-of-the-art matting network, we achieve 8% higher performance with only 40% model complexity.
Introduction
The similarity among positions, a.k.a. affinity, is commonly investigated in dense prediction tasks [1,2,3,4,5]. Compared with directly fitting ground truths using first-order features, modeling similarity among different positions can provide second-order information. There currently exist two solutions to learn affinity in deep networks: i) learning an affinity map before a non-deep backend, and ii) defining a learnable affinity-based module to propagate information. We are interested in end-to-end affinity learning, because classic methods often build upon assumptions that generalize weakly in general cases.
Figure 1 - Visualization of upsampled feature maps with various upsampling operators. From left to right: the input RGB image, and the feature maps after the last upsampling using nearest neighbor interpolation, bilinear upsampling, and our proposed affinity-aware upsampling, respectively. Our method produces better details with clear connectivity.
Existing approaches typically propagate or model affinity after upsampling layers or before the last prediction layer. While affinity properties are modeled, they may not be effective for the downstream tasks. For instance, the work in [5] requires a feature encoding block besides the encoder-decoder architecture to learn affinity. The work in [2] needs more iterations to refine the feature maps according to their affinity at the last stage. As shown in Fig. 1, one plausible reason is that pairwise similarity is damaged during upsampling. In addition, it is inefficient to construct interactions between high-dimensional feature maps. We therefore pose the question: can we model affinity earlier, in upsampling, in an effective and efficient manner?
Many widely used upsampling operators interpolate values following a fixed rule at different positions. For instance, despite reference positions may change in bilinear upsampling, it always interpolates values based on relative spatial distances. Recently, the idea of learning to upsample emerges [6,7,8]. A learnable module is often built to generate upsampling kernels conditioned on feature maps to enable dynamic, feature-dependent upsampling behaviors. Two such representative operators include CARAFE [8] and IndexNet [7]. In our experiments, we find that CARAFE may not work well in low-level vision tasks where details need to be restored. IndexNet instead can recover details much better. We believe that one important reason is that IndexNet encodes, stores, and delivers spatial information prior to downsampling. But computation can be costly when the network goes deep. This motivates us to pursue not only flexible but also light-weight designs of the upsampling operator.
In this paper, we propose to model affinity in upsampling and introduce a novel learnable upsampling operator, i.e., affinity-aware upsampling (A 2 U). As we show later in Section 3, A 2 U is a generalization of first-order upsampling operators: under some conditions, the first-order formulations in [8] and [6] can be viewed as special cases of our second-order one. In addition, by implementing A 2 U in a low-rank bilinear formulation, we can achieve efficient upsampling with few extra parameters.
We demonstrate the effectiveness of A 2 U on two detailsensitive tasks: an image reconstruction task on a toy dataset with controllable background and a large-scale image matting task with subtle foregrounds. Image matting is a desirable task to justify the usefulness of affinity, because affinity-based matting approaches constitute one of prominent matting paradigms in literatures. Top matting performance thus can suggest appropriate affinity modeling. In particular, we further discuss alternative design choices of A 2 U and compare their similarities and differences. Compared with a strong image matting baseline on the Composition-1k matting dataset, A 2 U exhibits a significant improvement (∼ 14%) with negligible increase of parameters (< 0.5%), proffering a light-weight image matting architecture with state-of-the-art performance.
Related work
Upsampling Operators in Deep Networks. Upsampling is often necessary in dense prediction to recover spatial resolution. The most commonly used upsampling operators are bilinear interpolation and nearest neighbor interpolation. Since they are executed only based on spatial distances, they may be sub-optimal in detail-oriented tasks such as image matting, where distance-based similarity can be violated. Compared with distance-based upsampling, max-unpooling is feature-dependent and has been shown to benefit detail-oriented tasks [6,7], but it must match with max-pooling. In recent literature, learning-based upsampling operators [9,10,8,7] emerge. The Pixel Shuffle (P.S.) [9] upsamples feature maps by reshaping. The deconvolution (Deconv) [10], an inverse version of convolution, learns the upsampling kernel via back-propagation. Both P.S. and Deconv are data-independent during inference, because the kernel is fixed once learned. By contrast, CARAFE [8] and IndexNet [6] learn the upsampling kernel dynamically conditioned on the data. They both introduce additional modules to learn upsampling kernels. Since the upsampling kernel is directly related to the feature maps, these upsampling operators are considered first-order.
Following the learning-based upsampling paradigm, we also intend to learn dynamic upsampling operators but to condition on second-order features to enable affinity-informed upsampling. We show that, compared with firstorder upsampling, affinity-informed upsampling not only achieves better performance but also introduces a lightweight learning paradigm.
Deep Image Matting. Affinity dominates the majority of classic image matting approaches [11,12,13,14]. The main assumption in propagation-based matting is that similar alpha values can be propagated from known positions to unknown positions, conditioned on affinity. This assumption, however, highly depends on the color distribution: such methods can perform well on cases with clear color contrast but often fail when the color-distribution assumption is violated. Recently, deep learning has been found effective in addressing ill-posed image matting, and many deep matting methods have arisen [15,4,16,17,18,6,5,19]. This field has evolved from a semi-deep stage [15,4] to a fully-deep stage [16,18,6,5,19]. Here 'semi-deep' means that the matting part still relies on classic methods [11,12] to function, while 'fully-deep' means that the entire network does not resort to any classic algorithms. Among fully-deep matting methods, DeepMatting [16] first applied the encoder-decoder architecture and reported improved results. Targeting this strong baseline, several deep matting methods were proposed. AlphaGAN matting [20] and IndexNet matting [6] explored adversarial learning and an index-generating module to improve matting performance, respectively. In particular, the works in [18,5,19,17] imitated classic sampling-based and propagation-based ideas in deep networks to ease the difficulty of learning. Among them, GCA matting [5] first designed an affinity-based module and demonstrated the effectiveness of affinity in fully-deep matting. It treats alpha propagation as an independent module and adds it to different layers to refine the feature maps, layer by layer.
Different from the idea of 'generating then refining', we propose to directly incorporate the propagation-based idea into upsampling for deep image matting. It not only benefits alpha propagation but also shows the potential for lightweight module design.
A Mathematical View of Upsampling
The work in [7] unifies upsampling from an indexing perspective. Here we provide an alternative mathematical view. To simplify exposition, we discuss the upsampling of a one-channel feature map. Without loss of generality, the one-channel case can be easily extended to multi-channel upsampling, because most upsampling operators execute per-channel upsampling. Given a one-channel local feature map Z ∈ R k×k used to generate an upsampled feature point, it can be vectorized to z ∈ R k 2 ×1 . Similarly, the vectorization of an upsampling kernel W ∈ R k×k can be denoted by w ∈ R k 2 ×1 . If g(w, z) defines the output of upsampling, most existing upsampling operations follow
g(w, z) = w T z . (1)
Note that g(w, z) indicates an upsampled point. In practice, multiple such points can be generated to form an upsampled feature map. w may be either shared or unshared among channels depending on the upsampling operator. Different operators define different w's. Further, even the same w can be applied to different z's. According to how the upsampling kernel w is generated, we categorize kernels into two types: the universal kernel and the customized kernel. The universal kernel is input-independent: it follows the same upsampling rule given any input. One example is deconvolution [10]. The customized kernel, however, is input-dependent. Based on what input is used to generate the kernel, the customized kernel can be further divided into distance-based and feature-based. We elaborate as follows.
Distance-based Upsampling. Distance-based upsampling is implemented according to spatial distances, such as nearest neighbor and bilinear interpolation. The difference between them is the number of positions taken into account. Under the definition of Eq. (1), the upsampling kernel is a function of the relative distance between points. By taking bilinear interpolation with 4 reference points as an example,
w = [w 1 , w 2 , w 3 , w 4 ], where
w 1 = (x 1 − x)(y 1 − y) / ((x 1 − x 0 )(y 1 − y 0 )),
given the coordinates of two reference points (x 0 , y 0 ) and (x 1 , y 1 ); x and y are the coordinates of the interpolated point, and w 2 , w 3 , and w 4 can be derived similarly. In multi-channel cases, the same w is shared by all channels of the input.
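To make the distance-based kernel concrete, here is a small illustrative script (our own, not from the paper; the helper name is hypothetical) that builds w for one interpolated point and applies Eq. (1):

```python
import numpy as np

def bilinear_weights(x, y, x0, y0, x1, y1):
    # w = [w1, w2, w3, w4] for a point (x, y) inside the cell
    # spanned by the reference corners (x0, y0) and (x1, y1)
    area = (x1 - x0) * (y1 - y0)
    w1 = (x1 - x) * (y1 - y) / area  # weight of the corner (x0, y0)
    w2 = (x - x0) * (y1 - y) / area  # weight of (x1, y0)
    w3 = (x1 - x) * (y - y0) / area  # weight of (x0, y1)
    w4 = (x - x0) * (y - y0) / area  # weight of (x1, y1)
    return np.array([w1, w2, w3, w4])

w = bilinear_weights(0.25, 0.75, 0.0, 0.0, 1.0, 1.0)
z = np.array([0.0, 1.0, 0.5, 0.25])  # the 4 reference feature values
print(w.sum(), w @ z)                # weights sum to 1; output is g(w, z) = w^T z
```

Since the weights depend only on coordinates, the same w is reused for every channel, which is exactly what makes this operator blind to feature content.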
Feature-based Upsampling. Feature-based upsampling is feature-dependent. They are developed in deep networks, including max-unpooling [21], CARAFE [8], and IndexNet [7]:
i) Max-unpooling interpolates values following the indices returned from max-pooling. In a 2 × 2 region of the feature layer after upsampling, only one position recorded in the indices has value, and other three are filled with 0. Since each position on the upsampled feature map is interpolated from a 1 × 1 point at the low-resolution layer, we can define w by a 1 × 1 vector w = [w], where w ∈ R 1×1 , and z is also the 1 × 1 point at the lowresolution layer. Note that, w ∈ {0, 1}, and only one w can equal to 1 in a 2 × 2 region of the output feature map.
In multi-channel cases, w and z are different in different channels conditioned on the max operator. ii) CARAFE learns an upsampling kernel w ∈ R k 2 ×1 (k = 5 in [8]) via a kernel generation module given a decoder feature map ready to upsample. It also conforms to Eq. (1), where z ∈ R k 2 ×1 is obtained from the low-resolution decoder feature map. The kernel size of w depends on the size of z. In multi-channel cases, the same w is shared among channels. iii) IndexNet also learns an upsampling kernel dynamically from features. The difference is that IndexNet learns from high-resolution encoder feature maps. Under the formulation of Eq. (1), the upsampling kernel follows a similar spirit to max-unpooling: w = [w], where w ∈ R 1×1 , because each position on the upsampled feature layer is interpolated from the corresponding point on the low-resolution map by multiplying by an interpolation weight w. But here w ∈ [0, 1] instead of {0, 1}.
Hence, distance-based and feature-based upsampling operators have the unified form g(w, z) = w T z, while different operators correspond to different w's and z's, where w can be heuristically defined or dynamically generated. In particular, existing operators define/generate w according to distances or first-order features, while second-order information remains unexplored in upsampling.
Learning Affinity-Aware Upsampling
Here we explain how we exploit second-order information to formulate the affinity idea in upsampling using a bilinear model and how we apply a low-rank approximation to reduce computational complexity.
General Formulation of Upsampling. Given a feature map M ∈ R C×H×W to be upsampled, the goal is to generate an upsampled feature map M ′ ∈ R C×rH×rW , where r is the upsampling ratio. For a position (i ′ , j ′ ) in M ′ , the corresponding source position (i, j) in M is derived by i = ⌊i ′ /r⌋, j = ⌊j ′ /r⌋. We aim to learn an upsampling kernel w ∈ R k 2 ×1 for each position in M ′ . By applying the kernel to a channel of the local feature map X ∈ R C×k×k centered at position l on M, the corresponding upsampled feature point m l ′ ∈ M ′ of the same channel at target position l ′ is obtained by m l ′ = w T x according to Eq. (1), where x ∈ R k 2 ×1 is the vectorization of that channel of X.
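The following PyTorch sketch (our own reading of this formulation, assuming an odd k and channel-shared kernels) applies per-position kernels W to a source map M:

```python
import torch
import torch.nn.functional as F

def apply_kernels(M, W, r, k):
    # M: source features (C, H, W); W: kernels (k*k, r*H, r*W),
    # one normalized kernel per target position; returns (C, r*H, r*W)
    C, H, W_in = M.shape
    patches = F.unfold(M.unsqueeze(0), k, padding=k // 2)  # (1, C*k*k, H*W_in)
    patches = patches.view(C, k * k, H, W_in)
    # target (i', j') reads the source patch at (i' // r, j' // r)
    patches = patches.repeat_interleave(r, dim=2).repeat_interleave(r, dim=3)
    return (patches * W.unsqueeze(0)).sum(dim=1)           # m_l' = w^T x per channel
```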
General Meaning of Affinity. Affinity is often used to indicate pairwise similarity and is considered a second-order feature. An affinity map can be constructed in different ways, such as with a Gaussian kernel. In self-attention, the affinity between a position l and the enumeration of all possible positions p on a feature map M is denoted by softmax ∀p (sim(m l , m p )), where m l and m p represent the two feature vectors at positions l and p, respectively, and sim(m l , m p ) measures the similarity between m l and m p with the inner product m l T m p .
Affinity-Aware Upsampling via Bilinear Modeling. Given a local feature map X ∈ R C×h1×w1 , X has an equivalent matrix form X ∈ R C×N , where N = h 1 × w 1 . We aim to learn an upsampling kernel conditioned on X. Previous learning-based upsampling operators [8,6,7] generate the value of the upsampling kernel following a linear model by w = Σ_{i=1}^{C} Σ_{j=1}^{N} a ij x ij , where a ij and x ij are the weight and the feature at channel i and position j of X, respectively. Note that w ∈ R 1×1 . To encode second-order information, a natural generalization of the linear model above is bilinear modeling, where another
feature matrix Y ∈ R C×M transformed from the feature map Y ∈ R C×h2×w2 (M = h 2 × w 2 ), is introduced to pair with X to model affinity. Given each x i ∈ R C×1 in X, y j ∈ R C×1
in Y, the bilinear weight a ij of the vector pair, and the embedding weights q k and t k for each channel of x i and y j , we propose to generate each value of the upsampling kernel from embedded pairwise similarity, i.e.,
w = Σ_{i=1}^{N} Σ_{j=1}^{M} a ij ϕ(x i ) T φ(y j )
  = Σ_{k=1}^{C} Σ_{i=1}^{N} Σ_{j=1}^{M} a ij q k x ik t k y jk
  = Σ_{k=1}^{C} Σ_{i=1}^{N} Σ_{j=1}^{M} a ijk x ik y jk
  = Σ_{k=1}^{C} x k T A k y k , (2)
where x k ∈ R N ×1 and y k ∈ R M ×1 are the k-th channel of X and Y, respectively, A k ∈ R N ×M is the affinity matrix for k-th channel, a ijk = a ij q k t k , and ϕ and φ represent the embedding function.
Factorized Affinity-Aware Upsampling. Learning A k can be expensive when M and N are large. Inspired by [22,23], a low-rank bilinear method can be derived to reduce the computational complexity of Eq. (2). Specifically, A k can be rewritten as A k = U k V k T , where U k ∈ R N ×d and V k ∈ R M ×d ; d represents the rank of A k under the constraint d ≤ min(N, M ). Eq. (2) therefore can be rewritten as
w = Σ_{k=1}^{C} x k T U k V k T y k = Σ_{k=1}^{C} 1 T (U k T x k • V k T y k ) = 1 T Σ_{k=1}^{C} (U k T x k • V k T y k ) , (3)
where 1 ∈ R d is a column vector of ones, and • denotes the Hadamard product. Since we need to generate an s × s upsampling kernel, 1 in Eq. (3) can be replaced with P ∈ R d×s 2 . Note that Eq. (3) is applied to each position of a feature map, so the inner product here can be implemented by convolution. The full upsampling kernel therefore can be generated by
w = P T Σ_{k=1}^{C} (U k T x k • V k T y k ) = P T cat_{r=1}^{d} Σ_{k=1}^{C} (u kr T x k • v kr T y k ) = conv( P, cat_{r=1}^{d} [ gpconv(U r , X) • gpconv(V r , Y) ] ) , (4)
where u kr ∈ R N ×1 and v kr ∈ R M ×1 . The convolution kernels P ∈ R d×s 2 ×1×1 , U ∈ R d×C×h1×w1 , and V ∈ R d×C×h2×w2 are reshaped tensor versions of P, U, and V, respectively. conv(K, M) represents a convolution operation on the feature map M with kernel K; gpconv(K, M) defines a group convolution operation (C groups) on the same input. cat is the concatenation operator. This process is visualized in Fig. 2.
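Eq. (4) can be sketched in PyTorch as below; this is our own minimal rendering for the channel-shared, self-similarity case (X = Y), with module and variable names that are hypothetical rather than taken from an official implementation:

```python
import torch
import torch.nn as nn

class A2UKernel(nn.Module):
    # Generates s*s upsampling kernels per position following Eq. (4),
    # for rank d and self-similarity (X = Y).
    def __init__(self, channels, d=1, k_en=5, s=3):
        super().__init__()
        # U, V: group convolutions (one group per channel), d outputs per channel
        self.U = nn.Conv2d(channels, channels * d, k_en,
                           padding=k_en // 2, groups=channels, bias=False)
        self.V = nn.Conv2d(channels, channels * d, k_en,
                           padding=k_en // 2, groups=channels, bias=False)
        self.P = nn.Conv2d(d, s * s, 1, bias=False)  # 1x1 projection P
        self.d = d

    def forward(self, x):
        n, c, h, w = x.shape
        u = self.U(x).view(n, c, self.d, h, w)
        v = self.V(x).view(n, c, self.d, h, w)
        sim = (u * v).sum(dim=1)          # Hadamard product, summed over channels
        kernels = self.P(sim)             # (n, s*s, h, w)
        return torch.softmax(kernels.sigmoid(), dim=1)  # 'sigmoid+softmax' (cf. Table 4)
```

The normalized kernels would then be consumed by a reassembly step like the `apply_kernels` sketch above.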
Alternative Implementations. Eq. (4) is a generic formulation. In practice, many design choices can be discussed in implementation:
i) The selection of X and Y can be either the same or different. In this paper, we only discuss self-similarity, i.e., X = Y;
ii) The rank d can be chosen in the range [1, min(N, M)]. For example, if X and Y are extracted in 5×5 regions, the range will be [1, 25]. In our experiments, we set d = 1 to explore the most simplified and light-weight case;
iii) U and V can be considered two encoding functions. They can be shared, partly shared, or unshared among channels. We discuss the two extreme cases in the experiments: 'channel-shared' ('cs') and 'channel-wise' ('cw');
iv) Eq. (4) adjusts the kernel size of w only using P. Since the low-rank approximation has fewer parameters, fixed P, U, and V may not be sufficient to model all local variations. Inspired by CondConv [24], we attempt to generate P and U, V dynamically conditioned on the input. We investigate three implementations: 1) static: none of them is input-dependent; 2) hybrid: only P is conditioned on the input; and 3) dynamic: P, U, and V are all conditioned on the input. The dynamic generation of P, U, or V is implemented using global average pooling and a 1×1 convolution layer (a sketch follows this list);
v) We implement stride-2 U and V in our experiments. They output features of size C × H/2 × W/2. To generate an upsampling kernel of size s 2 × H × W, one can either use 4 sets of different weights for U and V, or 4 sets of weights for P (4s 2 × H/2 × W/2) followed by a shuffling operation (s 2 × H × W). We denote the former case as 'pointwise' ('pw'). Further, as pointed out in [22], nonlinearity, e.g., tanh or relu, can be added after the encoding of U and V. We verify a similar idea by adding normalization and nonlinearity in the experiments.
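For item iv), the dynamic generation described above (global average pooling followed by a 1×1 convolution) might be sketched as follows; again, the names are ours:

```python
import torch.nn as nn

class DynamicP(nn.Module):
    # Emits the entries of a per-sample P from the similarity code itself.
    def __init__(self, d, s):
        super().__init__()
        self.gen = nn.Conv2d(d, d * s * s, 1)  # GAP output -> d*s^2 entries of P
        self.d, self.s = d, s

    def forward(self, sim):                    # sim: (n, d, h, w)
        n = sim.shape[0]
        p = self.gen(sim.mean(dim=(2, 3), keepdim=True))  # (n, d*s^2, 1, 1)
        p = p.view(n, self.s * self.s, self.d, 1, 1)
        # apply the per-sample P as a batched 1x1 projection over the d channels
        return (p * sim.unsqueeze(1)).sum(dim=2)          # (n, s^2, h, w)
```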
Extension to Downsampling. Following [7], our method can also be extended to downsampling. Downsampling is paired with upsampling, so their kernels are generated from the same encoder feature. We use 'd' to indicate the use of paired downsampling in the experiments. We share the same U and V of Eq. (4) in both downsampling and upsampling, but use different P's, considering that the two kernels may have different sizes. We denote the overall upsampling kernel by W u ∈ R su 2 ×H×W and the downsampling kernel by W d ∈ R s d 2 ×H/r×W/r , where r is the upsampling/downsampling ratio. We set s d = r 2 s u in our experiments.
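Under the same assumptions as the sketches above, the paired variant can share the similarity code and branch into two projection heads (our own sketch; the resolution bookkeeping of shuffling up to rH × rW for upsampling and striding down to H/r × W/r for downsampling is omitted):

```python
import torch.nn as nn

d, s_u, r = 1, 3, 2        # rank, upsampling kernel size, ratio
s_d = r ** 2 * s_u         # downsampling kernel size as set in the text

P_up = nn.Conv2d(d, s_u ** 2, 1)    # head emitting the upsampling kernels W_u
P_down = nn.Conv2d(d, s_d ** 2, 1)  # head emitting the downsampling kernels W_d
```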
Image Reconstruction and Analysis
Here we conduct a pilot image reconstruction experiment on a toy dataset to show the effectiveness of A 2 U. Inspired by [7], we build sets of reconstruction experiments on the MNIST dataset [25] and the Fashion-MNIST dataset [26]. The motivation is to verify whether exploiting second-order information in upsampling benefits recovering spatial information.
We denote by C(k) a convolution layer with k-channel output and 3 × 3 filters (stride 1 unless stated), followed by BatchNorm and ReLU, by D r a downsampling operator with a ratio of r, and by U r an upsampling operator with a ratio of r. We build the network architecture as: C(32)-D 2 -C(64)-D 2 -C(128)-D 2 -C(256)-C(128)-U 2 -C(64)-U 2 -C(32)-U 2 -C(1). The same training strategies and evaluation metrics are used following [7]. Since the training patches are relatively small (32×32), the upsampling kernel sizes for CARAFE and A 2 U are both set to 1, and the encoding convolution kernels in IndexNet and A 2 U are both set to 4. Other settings keep the default ones. We apply the 'static-pw-cw' A 2 U here because it reduces to Holistic IndexNet if the convolution results of U are all ones; we hence add a sigmoid function after U to generalize IndexNet. To avoid extra layers, we apply max-pooling in D r to obtain high-resolution layers when validating IndexNet and A 2 U. Reconstruction results are presented in Table 1.
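For concreteness, a PyTorch rendering of this C/D/U notation with plain max-pooling and bilinear upsampling standing in for D 2 and U 2 (our own sketch; the learned operators would replace `U()`, and the final layer is assumed to omit BN/ReLU):

```python
import torch.nn as nn

def C(cin, cout):  # 3x3 conv + BatchNorm + ReLU, stride 1
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

D = lambda: nn.MaxPool2d(2)
U = lambda: nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)

net = nn.Sequential(
    C(1, 32), D(), C(32, 64), D(), C(64, 128), D(),   # encoder
    C(128, 256), C(256, 128), U(), C(128, 64), U(),   # decoder
    C(64, 32), U(), nn.Conv2d(32, 1, 3, padding=1),   # final C(1)
)
```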
As shown in Table 1, upsampling operators informed by features (max-unpooling, CARAFE, IndexNet, and A 2 U) outperform the operators guided by spatial distances (nearest, bilinear, and bicubic). Moreover, learning from high-resolution features matters for upsampling; among these, the learning-based operators (IndexNet, A 2 U) achieve the best results. Further, it is worth noting that A 2 U performs better than IndexNet with even fewer parameters. From these observations, we believe that in upsampling: 1) high-resolution features are beneficial for extracting spatial information, and 2) second-order features can help recover more spatial details than first-order ones.
Experiments and Discussions
Here we evaluate A 2 U on deep image matting. This task is suitable for assessing the quality of modeling pairwise relations.
Network Architecture
Similar to [5], our baseline network adopts the first 11 layers of the ResNet34 [27] as the encoder. The decoder consists of residual blocks and upsampling stages. The In-Place Activated BatchNorm [28] is applied to each layer except the last one to reduce GPU memory consumption. As shown in Fig. 3, the overall network follows the UNet architecture [29] with 'skip' connection. To apply A 2 U to upsampling, we replace the upsampling operations in the decoder with A 2 U modules. Specifically, we learn upsampling kernels from the skipped features. If A 2 U is used in both upsampling and downsampling stages, we change all 2-stride convolution layers in the encoder to be 1-stride and implement paired downsampling and upsampling operations, respectively, by learning upsampling/downsampling kernels from the modified 1-stride feature layer.
Datasets
We mainly conduct our experiments on the Adobe Image Matting dataset [16]. Its training set has 431 unique foreground objects and ground-truth alpha mattes. Instead of compositing each foreground with a fixed set of 100 background images chosen from MS COCO [30], we randomly choose the background images in each iteration and generate the composited images on-the-fly. The test set, termed Composition-1k, contains 50 unique foreground objects; each foreground is composited with 20 background images from the Pascal VOC dataset [31].
We also evaluate our method on the alphamatting.com benchmark [32]. This online benchmark has 8 unique testing images and 3 different trimaps for each image, providing 24 test cases.
Further, we report results on the recently proposed Distinctions-646 dataset [33]. It has 596 foreground objects in the training set and 50 foreground objects in the test set. We generate the training data and the test set following the same protocol as on the Adobe Image Matting dataset.
Implementation Details
Our implementation is based on PyTorch [34]. Here we describe the training details on the Adobe Image Matting dataset. The 4-channel input concatenates the RGB image and its trimap. We mainly follow the data augmentation of [5]. Two foreground objects are first chosen with a probability of 0.5 and are composited to generate a new foreground image and a new alpha matte. Next, they are resized to 640 × 640 with a probability of 0.25. Random affine transformations are then applied. Trimaps are randomly dilated from the ground-truth alpha mattes with distances in the range between 1 and 29, followed by 512 × 512 random cropping. The background image is randomly chosen from the MS COCO dataset [30]. After imposing random jitters on the foreground object, the RGB image is finally generated by composition.
The backbone is pretrained on ImageNet [35]. The Adam optimizer [36] is used. We use the same loss function as [16,6], including the alpha prediction loss and the composition loss computed over the unknown regions indicated by trimaps. We update parameters for 30 epochs. Each epoch has a fixed number of 6000 iterations. A batch size of 16 is used, and the BN layers in the backbone are fixed. The learning rate is initialized to 0.01 and reduced by ×10 at the 20-th epoch and the 26-th epoch, respectively. The training strategies on the Distinctions-646 dataset are the same, except that we update the parameters for only 25 epochs. We evaluate our results using the Sum of Absolute Differences (SAD), Mean Squared Error (MSE), Gradient (Grad), and Connectivity (Conn) [32]. We follow the evaluation code provided by [16].
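The two losses restricted to the unknown trimap region can be sketched as follows; this is our own paraphrase of the losses described in [16], and the trimap coding (128 = unknown) and the ε smoothing are assumptions:

```python
import torch

def matting_losses(alpha_pred, alpha_gt, fg, bg, image, trimap, eps=1e-6):
    unknown = (trimap == 128).float()  # 1 inside the unknown region; broadcasts over RGB
    n = unknown.sum() + eps
    # alpha prediction loss: smoothed absolute difference over unknown pixels
    l_alpha = (torch.sqrt((alpha_pred - alpha_gt) ** 2 + eps ** 2) * unknown).sum() / n
    # composition loss: re-composite with the predicted alpha, compare to the input
    comp = alpha_pred * fg + (1 - alpha_pred) * bg
    l_comp = (torch.sqrt((comp - image) ** 2 + eps ** 2) * unknown).sum() / n
    return l_alpha + l_comp
```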
The Adobe Image Matting Dataset
Ablation Study on Alternative Implementations. Here we verify different implementations of A²U on the Composition-1k test set and compare them with existing upsampling operators. Quantitative results are shown in Table 2. All the models are implemented with the same architecture but with different upsampling operators. The 'nearest' and 'bilinear' settings are our direct baselines. They achieve close performance with the same model capacity. For CARAFE, we use the default setting as in [8], i.e., k_up = 5 and k_encoder = 3. We observe that CARAFE has a negative effect on the performance. The idea behind CARAFE is to reassemble contextual information, which is not the focus of matting, where subtle details matter. However, it is interesting that CARAFE can still be useful for matting when it follows a light-weight MobileNetV2 backbone [7]. One possible explanation is that a better backbone (ResNet34) suppresses the advantages of context reassembling. We report results of IndexNet with the best-performing setting ('depthwise+context+nonlinear') in [6,7]. The upsampling indices are learned from the skipped feature layers. IndexNet achieves a notable improvement, especially on the Grad metric. However, IndexNet significantly increases the number of parameters.
We further investigate 6 different implementations of A²U and another version with paired downsampling and upsampling. According to the results, the 'static' setting only improves the SAD and Conn metrics. The position-wise and position-shared settings report comparable results, so we fix the position-shared setting in the following 'hybrid' and 'dynamic' experiments. We verify both channel-wise and channel-shared settings for the 'hybrid' and 'dynamic' models. The 'hybrid' achieves higher performance with the channel-wise design, while the 'dynamic' performs better with the channel-shared design. All 'hybrid' and 'dynamic' models show improvements against the baselines on all metrics, except the MSE and Grad metrics for the channel-shared 'hybrid' model. The last implementation, where channel-shared 'dynamic' downsampling is paired with upsampling, achieves the best performance (at least 14% relative improvement against the baseline) with a negligible increase of parameters (< 0.5%).
Hence, while the dedicated design of upsampling operators matters, paired downsampling and upsampling seems more important, at least for image matting.

Ablation Study on Upsampling Kernel. Here we investigate the performance of our models with different upsampling kernel sizes. The encoding kernel size (the kernel size of U or V) is set to k_en = 5 in all matting experiments unless stated otherwise. Under this setting, results in Table 3 show that k_up = 3 performs the best. It is interesting to observe that a larger upsampling kernel does not imply better performance. We believe this is related to the encoding kernel size and the way we generate U, V, and P. We use k_up = 3 as our default setting.
Ablation Study on Normalization. In both [8] and [7], different normalization strategies are verified, and experiments show that normalization significantly affects the results. We thus justify the normalization choices in our A²U module here. We conduct the experiments on the channel-wise 'hybrid' model and the channel-shared 'dynamic' model. Two normalization choices are considered: 'softmax' and 'sigmoid+softmax'. It is clear that the latter normalization works better (Table 4). This may boil down to the additional nonlinearity introduced by the sigmoid function.
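A one-line sketch of the two normalization choices compared above; the (B, k·k, H, W) kernel layout is an assumption:

```python
import torch
import torch.nn.functional as F

def normalize_kernels(k, mode='sigmoid+softmax'):
    """Normalize predicted upsampling kernels over the kernel dimension."""
    if mode == 'sigmoid+softmax':
        k = torch.sigmoid(k)   # extra nonlinearity before normalization
    return F.softmax(k, dim=1)
```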
Comparison with State of the Art. Here we compare our models against other state-of-the-art methods on the Composition-1k test set. Results are shown in Table 5. We observe that our models outperform other methods on all the evaluation metrics with the minimum model capacity. Compared with the state-of-the-art method [5], our best model achieves 8% higher performance with only 40% model complexity. Our model is also memory-efficient, being able to infer high-resolution images on a single 1080Ti GPU without downsampling on the Composition-1k test set. Some qualitative results are shown in Fig. 4. Our results show improved detail delineation such as the net structure and the filament.
The alphamatting.com Benchmark
Here we report results on the alphamatting.com online benchmark [32]. We follow [5] to train our model with all the data in the Adobe matting dataset and then test it on the benchmark. As shown in Table 6, our method ranks first w.r.t. the gradient error among all published methods. We also achieve a comparable overall ranking to AdaMatting [19] under the SAD and MSE metrics, suggesting our method is one of the top performing methods on this benchmark.
The Distinctions-646 Dataset
We also evaluate our method on the recent Distinctions-646 test set. In Table 7, we report results of the three models performing best on the Composition-1k dataset and also compare with other benchmarking results provided by [33]. We make two observations: (1) our models show improved performance against the baseline, which further confirms the effectiveness of our A²U; (2) our models outperform other reported benchmarking results by large margins, setting a new state of the art on this dataset.
Visualization of Upsampling Kernels
Here we visualize the learned upsampling kernel in a 'hybrid' model to showcase what is learned by the kernel. Two examples are illustrated in Fig. 5. We observe that, after learning, boundary details are highlighted, while flat regions are weakened.
Conclusion
Considering that affinity is widely exploited in dense prediction, we explore the feasibility of modeling such second-order information in upsampling to build compact models. We implement this idea with a low-rank bilinear formulation, based on a generalized mathematical view of upsampling. We show that, with a negligible increase in parameters, our method A²U can achieve better performance on both image reconstruction and image matting tasks. We also investigate different design choices of A²U. Results on three image matting benchmarks show that A²U achieves significant relative improvements and state-of-the-art results. In particular, compared with the best performing image matting network, our model achieves 8% higher performance on the Composition-1k test set, with only 40% of the model capacity. For future work, we plan to extend A²U to other dense prediction tasks.
C. Qualitative Results
We show additional qualitative results on the alphamatting.com benchmark [32] in Fig. 6. Four top-performing methods are visualized. Since all these methods achieve good performance and their quantitative results on the benchmark are very close, it is difficult to tell an obvious difference in Fig. 6. It is worth noting, however, that our method produces better visual results on detailed structures, such as the gridding of the net and the leaves of the pineapple.
We also show qualitative results on the Distinctions-646 test set [33] in Fig. 7. Since no implementation of other deep methods on this benchmark is publicly available, we only present the results of our baseline and our method here to show the relative improvements. According to Fig. 7, our method produces clearly better predictions on highly transparent objects such as the bubbles.
Figure 6 - Qualitative results on the alphamatting.com test set. The methods in comparison include AdaMatting [19], GCA Matting [5], Context-Aware Matting [18], and our method.
Figure 2 - Kernel generation of A²U. Given a feature map of size C × H × W, an s × s upsampling kernel is generated at each spatial position conditioned on the feature map. The rank d is 1 here.
Figure 3 - Overview of our matting framework. The focus of this work is on the upsampling stages.
Figure 5 - Visualization of the upsampling kernel. The left is the randomly initialized kernel, and the right is the learned kernel.
Figure 7 - Qualitative results on the Distinctions-646 test set. The methods in comparison include the baseline and our method.
Table 1 - Reconstruction results on the MNIST dataset and the Fashion-MNIST dataset. † denotes the holistic index network, ‡ represents the depthwise index network. Both index networks here apply the setting of 'context+linear' for a fair comparison. (Left block: MNIST; right block: Fashion-MNIST.)

Method                | PSNR (↑) | SSIM (↑) | MSE (↓) | MAE (↓) | PSNR (↑) | SSIM (↑) | MSE (↓) | MAE (↓)
Conv/2-Nearest        | 28.54    | 0.9874   | 0.0374  | 0.0148  | 25.58    | 0.9797   | 0.0527  | 0.0269
Conv/2-Bilinear       | 26.12    | 0.9783   | 0.0495  | 0.0205  | 23.68    | 0.9675   | 0.0656  | 0.0343
Conv/2-Deconv [10]    | 31.85    | 0.9942   | 0.0256  | 0.0089  | 27.42    | 0.9870   | 0.0426  | 0.0207
P.S. [9]              | 31.63    | 0.9939   | 0.0262  | 0.0099  | 27.33    | 0.9868   | 0.0431  | 0.0212
MaxPool-MaxUnpool     | 29.91    | 0.9916   | 0.0320  | 0.0133  | 28.31    | 0.9901   | 0.0385  | 0.0218
MaxPool-CARAFE [8]    | 28.72    | 0.9885   | 0.0367  | 0.0131  | 25.17    | 0.9773   | 0.0552  | 0.0266
MaxPool-IndexNet† [6] | 45.51    | 0.9997   | 0.0053  | 0.0024  | 45.83    | 0.9998   | 0.0051  | 0.0033
MaxPool-A²U (Ours)    | 47.63    | 0.9998   | 0.0042  | 0.0020  | 46.41    | 0.9999   | 0.0048  | 0.0031
MaxPool-IndexNet‡ [6] | 47.13    | 0.9997   | 0.0044  | 0.0020  | 44.35    | 0.9998   | 0.0061  | 0.0036
Table 3 - Ablation study of upsampling kernel size on the Composition-1k test set.

Method           | Norm            | SAD   | MSE    | Grad  | Conn
A²U (hybrid-cw)  | softmax         | 35.93 | 0.0092 | 17.13 | 33.87
A²U (hybrid-cw)  | sigmoid+softmax | 34.76 | 0.0088 | 16.39 | 32.29
A²U (dynamic-cs) | softmax         | 36.40 | 0.0100 | 17.67 | 34.33
A²U (dynamic-cs) | sigmoid+softmax | 35.86 | 0.0095 | 17.13 | 33.71

Table 4 - Ablation study of normalization on the Composition-1k test set.
Figure 4 - Qualitative results on the Composition-1k test set. The methods in comparison include Closed-Form Matting [11], KNN Matting [12], Deep Image Matting (DIM) [16], IndexNet Matting [6], GCA Matting [5], our baseline, and our method.

Method               | SAD   | MSE    | Grad  | Conn  | # Params
Closed-Form [11]     | 168.1 | 0.091  | 126.9 | 167.9 | -
KNN Matting [12]     | 175.4 | 0.103  | 124.1 | 176.4 | -
Deep Matting [16]    | 50.4  | 0.014  | 31.0  | 50.8  | > 130.55M
IndexNet Matting [6] | 45.8  | 0.013  | 25.9  | 43.7  | 8.15M
AdaMatting [19]      | 41.7  | 0.010  | 16.8  | -     | -
Context-Aware [18]   | 35.8  | 0.0082 | 17.3  | 33.2  | 107.5M
GCA Matting [5]      | 35.28 | 0.0091 | 16.9  | 32.5  | 25.27M
A²U (hybrid-cw)      | 34.76 | 0.0088 | 16.39 | 32.29 | 8.09M
A²U (dynamic-cs)     | 35.86 | 0.0095 | 17.13 | 33.71 | 8.07M
A²U (dynamic-cs-d)   | 32.15 | 0.0082 | 16.39 | 29.25 | 8.09M

Table 5 - Benchmark results on the Composition-1k test set. The best performance is in boldface.
Table 6 - Gradient errors on the alphamatting.com test set. The top-4 methods are shown. The lowest errors are in boldface.

Method             | SAD    | MSE    | Grad   | Conn
Closed-Form [11]   | 105.73 | 0.023  | 91.76  | 114.55
KNN Matting [12]   | 116.68 | 0.025  | 103.15 | 121.45
Deep Matting [16]  | 47.56  | 0.009  | 43.29  | 55.90
Baseline-Nearest   | 25.03  | 0.0106 | 13.85  | 24.41
A²U (hybrid-cw)    | 24.08  | 0.0104 | 13.53  | 23.59
A²U (dynamic-cs)   | 24.55  | 0.0107 | 14.51  | 23.89
A²U (dynamic-cs-d) | 23.20  | 0.0102 | 12.39  | 22.20

Table 7 - Benchmark results on the Distinctions-646 test set. The best performance is in boldface.
Table 8 - Analysis of the complexity of A²U. 'cw': channel-wise, 'cs': channel-shared.
Appendix

A. Training Details of Image Reconstruction

The image reconstruction experiments are implemented on the MNIST dataset [25] and the Fashion-MNIST dataset [26]. Both include 60,000 training images and 10,000 test images. During training, the input images are resized to 32 × 32, and the ℓ1 loss is used. We use the SGD optimizer with an initial learning rate of 0.01. The learning rate is decreased by ×10 at the 50-th, 70-th, and 85-th epoch, respectively. We update the parameters for 100 epochs in total with a batch size of 100. The evaluation metrics are Peak Signal-to-Noise Ratio (PSNR), Structural SIMilarity (SSIM), Mean Absolute Error (MAE), and root Mean Square Error (MSE).

B. Analysis of Complexity

Here we summarize the model complexity of different implementations of A²U in Table 8. We assume that the encoding kernel size is k × k, the upsampling kernel size is s × s, and the channel number of feature map X is C. Since C is much larger than k and s, A²U generally has the complexity ordering: dynamic-cw > hybrid-cw > static-cw > dynamic-cs > hybrid-cs > static-cs.
References

[1] Sifei Liu, Shalini De Mello, Jinwei Gu, Guangyu Zhong, Ming-Hsuan Yang, and Jan Kautz. Learning affinity via spatial propagation networks. In Advances in Neural Information Processing Systems (NIPS), pages 1520-1530, 2017.
[2] Xinjing Cheng, Peng Wang, and Ruigang Yang. Depth estimation via affinity learned with convolutional spatial propagation network. In Proc. European Conference on Computer Vision (ECCV), pages 103-119, 2018.
[3] Naiyu Gao, Yanhu Shan, Yupei Wang, Xin Zhao, Yinan Yu, Ming Yang, and Kaiqi Huang. SSAP: Single-shot instance segmentation with affinity pyramid. In Proc. IEEE International Conference on Computer Vision (ICCV), pages 642-651, 2019.
[4] Yu Wang, Yi Niu, Peiyong Duan, Jianwei Lin, and Yuanjie Zheng. Deep propagation based image matting. In International Joint Conference on Artificial Intelligence, volume 3, pages 999-1006, 2018.
[5] Yaoyi Li and Hongtao Lu. Natural image matting via guided contextual attention. In Proc. AAAI Conference on Artificial Intelligence, volume 34, pages 11450-11457, 2020.
[6] Hao Lu, Yutong Dai, Chunhua Shen, and Songcen Xu. Indices matter: Learning to index for deep image matting. In Proc. IEEE International Conference on Computer Vision (ICCV), pages 3266-3275, 2019.
[7] Hao Lu, Yutong Dai, Chunhua Shen, and Songcen Xu. Index networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.
[8] Jiaqi Wang, Kai Chen, Rui Xu, Ziwei Liu, Chen Change Loy, and Dahua Lin. CARAFE: Content-aware reassembly of features. In Proc. IEEE International Conference on Computer Vision (ICCV), pages 3007-3016, 2019.
[9] Wenzhe Shi, Jose Caballero, Ferenc Huszár, Johannes Totz, Andrew P. Aitken, Rob Bishop, Daniel Rueckert, and Zehan Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1874-1883, 2016.
[10] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3431-3440, 2015.
[11] Anat Levin, Dani Lischinski, and Yair Weiss. A closed-form solution to natural image matting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(2):228-242, 2007.
[12] Qifeng Chen, Dingzeyu Li, and Chi-Keung Tang. KNN matting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(9):2175-2188, 2013.
[13] Yung-Yu Chuang, Brian Curless, David H. Salesin, and Richard Szeliski. A Bayesian approach to digital matting. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, pages II-II. IEEE, 2001.
[14] Kaiming He, Christoph Rhemann, Carsten Rother, Xiaoou Tang, and Jian Sun. A global sampling method for alpha matting. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2049-2056. IEEE, 2011.
[15] Donghyeon Cho, Yu-Wing Tai, and Inso Kweon. Natural image matting using deep convolutional neural networks. In Proc. European Conference on Computer Vision (ECCV).
[16] Ning Xu, Brian Price, Scott Cohen, and Thomas Huang. Deep image matting. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2970-2979, 2017.
[17] Jingwei Tang, Yagiz Aksoy, Cengiz Oztireli, Markus Gross, and Tunc Ozan Aydin. Learning-based sampling for natural image matting. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3055-3063, 2019.
[18] Qiqi Hou and Feng Liu. Context-aware image matting for simultaneous foreground and alpha estimation. In Proc. IEEE International Conference on Computer Vision (ICCV), pages 4130-4139, 2019.
[19] Shaofan Cai, Xiaoshuai Zhang, Haoqiang Fan, Haibin Huang, Jiangyu Liu, Jiaming Liu, Jiaying Liu, Jue Wang, and Jian Sun. Disentangled image matting. In Proc. IEEE International Conference on Computer Vision (ICCV), pages 8819-8828, 2019.
[20] Sebastian Lutz, Konstantinos Amplianitis, and Aljosa Smolic. AlphaGAN: Generative adversarial networks for natural image matting. In British Machine Vision Conference (BMVC), 2018.
[21] Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(12):2481-2495, 2017.
[22] Jin-Hwa Kim, Kyoung-Woon On, Woosang Lim, Jeonghee Kim, Jung-Woo Ha, and Byoung-Tak Zhang. Hadamard product for low-rank bilinear pooling. arXiv preprint arXiv:1610.04325, 2016.
[23] Chaojian Yu, Xinyi Zhao, Qi Zheng, Peng Zhang, and Xinge You. Hierarchical bilinear pooling for fine-grained visual recognition. In Proc. European Conference on Computer Vision (ECCV), pages 574-589, 2018.
[24] Brandon Yang, Gabriel Bender, Quoc V. Le, and Jiquan Ngiam. CondConv: Conditionally parameterized convolutions for efficient inference. In Advances in Neural Information Processing Systems (NIPS), pages 1307-1318, 2019.
[25] Yann LeCun. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/, 1998.
[26] Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017.
[27] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770-778, 2016.
[28] Samuel Rota Bulò, Lorenzo Porzi, and Peter Kontschieder. In-place activated BatchNorm for memory-optimized training of DNNs. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
[29] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In Proc. International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), pages 234-241. Springer, 2015.
[30] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context. In Proc. European Conference on Computer Vision (ECCV), pages 740-755. Springer, 2014.
[31] Mark Everingham, Luc Van Gool, Christopher K. I. Williams, John Winn, and Andrew Zisserman. The Pascal visual object classes (VOC) challenge. International Journal of Computer Vision, 88(2):303-338, 2010.
[32] Christoph Rhemann, Carsten Rother, Jue Wang, Margrit Gelautz, Pushmeet Kohli, and Pamela Rott. A perceptually motivated online benchmark for image matting. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1826-1833. IEEE, 2009.
[33] Yu Qiao, Yuhao Liu, Xin Yang, Dongsheng Zhou, Mingliang Xu, Qiang Zhang, and Xiaopeng Wei. Attention-guided hierarchical structure aggregation for image matting. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 13676-13685, 2020.
[34] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems (NIPS), pages 8026-8037, 2019.
[35] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. Communications of the ACM, 60(6):84-90, 2017.
[36] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
| []
|
[
"Animating Pictures with Eulerian Motion Fields",
"Animating Pictures with Eulerian Motion Fields"
]
| [
"Aleksander Holynski \nUniversity of Washington\n\n",
"Brian Curless \nUniversity of Washington\n\n",
"Steven M Seitz \nUniversity of Washington\n\n",
"Richard Szeliski \nFacebook\n\n"
]
| [
"University of Washington\n",
"University of Washington\n",
"University of Washington\n",
"Facebook\n"
]
| []
Figure 1: Given a single input image (a), our method estimates an image-aligned motion field (b), and uses it to create a looping video (c). Abstract: In this paper, we demonstrate a fully automatic method for converting a still image into a realistic animated looping video. We target scenes with continuous fluid motion, such as flowing water and billowing smoke. Our method relies on the observation that this type of natural motion can be convincingly reproduced from a static Eulerian motion description, i.e. a single, temporally constant flow field that defines the immediate motion of a particle at a given 2D location. We use an image-to-image translation network to encode motion priors of natural scenes collected from online videos, so that for a new photo, we can synthesize a corresponding motion field. The image is then animated using the generated motion through a deep warping technique: pixels are encoded as deep features, those features are warped via Eulerian motion, and the resulting warped feature maps are decoded as images. In order to produce continuous, seamlessly looping video textures, we propose a novel video looping technique that flows features both forward and backward in time and then blends the results. We demonstrate the effectiveness and robustness of our method by applying it to a large collection of examples including beaches, waterfalls, and flowing rivers.
"https://arxiv.org/pdf/2011.15128v1.pdf"
]
| 227,238,903 | 2011.15128 | a9eb5c59763b5da7a9a0e1c5ef543b2f02630feb |
Animating Pictures with Eulerian Motion Fields
Aleksander Holynski
University of Washington
Brian Curless
University of Washington
Steven M Seitz
University of Washington
Richard Szeliski
Facebook
Animating Pictures with Eulerian Motion Fields
Figure 1: Given a single input image (a), our method estimates an image-aligned motion field (b), and uses it to create a looping video (c).AbstractIn this paper, we demonstrate a fully automatic method for converting a still image into a realistic animated looping video. We target scenes with continuous fluid motion, such as flowing water and billowing smoke. Our method relies on the observation that this type of natural motion can be convincingly reproduced from a static Eulerian motion description, i.e. a single, temporally constant flow field that defines the immediate motion of a particle at a given 2D location. We use an image-to-image translation network to encode motion priors of natural scenes collected from online videos, so that for a new photo, we can synthesize a corresponding motion field. The image is then animated using the generated motion through a deep warping technique: pixels are encoded as deep features, those features are warped via Eulerian motion, and the resulting warped feature maps are decoded as images. In order to produce continuous, seamlessly looping video textures, we propose a novel video looping technique that flows features both forward and backward in time and then blends the results. We demonstrate the effectiveness and robustness of our method by applying it to a large collection of examples including beaches, waterfalls, and flowing rivers.
Introduction
For humans, a picture often contains much more than a collection of pixels. Drawing from our previous observations of the world, we can recognize objects, structure, and even imagine how the scene was moving when the picture was taken. Using these priors, we can often envision the image as if it were animated, with smoke billowing out of a chimney, or waves rippling across a lake. In this paper, we propose a system that learns these same motion priors from videos of real scenes, enabling the synthesis of plausible motions for a novel static image and allowing us to render an animated video of the scene.
General scene motion is highly complex, involving perspective effects, occlusions, and transience. For the purposes of this paper, we restrict our attention to fluid motions, such as smoke, water, and clouds, which are well approximated by Eulerian motion, in particular, particle motion through a static velocity field.
Our proposed method takes as input a single static image and produces a looping video texture. We begin by using an image-to-image translation network [29] to synthesize an Eulerian motion field. This network is trained using pairs of images and motion fields, which are extracted from a large collection of online stock footage videos of natural scenes. Through Euler integration, this motion field defines each source pixel's trajectory through the output video sequence. Given the source pixel positions in a future frame, we render the corresponding frame using a deep warping technique: we use an encoder network to transform the input image into a deep feature map, warp those features using a novel temporally symmetric splatting technique, and use a decoder network to recover the corresponding warped color image. Lastly, in order to ensure our output video loops seamlessly, we apply a novel video looping technique that operates in deep feature space.
Our contributions include (1) a novel motion representation for single-frame textural animation that uses Euler integration to simulate motion, (2) a novel symmetric deep splatting technique for synthesizing realistic warped frames, and (3) a novel technique for seamless video looping of textural motion.
Previous Work
Video Textures
There is a large body of work aimed at producing looping videos, known variously as video textures, cinemagraphs, or live photos. These techniques typically take as input a longer video sequence and, through some analysis of the motion, produce a single seamlessly looping video, or an infinite (yet not obviously looping) video [21]. The term cinemagraph often refers to selective animation of looping clips, where only certain parts of the frame, as chosen by the user, are animated (or de-animated) [2]. Newer approaches [26,34,13,12,17] perform this task fully automatically, determining which regions are easily looped, and which regions contain motions that are large in magnitude or otherwise unsuitable for looping. These approaches have also been extended to operate on specific domains, such as videos of faces [2], urban environments [33], panoramas [1], and continuous particle effects [3,14]. All these methods, however, require a video as input.
Single-image animation
There are also a number of methods aimed at animating still images. Recently, these techniques have gained popularity through commercial applications such as Plotagraph and Pixaloop, which allow users to manually "paint" motion onto an image. In the following, we focus on approaches that perform some of this annotation automatically.
Physical simulation. Instead of manually annotating the direction and magnitude of motion, the motion of certain objects, such as boats rocking on the water, can be physically simulated, as long as each object's identity is known and its extent is precisely defined [5], or automatically identified through class-specific heuristics [9]. Since each object category is modeled independently, these methods do not easily extend to more general scene animation.
Using videos as guidance. Alternatively, motion or appearance information can be transferred from a userprovided reference video, containing either similar scene composition [20], aligned information from a different domain, such as semantic labels [28], or unaligned samples from the same domain [24,4]. Instead of a single userprovided video, a database of homogeneous videos can be used to inherit nearest-neighbor textural motion, assuming a segmentation of the dynamic region is provided [18].
Transformations in latent space. Recent advances in deep learning have enabled realistic, high-resolution image synthesis using generative adverserial networks (GANs). Many of these systems operate by representing images or scenes as a latent feature vector, which is decoded into a synthesized image. By perturbing the latent vector, or performing a randomized walk in the latent feature space, the resulting decoded images remain plausible, while also varying temporally [23,8,10]. These animations can visualize the space of possible appearances, but do not necessarily animate plausible motion.
Instead of a random walk, one can also directly control movement by applying spatial warps to latent features [15]. Still, deciding how to warp the image is non-trivial -to produce a realistic video, the applied transformations must correspond with feasible motion in the scene.
Using learned motion or appearance priors. Deep learning also enables motion synthesis from single-frame inputs [7,27]. Similarly, video prediction methods [35,32,11,31,19] can predict future video frames from a single image, even modelling the inherent multi-modality of predicting the future. These techniques typically predict a set of future frames at once, and thus are limited to either low spatial resolution or few predicted frames.
Most similar to our work, Endo et al. [6] demonstrate high-quality motion and appearance synthesis for animating timelapses from static landscape imagery. In our evaluations, we provide comparisons to this technique, showing that our method more reliably estimates motion for scenes with fluids and animates videos with fewer visible artifacts.
Overview
Given a single static image I_0, we generate a looping video of length N + 1, consisting of frames I_t with t ∈ [0, N]. Our pipeline begins by using an image-to-image translation network to estimate a corresponding motion field M (Section 4), which is used to define the position of each pixel in all future frames. We use this information to animate the image through a deep warping technique (Section 5). Finally, in order to produce seamlessly looping videos, we introduce a technique to ensure that our videos always start and end with the same frame (Section 5.2). Our approach is summarized in Figure 2.
Motion estimation
We begin by describing the motion model and the motion estimation network. Given an image as input, we wish to synthesize plausible motion for the observed scene.

Figure 2: To animate the input image using our estimated motion, we first use a feature encoder network to encode the image as a feature map D_0. This feature map is warped by the displacement fields (using a novel symmetric splatting technique) to produce the corresponding warped feature map D_t. The warped features are provided to the decoder network to create the output video frame I_t.
Prior work accomplishes this task through recurrent prediction of incremental flow fields [6], theoretically enabling generation of an infinite number of future frames at high resolution. In practice, however, recurrent estimation often results in long-term distortion, since predicted motions are dependent on previously generated frames. In contrast, our motion field is only predicted once, given the input image, and thus does not degrade over time. Even though we use a single static motion field to represent the motion of an entire video, we can still model complex motion paths. This is because our motion field M is a static Eulerian flow field, i.e., a 2D map of motion vectors where each pixel's value defines its immediate velocity, which does not change over time. We use M to simulate the motion of a point (particle) from one frame to the next via Euler integration:
$$\hat{x}_{t+1} = \hat{x}_t + M(\hat{x}_t) \quad (1)$$

where $\hat{x}_t$ is the point's (x, y) coordinate in frame t. In other words, treating each pixel as a particle, this motion field is the flow between each frame and its adjacent future frame:

$$M(\hat{x}_t) = F_{t \to t+1}(\hat{x}_t) \quad (2)$$
To synthesize this motion field, we train an image-to-image translation network [29] on color-motion pairs, such that when provided with a new color image I_0, it estimates a plausible motion field M. Given an image, M is only estimated once through an inference call to the network. Once estimated, it can be used to define the source pixel positions in all future frames t by recursively applying:
$$F_{0 \to t}(\hat{x}_0) = F_{0 \to t-1}(\hat{x}_0) + M(\hat{x}_0 + F_{0 \to t-1}(\hat{x}_0)) \quad (3)$$
This results in displacement fields F_{0→t}, which define the trajectory of each source pixel in I_0 across future frames I_t. These displacement fields are then used for warping the input image, as further described in Section 5. Computing F_{0→t} does not incur additional calls to the network; it only uses information from the already-estimated M.
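A minimal PyTorch sketch of this recursion: M is bilinearly sampled at the displaced positions with grid_sample, and the fields F_{0→t} are accumulated per Eq. (3); tensor layouts are assumptions.

```python
import torch
import torch.nn.functional as nnF

def integrate_motion(M, num_frames):
    """Build displacement fields F_{0->t} from a static Eulerian motion
    field M of shape (B, 2, H, W) via repeated Euler steps (Eq. 3)."""
    b, _, h, w = M.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=M.dtype, device=M.device),
        torch.arange(w, dtype=M.dtype, device=M.device), indexing='ij')
    F = torch.zeros_like(M)                        # F_{0->0} = 0
    fields = []
    for _ in range(num_frames):
        gx = 2 * (xs + F[:, 0]) / (w - 1) - 1      # normalize to [-1, 1]
        gy = 2 * (ys + F[:, 1]) / (h - 1) - 1
        grid = torch.stack([gx, gy], dim=-1)       # (B, H, W, 2)
        F = F + nnF.grid_sample(M, grid, mode='bilinear',
                                padding_mode='border', align_corners=True)
        fields.append(F)
    return fields
```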
Note that unlike Endo et al. [6], who predict backward flow fields for warping (i.e., using bilinear backward sampling), we predict the forward motion field, i.e., aligned with the input image. In our evaluations, we show that predicting forward motion results in more reliable motion prediction and sharper motion estimates at object boundaries. As a result, this enables more realistic animation of nearby scenes with partial occlusions, since regions that are moving are more precisely delineated from those that are not.
Animation
Once we have estimated the displacement fields F_{0→t} from the input image to all future frames, we use this information to animate the image. Typically, forward warping, i.e., warping an image with a pixel-aligned displacement field, is accomplished through a process known as splatting. This process involves sampling each pixel in the input image, computing its destination coordinate as its initial position plus displacement, and finally assigning the source pixel's value to the destination coordinate. Warping an image with splatting unfortunately suffers from two significant artifacts: (1) the output is seldom dense; it usually contains holes, which are regions to which no source pixel is displaced, and (2) multiple source pixels may map to the same destination pixel, resulting in loss of detail or aliasing. Additionally, the predicted motion fields may be imperfect, and naively warping the input image can result in boundary artifacts. In the following section, we introduce a deep image warping approach to resolve these issues.
Deep image warping
Given an image I_0 and a displacement field F_{0→t}, we adopt a deep warping technique to realistically warp the input frame and fill unknown regions. Our method consists of three steps: (1) use an encoder network to encode the input image I_0 as a deep feature map D_0, (2) use the estimated displacement field F_{0→t} to splat those features to a future frame, producing D_t, and (3) use a decoder network to convert the warped features to an output color image I_t. For our encoder and decoder networks, we use variants of the architectures proposed in SynSin [30]. More implementation details are provided in Section 6.
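Putting the three steps together, a high-level sketch follows (without the symmetric splatting and looping machinery of Section 5.2). Here motion_net, encoder, and decoder are assumed pretrained modules, integrate_motion is the Euler-integration sketch above, and softmax_splat is sketched after Eq. (4) below.

```python
import torch

def animate(image, motion_net, encoder, decoder, num_frames):
    """Encode the image once, then splat and decode each future frame."""
    with torch.no_grad():
        M = motion_net(image)                      # static motion field
        D0, Z = encoder(image)                     # features + metric Z
        fields = integrate_motion(M, num_frames)   # F_{0->t} for all t
        return [decoder(softmax_splat(D0, F, Z)) for F in fields]
```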
As mentioned in the previous section, unlike backward warping, splatting may result in multiple source pixels mapping to the same destination coordinate. In these cases, it is necessary to decide which value will occupy the pixel in the destination image. For this, we adopt softmax splatting [16], which assigns a per-pixel weighting metric Z to the source image, and uses a softmax to determine the contributions of colliding source pixels in the destination frame:
$$D_t(x') = \frac{\sum_{x \in X} D_0(x) \cdot \exp(Z(x))}{\sum_{x \in X} \exp(Z(x))} \quad (4)$$
where X is the set of pixels which map to destination pixel x'. Our method infers Z automatically as an additional channel of the encoded feature map. The learned metric allows the network to assign importance to certain features over others, and the softmax exponentiation avoids uniform blending, resulting in sharper synthesized frames.

Symmetric Splatting. As feature pixels are warped through repeated integration of our motion field M, we typically observe increasingly large unknown regions (Figure 3), occurring when pixels vacate their original locations and are not replaced by others. This effect is especially prominent at motion "sources", such as the top of a waterfall, where all predicted motion is outgoing. Although our decoder network is intended to fill these holes, it is still desirable to limit the complexity of the spatio-temporal inpainting task, as asking the network to animate an entire waterfall from a small set of distant features is unlikely to produce a compelling and temporally stable video. Our solution to this problem leverages the fact that our motion is textural and fluid, and thus much of the missing textural information in unknown regions can feasibly be borrowed from other parts of the frame that lie along the same motion path. With this intuition in mind, we describe a symmetric splatting technique which uses reversed motion to provide valid textural information for regions which would otherwise be unknown.
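As a concrete reference for Eq. (4), the following sketch implements softmax splatting. For brevity it splats each source pixel to its nearest destination pixel, whereas real implementations splat bilinearly to the four neighboring pixels [16]; shapes are assumptions.

```python
import torch

def softmax_splat(D0, F, Z):
    """Forward-warp features D0 (B, C, H, W) by displacements F
    (B, 2, H, W), resolving collisions with softmax weights exp(Z),
    where Z has shape (B, 1, H, W)."""
    b, c, h, w = D0.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=D0.device),
                            torch.arange(w, device=D0.device), indexing='ij')
    tx = (xs[None] + F[:, 0]).round().long().clamp(0, w - 1)
    ty = (ys[None] + F[:, 1]).round().long().clamp(0, h - 1)
    idx = (ty * w + tx).view(b, 1, -1)             # flat destination index
    wgt = torch.exp(Z).view(b, 1, -1)
    num = torch.zeros(b, c, h * w, device=D0.device).scatter_add_(
        2, idx.expand(-1, c, -1), D0.view(b, c, -1) * wgt)
    den = torch.zeros(b, 1, h * w, device=D0.device).scatter_add_(2, idx, wgt)
    return (num / den.clamp_min(1e-8)).view(b, c, h, w)
```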
So far, the process we have described to generate an animated video involves warping the encoded feature map D_0 by F_{0→t} to produce future feature maps V_f = {D_0, ..., D_N}, which are decoded to produce the output video frames. However, since our motion map M defines the motion between adjacent frames, we could just as easily animate the image by generating a video of the past, i.e., instead of warping D_0 into the future, use −M to compute F_{0→−t}, resulting in warped feature maps V_p = {D_{−N}, ..., D_0}. Decoding this feature video produces an equally plausible animation of the frame, with the main difference being that the large unknown regions in V_p occur at the start of the sequence, as opposed to at the end of the sequence in V_f.
In fact, because the direction of motion has been reversed, the motion sources have been replaced with motion "sinks" and vice versa (Figure 4). This means that the locations of the unknown regions in V_p are also largely complementary to those found in V_f. For instance, if our input image contains a waterfall, V_f will begin with the input feature map D_0, and pixels will gradually flow down the waterfall, eventually accumulating at the bottom and leaving a large unoccupied region at the top. Conversely, V_p will begin with pixels accumulated at the top of the waterfall and a large hole at the bottom, and will end with D_0. We leverage this complementarity by compositing pairs of feature maps (one in the past, one in the future) to produce a feature map which is typically fully dense.
We perform this composition through joint splatting: we splat each pixel of D_0 twice to the same destination frame, once using F_{0→t} and once using F_{0→t−N}. Note that F_{0→t} does not necessarily equal −F_{0→−t}; rather, F_{0→−t} is the result of applying −M recursively through Eq. 3. As before, we use the softmax splatting approach with a network-predicted per-pixel weighting metric to resolve conflicts. This process results in a composite feature map that seldom contains significant holes, enabling generation of longer videos with larger magnitude motion.

Figure 4: These two videos typically contain complementary unknown regions (shown in magenta). Before decoding, we combine the two feature maps via joint splatting. We modulate the contribution of each using splatting weights α_t, such that in the blended composite, the first and last frames are guaranteed to equal D_0, thus ensuring a seamless loop. Note that RGB images are shown for visualization, but these are both deep feature videos.
Looping
In this section, we focus on ensuring that our output videos loop seamlessly. To this end, we first describe a modification to the splatting weights that guarantees that the first and last output video frames will be identical. Then, we describe an approach that enables end-to-end training without requiring a dataset of looping videos.
Prior work [6] produces looping videos through crossfading: an animated, but non-looping, video is generated first, and a crossfade is applied across the two ends of the video to smooth out any jarring frame transitions. This approach can be quite effective in certain cases, but often produces artifacts in the form of double edges and ghosting.
Instead of directly crossfading the animated video, our approach performs the transition in deep feature space, and provides the smoothly transitioning feature maps to the decoder. This allows us to enforce smooth transitions, while still producing images that contain realistic texture, avoiding many of the artifacts of direct crossfading.
Looping weights. Our looping technique relies on the observation that our two warped sequences V_p and V_f each have the input feature map D_0 on opposite ends of the sequence, as illustrated in Figure 4. With this in mind, if we are able to smoothly control the contribution of each, such that the first frame contains only the values in V_f and the last frame contains only the values in V_p, our feature maps (and our decoded images) on opposite ends of the video are guaranteed to be identical, and thus our video is guaranteed to loop seamlessly. As such, we modulate the contribution of each feature map by introducing a temporal scaling coefficient to Eq. (4):
$$D_t(x') = \frac{\sum_{x \in X} \alpha_t(x) \cdot D_0(x) \cdot \exp(Z(x))}{\sum_{x \in X} \alpha_t(x) \cdot \exp(Z(x))} \quad (5)$$

where X is the set of pixels which map to destination pixel x', either by warping forward or backward in time. For a given frame t, we set:

$$\alpha_t(x) = \begin{cases} t/N & x \in V_p \\ 1 - t/N & x \in V_f \end{cases} \quad (6)$$
Although the scaling coefficient α_t is linearly interpolated, the resulting composited feature video is not a linear interpolation of V_p and V_f, since coinciding splatted features from each are typically not from the same input locations, and thus have different values of Z. Since the value of Z is unconstrained and exponentiated, the overall magnitude of our weighting function (α_t(x) · exp(Z(x))) can vary significantly, and thus our composited feature map seldom contains equally blended features. The added coefficient α_t serves as a forcing function to ensure that the composited feature maps D_t are equal to D_0 at t = 0 and t = N, but composited features will transition from V_f to V_p at different rates per pixel, depending on the relative magnitudes of the splatted Z values.
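A sketch of how these weights fold into the joint splat. splat_sums is an assumed refactoring of the softmax_splat sketch above that returns the numerator and denominator of Eq. (4) before the final division, so the forward and backward contributions can be merged under Eq. (5).

```python
import torch

def looping_composite(D0, F_fwd, F_bwd, Z, t, N, splat_sums):
    """Joint symmetric splatting with looping weights (Eqs. 5-6):
    F_fwd = F_{0->t} is weighted by (1 - t/N) and F_bwd = F_{0->t-N}
    by t/N, so the composite equals D0 at both t = 0 and t = N."""
    a = t / N
    num_f, den_f = splat_sums(D0, F_fwd, (1.0 - a) * torch.exp(Z))  # V_f
    num_p, den_p = splat_sums(D0, F_bwd, a * torch.exp(Z))          # V_p
    return (num_f + num_p) / (den_f + den_p).clamp_min(1e-8)
```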
Training on regular videos. Training our deep warping component (i.e., our encoder and decoder networks) to produce looping videos introduces an additional challenge: our training dataset consists of natural non-looping videos. In other words, the looping video we are tasking our networks with generating does not exist, even for our training examples, and thus it is non-trivial to formulate a reconstruction loss for supervision. Therefore, as illustrated in Figure 5, we modify the task for training: instead of warping one frame in two directions, we use two different frames, one from the start of the video clip, I_0^GT, and one from the end, I_N^GT, encoded separately as feature maps. We additionally predict a motion field M from I_0^GT, which is integrated to produce displacement fields F_{0→t} and F_{0→t−N}. The two feature maps, D_0 and D_N, are respectively warped by F_{0→t} and F_{0→t−N} to an intermediate frame t, using our joint splatting technique with the weights defined in Eq. 5. Finally, the composited feature map D_t is decoded to an image I_t, and a loss is computed against the real intermediate frame I_t^GT. At testing time, we perform the same process, except that instead of two input images, we use only one image, warped in both directions. This process effectively trains the network to perform video interpolation; at inference time, the network interpolates between a frame and itself, while strictly enforcing the desired motion by warping the feature maps.

Figure 5: Training: As described in Section 5.1, each frame in our generated looping video is composed of textures from two warped frames. To supervise this process during training, i.e., to have a real frame to compare against, we perform our symmetric splatting using the features from two different frames, I_0 and I_N (instead of I_0 twice, as in inference). We enforce the motion field M to match the motion estimated from the ground-truth video, M^GT, and the output frame I_t to match the real video frame I_t^GT. For both, we use a combination of photometric and discriminative losses.
Implementation Details
In this section, we provide more details about the implementation of our method. First, we provide a summary of the network architectures used for the motion estimation and warping networks. Then, we provide details about our training and inference pipelines. In order to facilitate future work, full training and inference code will be made publicly available at eulerian.cs.washington.edu.
Network architecture. For the feature encoder and decoder networks, we use the architectures proposed in SynSin [30], which have shown compelling results for single-image novel-view synthesis. Since our aim is not to generate new viewpoints, but rather to animate the scene, we replace the reprojection component with the softmax splatting technique proposed in Niklaus et al. [16]. Additionally, we replace the noise-injected batch normalization layer from SynSin with the modulated convolution approach proposed in Karras et al. [10] (to which we also provide a latent noise vector). This modification greatly helps reduce visual artifacts and enables stable discriminator training with smaller batch sizes (a necessity for limited GPU memory). For our motion estimation network, we use the architecture proposed in Pix2PixHD [29].
Training. We focus on natural scenes with fluid textures such as waterfalls, turbulent streams, and flowing waves. For our training data, we collected and processed a set of 1196 unique videos of textural motion from an online stock footage website 3 . We use 1096 for training, 50 for validation, and 50 for testing. To generate ground-truth motion fields, we use a pre-trained optical flow estimator [25] to compute the average optical flow between adjacent frames 3 www.storyblocks.com over a 2-second window. This effectively filters most motion which is cyclic, since pixels with cyclic motion will usually have been observed moving in opposing directions.
We use only training videos from stationary cameras. The motion estimation network is trained using 5 image-motion pairs from each of our 1096 training videos (a total of 5480 pairs) for 35 epochs, using the default parameters from Pix2PixHD [29]. Prior to training, we resize all our images to a standard size of 1280 × 720.
The warping component is trained on 5 short video clips from each of our 1096 training videos. A training triplet (start frame, middle frame, end frame) is selected from each video clip at random during training, further increasing the effective dataset size. We also apply random augmentation to our training examples, including horizontal flips and cropping. We train the network on batches of 8 images of size 256 × 256 for 200 epochs, using a discriminator learning rate of 3.5 × 10^−3 and a generator learning rate of 3.5 × 10^−5. We use the same losses and loss balancing coefficients as in SynSin [30].
As in Niklaus et al. [16], and for the purpose of training stability, we first train our two components separately. We start by training our motion estimation network supervised only by the ground-truth motion. Then, we train our warping component, which consists of the encoder and decoder networks, using the ground-truth motion fields as input. Finally, we fine-tune the two end-to-end, by warping with the predicted motion from the motion estimation network. This final step allows each network to best adapt to the properties and error characteristics of the other. We fine-tune for 20 epochs with discriminator and generator learning rates of 1 × 10^−3 and 1 × 10^−5, respectively.
Inference. Our looping output videos have length N = 200 with a framerate of 30 frames per second. Each sequence is processed in 40 seconds on a Titan Xp GPU.
Results & Evaluation
We first present a quantitative analysis of our method and show comparisons with the state of the art in still-image animation [6] (Section 7.1), as well as ablated variations of our method. Then, we show qualitative results of our method on a diverse collection of input images (Section 7.2). We refer readers to our supplementary video for a full collection of visual results.
Quantitative evaluation
In this section, we present our experiments evaluating the different components of our method, i.e., (1) a novel motion representation, (2) a novel symmetric splatting technique, and (3) a novel looping technique.
Motion representation. We evaluate the effectiveness of our proposed motion representation (integrated Eulerian flow) by comparing our predicted motion to ground-truth pixel positions in future frames of the video. We establish ground-truth motion by densely tracking all pixels through a sequence of 60 frames, using an off-the-shelf optical flow estimator [25]. We report the average Euclidean error between the ground-truth positions and those estimated through our synthesized motion field, i.e., the endpoint error. We compare our proposed method to the following variants: (1) the per-frame recurrent estimation from Endo et al. [6], (2) directly predicting F_{0→N} and linearly interpolating intermediate motion F_{0→t} as (t/N) F_{0→N}, and (3) training our motion network to predict the backward flow field, i.e., M = F_{1→0} (and thus all splatting is replaced by backward warping). The results of this experiment can be found in Figure 7. We see that our method is able to most faithfully reproduce the ground-truth motion for our scenes. Empirically, we observe that the methods employing backward warping produce a majority of errors at motion boundaries, such as occlusions. We hypothesize that these differences arise because the network is more easily able to predict an output that is spatially aligned with the input image.
Since images may often have many plausible motion directions, we ensure that the comparisons performed in this experiment are on video examples that contain unambiguous motion, e.g., waterfalls and rivers. In order to identify these samples, we gave each of 5 users 50 images from different scenes, and asked them to manually annotate the likely motion direction. We retained the scenes for which all the annotations were within 30 degrees of the median motion direction in our ground-truth flow values. Additionally, since we prefer the motion comparison to be agnostic to animation speed, i.e., animating the scene realistically but in slow motion is acceptable, we solve for a per-sequence time-scaling constant that best aligns the motion magnitudes of the predicted and ground-truth displacement fields. This constant is computed for all methods and is used in all our comparisons.
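One simple way to obtain such a per-sequence time-scaling constant is a closed-form least-squares fit, treating a temporal rescaling as an approximately linear rescaling of the displacements. This is a hedged reading of the described procedure, not the authors' exact solver:

import numpy as np

def best_time_scale(pred_disp, gt_disp):
    # Scalar s minimizing ||s * pred - gt||^2 over all pixels and frames;
    # both arrays must have matching shape, e.g., (T, H, W, 2).
    p = pred_disp.reshape(-1).astype(np.float64)
    g = gt_disp.reshape(-1).astype(np.float64)
    return float(p @ g) / max(float(p @ p), 1e-12)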
The motion estimation network from Endo et al. [6] uses a latent code as input, and different latent codes produce different predicted motions. To consider all possible outcomes of their method, we randomly sample 100 latent codes from the training codebook and report statistics on the resulting synthesized motions.

Video synthesis. Second, we evaluate the choice of warping technique. Given the same flow values for a set of testing (unseen) video clips, we evaluate five future-frame synthesis techniques: (1) naïve color splatting, where the feature encoder and decoder are not used, and the color values are warped instead; (2) backward warping, where the forward displacement field is inverted [22] and backward warping is then applied, such that no holes occur during warping; (3) our method without the network-inferred weights, i.e., Z(x) = 1 for all pixels; (4) our method without symmetric splatting; and (5) our full method. We again use temporally-scaled sequences with unambiguous motion to compare each method's synthesized frames with the ground-truth future video frames. We perform this comparison using PSNR, SSIM, and LPIPS [36]. Table 1 shows a quantitative comparison of these techniques, demonstrating that our proposed approach outperforms the alternatives at synthesizing future frames when the same motion is provided. Additionally, in the supplemental material, we show a qualitative comparison of these techniques. Compared to our approach, we observe that standard color splatting results in significant sparsity, i.e., many holes with unknown color. Backward warping instead fills these holes with interpolated (stretched) texture, which in most cases is equally jarring. Feature warping without inferred Z(x) values results in blurred details, since features are more often evenly combined. Removing symmetric splatting results in large unknown regions, which are filled in by the decoder network with blurry and often unrealistic texture.
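The role of the inferred per-pixel weights Z(x) can be illustrated with a minimal softmax-splatting sketch in the spirit of [16]; passing Z=None reproduces the Z(x) = 1 ablation, where colliding contributions are evenly combined. This is a simplified nearest-pixel version with illustrative names, not the exact implementation:

import numpy as np

def splat(feats, disp, Z=None, eps=1e-8):
    # Forward-warp features (H, W, C) by a displacement field disp
    # (H, W, 2), weighting colliding contributions by exp(Z) per source
    # pixel, as in softmax splatting.
    H, W, C = feats.shape
    if Z is None:
        Z = np.zeros((H, W), dtype=np.float32)  # the Z(x) = 1 ablation
    num = np.zeros((H, W, C), dtype=np.float32)
    den = np.zeros((H, W), dtype=np.float32)
    ys, xs = np.mgrid[0:H, 0:W]
    tx = np.round(xs + disp[..., 0]).astype(int)
    ty = np.round(ys + disp[..., 1]).astype(int)
    valid = (tx >= 0) & (tx < W) & (ty >= 0) & (ty < H)
    w = np.exp(Z)[valid]
    np.add.at(num, (ty[valid], tx[valid]), w[:, None] * feats[valid])
    np.add.at(den, (ty[valid], tx[valid]), w)
    holes = den < eps  # unknown regions, left for the decoder to inpaint
    return num / (den[..., None] + eps), holes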
Looping. Finally, we evaluate the choice of our looping technique. We compare four approaches: (1) our synthesis technique followed by the crossfading technique described in Endo et al. [6], (2) the end-to-end pipeline described in Endo et al. [6], (3) our approach without the scaling coefficient αt introduced in Eq. 5, and (4) our proposed approach. Since we do not have a ground-truth looping video for comparison, we instead perform a user study, in which 100 MTurk users are asked to rank the four variants by visual quality. Table 2 shows the results of the user study, which demonstrate that our proposed approach compares favorably against the alternatives. In the supplemental material, we also show a visual comparison of these approaches. For our comparison to Endo et al. [6], we use an implementation provided by the authors, trained on our dataset for the recommended 5000 epochs. Note that we only use the motion estimation component of their method, as our scenes are not timelapses, and therefore do not exhibit significant changes in overall appearance.
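A hedged sketch of the looping construction, built on the integrate_euler and splat helpers above: the input features are warped both forward (motion M) and backward (motion −M), and the two contributions are blended with a time-dependent coefficient. We assume a linear ramp αt = t/N here and blend after splatting for clarity; the actual coefficient is defined in Eq. 5 of the paper, which is not reproduced in this excerpt, and the paper applies it within the splatting weights rather than post-hoc:

def looping_frame(feats, M, t, N, decoder, Z=None):
    # Blend of the future-directed and past-directed warps for loop frame t.
    alpha = t / float(N)                                    # assumed ramp
    fut, _ = splat(feats, integrate_euler(M, t), Z)         # from V_f
    past, _ = splat(feats, integrate_euler(-M, N - t), Z)   # from V_p
    blended = (1.0 - alpha) * fut + alpha * past
    return decoder(blended)   # decoder fills the remaining holes

At t = 0 the blend reduces to the future video (the input frame itself), and at t = N to the past video, so the first and last frames agree and the video loops seamlessly.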
Qualitative evaluation
For evaluation purposes, we demonstrate our system on a large collection of still images. A subset of these images, along with their synthesized motions, can be seen in Figure 6. The dataset contains a variety of natural scenes, including waterfalls, oceans, beaches, rivers, smoke, and clouds. In the supplementary video, we provide a larger set of input images and final rendered animations, as well as intermediate outputs such as synthesized motion fields.
In the results, we can see that the network learns important motion cues, such as perspective (i.e., motion is larger for objects closer to the camera), water turbulence, and detailed flow direction from surface ripples. By comparison, we find that the method of Endo et al. [6] more often produces videos with unrealistic motion or incorrect motion boundaries. Additionally, since our method performs warping in the deep feature domain, instead of explicitly warping RGB pixels, our results do not contain many of the characteristic artifacts of warping, such as shearing or rubber-sheeting. Finally, we observe that our results loop more seamlessly, without obvious crossfading or ghosting.
Conclusion
In this paper, we have presented a method that can synthesize realistic motion from single photographs to produce animated looping videos. We introduced a novel motion representation for single-image textural animation that uses Euler integration. This motion is used to animate the input image through a novel symmetric splatting technique, in which we combine texture from the future and past. Finally, we introduced a novel video looping technique for single-frame textural animation, allowing for seamless loops of our animated videos.
We demonstrated our method on a wide collection of images with fluid motion, and showed that our method is able to produce plausible motion and realistic animations.
Figure 2: Overview: Given an input image I0, our motion estimation network predicts a motion field M. Through Euler integration, M is used to generate future and past displacement fields F0→t and F0→t−N, which define the source pixel locations in all other frames t.
Figure 3: Deep warping: Above: Naïve splatting of RGB pixels results in increasingly large unknown regions over time, shown in magenta. Below: For the same frames, our deep warping approach synthesizes realistic texture in these unknown regions.
Figure 4: Seamless looping: An illustrated example of how seamless loops are created. Two feature videos are created by warping D0. The first, Vf, contains the result of integrating the motion field M, resulting in a video starting with the input image and animating into the future. The second, Vp, instead uses −M, resulting in a video starting in the past and ending with the input frame.
Figure 6: Examples of the input images (top), alongside their corresponding synthesized motion fields (bottom). Full-resolution images, along with their corresponding animated videos, can be found in the supplementary video.
Figure 7: Quantitative evaluation, motion prediction: We evaluate the quality of the predicted motion by comparing the pixel positions in 60 future frames to those in the ground-truth video. We compare our proposed motion representation to three alternative methods, described in Section 7.1. The shaded region shows the range of predictions produced by Endo et al. [6]. We find that our proposed motion representation is able to most reliably reproduce the true motion information for scenes with fluid motion. All comparisons are performed on images of size 1280 × 720.
Table 1: Quantitative evaluation, video synthesis: We evaluate the quality of future frame predictions by comparing 60 synthesized frames with corresponding frames in the ground-truth video. We compare our method to four alternatives, described in Section 7.1. All variants use our proposed motion estimation network.

Method                          ↑ PSNR   ↑ SSIM   ↓ LPIPS
Naïve color splatting             7.90    0.313     0.595
Backward warping                 10.29    0.409     0.483
Ours - Z(x) = 1                  13.88    0.541     0.344
Ours - No symmetric splatting    12.19    0.493     0.418
Ours - Full                      14.63    0.619     0.313
Table 2: User study: We perform a user study to compare four techniques for producing looping videos. We collected 5 unique annotations for each of 100 samples. We direct users to judge the visual quality and realism of each looping video and rank the videos with unique scores S = [0, 3], where 3 is best. We report the cumulative number of annotations above a certain ranking. On average, users rank our method higher than the alternatives.

Method             S ≥ 1   S ≥ 2   S = 3
Endo et al. [6]      348     101       9
Ours - No αt         173     183       0
Ours - Crossfade     470     418      43
Ours - Full          500     472     448
1 https://app.plotaverse.com
2 https://www.pixaloopapp.com
[1] Aseem Agarwala, Ke Colin Zheng, Chris Pal, Maneesh Agrawala, Michael Cohen, Brian Curless, David Salesin, and Richard Szeliski. Panoramic video textures. ACM Transactions on Graphics (TOG), 24(3):821-827, 2005.
[2] Jiamin Bai, Aseem Agarwala, Maneesh Agrawala, and Ravi Ramamoorthi. Selectively de-animating video. ACM Transactions on Graphics (TOG), 31(4):66:1, 2012.
[3] Kiran S Bhat, Steven M Seitz, Jessica K Hodgins, and Pradeep K Khosla. Flow-based video synthesis and editing. ACM Transactions on Graphics (TOG), 23(3):360-363, 2004.
[4] Chia-Chi Cheng, Hung-Yu Chen, and Wei-Chen Chiu. Time flies: Animating a still image with time-lapse video as reference. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5641-5650, 2020.
[5] Yung-Yu Chuang, Dan B Goldman, Ke Colin Zheng, Brian Curless, David H Salesin, and Richard Szeliski. Animating pictures with stochastic motion textures. ACM Transactions on Graphics (TOG), 24(3):853-860, 2005.
[6] Yuki Endo, Yoshihiro Kanamori, and Shigeru Kuriyama. Animating landscape: Self-supervised learning of decoupled motion and appearance for single-image video synthesis. ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH Asia 2019), 38(6):175:1-175:19, 2019.
[7] Ruohan Gao, Bo Xiong, and Kristen Grauman. Im2Flow: Motion hallucination from static images for action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5937-5947, 2018.
[8] Tobias Hinz, Matthew Fisher, Oliver Wang, and Stefan Wermter. Improved techniques for training single-image GANs. arXiv preprint arXiv:2003.11512, 2020.
[9] Wei-Cih Jhou and Wen-Huang Cheng. Animating still landscape photographs through cloud motion creation. IEEE Transactions on Multimedia, 18(1):4-13, 2015.
[10] Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of StyleGAN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8110-8119, 2020.
[11] Yijun Li, Chen Fang, Jimei Yang, Zhaowen Wang, Xin Lu, and Ming-Hsuan Yang. Flow-grounded spatial-temporal video prediction from still images. In Proceedings of the European Conference on Computer Vision (ECCV), pages 600-615, 2018.
[12] Jing Liao, Mark Finch, and Hugues Hoppe. Fast computation of seamless video loops. ACM Transactions on Graphics (TOG), 34(6):197, 2015.
[13] Zicheng Liao, Neel Joshi, and Hugues Hoppe. Automated video looping with progressive dynamism. ACM Transactions on Graphics (TOG), 32(4):77, 2013.
[14] Chih-Yang Lin, Yun-Wen Huang, and Timothy K Shih. Creating waterfall animation on a single image. Multimedia Tools and Applications, 78(6):6637-6653, 2019.
[15] Elizaveta Logacheva, Roman Suvorov, Oleg Khomenko, Anton Mashikhin, and Victor Lempitsky. DeepLandscape: Adversarial modeling of landscape video. arXiv preprint arXiv:2008.09655, 2020.
[16] Simon Niklaus and Feng Liu. Softmax splatting for video frame interpolation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5437-5446, 2020.
[17] Tae-Hyun Oh, Kyungdon Joo, Neel Joshi, Baoyuan Wang, In So Kweon, and Sing Bing Kang. Personalized cinemagraphs using semantic understanding and collaborative learning. In Proceedings of the IEEE International Conference on Computer Vision, pages 5160-5169, 2017.
[18] Makoto Okabe, Ken Anjyo, Takeo Igarashi, and Hans-Peter Seidel. Animating pictures of fluid using video examples. In Computer Graphics Forum, volume 28, pages 677-686. Wiley Online Library, 2009.
[19] Junting Pan, Chengyu Wang, Xu Jia, Jing Shao, Lu Sheng, Junjie Yan, and Xiaogang Wang. Video generation from single semantic label map. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3733-3742, 2019.
[20] Ekta Prashnani, Maneli Noorkami, Daniel Vaquero, and Pradeep Sen. A phase-based approach for animating images using video examples. In Computer Graphics Forum, volume 36, pages 303-311. Wiley Online Library, 2017.
[21] Arno Schödl, Richard Szeliski, David H Salesin, and Irfan Essa. Video textures. In Proceedings of the 27th annual conference on Computer graphics and interactive techniques, pages 489-498. ACM Press/Addison-Wesley Publishing Co., 2000.
[22] Jonathan Shade, Steven Gortler, L. He, and Richard Szeliski. Layered depth images. In Proceedings of the 25th annual conference on Computer graphics and interactive techniques, pages 231-242. ACM Press/Addison-Wesley Publishing Co., 1998.
[23] Tamar Rott Shaham, Tali Dekel, and Tomer Michaeli. SinGAN: Learning a generative model from a single natural image. In Proceedings of the IEEE International Conference on Computer Vision, pages 4570-4580, 2019.
[24] Aliaksandr Siarohin, Stéphane Lathuilière, Sergey Tulyakov, Elisa Ricci, and Nicu Sebe. First order motion model for image animation. In Advances in Neural Information Processing Systems, pages 7137-7147, 2019.
[25] Deqing Sun, Xiaodong Yang, Ming-Yu Liu, and Jan Kautz. PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8934-8943, 2018.
[26] James Tompkin, Fabrizio Pece, Kartic Subr, and Jan Kautz. Towards moment imagery: Automatic cinemagraphs. In 2011 Conference for Visual Media Production, pages 87-93. IEEE, 2011.
[27] Jacob Walker, Abhinav Gupta, and Martial Hebert. Dense optical flow prediction from a static image. In Proceedings of the IEEE International Conference on Computer Vision, pages 2443-2451, 2015.
[28] Ting-Chun Wang, Ming-Yu Liu, Andrew Tao, Guilin Liu, Jan Kautz, and Bryan Catanzaro. Few-shot video-to-video synthesis. arXiv preprint arXiv:1910.12713, 2019.
[29] Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. High-resolution image synthesis and semantic manipulation with conditional GANs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8798-8807, 2018.
[30] Olivia Wiles, Georgia Gkioxari, Richard Szeliski, and Justin Johnson. SynSin: End-to-end view synthesis from a single image. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7467-7477, 2020.
[31] Wei Xiong, Wenhan Luo, Lin Ma, Wei Liu, and Jiebo Luo. Learning to generate time-lapse videos using multi-stage dynamic generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2364-2373, 2018.
[32] Tianfan Xue, Jiajun Wu, Katherine Bouman, and Bill Freeman. Visual dynamics: Probabilistic future frame synthesis via cross convolutional networks. In Advances in Neural Information Processing Systems, pages 91-99, 2016.
[33] Hang Yan, Yebin Liu, and Yasutaka Furukawa. Turning an urban scene video into a cinemagraph. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 394-402, 2017.
[34] Mei-Chen Yeh and Po-Yi Li. An approach to automatic creation of cinemagraphs. In Proceedings of the 20th ACM international conference on Multimedia, pages 1153-1156. ACM, 2012.
[35] Jiangning Zhang, Chao Xu, Liang Liu, Mengmeng Wang, Xia Wu, Yong Liu, and Yunliang Jiang. DTVNet: Dynamic time-lapse video generation via single still image. arXiv preprint arXiv:2008.04776, 2020.
[36] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 586-595, 2018.
| []
|
[
"VISUALTTS: TTS WITH ACCURATE LIP-SPEECH SYNCHRONIZATION FOR AUTOMATIC VOICE OVER",
"VISUALTTS: TTS WITH ACCURATE LIP-SPEECH SYNCHRONIZATION FOR AUTOMATIC VOICE OVER"
]
| [
"Junchen Lu \nDepartment of Electrical and Computer Engineering\nNational University of Singapore\n\n",
"Berrak Sisman \nInformation Systems Technology and Design\nSingapore University of Technology and Design\n\n",
"Rui Liu \nDepartment of Electrical and Computer Engineering\nNational University of Singapore\n\n\nInformation Systems Technology and Design\nSingapore University of Technology and Design\n\n",
"Mingyang Zhang \nDepartment of Electrical and Computer Engineering\nNational University of Singapore\n\n",
"Haizhou Li \nDepartment of Electrical and Computer Engineering\nNational University of Singapore\n\n\nThe Chinese University of Hong Kong\nShenzhenChina\n"
]
| [
"Department of Electrical and Computer Engineering\nNational University of Singapore\n",
"Information Systems Technology and Design\nSingapore University of Technology and Design\n",
"Department of Electrical and Computer Engineering\nNational University of Singapore\n",
"Information Systems Technology and Design\nSingapore University of Technology and Design\n",
"Department of Electrical and Computer Engineering\nNational University of Singapore\n",
"Department of Electrical and Computer Engineering\nNational University of Singapore\n",
"The Chinese University of Hong Kong\nShenzhenChina"
]
| []
| In this paper, we formulate a novel task to synthesize speech in sync with a silent pre-recorded video, denoted as automatic voice over (AVO). Unlike traditional speech synthesis, AVO seeks to generate not only human-sounding speech, but also perfect lip-speech synchronization. A natural solution to AVO is to condition the speech rendering on the temporal progression of lip sequence in the video. We propose a novel text-to-speech model that is conditioned on visual input, named VisualTTS, for accurate lip-speech synchronization. The proposed VisualTTS adopts two novel mechanisms that are 1) textual-visual attention, and 2) visual fusion strategy during acoustic decoding, which both contribute to forming accurate alignment between the input text content and lip motion in input lip sequence. Experimental results show that VisualTTS achieves accurate lip-speech synchronization and outperforms all baseline systems. | 10.1109/icassp43922.2022.9746421 | [
"https://arxiv.org/pdf/2110.03342v3.pdf"
]
| 238,419,122 | 2110.03342 | 23e15581146b79f1c3ac1de8e97da793ff3950eb |
VISUALTTS: TTS WITH ACCURATE LIP-SPEECH SYNCHRONIZATION FOR AUTOMATIC VOICE OVER
Junchen Lu
Department of Electrical and Computer Engineering
National University of Singapore
Berrak Sisman
Information Systems Technology and Design
Singapore University of Technology and Design
Rui Liu
Department of Electrical and Computer Engineering
National University of Singapore
Information Systems Technology and Design
Singapore University of Technology and Design
Mingyang Zhang
Department of Electrical and Computer Engineering
National University of Singapore
Haizhou Li
Department of Electrical and Computer Engineering
National University of Singapore
The Chinese University of Hong Kong
ShenzhenChina
VISUALTTS: TTS WITH ACCURATE LIP-SPEECH SYNCHRONIZATION FOR AUTOMATIC VOICE OVER
Index Terms: Visual text-to-speech, textual-visual attention, lip-speech synchronization, automatic voice over
In this paper, we formulate a novel task to synthesize speech in sync with a silent pre-recorded video, denoted as automatic voice over (AVO). Unlike traditional speech synthesis, AVO seeks to generate not only human-sounding speech, but also perfect lip-speech synchronization. A natural solution to AVO is to condition the speech rendering on the temporal progression of lip sequence in the video. We propose a novel text-to-speech model that is conditioned on visual input, named VisualTTS, for accurate lip-speech synchronization. The proposed VisualTTS adopts two novel mechanisms that are 1) textual-visual attention, and 2) visual fusion strategy during acoustic decoding, which both contribute to forming accurate alignment between the input text content and lip motion in input lip sequence. Experimental results show that VisualTTS achieves accurate lip-speech synchronization and outperforms all baseline systems.
INTRODUCTION
Automatic voice over (AVO) aims to deliver speech that synchronizes with a silent pre-recorded video. An AVO system takes a silent video of a spoken utterance and its text script as input, and automatically generates natural speech that synchronizes with the lip motion, emotional states, and dialogue scenarios in the video. AVO technology will transform the way the movie industry conducts voice over. It will also enable new applications in entertainment, education, and business.
Text-to-speech synthesis (TTS) is the task of synthesizing speech from text input. With the advent of deep learning, end-to-end neural TTS systems are able to produce high-quality synthetic speech. In these techniques, the key idea is to integrate the conventional TTS pipeline into a unified encoder-decoder network and to learn the mapping in the <text, wav> pair [1]. Successful implementations include Tacotron 1/2 [2,3], Transformer TTS [4], FastSpeech 1/2 [5,6] and their variants [1,7,8]. Together with neural vocoders [9,10], they can generate impressively natural-sounding speech.
Motivated by the study of neural TTS, a natural solution to AVO is to build a TTS system that takes the text script as input, conditioned on the temporal progression of lip movement and facial expression. One of the challenges is that humans are sensitive to audio-video mismatch. A minor mismatch may seriously affect the perceived speech quality and intelligibility. A general-purpose TTS does not guarantee such lip-speech synchronization, as no visual information is taken into consideration.
Audio-video synchronization has been exploited in multimodal signal processing, such as multi-modal speech recognition [11], and multi-modal speech separation [12]. For example, Afouras et al. [11] studied the use of Transformer [13] for audio-visual information fusion, which achieves remarkable performance in multi-modal speech recognition. Pan et al. [14] proposed a multi-modal speaker extraction network to introduce lip-speech synchronization cues obtained from lip image sequence as the reference signal for speech extraction from a target speaker.
In this paper, we propose a TTS framework leveraging visual information (VisualTTS) with textual-visual attention and a visual fusion strategy, which can learn the accurate alignment between the text script and the lip motion in an input lip image sequence obtained from a video clip of a spoken utterance. We conduct experiments on the GRID dataset [15]. VisualTTS achieves accurate lip-speech synchronization and outperforms all baseline systems.
The main contributions of this paper include: 1) we formulate the AVO research problem and propose a novel neural model to incorporate visual information into TTS; 2) we propose two novel mechanisms, textual-visual attention and visual fusion strategy, to achieve accurate lip-speech synchronization. To the best of our knowledge, this is the first in-depth study of automatic voice over in speech synthesis.
The rest of the paper is organized as follows: Section 2 presents the related work of this paper; Section 3 elaborates the model architectures; Section 4 describes details of our experiments; Section 5 provides conclusion of this paper.
RELATED WORK
Multi-modal speech synthesis
There have been studies on speech synthesis with multi-modal information, such as image-to-speech [16,17], video-to-speech [18,19] and automatic dubbing [20]. The AVO task is a completely new multi-modal speech synthesis task, which has not been investigated in depth. AVO takes a text script and a silent video clip as input to generate a speech audio that synchronizes with the lip motion and facial expression in the video.
An AVO workflow is illustrated in Fig. 1. It differs from other multi-modal speech synthesis tasks in many ways. To start with, image-to-speech [16,17] seeks to generate caption speech from an image, while video-to-speech [18,19] aims to reconstruct the speech signal from silent video of utterances spoken by people. Both tasks take visual information as the sole input to predict the speech output, while AVO receives both text and video as input. The study on automatic dubbing [20] is essentially to generate speech in one language for a video in another language, where machine translation plays a key role while lip-speech synchronization is not the main focus.
In an AVO task, visual information learning and representation are required to synchronize the synthesized voice with the video input, which will be the focus of this paper.
Visual embedding
Video clips contain important information that can be useful for speech synthesis, such as lip motion, facial expression and emotional states [21,18]. Appropriate rendering of phonetic duration in output speech depends on accurate lip-speech synchronization. As the modeling of lip-speech synchronization is built on the characterization of lip motion and speech signals [14,22], feature representation of lip motion from video is critically important.
Visual embedding has been successfully used in speech research. For the lip-reading task, which is also known as visual speech recognition, the use of visual embedding has been shown to provide useful information by condensing the lip motion information in the video [11,23]. Another example is the audio-visual speech enhancement task, in which Chen et al. [22] proposed to fuse visual embedding extracted in a lip-reading task with audio embedding to provide lip-speech correlation information. Inspired by the recent success of visual embedding, we propose to use visual embedding extracted by a lip-reading network to guide the duration alignment in our VisualTTS for accurate lip-speech synchronization.
VISUALTTS
We formulate the AVO problem and propose a visual TTS solution next. With the motivation of generating speech in accurate synchronization with video, in VisualTTS, we propose a novel textual-visual attention and a visual fusion strategy for leveraging lip-speech synchronization information obtained from lip image sequence.
Overall architecture
As shown in Fig. 2, the overall architecture of VisualTTS consists of visual encoder, textual encoder, speaker encoder, visual-guided aligner, acoustic decoder and WaveNet vocoder [9].
The visual encoder learns the visual embedding α that represents the lip motion information of the given lip image sequence. The textual encoder takes the input script and generates the textual embedding β. The speaker encoder encodes the speaker ID into an utterance-level speaker embedding γ. The textual embedding and visual embedding are then sent to the visual-guided aligner for textual-visual alignment learning. The outputs of the visual-guided aligner are decoded by the acoustic decoder into mel-spectrogram features, which are then converted to an audio waveform using a pre-trained WaveNet vocoder [9,24].
The textual encoder consists of a character embedding layer and a CBHG-LSTM module, which is similar to that of Tacotron [2]. We will introduce the visual encoder, speaker encoder, visual-guided aligner and acoustic decoder in detail next.
Visual encoder
The AVO task takes text and video as input; hence, as a preprocessing step, we obtain the lip image sequence L by cropping the lip region from the frames of the video. We note that each lip image corresponds to one frame of the video. We then use a visual encoder to exploit the visual cues from the lip image sequence, as shown in the left panel of Fig. 2.
The visual encoder consists of a 3D convolutional (Conv3D) layer and a ResNet-18 block [25]. Such an architecture has been shown to be effective in the lip-reading task at learning the lip motion information in the video [26]. The visual encoder takes L as input and outputs the visual embedding α for each frame of the lip image sequence L.
We note that all modules of visual encoder are pre-trained in a lip-reading task, in a similar way that is reported in [25]. In other words, during VisualTTS training, all weights of the visual encoder are fixed.
Speaker encoder
VisualTTS aims to achieve multi-speaker speech synthesis, hence we use a speaker encoder as shown in Fig. 2 to obtain the speaker embedding for a given speaker ID, which is a unique integer assigned to each speaker.
We note that the speaker encoder adopts a lookup table to match d-vector γ obtained by a pre-trained speaker verification model [27].
Visual-guided aligner
The visual-guided aligner consists of a textual-visual attention (TVA) mechanism to align cross-modal information, namely textual and visual information.
Specifically, the output of the visual encoder, the visual embedding α, is passed to TVA as key K_V and value V_V. The textual embedding β is passed to TVA as query Q_T. A multi-head scaled dot-product attention [13] is used for our implementation of TVA. The textual-visual context is given by:

C(Q_T, K_V, V_V) = softmax(Q_T K_V^T / √d_{K_V}) V_V        (1a)
                 = softmax(β α^T / √d_α) α                  (1b)

where d_{K_V} is the dimension of α.
Since the content of speech is determined solely by its corresponding text script, speech can be synchronized with lip motion accurately if the content of speech matches the lip motion information. In this way, TVA captures long-term information for textual-visual dependency and learns the alignment between textual embedding and visual embedding, thus helping to yield speech well aligned with lip motion.
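A minimal PyTorch sketch of TVA as described, using the standard multi-head scaled dot-product attention with textual embeddings as queries and visual embeddings as keys and values. The head count (2) and the 64-dimensional output projection match the hyperparameters stated in the experimental setup below; the module name and toy shapes are illustrative, not the released implementation:

import torch
import torch.nn as nn

class TextualVisualAttention(nn.Module):
    # Textual-visual attention: query = textual embedding (beta),
    # key/value = visual embedding (alpha).
    def __init__(self, text_dim=512, visual_dim=512, num_heads=2, out_dim=64):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=text_dim, kdim=visual_dim,
                                          vdim=visual_dim, num_heads=num_heads,
                                          batch_first=True)
        self.proj = nn.Linear(text_dim, out_dim)

    def forward(self, beta, alpha):
        # beta:  (batch, text_len, text_dim)      textual embedding
        # alpha: (batch, num_frames, visual_dim)  visual embedding
        context, _ = self.attn(query=beta, key=alpha, value=alpha)
        return self.proj(context)  # textual-visual context C

tva = TextualVisualAttention()
beta = torch.randn(1, 24, 512)   # 24 encoded characters
alpha = torch.randn(1, 75, 512)  # 75 lip frames (3 s at 25 fps)
context = tva(beta, alpha)       # shape (1, 24, 64)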
Acoustic decoder
The acoustic decoder consists of a visual fusion layer, and the decoder from Tacotron [2] that consists of an attention-based recurrent neural network (RNN), and a linear layer.
In practice, the length of the mel-spectrogram is a fixed ratio of the length of the visual embedding, since the speech audio and video are temporally synchronized. Each frame of the mel-spectrogram can thus be indexed to its corresponding video frame according to this ratio. In each time step of acoustic decoding, a frame of mel-spectrogram features is concatenated with its corresponding visual embedding by the visual fusion layer. The purpose is to leverage the temporal correlation between visual embedding and mel-spectrogram. The concatenated representation is added to the speaker embedding to form a multi-modal representation [8], which is then projected to a multi-modal hidden sequence as the output of the visual fusion layer. During acoustic decoding, the output of TVA is concatenated with the speaker embedding [8] and passed to the rest of the decoder along with the visual fusion output, and then decoded into the mel-spectrogram features.
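A hedged sketch of this frame indexing and fusion step; we assume the speaker embedding has already been projected to the concatenation dimension, and all names and sizes are illustrative:

import torch
import torch.nn as nn

n_mels, d_v, d_hidden = 80, 512, 256
fusion_proj = nn.Linear(n_mels + d_v, d_hidden)  # visual fusion projection

def fuse_visual(mel_frames, visual_emb, speaker_emb):
    # mel_frames: (T_mel, n_mels); visual_emb: (T_video, d_v);
    # speaker_emb: (n_mels + d_v,), assumed pre-projected to the concat dim.
    T_mel, T_video = mel_frames.size(0), visual_emb.size(0)
    # Index each mel frame to its video frame via the fixed length ratio.
    idx = (torch.arange(T_mel) * T_video) // T_mel
    fused = torch.cat([mel_frames, visual_emb[idx]], dim=-1)
    multimodal = fused + speaker_emb       # broadcast over time steps
    return fusion_proj(multimodal)         # multi-modal hidden sequence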
Note that the acoustic decoder can stop speech generation at the exact moment the synthetic speech reaches the length of the video clip, as the length of the visual embedding indicates the exact utterance duration, thus avoiding the infinite decoding problem in autoregressive speech synthesis.
EXPERIMENTS
We conduct objective and subjective evaluation to assess the performance of VisualTTS for automatic voice over. We note that there are no existing baselines for automatic voice over, so we propose to use two TTS baselines for comparison: Tacotron [2], and a modified Tacotron with visual encoder and TVA, denoted as Tacotron with TVA. We note that unlike VisualTTS, Tacotron with TVA has no visual fusion. All baselines adopt the speaker encoder as described in Sec. 3.1.2 to support multi-speaker speech synthesis.
Datasets and experimental setup
We report performance on the GRID dataset [15], an audio-visual dataset consisting of 33 speakers, each speaking 1000 short English utterances. The training set consists of 900 sentences from 33 speakers, totaling 32,670 utterances. The remaining 100 sentences from each of these 33 speakers are used for the test set. Speech audio is re-sampled at 24 kHz and synchronized with the 25 Hz frame rate videos.
We set the head number of the TVA to 2. The TVA output is projected to 64 dimensions. The dimension of the visual fusion layer output is set to 256. The dimension of the textual embedding is set to 512. The decoder RNN consists of 1 layer of attention RNN with 256-dimensional hidden size, and 2 layers of LSTM with 256-dimensional hidden size and 10% zoneout rate. The acoustic decoder generates an 80-dimensional mel-spectrogram feature, two frames at a time, as output. The visual encoder is pre-trained on the LRS2 and LRS3 datasets [11,28]. The kernel size of the Conv3D is {5, 7, 7}. The visual embedding is a 512-dimensional vector for each frame of the lip image sequence. The speaker embedding is a 256-dimensional d-vector obtained by a d-vector extractor pre-trained on a speaker verification task on the AISHELL-2 [29] corpus. The speaker embedding is projected to 64 dimensions before concatenating with the TVA output. All models use WaveNet [9] pre-trained on the VCTK dataset [30] as the vocoder for waveform generation.
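For reference, the stated hyperparameters can be collected into one configuration object; the values are taken from the text above, while the field names are our own:

from dataclasses import dataclass

@dataclass
class VisualTTSConfig:
    tva_heads: int = 2                 # textual-visual attention heads
    tva_out_dim: int = 64              # TVA output projection
    visual_fusion_dim: int = 256       # visual fusion layer output
    text_emb_dim: int = 512            # textual embedding
    visual_emb_dim: int = 512          # visual embedding per lip frame
    speaker_emb_dim: int = 256         # d-vector size
    speaker_proj_dim: int = 64         # projection before concatenation
    attn_rnn_dim: int = 256            # decoder attention RNN
    lstm_layers: int = 2               # decoder LSTM layers
    lstm_dim: int = 256                # decoder LSTM hidden size
    zoneout: float = 0.1               # decoder LSTM zoneout rate
    n_mels: int = 80                   # mel-spectrogram channels
    frames_per_step: int = 2           # mel frames generated per decoder step
    conv3d_kernel: tuple = (5, 7, 7)   # visual encoder Conv3D kernel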
Objective evaluation
We use Lip Sync Error - Confidence (LSE-C) and Lip Sync Error - Distance (LSE-D) [31] to measure lip-speech synchronization between silent videos from the GRID dataset and the synthesized speech. We note that LSE-D measures the average distance between audio and lip representations obtained from a video of a spoken utterance, while LSE-C is the average confidence score. LSE-C and LSE-D are measured using a pre-trained SyncNet [32]. Lower LSE-D values and higher LSE-C values indicate better lip-speech synchronization.
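As a rough illustration of how such scores can be derived from SyncNet-style embeddings: distances between audio and lip features are evaluated over a range of temporal offsets; the minimum gives the sync distance, and the gap between the median and minimum gives the confidence. This is a hedged reading of the metric with illustrative names; the authoritative definitions are in [31,32]:

import numpy as np

def lse_scores(audio_emb, video_emb, max_offset=15):
    # audio_emb, video_emb: (T, D) embeddings from a pre-trained SyncNet.
    # Returns (lse_d, lse_c) for a single clip.
    dists = []
    for off in range(-max_offset, max_offset + 1):
        a = audio_emb[max(0, off):len(audio_emb) + min(0, off)]
        v = video_emb[max(0, -off):len(video_emb) + min(0, -off)]
        n = min(len(a), len(v))
        dists.append(np.linalg.norm(a[:n] - v[:n], axis=1).mean())
    dists = np.asarray(dists)
    lse_d = float(dists.min())                     # distance at best offset
    lse_c = float(np.median(dists) - dists.min())  # sync confidence
    return lse_d, lse_c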
LSE-C and LSE-D evaluation results are reported in Table 1. To start with, Tacotron with TVA and the proposed VisualTTS both outperform Tacotron in terms of lip-speech synchronization. We note that VisualTTS achieves better synchronization than Tacotron with TVA. These results show that both our visual-guided aligner and visual fusion strategy help to improve lip-speech synchronization.
We use frame disturbance (FD) [33] to measure the duration deviation between synthetic speech and ground-truth speech from the GRID dataset. We note that FD has been used to measure the duration modeling performance of TTS [1]. Furthermore, as the ground-truth speech is synchronized with the video, FD also indicates lip-speech synchronization between synthetic speech and video. VisualTTS achieves remarkable performance and outperforms both baselines with an FD value of 5.92.
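Frame disturbance is derived from a dynamic time warping (DTW) alignment between the synthetic and reference feature sequences. The sketch below assumes one common formulation, measuring how far the optimal warping path deviates from one-to-one alignment; see [33] for the original definition:

import numpy as np

def frame_disturbance(synth, ref):
    # synth: (T1, D), ref: (T2, D) mel-spectrogram sequences.
    T1, T2 = len(synth), len(ref)
    cost = np.linalg.norm(synth[:, None, :] - ref[None, :, :], axis=-1)
    acc = np.full((T1, T2), np.inf)
    acc[0, 0] = cost[0, 0]
    for i in range(T1):             # accumulate DTW costs
        for j in range(T2):
            if i == 0 and j == 0:
                continue
            prev = min(acc[i - 1, j] if i > 0 else np.inf,
                       acc[i, j - 1] if j > 0 else np.inf,
                       acc[i - 1, j - 1] if i > 0 and j > 0 else np.inf)
            acc[i, j] = cost[i, j] + prev
    # Backtrack the optimal path and record |i - j| along it.
    i, j, devs = T1 - 1, T2 - 1, []
    while i > 0 or j > 0:
        devs.append(abs(i - j))
        moves = [(i - 1, j - 1), (i - 1, j), (i, j - 1)]
        i, j = min((m for m in moves if m[0] >= 0 and m[1] >= 0),
                   key=lambda m: acc[m])
    devs.append(0)
    return float(np.mean(devs))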
Subjective evaluation
We further conduct a subjective evaluation to assess the performance of all three frameworks in terms of voice quality and lip-speech synchronization. 12 subjects participate in the listening tests, and each of them listens to 30 speech samples per framework. We use the mean opinion score (MOS) [33] to appraise voice quality. Each listener is asked to rate all speech samples on a five-point scale: higher scores indicate higher naturalness of the speech samples. As shown in Table 1, all three frameworks achieve good voice quality, and their performance is comparable to one another. We note that improving voice quality is not the main focus of VisualTTS; it is a TTS model that aims to achieve accurate lip-speech synchronization given text and video as input.
We also conduct a preference test on lip-speech synchronization. In this experiment, subjects are asked to watch each pair of videos and choose the one with better lip-speech synchronization. We note that we replace the original pre-recorded speech in videos from the test set with synthetic speech samples produced by Tacotron, Tacotron with TVA, and VisualTTS. As shown in Fig. 3, most of the subjects prefer videos with speech utterances synthesized by VisualTTS. These results demonstrate the effectiveness of VisualTTS for generating speech samples that are in better synchronization with the lip motion in videos.
CONCLUSION
In this paper, we propose a new solution for AVO, introducing visual information to TTS for accurate lip-speech synchronization. We show that the proposed VisualTTS has a clear advantage over the baselines in terms of lip-speech synchronization. As future work, we will consider incorporating visual information with non-autoregressive TTS for more accurate lip-speech synchronization and fine-grained duration control with visual information.
Fig. 1. The typical workflow of automatic voice over: an AVO framework takes video and text script as input, and generates speech audio in sync with the video.
Fig. 2. Model architecture of the proposed VisualTTS, which consists of a visual encoder, a textual encoder, a visual-guided aligner and an acoustic decoder. Pre-trained blocks are denoted with a lock.
Table 1. LSE-C, LSE-D, FD and MOS (with 95% confidence intervals) evaluation results.

Method             LSE-C ↑   LSE-D ↓   FD ↓    MOS ↑
Ground Truth         7.67      6.88     NA    4.70±0.03
Tacotron [2]         5.49      8.85    8.92   4.16±0.07
Tacotron with TVA    5.82      8.51    7.66   4.17±0.06
VisualTTS            5.87      8.45    5.92   4.17±0.06

Fig. 3. Preference test result for lip-speech synchronization with 95% confidence intervals.
[1] Rui Liu, Berrak Sisman, Guanglai Gao, and Haizhou Li. Expressive TTS training with frame and style reconstruction loss. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2021.
[2] Yuxuan Wang, RJ Skerry-Ryan, Daisy Stanton, Yonghui Wu, Ron J Weiss, Navdeep Jaitly, Zongheng Yang, Ying Xiao, Zhifeng Chen, Samy Bengio, et al. Tacotron: Towards end-to-end speech synthesis. arXiv preprint arXiv:1703.10135, 2017.
[3] Jonathan Shen, Ruoming Pang, Ron J Weiss, Mike Schuster, Navdeep Jaitly, Zongheng Yang, Zhifeng Chen, Yu Zhang, Yuxuan Wang, RJ Skerry-Ryan, et al. Natural TTS synthesis by conditioning WaveNet on mel spectrogram predictions. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4779-4783. IEEE, 2018.
[4] Naihan Li, Shujie Liu, Yanqing Liu, Sheng Zhao, and Ming Liu. Neural speech synthesis with transformer network. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6706-6713, 2019.
[5] Yi Ren, Yangjun Ruan, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. FastSpeech: Fast, robust and controllable text to speech. In NeurIPS 2019, pages 3171-3180, 2019.
[6] Yi Ren, Chenxu Hu, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. FastSpeech 2: Fast and high-quality end-to-end text to speech. arXiv preprint arXiv:2006.04558, 2020.
[7] Yusuke Yasuda, Xin Wang, Shinji Takaki, and Junichi Yamagishi. Investigation of enhanced Tacotron text-to-speech synthesis systems with self-attention for pitch accent language. In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6905-6909. IEEE, 2019.
[8] Erica Cooper, Cheng-I Lai, Yusuke Yasuda, Fuming Fang, Xin Wang, Nanxin Chen, and Junichi Yamagishi. Zero-shot multi-speaker text-to-speech with state-of-the-art neural speaker embeddings. In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6184-6188. IEEE, 2020.
[9] Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. WaveNet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016.
[10] Nal Kalchbrenner, Erich Elsen, Karen Simonyan, Seb Noury, Norman Casagrande, Edward Lockhart, Florian Stimberg, Aaron Oord, Sander Dieleman, and Koray Kavukcuoglu. Efficient neural audio synthesis. In International Conference on Machine Learning, pages 2410-2419. PMLR, 2018.
[11] Triantafyllos Afouras, Joon Son Chung, Andrew Senior, Oriol Vinyals, and Andrew Zisserman. Deep audio-visual speech recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018.
[12] Ariel Ephrat, Inbar Mosseri, Oran Lang, Tali Dekel, Kevin Wilson, Avinatan Hassidim, William T Freeman, and Michael Rubinstein. Looking to listen at the cocktail party: A speaker-independent audio-visual model for speech separation. arXiv preprint arXiv:1804.03619, 2018.
[13] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008, 2017.
[14] Zexu Pan, Ruijie Tao, Chenglin Xu, and Haizhou Li. MuSE: Multi-modal target speaker extraction with visual cues. In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6678-6682. IEEE, 2021.
[15] Martin Cooke, Jon Barker, Stuart Cunningham, and Xu Shao. An audio-visual corpus for speech perception and automatic speech recognition. The Journal of the Acoustical Society of America, 120(5):2421-2424, 2006.
[16] Wei-Ning Hsu, David Harwath, Christopher Song, and James Glass. Text-free image-to-speech synthesis using learned segmental units. arXiv preprint arXiv:2012.15454, 2020.
[17] Johanes Effendi, Sakriani Sakti, and Satoshi Nakamura. End-to-end image-to-speech generation for untranscribed unknown languages. IEEE Access, 9:55144-55154, 2021.
[18] KR Prajwal, Rudrabha Mukhopadhyay, Vinay P Namboodiri, and CV Jawahar. Learning individual speaking styles for accurate lip to speech synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13796-13805, 2020.
[19] Rodrigo Mira, Konstantinos Vougioukas, Pingchuan Ma, Stavros Petridis, Björn W Schuller, and Maja Pantic. End-to-end video-to-speech synthesis using generative adversarial networks. arXiv preprint arXiv:2104.13332, 2021.
[20] Marcello Federico, Yogesh Virkar, Robert Enyedi, and Roberto Barra-Chicote. Evaluating and optimizing prosodic alignment for automatic dubbing. In INTERSPEECH, pages 1481-1485, 2020.
[21] Zexu Pan, Zhaojie Luo, Jichen Yang, and Haizhou Li. Multi-modal attention for speech emotion recognition. In Proc. Interspeech 2020, pages 364-368, 2020.
[22] Hang Chen, Jun Du, Yu Hu, Li-Rong Dai, Bao-Cai Yin, and Chin-Hui Lee. Correlating subword articulation with lip shapes for embedding aware audio-visual speech enhancement. Neural Networks, 2021.
[23] Yannis M Assael, Brendan Shillingford, Shimon Whiteson, and Nando De Freitas. LipNet: End-to-end sentence-level lipreading. arXiv preprint arXiv:1611.01599, 2016.
[24] Xin Wang, Jaime Lorenzo-Trueba, Shinji Takaki, Lauri Juvela, and Junichi Yamagishi. A comparison of recent waveform generation and acoustic modeling methods for neural-network-based speech synthesis. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4804-4808. IEEE, 2018.
[25] Jian Wu, Yong Xu, Shi-Xiong Zhang, Lian-Wu Chen, Meng Yu, Lei Xie, and Dong Yu. Time domain audio visual speech separation. In 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 667-673. IEEE, 2019.
[26] Triantafyllos Afouras, Joon Son Chung, and Andrew Zisserman. The conversation: Deep audio-visual speech enhancement. arXiv preprint arXiv:1804.04121, 2018.
[27] Ehsan Variani, Xin Lei, Erik McDermott, Ignacio Lopez Moreno, and Javier Gonzalez-Dominguez. Deep neural networks for small footprint text-dependent speaker verification. In 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4052-4056. IEEE, 2014.
[28] Triantafyllos Afouras, Joon Son Chung, and Andrew Zisserman. LRS3-TED: A large-scale dataset for visual speech recognition. arXiv preprint arXiv:1809.00496, 2018.
[29] Jiayu Du, Xingyu Na, Xuechen Liu, and Hui Bu. AISHELL-2: Transforming Mandarin ASR research into industrial scale. arXiv preprint arXiv:1808.10583, 2018.
[30] Christophe Veaux, Junichi Yamagishi, Kirsten MacDonald, et al. CSTR VCTK corpus: English multi-speaker corpus for CSTR voice cloning toolkit. University of Edinburgh, The Centre for Speech Technology Research (CSTR), 2017.
[31] KR Prajwal, Rudrabha Mukhopadhyay, Vinay P Namboodiri, and CV Jawahar. A lip sync expert is all you need for speech to lip generation in the wild. In Proceedings of the 28th ACM International Conference on Multimedia, pages 484-492, 2020.
[32] J. S. Chung and A. Zisserman. Out of time: Automated lip sync in the wild. In Workshop on Multi-view Lip-reading, ACCV, 2016.
[33] Berrak Sisman, Junichi Yamagishi, Simon King, and Haizhou Li. An overview of voice conversion and its challenges: From statistical modeling to deep learning. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:132-157, 2021.
|
[
"LETERME et al., ON THE SHIFT INVARIANCE OF MAX POOLING FEATURE MAPS 1 On the Shift Invariance of Max Pooling Feature Maps in Convolutional Neural Networks",
"LETERME et al., ON THE SHIFT INVARIANCE OF MAX POOLING FEATURE MAPS 1 On the Shift Invariance of Max Pooling Feature Maps in Convolutional Neural Networks"
]
| [
"Hubert Leterme ",
"Kévin Polisano ",
"Valérie Perrier ",
"Karteek Alahari "
]
| []
| []
| In this paper, we aim to improve the mathematical interpretability of convolutional neural networks for image classification. When trained on natural image datasets, such networks tend to learn parameters in the first layer that closely resemble oriented Gabor filters. By leveraging the properties of discrete Gabor-like convolutions, we prove that, under specific conditions, feature maps computed by the subsequent max pooling operator tend to approximate the modulus of complex Gabor-like coefficients, and as such, are stable with respect to certain input shifts. We then compute a probabilistic measure of shift invariance for these layers. More precisely, we show that some filters, depending on their frequency and orientation, are more likely than others to produce stable image representations. We experimentally validate our theory by considering a deterministic feature extractor based on the dual-tree wavelet packet transform, a particular case of discrete Gabor-like decomposition. We demonstrate a strong correlation between shift invariance on the one hand and similarity with complex modulus on the other hand. | 10.48550/arxiv.2209.11740 | [
"https://export.arxiv.org/pdf/2209.11740v1.pdf"
]
| 252,519,649 | 2209.11740 | d9e979679d85c66c8a06ff7e30da227f83472df3 |
On the Shift Invariance of Max Pooling Feature Maps in Convolutional Neural Networks
Hubert Leterme
Kévin Polisano
Valérie Perrier
Karteek Alahari
On the Shift Invariance of Max Pooling Feature Maps in Convolutional Neural Networks
Index Terms-Deep learning, image classification, dual-tree wavelet packet transform, max pooling, shift invariance, feature extractor, subsampling, aliasing
In this paper, we aim to improve the mathematical interpretability of convolutional neural networks for image classification. When trained on natural image datasets, such networks tend to learn parameters in the first layer that closely resemble oriented Gabor filters. By leveraging the properties of discrete Gabor-like convolutions, we prove that, under specific conditions, feature maps computed by the subsequent max pooling operator tend to approximate the modulus of complex Gabor-like coefficients, and as such, are stable with respect to certain input shifts. We then compute a probabilistic measure of shift invariance for these layers. More precisely, we show that some filters, depending on their frequency and orientation, are more likely than others to produce stable image representations. We experimentally validate our theory by considering a deterministic feature extractor based on the dual-tree wavelet packet transform, a particular case of discrete Gabor-like decomposition. We demonstrate a strong correlation between shift invariance on the one hand and similarity with complex modulus on the other hand.
I. INTRODUCTION
Understanding the mathematical properties of deep convolutional neural networks (CNNs) [1] remains a challenging issue today. On the other hand, wavelet and multi-resolution analysis are built upon a well-established mathematical framework. They have proven to be efficient for tasks such as signal compression and denoising [2], and have been widely used as feature extractors for signal, image and texture classification [3]-[6].
There is a broad literature revealing strong connections between these two paradigms, as discussed in section I and section II. Inspired by this line of research, our work extends existing knowledge about CNN properties. In particular, we study some behaviors arising from their discrete nature.
A. Motivation
In many computer vision applications, including classification, input images are transformed through a non-linear operator, generally referred to as a feature extractor [7], [8]. The output feature maps, which contain high-level information, can in turn be fed into deeper feature extractors. Specifically, CNNs contain a sequence of such operators with a large number of trainable parameters, whereas the final classifier generally performs multinomial logistic regression [9], [10].
It is widely assumed that a good feature extractor must retain discriminant image components while decreasing intraclass variability [7], [11]. In particular, information about frequencies and orientations should be captured by the operator [7], [11], [12]. On the other hand, extracted features should be stable with respect to transformations such as small shifts, rotations or deformations [8], [11]- [14].
It has been noted that many CNNs trained on natural image datasets perform some kind of discrete real-valued Gabor transform in their first layer [15], [16]. In other words, images are decomposed through subsampled convolutions using filters with well-defined frequency and orientation. This observation, which is exploited in several papers [17]- [22], reveals the discriminative nature of CNNs' first layer. Whether such a layer can extract stable features is partly addressed in [23], [24]. These papers point out that convolution and pooling layers may greatly diverge from shift invariance, due to aliasing when subsampling. In response, recent work [24]- [27] introduced antialiased convolution and pooling operators. They managed to increase both stability and predictive power of CNNs, despite the resulting loss of information.
In the current paper, we show that, in certain situations, the first max pooling layer can actually reduce aliasing and therefore recover stability. Inspired by Waldspurger's work [28, pp. 190-191], we unveil a connection between the output of this pooling operator and the modulus of complex Gabor-like coefficients, which is known to be nearly shift invariant. As hinted in section VII, this can lead to an alternative solution to improve stability which, unlike the above papers, does not require losing information.
B. Proposed Approach
We first consider an operator computing the modulus of discrete Gabor-like feature maps, defined as subsampled convolutions with nearly analytic and well-oriented complex filters. We show that the output of such a feature extractor, referred to as complex-Gabor-modulus (CGMod), is stable with respect to small input shifts.
Then, we consider an operator which only keeps the real part of the above Gabor-like convolutions and computes their maximum value over a sliding discrete grid. We refer to this as a real-Gabor-max-pooling (RGPool) extractor. We then prove that, under additional conditions on the filter's frequency and orientation, CGMod and RGPool produce comparable outputs. We deduce a measure of shift invariance for RGPool operators, which benefit from the stability of CGMod.
Next, we show that, after training with ImageNet, the feature extractor formed by early layers of popular CNN architectures can approximately be reformulated as a stack of RGPool operators. Our framework therefore provides a theoretical grounding to study these networks.
We apply our theoretical results on the dual-tree complex wavelet packet transform (DT-CWPT), a particular case of discrete Gabor-like decomposition with perfect reconstruction properties [29], [30], possessing characteristics comparable to standard convolution layers. Finally, we verify our predictions on a deterministic setting based on DT-CWPT. Given an input image, we compute the mean discrepancy between the outputs of CGMod and RGPool, for each wavelet packet filter. We then observe that shift invariance, when measured on RGPool feature maps, is nearly achieved if they remain close to CGMod outputs. We therefore establish an invariance validity domain for RGPool operators.
Prior to this work, we presented a preliminary study [31], where we experimentally showed that an operator based on DT-CWPT can mimic the behavior of the first convolution layer with fewer parameters, while keeping the network's predictive power. Our model was solely based on real-valued filters, which are known to be generally unstable [32]. Yet, we observed a limited but genuine form of shift invariance, compared to other models based on the standard, non-analytic wavelet packet transform. At the same time, we became aware of a preliminary work in Waldspurger's PhD thesis [28, pp. 190-191], suggesting a potential connection between the combinations "real wavelet transform + max pooling" on the one hand and "complex wavelet transform + modulus" on the other hand. Following this idea, we decided to study whether invariance properties of complex moduli could somehow be captured by the max pooling operator. As shown in the present paper, Waldspurger's work does not fully extend to discrete and subsampled convolutions. We address this issue by adopting a probabilistic point of view.
II. RELATED WORK
A. Wavelet Scattering Networks
These models, introduced by Bruna and Mallat [11], compute cascading wavelet convolutions followed by non-linear operations. They produce translation-invariant image representations which are stable to deformation and preserve highfrequency information. A variation has been proposed in [33] to improve stability with respect to small rotations. Wavelet scattering networks were later adapted to the discrete framework using the dual-tree complex wavelet transform [34], as well as functions defined on graphs [35].
Such networks, which are totally deterministic aside from the output classifier, achieve good results on small image datasets but do not scale well to more complex ones. According to Oyallon et al. [36], [37], this is partly due to non-geometric sources of variability within classes. Instead, the authors proposed to use scattering coefficients as inputs to a CNN, showing that the network complexity can be reduced while keeping competitive performance. More recent work by Zarka et al. [38] proposed to sparsify wavelet scattering coefficients by learning a dictionary matrix, and managed to outperform AlexNet [10]. This was extended by the same team in [39], where the authors proposed to learn 1×1 convolutions between feature maps of scattering coefficients and to apply soft-thresholding to reduce within-class variability. This model reached the classification accuracy of ResNet-18 on ImageNet.
Other work proposed architectures in which the scattering transform is no longer deterministic. Cotter and Kingsbury [40] built a learnable scattering network. In this model, feature maps of scattering coefficients are mixed together using trainable weights, to account for cross-channel filtering as implemented in CNNs. Their architecture outperformed VGG networks on small image datasets. Recently, Gauthier et al. [41] introduced parametric scattering networks, in which the scale, orientation and aspect ratio of each wavelet filter are adjusted during training. Their approach has proven successful when trained on limited datasets.
All these papers are driven by the purpose of building ad hoc CNN-like feature extractors, implementing well-defined mathematical operators specifically designed to meet a certain number of desired properties. By contrast, our work seeks evidence that such properties, which have been established for wavelet scattering networks, are-to some extent-embedded in existing CNN architectures, with no need to alter their behavior or introduce new features.
B. Invariance Studies in CNNs
Several papers analyze invariance properties in CNN-related feature extractors, including-but not limited to-wavelet scattering networks. Whereas extensive studies related to the original architecture are proposed by Mallat in [42], [43], more recent works tackle the question for various extensions of the model. In [44], [45], scattering networks based on uniform covering frames-i.e., frames splitting the frequency domain into windows of roughly equal size, much like Gabor frames-are studied. Besides, [8] considers a wide variety of feature extractors involving convolutions, Lipschitz-continuous nonlinearities and pooling operators. The paper shows that outputs become more translation invariant with increasing network depth. Finally, [46] shows that certain classes of CNNs are contained into the reproducing kernel Hilbert space (RKHS) of a multilayer convolutional kernel representation. As such, stability metrics are estimated, based on the RKHS norm which is difficult to control in practice.
In these studies, invariance properties are obtained for continuous signals. Whereas real-life CNNs can be mathematically described in the continuous framework, feature maps computed at their hidden and output layers are actually discrete sequences, which can be recovered by sampling the continuous signals. At each convolution and pooling layer, the sampling interval is increased (subsampling), resulting in a loss of information. Unfortunately, this may greatly affect shift invariance, as explained in section I. The current paper specifically addresses this issue.
III. SHIFT INVARIANCE OF OPERATORS
The goal of this section is to theoretically establish conditions for near-shift invariance at the output of the first max pooling layer. We start by proving shift invariance of CGMod operators. Then, we establish conditions under which RGPool and CGMod produce closely related outputs. Finally, we derive a probabilistic measure of shift invariance for RGPool.
A. Notations
The complex conjugate of any number $z \in \mathbb{C}$ is denoted by $z^*$. For any $p \in \mathbb{R}_+^* \cup \{\infty\}$, $x \in \mathbb{R}^2$ and $r \in \mathbb{R}_+$, we denote by $B_p(x, r) \subset \mathbb{R}^2$ the closed $l^p$-ball with center $x$ and radius $r$. When $x = 0$, we write $B_p(r)$.

Continuous Framework: Given $p > 0$ and a measurable subset of $\mathbb{R}$ or $\mathbb{R}^2$ denoted by $E$, we consider $L^p(E)$ as the space of measurable complex-valued functions $f : E \to \mathbb{C}$ such that $\|f\|_{L^p} := \left( \int_E |f(x)|^p \, \mathrm{d}x \right)^{1/p} < +\infty$. Whenever we talk about equality in $L^p(E)$ or inclusion in $E$, it shall be understood as "almost everywhere with respect to the Lebesgue measure". Besides, we denote by $L^2_{\mathbb{R}}(\mathbb{R}^2) \subset L^2(\mathbb{R}^2)$ the subset of real-valued functions. For any $f \in L^2(\mathbb{R}^2)$, $\overline{f}$ denotes its flipped version: $\overline{f}(x) := f(-x)$. The 2D Fourier transform of any $f \in L^2(\mathbb{R}^2)$ is denoted by $\widehat{f} \in L^2(\mathbb{R}^2)$, such that

$$\forall \nu \in \mathbb{R}^2, \quad \widehat{f}(\nu) := \int_{\mathbb{R}^2} f(x)\, e^{-i \langle \nu,\, x \rangle} \, \mathrm{d}^2 x. \tag{1}$$

For any $\varepsilon > 0$ and $\nu \in \mathbb{R}^2$, we denote by $\mathcal{V}_{\nu,\, \varepsilon} \subset L^2(\mathbb{R}^2)$ the set of functions whose Fourier transform is supported in a square region of size $\varepsilon \times \varepsilon$ centered in $\nu$:

$$\mathcal{V}_{\nu,\, \varepsilon} := \left\{ \psi \in L^2(\mathbb{R}^2) \,\middle|\, \operatorname{supp} \widehat{\psi} \subset B_\infty(\nu,\, \varepsilon/2) \right\}. \tag{2}$$

For any $h \in \mathbb{R}^2$, we also consider the translation operator, denoted by $\mathcal{T}_h$, defined by $\mathcal{T}_h f : x \mapsto f(x - h)$.

Discrete Framework: We consider $l^2(\mathbb{Z}^d)$ as the space of $d$-dimensional sequences $X \in \mathbb{C}^{\mathbb{Z}^d}$ such that $\|X\|_2^2 := \sum_{n \in \mathbb{Z}^d} |X[n]|^2 < +\infty$. Indexing is made between square brackets: $\forall X \in l^2(\mathbb{Z}^d),\ \forall n \in \mathbb{Z}^d,\ X[n] \in \mathbb{C}$, and we denote by $l^2_{\mathbb{R}}(\mathbb{Z}^d) \subset l^2(\mathbb{Z}^d)$ the subset of real-valued sequences. For any $X \in l^2(\mathbb{Z}^2)$, $\overline{X}$ denotes its flipped version: $\overline{X}[n] := X[-n]$. The subsampling operator is denoted by $\downarrow$: for any $X \in l^2(\mathbb{Z}^d)$ and any $m \in \mathbb{N}^*$, $(X \downarrow m)[n] := X[mn]$.

2D images, feature maps and convolution kernels are considered as elements of $l^2(\mathbb{Z}^2)$, and are denoted by straight capital letters. Besides, arrays of 2D sequences are denoted by bold straight capital letters, for instance: $\mathbf{X} = (X_k)_{k \in \{0..K-1\}}$. Note that indexing starts at 0 to comply with practical implementations. We will also consider 1D sequences $x \in l^2(\mathbb{Z})$, denoted by straight lower-case letters.

The 2D discrete-time Fourier transform of any $X \in l^2(\mathbb{Z}^2)$ is denoted by $\widehat{X} \in L^2([-\pi, \pi]^2)$, such that

$$\forall \xi \in [-\pi, \pi]^2, \quad \widehat{X}(\xi) := \sum_{n \in \mathbb{Z}^2} X[n]\, e^{-i \langle \xi,\, n \rangle}. \tag{3}$$

For any $\kappa \in \left]0, 2\pi\right]$ and $\xi \in B_\infty(\pi)$, we denote by $\mathcal{G}_{\xi,\, \kappa} \subset l^2(\mathbb{Z}^2)$ the set of 2D sequences whose Fourier transform is supported in a square region of size $\kappa \times \kappa$ centered in $\xi$:

$$\mathcal{G}_{\xi,\, \kappa} := \left\{ W \in l^2(\mathbb{Z}^2) \,\middle|\, \operatorname{supp} \widehat{W} \subset B_\infty(\xi,\, \kappa/2) \right\}. \tag{4}$$
Remark 1: The support B ∞ (ξ, κ/2) actually lives in the quotient space [−π, π] 2 /(2πZ 2 ). Consequently, when ξ is close to an edge, a fraction of this region is located at the far end of the frequency domain. From now on, the choice of ξ and κ is implicitly assumed to avoid such a situation.
B. Intuition
In many CNNs for computer vision, input images are first transformed through subsampled-or strided-convolutions. For instance, in AlexNet, convolution kernels are of size 11×11 and the subsampling factor is equal to 4. Fig. 1 displays the corresponding kernels after training with ImageNet. This linear transform is generally followed by rectified linear unit (ReLU) and max pooling.
We can observe that many kernels display oscillating patterns with well-defined orientations. We denote by $V \in l^2_{\mathbb{R}}(\mathbb{Z}^2)$ one of these "well-behaved" filters. Its Fourier spectrum roughly consists in two bright spots which are symmetric with respect to the origin. (Actually, the Fourier transform of any real-valued sequence is centrally symmetric: $\widehat{V}(-\xi) = \widehat{V}(\xi)^*$. The specificity of well-oriented filters lies in the concentration of their power spectrum around two precise locations.) Now, we consider a complex-valued companion $W \in l^2(\mathbb{Z}^2)$ such that, for any $\xi = (\xi_1, \xi_2) \in [-\pi, \pi]^2$,

$$\widehat{W}(\xi) := \big( 1 + \operatorname{sgn}(\xi_1) \big)\, \widehat{V}(\xi). \tag{5}$$

We can show that $V$ is the real part of $W$, and that $W = V + i\mathcal{H}(V)$, where $\mathcal{H}$ denotes the two-dimensional Hilbert transform as introduced in [47], defined such that $\widehat{\mathcal{H}(V)}(\xi) := -i \operatorname{sgn}(\xi_1)\, \widehat{V}(\xi)$. As a consequence, $\widehat{W}$ is equal to $2\widehat{V}$ on one half of the Fourier domain, and $0$ on the other half. Therefore, only one bright spot remains in the spectrum. It turns out that such complex filters with high frequency resolution produce stable signal representations, as we will see in section III-C. In the subsequent sections, we then wonder whether this property is kept when considering the max pooling of real-valued convolutions.
In what follows, W will be referred to as a discrete Gabor-like filter, and the coefficients resulting from the convolution with W will be referred to as discrete Gabor-like coefficients.
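To make (5) concrete, the following is a minimal NumPy sketch, under our own naming conventions (not from the paper's code), which builds the complex companion W of a real kernel V by zeroing the half of the discrete Fourier domain where $\xi_1 < 0$ and doubling the other half; by construction, Re(W) recovers V up to numerical precision (for odd kernel sizes, which avoid the Nyquist row).

```python
import numpy as np

def complex_companion(v: np.ndarray) -> np.ndarray:
    """Return W such that W_hat(xi) = (1 + sgn(xi_1)) * V_hat(xi), cf. (5)."""
    h, _ = v.shape
    v_hat = np.fft.fft2(v)
    # Frequencies along the first axis, in cycles/sample (sign is all we need).
    xi1 = np.fft.fftfreq(h)
    mask = 1.0 + np.sign(xi1)[:, None]   # 2 where xi1 > 0, 0 where xi1 < 0, 1 on the DC row
    return np.fft.ifft2(v_hat * mask)    # complex-valued Gabor-like companion

# Quick check on a random real 11x11 kernel: the real part of W recovers V.
rng = np.random.default_rng(0)
V = rng.standard_normal((11, 11))
W = complex_companion(V)
assert np.allclose(W.real, V, atol=1e-10)
```

This is the discrete analogue of taking the analytic signal along the first coordinate; the half-plane mask implements the sign function in (5).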
C. Shift Invariance of CGMod Outputs
The aim of this section is to show that the modulus of discrete Gabor-like coefficients-i.e., the output of a CGMod operator such as introduced in section I-B-is nearly shift-invariant (the meaning of shift invariance will be clarified). This result is hinted in [32] but not formally proven.
1) Continuous Framework: We introduce several results regarding functions defined on the continuous space R 2 . Nearshift invariance on discrete 2D sequences will then be derived from these results by taking advantage of sampling theorems. Lemma 1 below is adapted from [28, pp. 190-191].
Lemma 1: Given $\varepsilon > 0$ and $\nu \in \mathbb{R}^2$, let $\psi \in \mathcal{V}_{\nu,\, \varepsilon}$ denote a complex-valued filter such as defined in (2). Now, for any real-valued function $f \in L^2_{\mathbb{R}}(\mathbb{R}^2)$, we consider the complex-valued function $f_0 \in L^2(\mathbb{R}^2)$ defined by

$$f_0 : x \mapsto (f * \psi)(x)\, e^{-i \langle \nu,\, x \rangle}. \tag{6}$$

Then $f_0$ is low-frequency, with $\operatorname{supp} \widehat{f_0} \subset B_\infty(\varepsilon/2)$.

Proof: See Appendix A.
On the other hand, the following proposition provides a shift invariance bound for low-frequency functions such as introduced above.
Proposition 1: For any $f_0 \in L^2(\mathbb{R}^2)$ such that $\operatorname{supp} \widehat{f_0} \subset B_\infty(\varepsilon/2)$, and any $h \in \mathbb{R}^2$ satisfying $\|h\|_1 \leq \pi/\varepsilon$,

$$\|\mathcal{T}_h f_0 - f_0\|_{L^2} \leq \alpha(\varepsilon h)\, \|f_0\|_{L^2}, \tag{7}$$

where we have defined

$$\alpha : \tau \mapsto \frac{\|\tau\|_1}{2}. \tag{8}$$
Proof: See Appendix B.

2) Adaptation to Discrete 2D Sequences: Given $\kappa \in \left]0, 2\pi\right]$ and $\xi \in B_\infty(\pi)$, let $W \in \mathcal{G}_{\xi,\, \kappa}$ denote a discrete Gabor-like filter such as defined in (4). For any image $X \in l^2_{\mathbb{R}}(\mathbb{Z}^2)$ with finite support and any subsampling factor $m \in \mathbb{N}^*$, we express $(X * W) \downarrow m$ using the continuous framework introduced above, and derive an invariance formula.

For any sampling interval $s \in \mathbb{R}_+^*$, let $U_s$ denote the space of 2D functions $g \in L^2(\mathbb{R}^2)$ such that the support of $\widehat{g}$ is included in $B_\infty(\pi/s)$; using the notation introduced in (2), $U_s = \mathcal{V}_{0,\, 2\pi/s}$. We consider the following lemma.
Lemma 2: Let $s > 0$. For any $g \in U_s$ and any $\omega \in B_\infty(\pi/s)$, we have

$$\widehat{g}(\omega) = s\, \widehat{Y}(s\omega), \tag{9}$$

where $Y \in l^2(\mathbb{Z}^2)$ is defined such that $Y[n] := s\, g(sn)$, for any $n \in \mathbb{Z}^2$. Besides, we have the following norm equality:

$$\|g\|_{L^2} = \|Y\|_2. \tag{10}$$

Proof: See Appendix C.
We now consider $\phi^{(s)} \in L^2_{\mathbb{R}}(\mathbb{R}^2)$ such that $\widehat{\phi^{(s)}} := s\, \mathbb{1}_{B_\infty(\pi/s)}$. For any $n \in \mathbb{Z}^2$, we denote by $\phi^{(s)}_n := \mathcal{T}_{sn} \phi^{(s)}$ a shifted version of $\phi^{(s)}$. According to Theorem 3.5 in [48, p. 68], $\{\phi^{(s)}_n\}_{n \in \mathbb{Z}^2}$ is an orthonormal basis of $U_s$. We then get the following proposition, which draws a bond between the discrete and continuous frameworks.

Proposition 2: Let $X \in l^2_{\mathbb{R}}(\mathbb{Z}^2)$ denote an input image with finite support, and $W \in \mathcal{G}_{\xi,\, \kappa}$. Considering a sampling interval $s \in \mathbb{R}_+^*$, we define $f_X \in L^2_{\mathbb{R}}(\mathbb{R}^2)$ and $\psi_W \in L^2(\mathbb{R}^2)$ such that

$$f_X := \sum_{n \in \mathbb{Z}^2} X[n]\, \phi^{(s)}_n \quad \text{and} \quad \psi_W := \sum_{n \in \mathbb{Z}^2} W[n]\, \phi^{(s)}_n. \tag{11}$$

Then, $\psi_W \in \mathcal{V}_{\xi/s,\, \kappa/s}$. Moreover, for all $n \in \mathbb{Z}^2$,

$$X[n] = s\, f_X(sn); \qquad W[n] = s\, \psi_W(sn), \tag{12}$$

and, for a given subsampling factor $m \in \mathbb{N}^*$,

$$\big( (X * W) \downarrow m \big)[n] = (f_X * \psi_W)(msn). \tag{13}$$
Proof: See Appendix D.
Proposition 2 introduces a latent subspace of L 2 R (R 2 ) from which input images are uniformly sampled. This allows us to define, for any u ∈ R 2 , a translation operator T u on discrete sequences, even if u has non-integer values:
$$\mathcal{T}_u X[n] := s\, (\mathcal{T}_{su} f_X)(sn), \tag{14}$$

where $f_X$ is defined in (11). We can indeed show that this definition is independent from the choice of sampling interval $s > 0$. Besides, given $X \in l^2_{\mathbb{R}}(\mathbb{Z}^2)$, we have

$$\forall k \in \mathbb{Z}^2, \quad \mathcal{T}_k X[n] = X[n - k]; \tag{15}$$
$$\forall u, v \in \mathbb{R}^2, \quad \mathcal{T}_u(\mathcal{T}_v X) = \mathcal{T}_{u+v} X, \tag{16}$$

which shows that $\mathcal{T}_u$ corresponds to the intuitive idea of a shift operator. Expressions (15) and (16) are direct consequences of the following lemma, which bonds the shift operator in the discrete and continuous frameworks.

Lemma 3: For any $X \in l^2_{\mathbb{R}}(\mathbb{Z}^2)$ and any $u \in \mathbb{R}^2$,

$$f_{\mathcal{T}_u X} = \mathcal{T}_{su} f_X. \tag{17}$$
Proof: See Appendix E.
We now consider the following corollary to Proposition 2.
Corollary 1: For any shift vector u ∈ R 2 , we have
$$\big( (\mathcal{T}_u X * W) \downarrow m \big)[n] = (\mathcal{T}_{su} f_X * \psi_W)(msn). \tag{18}$$
Proof: Apply (13) in Proposition 2 with X ← T u X, and use Lemma 3 to conclude.
3) Shift Invariance in the Discrete Framework: We consider the following operator, for any $W \in l^2(\mathbb{Z}^2)$:

$$\mathcal{C}_m[W] : X \mapsto \big| (X * W) \downarrow m \big|. \tag{19}$$
When W ∈ G ξ, κ , we refer to this as a CGMod operator. For the sake of concision, in what follows we will write C m instead of C m [W], when no ambiguity is possible.
We are now ready to state the main result about shift invariance of CGMod outputs.
Theorem 1: Let $W \in \mathcal{G}_{\xi,\, \kappa}$ denote a discrete Gabor-like filter and $m \in \mathbb{N}^*$ denote a subsampling factor. If $\kappa \leq 2\pi/m$, then, for any input image $X \in l^2_{\mathbb{R}}(\mathbb{Z}^2)$ with finite support and any translation vector $u \in \mathbb{R}^2$ satisfying $\|u\|_1 \leq \pi/\kappa$,

$$\|\mathcal{C}_m(\mathcal{T}_u X) - \mathcal{C}_m X\|_2 \leq \alpha(\kappa u)\, \|\mathcal{C}_m X\|_2, \tag{20}$$

where $\alpha$ has been defined in (8).
Proof: The proof of this theorem, which involves Lemmas 1-2, Propositions 1-2 and Corollary 1, is provided in Appendix F.
Interestingly, the reference value used in Theorem 1, i.e., $\|\mathcal{C}_m X\|_2$, is fully shift-invariant, as stated in the following proposition.

Proposition 3: Let $W \in \mathcal{G}_{\xi,\, \kappa}$ and $m \in \mathbb{N}^*$. Assuming $\kappa \leq 2\pi/m$, we have, for any $X \in l^2_{\mathbb{R}}(\mathbb{Z}^2)$ and any $u \in \mathbb{R}^2$,

$$\|\mathcal{C}_m(\mathcal{T}_u X)\|_2 = \|\mathcal{C}_m X\|_2. \tag{21}$$
Proof: See Appendix G.
D. From CGMod to RGPool
Since CGMod operators are not found in classical CNN architectures, the above result does not apply straightforwardly. Instead, the first convolution layer contains real-valued kernels, and is generally followed by ReLU and max pooling. As shown in section IV, this process can be described as an operator parameterized by $W \in l^2(\mathbb{Z}^2)$, defined by

$$\mathcal{R}_{m,\, q}[W] : X \mapsto \operatorname{MaxPool}_q\big( (X * \operatorname{Re} W) \downarrow m \big), \tag{22}$$

where $\operatorname{MaxPool}_q$ selects the maximum value over a sliding grid of size $(2q+1) \times (2q+1)$, with a subsampling factor of 2. More formally, for any $Y \in l^2_{\mathbb{R}}(\mathbb{Z}^2)$ and any $n \in \mathbb{Z}^2$,

$$\operatorname{MaxPool}_q(Y)[n] := \max_{\|k\|_\infty \leq q} Y[2n + k]. \tag{23}$$
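As an illustration of (19) and (22)-(23), here is a small self-contained NumPy sketch of the two operators. All function names are ours; as simplifying assumptions, convolutions are done by FFT with circular boundary conditions and max pooling uses edge padding, which differs slightly from the zero-extended sequences of $l^2(\mathbb{Z}^2)$.

```python
import numpy as np

def conv_subsample(x, w, m):
    """(X * W) downsampled by m, via FFT (circular boundaries for brevity)."""
    y = np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(w, s=x.shape))
    return y[::m, ::m]

def cgmod(x, w, m):
    """C_m[W](X) = |(X * W) | m|, cf. (19); W is a complex Gabor-like filter."""
    return np.abs(conv_subsample(x, w, m))

def max_pool(y, q=1):
    """Max over a (2q+1) x (2q+1) sliding grid, subsampled by 2, cf. (23)."""
    h, w = y.shape
    yp = np.pad(y, q, mode="edge")  # simplified boundary handling
    out = np.full((h // 2, w // 2), -np.inf)
    for a in range(2 * q + 1):          # offset k = a - q along each axis
        for b in range(2 * q + 1):
            out = np.maximum(out, yp[a : a + h : 2, b : b + w : 2][: h // 2, : w // 2])
    return out

def rgpool(x, w, m, q=1):
    """R_{m,q}[W](X) = MaxPool_q((X * Re W) | m), cf. (22)."""
    return max_pool(conv_subsample(x, w.real, m).real, q)
```

Note that, as in (31) below, `rgpool(x, w, m)` and `cgmod(x, w, 2 * m)` have the same output resolution, since max pooling performs an additional subsampling by 2.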
As hinted in section III-B, an important number of trained convolution kernels exhibit oscillating patterns with various scales and orientations. In such a case, W ∈ G ξ, κ for a certain value of ξ ∈ [−π, π] 2 and κ ∈ ]0, 2π], and we refer to R m, q [W] as an RGPool operator. For the sake of concision, from now on we write R m, q instead of R m, q [W], when no ambiguity is possible.
In what follows, we show that, under specific conditions on W, RGPool and CGMod operators produce comparable outputs. We then provide a shift invariance bound for RGPool. 1) Continuous Framework: This paragraph, directly adapted from [28, pp. 190-191], provides an intuition about resemblance between RGPool and CGMod in the continuous framework. As will be highlighted later in this section III-D, adaptation to discrete 2D sequences is not straightforward and will require a probabilistic approach.
We consider an input function $f \in L^2_{\mathbb{R}}(\mathbb{R}^2)$ and a band-pass filter $\psi \in \mathcal{V}_{\nu,\, \varepsilon}$. Let us also consider

$$g : (x, h) \mapsto \cos\big( \langle \nu,\, h \rangle - \eta(x) \big), \tag{24}$$

where $\eta$ denotes the phase of $f * \psi$. Lemma 1 introduced low-frequency functions $f_0$, with slow variations. Roughly speaking, since $\operatorname{supp} \widehat{f_0} \subset B_\infty(\varepsilon/2)$, we can define a "minimal wavelength" $\lambda_{f_0} := 2\pi/\varepsilon$. Then,

$$\|h\|_2 \ll \frac{2\pi}{\varepsilon} \implies f_0(x + h) \approx f_0(x), \tag{25}$$

which leads to

$$(f * \operatorname{Re} \psi)(x + h) \approx \big| (f * \psi)(x) \big|\, g(x, h). \tag{26}$$
On the one hand, we consider a continuous equivalent of the CGMod operator $\mathcal{C}_m[W]$ as introduced in (19). Such an operator, denoted by $\mathcal{C}[\psi]$, is defined, for any $f \in L^2_{\mathbb{R}}(\mathbb{R}^2)$, by

$$\mathcal{C}[\psi](f) : x \mapsto \big| (f * \psi)(x) \big|. \tag{27}$$

On the other hand, we consider the continuous counterpart of RGPool as introduced in (22). It is defined as the maximum value of $f * \operatorname{Re} \psi$ over a sliding spatial window of size $r > 0$. This is possible because $f$ and $\operatorname{Re} \psi$ both belong to $L^2_{\mathbb{R}}(\mathbb{R}^2)$, and therefore $f * \operatorname{Re} \psi$ is continuous. Such an operator, denoted by $\mathcal{R}_r[\psi]$, is defined, for any $f \in L^2_{\mathbb{R}}(\mathbb{R}^2)$, by

$$\mathcal{R}_r[\psi](f) : x \mapsto \max_{\|h\|_\infty \leq r} (f * \operatorname{Re} \psi)(x + h). \tag{28}$$

For the sake of concision, the parameter between square brackets is ignored from now on. If $r \ll 2\pi/\varepsilon$, then (26) is valid for any $h \in B_\infty(r)$. Then, using (27) and (28), we get

$$r \ll 2\pi/\varepsilon \implies \mathcal{R}_r f(x) \approx \mathcal{C} f(x) \max_{\|h\|_\infty \leq r} g(x, h). \tag{29}$$

Using the periodicity of $g$, we can show that, if $r \geq \pi/\|\nu\|_2$, then $h \mapsto g(x, h)$ reaches its maximum value ($= 1$) on $B_\infty(r)$. We therefore get

$$\frac{\pi}{\|\nu\|_2} \leq r \ll \frac{2\pi}{\varepsilon} \implies \mathcal{R}_r f(x) \approx \mathcal{C} f(x). \tag{30}$$
An exact quantification of the above approximation remains an open question. In the current paper, it will be provided as a conjecture, for the discrete framework.
2) Adaptation to Discrete 2D Sequences: We consider an input image $X \in l^2_{\mathbb{R}}(\mathbb{Z}^2)$, a subsampling factor $m \in \mathbb{N}^*$ and a grid half-size $q \in \mathbb{N}^*$. We seek a relationship between

$$Y^{\mathrm{pool}} := \mathcal{R}_{m,\, q} X \quad \text{and} \quad Y^{\mathrm{mod}} := \mathcal{C}_{2m} X, \tag{31}$$

where $\mathcal{C}_{2m}$ and $\mathcal{R}_{m,\, q}$ have been defined in (19) and (22), respectively. Note that, since max pooling also performs subsampling, both CGMod and RGPool operators, as defined in (31), have a subsampling factor equal to $2m$.

We now use the sampling results obtained in section III-C. Let $f_X$ and $\psi_W \in U_s$ denote the functions satisfying (11). On the one hand, we apply (13) in Proposition 2 to $Y^{\mathrm{mod}}$. For any $n \in \mathbb{Z}^2$,

$$\mathcal{C}_{2m} X[n] = \mathcal{C} f_X(x_n), \tag{32}$$

where $x_n := 2msn$. On the other hand, we postulate that

$$\mathcal{R}_{m,\, q} X[n] = \mathcal{R}_r f_X(x_n) \tag{33}$$

for a certain value of $r \in \mathbb{R}_+^*$. Then, (30) implies $Y^{\mathrm{mod}} \approx Y^{\mathrm{pool}}$. However, as shown below, (33) is not satisfied. According to (22) and (23), we have

$$\mathcal{R}_{m,\, q} X[n] = \max_{\|k\|_\infty \leq q} \big( (X * \operatorname{Re} W) \downarrow m \big)[2n + k]. \tag{34}$$

Therefore, according to (13) in Proposition 2, we get

$$\mathcal{R}_{m,\, q} X[n] = \max_{\|k\|_\infty \leq q} (f_X * \operatorname{Re} \psi_W)(x_n + h_k), \tag{35}$$

with

$$x_n := 2msn \quad \text{and} \quad h_k := msk. \tag{36}$$

By considering $r_q := ms \left( q + \frac{1}{2} \right)$, we get a variant of (33) in which the maximum is evaluated on a discrete grid of $(2q+1)^2$ elements, instead of the continuous region $B_\infty(r_q)$. As a consequence, (29) is replaced in the discrete framework by

$$q \ll 2\pi/(m\kappa) \implies \mathcal{R}_{m,\, q} X[n] \approx \mathcal{C}_{2m} X[n] \max_{\|k\|_\infty \leq q} g_X(x_n, h_k), \tag{37}$$

with

$$g_X : (x, h) \mapsto \cos\big( \langle \nu,\, h \rangle - \eta_X(x) \big), \tag{38}$$

with $\nu := \xi/s$, and where $\eta_X$ denotes the phase of $f_X * \psi_W$. Unlike the continuous case, even if the window size $r_q$ is large enough, the existence of $k \in \{-q \,..\, q\}^2$ such that $g_X(x_n, h_k) = 1$ is not guaranteed, as illustrated in Fig. 2 with $q = 1$. Instead, we can only seek a probabilistic estimation of the relative quadratic error between $Y^{\mathrm{pool}}$ and $Y^{\mathrm{mod}}$.
Approximation (37) implies

$$q \ll 2\pi/(m\kappa) \implies \|\mathcal{C}_{2m} X - \mathcal{R}_{m,\, q} X\|_2 \approx \|\delta_{m,\, q} X\|_2, \tag{39}$$

where $\delta_{m,\, q} X \in l^2_{\mathbb{R}}(\mathbb{Z}^2)$ is defined such that, for any $n \in \mathbb{Z}^2$,

$$\delta_{m,\, q} X[n] := \mathcal{C}_{2m} X[n] \Big( 1 - \max_{\|k\|_\infty \leq q} g_X(x_n, h_k) \Big). \tag{40}$$

Expression (39) suggests that the ratio between the left and right terms can be bounded by a quantity which only depends on the product $m\kappa$ (subsampling factor × frequency localization) and the grid half-size $q$:

$$\|\mathcal{C}_{2m} X - \mathcal{R}_{m,\, q} X\|_2 \leq (1 + \beta_q(m\kappa))\, \|\delta_{m,\, q} X\|_2, \tag{41}$$

for some function $\beta_q : \mathbb{R}_+ \to \mathbb{R}_+$ to be characterized. This is the goal of the following conjecture.

Conjecture 1: There exists $\beta_q : \mathbb{R}_+ \to \mathbb{R}_+$ satisfying

$$\beta_q(t) = O(t), \tag{42}$$

independent from the characteristic frequency $\xi \in [-\pi, \pi]^2$, such that, for any $X \in l^2_{\mathbb{R}}(\mathbb{Z}^2)$, (41) is satisfied.

We now seek a probabilistic bound for $\|\delta_{m,\, q} X\|_2$. In what follows, for any $z \in \mathbb{C}^*$, we denote by $\angle z \in [0, 2\pi[$ the argument of $z$. We now consider the unit circle $S^1 \subset \mathbb{C}$. For any $z, z' \in S^1$, the angle between $z$ and $z'$ is given by $\angle(z^* z')$. We then denote by $[z, z']_{S^1} \subset S^1$ the arc going from $z$ to $z'$ counterclockwise:

$$[z, z']_{S^1} := \left\{ z'' \in S^1 \,\middle|\, \angle(z^* z'') \leq \angle(z^* z') \right\}. \tag{43}$$
By using the relation $\cos \alpha = \operatorname{Re}(e^{i\alpha})$, (36) and (38) yield, for any $n \in \mathbb{Z}^2$ and any $k \in \{-q \,..\, q\}^2$,

$$g_X(x_n, h_k) = \operatorname{Re}\big( z_X(x_n)^*\, z_k \big), \tag{44}$$

where we have defined

$$z_X(x) := e^{i \eta_X(x)} \quad \text{and} \quad z_k := e^{i \langle \nu,\, h_k \rangle} = e^{i m \langle \xi,\, k \rangle}. \tag{45}$$

Let us denote $N_q := (2q+1)^2$. We consider a sequence, denoted by $\big( z^{(q)}_i \big)_{i \in \{0..N_q-1\}}$, obtained by ordering $\{z_k\}_{k \in \{-q..q\}^2}$ in ascending order of their arguments:

$$0 = \theta^{(q)}_0 \leq \cdots \leq \theta^{(q)}_{N_q - 1} < 2\pi, \tag{46}$$

where we have denoted $\theta^{(q)}_i := \angle z^{(q)}_i$. Besides, we extend the notations with $\theta^{(q)}_{N_q} := 2\pi$ and $z^{(q)}_{N_q} := z^{(q)}_0$. Then, we split $S^1$ into $N_q$ arcs delimited by $\big( z^{(q)}_i \big)_{i \in \{0..N_q-1\}}$:

$$A^{(q)}_i := \begin{cases} \big[ z^{(q)}_i,\, z^{(q)}_{i+1} \big]_{S^1} & \text{if } \theta^{(q)}_{i+1} - \theta^{(q)}_i < 2\pi; \\ S^1 & \text{otherwise.} \end{cases} \tag{47}$$

Finally, for any $i \in \{0 \,..\, N_q - 1\}$, we denote by $\omega^{(q)}_i := \theta^{(q)}_{i+1} - \theta^{(q)}_i$ the angular measure of arc $A^{(q)}_i$.

Remark 2: According to (45), the above quantities depend on the product $m \times \xi \in \mathbb{R}^2$. Therefore, we will sometimes write $\omega^{(q)}_i(m\xi)$, where $\omega^{(q)}_i$ is defined as a function of $\mathbb{R}^2$:

$$\omega^{(q)}_i : \mathbb{R}^2 \to [0, 2\pi]. \tag{48}$$
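The construction (45)-(48) is easy to compute in practice. The following NumPy sketch, with hypothetical names of our own, builds the points $z_k = e^{im\langle \xi, k \rangle}$ for $\|k\|_\infty \leq q$ and returns the arc measures $\omega^{(q)}_i$ for a given value of $m\xi$.

```python
import numpy as np

def arc_measures(m_xi, q=1):
    """Angular measures of the arcs delimited by z_k = exp(i * m<xi, k>), cf. (45)-(48)."""
    ks = [(k1, k2) for k1 in range(-q, q + 1) for k2 in range(-q, q + 1)]
    # Arguments of the z_k, reduced to [0, 2*pi[ and sorted; k = (0, 0) gives theta_0 = 0.
    angles = np.sort(np.mod([np.dot(m_xi, k) for k in ks], 2 * np.pi))
    # omega_i = theta_{i+1} - theta_i, with theta_{N_q} := 2*pi, cf. (46)-(47).
    return np.diff(np.concatenate([angles, [2 * np.pi]]))

omega = arc_measures(m_xi=np.array([np.pi / 3, np.pi / 5]), q=1)
print(omega.sum())  # always 2*pi: the arcs tile the unit circle
```

A balanced repartition of the $z_k$ (all $\omega^{(q)}_i$ small) corresponds to the favorable case of Fig. 2; pathological frequencies produce a few large arcs.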
2) Random Variables: From now on, input $X$ is considered as a discrete 2D stochastic process. In order to "randomize" $f_X$ introduced in (11), we define a continuous stochastic process from $X$, denoted by $F_X$, such that

$$\forall x \in \mathbb{R}^2, \quad F_X(x) := \sum_{n \in \mathbb{Z}^2} X[n]\, \phi^{(s)}_n(x). \tag{49}$$

Now, we consider the following stochastic processes, which are parameterized by $X$:

$$M_X := |F_X * \psi|; \quad H_X := \angle(F_X * \psi); \quad Z_X := e^{i H_X}, \tag{50}$$

and, for any $k \in \{-q \,..\, q\}^2$,

$$G_{X,\, k} := \operatorname{Re}(Z_X^*\, z_k); \quad G_{\max X} := \max_{\|k\|_\infty \leq q} G_{X,\, k}. \tag{51}$$

For any $x \in \mathbb{R}^2$, $f_X(x)$ and $\eta_X(x)$, such as introduced in (11) and (38), are respectively drawn from $F_X(x)$ and $H_X(x)$. Then, $z_X(x)$ such as introduced in (45) is a realization of $Z_X(x)$. Consequently, according to (44), $g_X(x, h_k)$ is a realization of $G_{X,\, k}(x)$. Besides, according to the definition of CGMod in (19) and $x_n$ in (36), Proposition 2 implies that

$$M_X(x_n) = \mathcal{C}_{2m} X[n]. \tag{52}$$
We remind that $\xi \in [-\pi, \pi]^2$ and $\kappa \in \left]0, 2\pi\right]$ respectively denote the center and size of the Fourier support of $W$, as introduced in section III-C. To compute the expected discrepancy between $Y^{\mathrm{pool}}$ and $Y^{\mathrm{mod}}$, we assume that

$$\|\xi\|_2 \gg 2\pi/M; \tag{53}$$
$$\|\xi\|_2 \gg \kappa, \tag{54}$$

where $M \in \mathbb{N}^*$ denotes the support size of input images. These assumptions exclude low-frequency filters from the scope of our study. We then state the following hypotheses, for which a justification is provided in Appendix H.

Hypothesis 1: For any $x \in \mathbb{R}^2$, $Z_X(x)$ is uniformly distributed on $S^1$.
Hypothesis 2: For any n ∈ N * and x, y 1 , . . . , y n ∈ R 2 , the random variables M X (y i ) for i ∈ {1 . . n} are jointly independent of Z X (x).
F. Expected Quadratic Error between RGPool and CGMod
In this section, we propose to estimate the expected value of the stochastic quadratic error $P_X^2$, defined such that

$$P_X := \|\mathcal{C}_{2m} X - \mathcal{R}_{m,\, q} X\|_2 \,/\, \|\mathcal{C}_{2m} X\|_2. \tag{55}$$

According to (31), this is an estimation of the relative error between $Y^{\mathrm{mod}}$ and $Y^{\mathrm{pool}}$. First, let us reformulate $\delta_{m,\, q} X$, introduced in (40), using the probabilistic framework. According to (44) and (51), we have, for any $n \in \mathbb{Z}^2$,

$$\delta_{m,\, q} X[n] := \mathcal{C}_{2m} X[n]\, \big( 1 - G_{\max X}(x_n) \big). \tag{56}$$

We now consider the stochastic process $Q_X := 1 - G_{\max X}$, and the random variable

$$Q_X := \|\delta_{m,\, q} X\|_2 \,/\, \|\mathcal{C}_{2m} X\|_2. \tag{57}$$
The next steps are as follows: 1) at the pixel level, show that $\mathbb{E}[Q_X(x)^2]$ depends on the filter frequency $\xi$, and remains close to zero with some exceptions; 2) at the image level, show that the expected value of $Q_X^2$ is equal to the latter quantity; 3) use Conjecture 1, which states that $P_X \approx Q_X$, to deduce an upper bound on the expected value of $P_X^2$. The first point is established in Proposition 4 below, and the two remaining ones are the purpose of Theorem 2.

Proposition 4: Assuming Hypothesis 1, the expected value of $Q_X(x)^2$ is independent from the choice of $x \in \mathbb{R}^2$, and

$$\mathbb{E}\big[ Q_X(x)^2 \big] = \gamma_q(m\xi)^2, \tag{58}$$

where we have defined

$$\gamma_q : \zeta \mapsto \left( \frac{3}{2} + \frac{1}{4\pi} \sum_{i=0}^{N_q - 1} \left[ \sin \omega^{(q)}_i(\zeta) - 8 \sin\!\left( \frac{\omega^{(q)}_i(\zeta)}{2} \right) \right] \right)^{1/2}, \tag{59}$$

with $\omega^{(q)}_i : \mathbb{R}^2 \to [0, 2\pi]$ being introduced in (48).

Proof: See Appendix J.

We consider an ideal scenario where the $z_k$ are evenly spaced on $S^1$. Then, an order-2 Taylor expansion yields $\gamma_q(\zeta) = o(1/q^2)$, meaning that $Q_X$ quickly vanishes when the grid half-size $q$ increases. Fig. 4 displays $\xi \mapsto \gamma_q(m\xi)^2$ for $\xi \in [-\pi, \pi]^2$, with $m = 4$ and $q = 1$ as in AlexNet.
We notice that, for the major part of the Fourier domain, γ q remains close to 0. However, we observe a regular pattern of dark regions, which correspond to pathological frequencies where the repartition of z k is unbalanced. So far, we established a result at the pixel level. Before stating Theorem 2, we need the following intermediate result.
Proposition 5: We consider the random variable

$$S_X := \|\mathcal{C}_{2m} X\|_2. \tag{60}$$

Under Hypothesis 2, for any $x \in \mathbb{R}^2$,
• $Z_X(x)$ is independent of $S_X$;
• $Z_X(x)$, $M_X(x)$ are conditionally independent given $S_X$.

Proof: See Appendix K.

Finally, Propositions 4 and 5 yield the following theorem. It provides an upper bound on the expected value of the relative quadratic error $P_X^2$, such as defined in (55).

Theorem 2: We assume that Conjecture 1 is true. Then, under Hypotheses 1 and 2, we have

$$\mathbb{E}\big[ P_X^2 \big] \leq (1 + \beta_q(m\kappa))^2\, \gamma_q(m\xi)^2, \tag{61}$$

with $\beta_q$ and $\gamma_q$ being introduced in (42) and (59), respectively.

Proof: See Appendix L.
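As a sanity check on the bound of Theorem 2, the following self-contained NumPy sketch evaluates our reconstruction of $\gamma_q^2$ from (59) on a grid of characteristic frequencies; this can be used to reproduce a map in the spirit of Fig. 4 (plotting omitted; all names are ours). At a fully pathological frequency such as $m\xi = 0$, where all $z_k$ coincide, it returns $3/2$, consistent with a uniformly distributed phase.

```python
import numpy as np

def gamma_q_squared(m_xi, q=1):
    """gamma_q(m*xi)^2, cf. (59), computed from the arc measures omega_i^{(q)}."""
    ks = [(k1, k2) for k1 in range(-q, q + 1) for k2 in range(-q, q + 1)]
    angles = np.sort(np.mod([np.dot(m_xi, k) for k in ks], 2 * np.pi))
    omega = np.diff(np.concatenate([angles, [2 * np.pi]]))
    return 1.5 + (np.sin(omega) - 8 * np.sin(omega / 2)).sum() / (4 * np.pi)

m = 4  # AlexNet-like subsampling factor
xs = np.linspace(-np.pi, np.pi, 128)
gmap = np.array([[gamma_q_squared(m * np.array([x1, x2])) for x2 in xs] for x1 in xs])
print(gmap.min(), gmap.max())  # close to 0 except near pathological frequencies
```

When the arcs are evenly spaced ($\omega_i = 2\pi/N_q$), a Taylor expansion of the summand makes the expression collapse to $(\pi/N_q)^4/20$, which matches the $o(1/q^2)$ decay stated after Proposition 4.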
G. Shift Invariance of RGPool Outputs
In this section, we present the main theoretical claim of this paper. Based on the previous results, we provide a probabilistic measure of shift invariance for RGPool operators.
Theorem 3: Let $W \in \mathcal{G}_{\xi,\, \kappa}$ denote a discrete Gabor-like filter, $m \in \mathbb{N}^*$ a subsampling factor and $q \in \mathbb{N}^*$ a grid half-size. We consider a stochastic process $X$ whose realizations are elements of $l^2_{\mathbb{R}}(\mathbb{Z}^2)$. We assume that Hypotheses 1 & 2 are satisfied (we can easily prove that these properties are independent from the choice of sampling interval $s > 0$). Besides, we assume Conjecture 1.

Given a translation vector $u \in \mathbb{R}^2$, we consider the following random variable:

$$R_{X,\, u} := \|\mathcal{R}_{m,\, q}(\mathcal{T}_u X) - \mathcal{R}_{m,\, q} X\|_2 \,/\, \|\mathcal{C}_{2m} X\|_2. \tag{62}$$

Then, under the following conditions:

$$\kappa \leq \pi/m \quad \text{and} \quad \|u\|_1 \leq \pi/\kappa, \tag{63}$$

we have

$$\mathbb{E}\big[ R_{X,\, u} \big] \leq 2\, (1 + \beta_q(m\kappa))\, \gamma_q(m\xi) + \alpha(\kappa u), \tag{64}$$

where $\alpha$, $\beta_q$ and $\gamma_q$ are defined in (8), (42) and (59).
Proof: See Appendix M.
If κ is sufficiently small, then α(κu) and β q (mκ) become negligible with respect to γ q (mξ), and the bound given in (64) is roughly equal to 2 γ q (mξ). Theorem 3 therefore provides a validity domain for shift invariance of RGPool operators, as illustrated in Fig. 4 with q = 1.
Remark 3: The stochastic discrepancy introduced in (62) is estimated relatively to the CGMod output. This choice is motivated by the perfect shift invariance of its norm, as shown in Proposition 3.
Remark 4: In practice, most of the time max pooling is performed on a grid of size 3 × 3; therefore q = 1. For the sake of concision, in the remaining of this paper, we will drop q in the notations, which implicitly means q = 1.
IV. STANDARD CNNS IN OUR FRAMEWORK
In this section, we show how the theoretical results established above on single-channel inputs can be applied to standard CNNs such as AlexNet or ResNet, which are usually designed for multichannel inputs-e.g., RGB images.
A. Background
We focus on the first layers of a classical CNN architecture, in which input images are propagated through a convolution layer followed by rectified linear unit and max pooling operators, as described below.
We denote by $K$ and $L \in \mathbb{N}^*$ the number of input and output channels, respectively. The convolution layer is characterized by a trainable weight tensor $\mathbf{V} \in l^2_{\mathbb{R}}(\mathbb{Z}^2)^{L \times K}$ with a finite support, a bias vector $b \in \mathbb{R}^L$ and a subsampling factor $m \in \mathbb{N}^*$. Considering an input image $\mathbf{X} \in l^2_{\mathbb{R}}(\mathbb{Z}^2)^K$ with finite support, we denote by $\mathbf{A} \in l^2_{\mathbb{R}}(\mathbb{Z}^2)^L$ the output of the max pooling layer. Then we have, for any $l \in \{0 \,..\, L-1\}$,

$$A_l := (\operatorname{MaxPool} \circ \operatorname{ReLU}) \left( b_l + \sum_{k=0}^{K-1} (X_k * V_{lk}) \downarrow m \right), \tag{65}$$

where $\operatorname{MaxPool}$ has been introduced in (23) with $q = 1$ (see Remark 4), and $\operatorname{ReLU}$ is defined such that, for any $X' \in l^2_{\mathbb{R}}(\mathbb{Z}^2)$ and any $n \in \mathbb{Z}^2$, $\operatorname{ReLU}(X')[n] := \max(0, X'[n])$.
Expression (65) also introduces a bias notation, defined such that (b + X )[n] = b + X [n] for any n ∈ Z 2 .
In many cases, input images are composed of three RGB channels; therefore K = 3. The other parameters depend on the chosen CNN architecture. For example, in AlexNet, the weight tensor V is supported in a region of size 11 × 11 and the subsampling factor m is equal to 4. ResNet models use kernels of size 7 × 7 with m = 2.
B. Making CNNs Compatible with our Theoretical Results
The bias and ReLU are outside the scope of our study. However, we can show that $A_l = \operatorname{ReLU}(b_l + Y^{\mathrm{pool}}_l)$, where we have defined $Y^{\mathrm{pool}}_l := \mathcal{R}^{(l)}_m \mathbf{X}$, with

$$\mathcal{R}^{(l)}_m : \mathbf{X} \mapsto \operatorname{MaxPool} \left( \sum_{k=0}^{K-1} (X_k * V_{lk}) \downarrow m \right). \tag{66}$$

For the sake of our study, we therefore consider a strictly equivalent CNN architecture where the bias and ReLU are computed after the max pooling layer. We then focus on the intermediate output $\mathbf{Y}^{\mathrm{pool}} \in l^2_{\mathbb{R}}(\mathbb{Z}^2)^L$.
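The interchange used to obtain (66) relies on the fact that $t \mapsto \operatorname{ReLU}(b + t)$ is non-decreasing, and therefore commutes with a pointwise maximum. A quick numerical check, with our own names and a simplified 2×2 non-overlapping pooling (the identity holds for any max pooling window):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.standard_normal((8, 8))  # pre-activation feature map
b = 0.3                          # channel bias

relu = lambda t: np.maximum(t, 0.0)
pool = lambda t: t.reshape(4, 2, 4, 2).max(axis=(1, 3))  # 2x2 max pooling, for simplicity

# MaxPool(ReLU(b + y)) == ReLU(b + MaxPool(y)), since t -> ReLU(b + t) is non-decreasing.
assert np.allclose(pool(relu(b + y)), relu(b + pool(y)))
```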
Remark 5: In many architectures including ResNet, bias b is replaced by a batch normalization layer with affine transformation [49]. Swapping such a layer with max pooling isn't straightforward, but can nevertheless be done with caution. Therefore, we can once again focus on Y pool such as introduced in (66).
In what follows, we assume that the network has been trained on a natural image dataset such as ImageNet. Let L denote the subset of output channels l ∈ {0 . . L − 1} such that, for any k ∈ {0 . . K − 1}, V lk behaves like a band-pass filter (see Fig. 1). They are referred to as Gabor channels. We would like to apply the theoretical results obtained in section III on Y pool l , for any l ∈ L.
To do so, we need to show that Y pool l is the output of a RGPool operator such as introduced in (22), for some weight and input to be defined. This is in general not possible though, because the sum operator in (66) cannot be interchanged with max pooling. To solve this problem, we state the following hypothesis, applied on any Gabor channel l ∈ L. It states that the trained kernels V lk for k ∈ {0 . . K − 1} are identical, up to a multiplicative constant.
Hypothesis 3: Let $\overline{V}_l := \frac{1}{K} \sum_{k=0}^{K-1} V_{lk}$ denote the mean kernel of the $l$-th output channel. Then, for any $l \in \mathcal{L}$, there exists $\mu_l \in \mathbb{R}^K$ such that

$$\forall k \in \{0 \,..\, K-1\}, \quad V_{lk} = \mu_{lk} \overline{V}_l. \tag{67}$$
Intuitively, when looking at Gabor-like kernels in Fig. 1, they roughly appear grayscale. This observation supports Hypothesis 3 with µ lk ≈ 1. A more accurate justification for this hypothesis is provided in Appendix I.
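Hypothesis 3 can also be probed empirically on a trained network: a least-squares fit of each channel $V_{lk}$ against the mean kernel $\overline{V}_l$ gives an estimate of $\mu_{lk}$ and a residual measuring how far the kernel is from the rank-one model (67). The sketch below is our own, assuming a weight tensor of shape (L, K, h, w), e.g., the first convolution layer of a torchvision model.

```python
import numpy as np

def fit_mu(v: np.ndarray):
    """Least-squares fit V_lk ~ mu_lk * Vbar_l, cf. (67); v has shape (L, K, h, w)."""
    L, K = v.shape[:2]
    v_flat = v.reshape(L, K, -1)
    v_bar = v_flat.mean(axis=1)                              # mean kernel, shape (L, h*w)
    denom = (v_bar ** 2).sum(axis=1, keepdims=True)          # (L, 1)
    mu = (v_flat * v_bar[:, None, :]).sum(axis=2) / denom    # (L, K)
    residual = v_flat - mu[:, :, None] * v_bar[:, None, :]
    rel_err = np.linalg.norm(residual, axis=(1, 2)) / np.linalg.norm(v_flat, axis=(1, 2))
    return mu, rel_err  # small rel_err[l] supports Hypothesis 3 for channel l

# Example on synthetic rank-one kernels: rel_err is (numerically) zero.
rng = np.random.default_rng(0)
base = rng.standard_normal((16, 1, 7, 7))
v = base * rng.uniform(0.5, 1.5, size=(16, 3, 1, 1))
print(fit_mu(v)[1].max())
```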
Then, considering the linear combination $\widetilde{X}_l := \sum_{k=0}^{K-1} \mu_{lk} X_k$, we apply Hypothesis 3 to (66):

$$\forall l \in \mathcal{L}, \quad Y^{\mathrm{pool}}_l = \mathcal{R}^{(l)}_m \mathbf{X} = \mathcal{R}_m\big[ \overline{W}_l \big](\widetilde{X}_l), \tag{68}$$

where $\overline{W}_l \in l^2(\mathbb{Z}^2)$ denotes the complex-valued companion of $\overline{V}_l$ satisfying (5). We also define $\mathcal{C}^{(l)}_{2m}$ and its associated output feature map $Y^{\mathrm{mod}}_l$, such that

$$\forall l \in \mathcal{L}, \quad Y^{\mathrm{mod}}_l := \mathcal{C}^{(l)}_{2m} \mathbf{X} = \mathcal{C}_{2m}\big[ \overline{W}_l \big](\widetilde{X}_l). \tag{69}$$
Besides, the following hypothesis states that, for any $l \in \mathcal{L}$, $\overline{W}_l$ is a discrete Gabor-like filter for which the Fourier support size is shared among the Gabor channels.

Hypothesis 4: There exists $\kappa \leq \pi/m$ such that, for any Gabor channel $l \in \mathcal{L}$,

$$\exists \xi_l \in [-\pi, \pi]^2 : \overline{W}_l \in \mathcal{G}_{\xi_l,\, \kappa}. \tag{70}$$

Let $l \in \mathcal{L}$. According to Hypotheses 3-4, $\mathcal{R}^{(l)}_m$ and $\mathcal{C}^{(l)}_{2m}$ such as defined in (68) and (69) can be respectively qualified as RGPool and CGMod operators. We now consider

$$P^{(l)}_{\mathbf{X}} := \|\mathcal{C}^{(l)}_{2m} \mathbf{X} - \mathcal{R}^{(l)}_m \mathbf{X}\|_2 \,/\, \|\mathcal{C}^{(l)}_{2m} \mathbf{X}\|_2; \tag{71}$$
$$R^{(l)}_{\mathbf{X},\, u} := \|\mathcal{R}^{(l)}_m(\mathcal{T}_u \mathbf{X}) - \mathcal{R}^{(l)}_m \mathbf{X}\|_2 \,/\, \|\mathcal{C}^{(l)}_{2m} \mathbf{X}\|_2. \tag{72}$$

Then, under Hypotheses 1-4, we can apply Theorems 2 and 3 to the above random variables, thus providing a shift invariance measure for a subset of outputs in CNNs:

$$\mathbb{E}\big[ P^{(l)\,2}_{\mathbf{X}} \big] \leq (1 + \beta(m\kappa))^2\, \gamma(m\xi_l)^2; \tag{73}$$
$$\mathbb{E}\big[ R^{(l)}_{\mathbf{X},\, u} \big] \leq 2\, (1 + \beta(m\kappa))\, \gamma(m\xi_l) + \alpha(\kappa u), \tag{74}$$
where ξ l has been introduced in (70). In practice, Hypothesis 4 cannot be exactly satisfied. This is because V l is finitely supported, and thus its power spectrum cannot be exactly zero on a region with non-zero measure. To evaluate how close we are to this ideal situation, we measured the maximum percentage of energy within a window of size κ × κ in the Fourier domain, with respect to the whole filter W l . We then computed the mean percentage over all the Gabor channels l ∈ L. The results are shown in Table I, for AlexNet and ResNet after training with ImageNet. The window size κ has been set to its highest admissible value, i.e., π/m. We notice that residual energy outside the window of interest is quite high, especially for AlexNet. Therefore, W l deviates from a "perfect" Gabor-like filter, which may lead to higher instabilities.
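The energy percentages of Table I can be computed along these lines. The following NumPy sketch, with names of our own choosing, slides a $\kappa \times \kappa$ window (with circular wrap-around) over the discrete Fourier domain of a kernel and returns the maximum fraction of its energy the window captures; it is a brute-force sketch, not an optimized implementation.

```python
import numpy as np

def max_window_energy(w_kernel: np.ndarray, kappa: float, n_fft: int = 32) -> float:
    """Max share of the kernel's energy inside a kappa x kappa Fourier window."""
    spec = np.abs(np.fft.fft2(w_kernel, s=(n_fft, n_fft))) ** 2
    half = max(int(round(kappa / (2 * np.pi) * n_fft)) // 2, 1)  # window half-size in bins
    total = spec.sum()
    best = 0.0
    for i in range(n_fft):
        for j in range(n_fft):
            # circular window centered on frequency bin (i, j)
            rows = np.arange(i - half, i + half) % n_fft
            cols = np.arange(j - half, j + half) % n_fft
            best = max(best, spec[np.ix_(rows, cols)].sum())
    return best / total
```

Applying this to each complex companion $\overline{W}_l$ with $\kappa = \pi/m$ and averaging over the Gabor channels reproduces the kind of statistics reported in Table I.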
V. OPERATORS BASED ON THE DUAL-TREE WAVELET PACKET TRANSFORM
In order to validate the results established in section III, we consider the dual-tree wavelet packet transform (DT-CWPT), a fast algorithm which achieves subsampled convolutions with discrete Gabor-like filters. As explained below, DT-CWPT spawns a set of filters which tiles the whole frequency domain in a regular fashion. Furthermore, increasing the subsampling factor m results in a decreased Fourier support size κ = π/m, therefore matching the condition stated in (63). This is also consistent with what is observed in CNNs such as AlexNet or ResNet. DT-CWPT thus provides a convenient framework to emulate the behavior of an actual CNN while testing Theorems 1-3 in a controlled environment.
A. Discrete Wavelet Packet Transform
This is a brief overview on the classical, real-valued 2D wavelet packet transform (WPT) algorithm [48, p. 377], which is the starting point for building the redundant, complex-valued DT-CWPT.
Given a pair of low- and high-pass 1D orthogonal filters $h, g \in l^2_{\mathbb{R}}(\mathbb{Z})$ satisfying a quadrature mirror filter (QMF) relationship, we consider a separable 2D filter bank (FB), denoted by $\mathbf{G} := (G_l)_{l \in \{0..3\}}$, defined by

$$G_0 = h \otimes h; \quad G_1 = h \otimes g; \quad G_2 = g \otimes h; \quad G_3 = g \otimes g. \tag{75}$$

Let $X \in l^2_{\mathbb{R}}(\mathbb{Z}^2)$. The decomposition starts with $X^{(0)}_0 = X$. Given $j \in \mathbb{N}$, suppose that we have computed $4^j$ sequences of wavelet packet coefficients at stage $j$, denoted by $X^{(j)}_l \in l^2_{\mathbb{R}}(\mathbb{Z}^2)$ for each $l \in \{0 \,..\, 4^j - 1\}$. They are referred to as feature maps.

At stage $j+1$, we compute a new representation of $X$ with increased frequency resolution-and decreased spatial resolution. It is obtained by further decomposing each feature map $X^{(j)}_l$ into four sub-sequences, using subsampled (or strided) convolutions with kernels $G_k$, for each $k \in \{0 \,..\, 3\}$:

$$\forall k \in \{0 \,..\, 3\}, \quad X^{(j+1)}_{4l+k} = \big( X^{(j)}_l * G_k \big) \downarrow 2. \tag{76}$$

The algorithm stops after reaching the desired number of stages $J > 0$-referred to as decomposition depth. Then,

$$\mathbf{X}^{(J)} := \big( X^{(J)}_l \big)_{l \in \{0..4^J - 1\}} \tag{77}$$

constitutes a multichannel representation of $X$ in an orthonormal basis, from which the original image can be perfectly reconstructed.
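One decomposition stage (76) can be written compactly in NumPy. This sketch is our own (names hypothetical); it builds the four separable kernels of (75) from a QMF pair and splits every feature map into four, using FFT-based circular convolution as a simplifying assumption.

```python
import numpy as np

def wpt_stage(feature_maps, h, g):
    """One WPT stage: each map X_l^{(j)} -> four maps (X_l^{(j)} * G_k) | 2, cf. (75)-(76)."""
    kernels = [np.outer(a, b) for a in (h, g) for b in (h, g)]  # G_0..G_3
    out = []
    for x in feature_maps:
        x_hat = np.fft.fft2(x)
        for k in kernels:
            y = np.fft.ifft2(x_hat * np.fft.fft2(k, s=x.shape)).real  # circular convolution
            out.append(y[::2, ::2])                                   # subsampling by 2
    return out

# Haar QMF pair, for illustration only.
h = np.array([1.0, 1.0]) / np.sqrt(2)
g = np.array([1.0, -1.0]) / np.sqrt(2)
maps = [np.random.default_rng(0).standard_normal((64, 64))]
for _ in range(2):            # J = 2 stages -> 16 feature maps of size 16 x 16
    maps = wpt_stage(maps, h, g)
print(len(maps), maps[0].shape)
```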
The following proposition introduces an array of resulting kernels V (J) which is illustrated in Fig. 3 with J = 2.
Proposition 6: For any $l \in \{0 \,..\, 4^J - 1\}$, there exists $V^{(J)}_l \in l^2_{\mathbb{R}}(\mathbb{Z}^2)$ such that

$$X^{(J)}_l = \big( X * V^{(J)}_l \big) \downarrow 2^J. \tag{78}$$
Proof: See Appendix N.
B. Dual-Tree Complex Wavelet Packet Transform
Despite having interesting properties such as sparse signal representation, WPT is unstable with respect to small shifts and suffers from a poor directional selectivity. To overcome this, N. Kingsbury designed a new type of discrete wavelet transform [29], where images are decomposed in a redundant frame of nearly-analytic, complex-valued waveforms. It was later extended to the wavelet packet framework [30]. The latter operation, referred to as dual-tree complex wavelet packet transform (DT-CWPT), is performed as follows.
Let $(h_0, g_0)$ and $(h_1, g_1)$ denote two pairs of QMFs as defined in section V-A, satisfying the half-sample delay condition:

$$\forall \omega \in [-\pi, \pi], \quad \widehat{h_1}(\omega) = e^{-i\omega/2}\, \widehat{h_0}(\omega). \tag{79}$$

Then, for any $k \in \{0 \,..\, 3\}$, we build a two-dimensional FB $\mathbf{G}_k := (G_{k,\, l})_{l \in \{0..3\}}$ similarly to (75):

$$G_{k,\, 0} = h_i \otimes h_j; \quad G_{k,\, 1} = h_i \otimes g_j; \tag{80}$$
$$G_{k,\, 2} = g_i \otimes h_j; \quad G_{k,\, 3} = g_i \otimes g_j, \tag{81}$$

where $i, j \in \{0, 1\}$ are defined such that $k = 2i + j$.

Let $J > 0$ denote a decomposition depth. Using each of the four FBs $\mathbf{G}_{0\text{-}3}$ as defined above, we assume that we have decomposed an input image $X$ into four multichannel WPT representations $\mathbf{X}^{(J)}_{0\text{-}3}$, each of which satisfies (76) and (77). Then, for any $l \in \{0 \,..\, 4^J - 1\}$, the following complex feature maps are computed:

$$\begin{pmatrix} Z^{(J)}_l \\ Z^{(J)}_{4^J + l} \end{pmatrix} = \begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} X^{(J)}_{0,\, l} \\ X^{(J)}_{3,\, l} \end{pmatrix} + i \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} X^{(J)}_{2,\, l} \\ X^{(J)}_{1,\, l} \end{pmatrix}. \tag{82}$$

We denote by $L_J := 2 \cdot 4^J$ the number of output feature maps. Then,

$$\mathbf{Z}^{(J)} := \big( Z^{(J)}_l \big)_{l \in \{0..L_J - 1\}} \tag{83}$$

constitutes a complex-valued, four-time redundant multichannel representation of $X$ from which the original image can be reconstructed.
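The linear combination (82) is a simple channel-wise mixing of the four WPT trees. A minimal sketch (array names are ours): x0 to x3 are the depth-J WPT coefficients of the same input, computed with the filter banks G_0 to G_3 respectively.

```python
import numpy as np

def dtcwpt_combine(x0, x1, x2, x3):
    """Combine the four WPT trees into complex feature maps Z_l, Z_{4^J + l}, cf. (82)."""
    z_low = (x0 - x3) + 1j * (x2 + x1)   # Z_l^{(J)}
    z_high = (x0 + x3) + 1j * (x2 - x1)  # Z_{4^J + l}^{(J)}
    return z_low, z_high
```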
As for standard WPT, the following proposition introduces an array of resulting (complex-valued) kernels $\mathbf{W}^{(J)}$ for which a graphical representation is provided in Fig. 3 with $J = 2$.

Proposition 7: For any $l \in \{0 \,..\, L_J - 1\}$, there exists $W^{(J)}_l \in l^2(\mathbb{Z}^2)$ such that

$$Z^{(J)}_l = \big( X * W^{(J)}_l \big) \downarrow 2^J. \tag{84}$$
C. Invariance Results Applied to the Dual-Tree Framework
We assume that $h_0$ is a Shannon filter, such that $\widehat{h_0}(\omega) := \sqrt{2}$ if $\omega \in \left[ -\frac{\pi}{2}, \frac{\pi}{2} \right]$ and $0$ otherwise. Let $J \in \mathbb{N}^*$ denote the number of decomposition stages. We consider the resulting kernels $W^{(J)}_l$ satisfying (84). The following hypothesis states that DT-CWPT tiles the frequency plane with a square grid.

Hypothesis 5: For any $l \in \{0 \,..\, L_J - 1\}$, there exists $\sigma^{(J)}_l \in \{-2^J \,..\, 2^J - 1\}^2$ such that

$$W^{(J)}_l \in \mathcal{G}_{\xi^{(J)}_l,\, \kappa_J}, \tag{85}$$

where we have defined

$$\xi^{(J)}_l := \left( \sigma^{(J)}_l + \frac{1}{2} \right) \frac{\pi}{2^J} \quad \text{and} \quad \kappa_J := \frac{\pi}{2^J}. \tag{86}$$

It can be shown that Hypothesis 5 is an asymptotic result, when $J$ goes to $\infty$. In reality, the Fourier support of $W^{(J)}_l$ is contained in four symmetric square regions of size $\kappa_J$. If the dual filter $h_1$ satisfies the half-sample delay condition (79), then the energy of $\widehat{W^{(J)}_l}$ goes to 0 in all but one of the four regions (relatively to the filter's total energy). We nevertheless consider Hypothesis 5 as reasonable when $J \geq 2$. Then, according to (84), we can consider the operators $\mathcal{R}_{m_J}\big[ W^{(J)}_l \big]$ and $\mathcal{C}_{2m_J}\big[ W^{(J)}_l \big]$, with $m_J := 2^{J-1}$, and define their output feature maps:

$$Y^{(J)\,\mathrm{pool}}_l := \mathcal{R}_{m_J}\big[ W^{(J)}_l \big]\, X; \tag{87}$$
$$Y^{(J)\,\mathrm{mod}}_l := \mathcal{C}_{2 m_J}\big[ W^{(J)}_l \big]\, X. \tag{88}$$

In order for Theorems 1 and 3 to be applicable, we need to check that the conditions stated in (63) are satisfied. Using the DT-CWPT framework, they become

$$\kappa_J \leq \frac{\pi}{2^{J-1}} \quad \text{and} \quad \|u\|_1 \leq \frac{\pi}{\kappa_J} = 2^J. \tag{89}$$

The first condition is always met, according to (86). As for the second one, it establishes a limit on $\|u\|_1$ above which shift invariance can no longer be estimated. Note however that shifting the input by $2^J$ pixels results in a 1-pixel output shift. Therefore, $\mathcal{R}^{(J)}_l(\mathcal{T}_u X)$ can always be compared with a shifted version of $\mathcal{R}^{(J)}_l X$. We then get a partial measure of shift equivariance.
In the Shannon setting, h 0 [n], g 0 [n], h 1 [n] and g 1 [n] decay in O(1/n), which makes them difficult to approximate in practice. It requires very large vectors to avoid numerical instabilities. Practical implementations use fast-decaying filters such as Meyer QMFs [52], or finite-length filters which approximate the half-sample delay condition [53]. Therefore, residual energy can be observed outside the Fourier windows introduced in Hypothesis 5. To counterbalance this, we relax this hypothesis by increasing the window size up to κ J := π/2 J−1 , which is closer to what is observed in standard convolutions layers (see Table I).
VI. EXPERIMENTS AND RESULTS
In this section, we experimentally validate our theoretical results. To do so, we built operators based on DT-CWPT, as explained in section V. Using a dataset of natural images, we measured the mean discrepancy between RGPool and CGMod outputs, and evaluated the shift invariance of both models.
Dual-tree decompositions have been performed with Qshift orthogonal filters of length 10 [50], which approximately meets the half-sample delay condition (79).
Our implementation is based on PyTorch. The models were evaluated on the validation set of ImageNet ILSVRC2012 [54], which contains N data := 50 000 images.
A. Discrepancies between RGPool and CGMod
Each image $n \in \{0 \,..\, N_{\mathrm{data}} - 1\}$ in the dataset was converted to grayscale, from which a center crop of size $224 \times 224$ was extracted. We denote by $X_n \in l^2_{\mathbb{R}}(\mathbb{Z}^2)$ the resulting input feature map. For any $l \in \{0 \,..\, L_J - 1\}$, we denote by $Y^{(J)\,\mathrm{pool}}_{nl}$ and $Y^{(J)\,\mathrm{mod}}_{nl}$ the outputs satisfying (87) and (88) with $X \leftarrow X_n$, and by $\rho^{(J)}_{nl}$ the relative discrepancy between them:

$$\rho^{(J)}_{nl} := \|Y^{(J)\,\mathrm{pool}}_{nl} - Y^{(J)\,\mathrm{mod}}_{nl}\|_2 \,/\, \|Y^{(J)\,\mathrm{mod}}_{nl}\|_2. \tag{90}$$

For each output channel $l$, an empirical estimate $\rho^{(J)}_l$ was obtained by averaging $\rho^{(J)}_{nl}$ over the whole dataset, and each value $\rho^{(J)\,2}_l$ was displayed at the characteristic frequency $\xi^{(J)}_l$. According to Hypothesis 5, these frequencies form a regular grid in the Fourier domain. This provides a visual representation of $\rho^{(J)\,2}_l$, as shown in Fig. 5. We can observe a regular pattern of dark spots. More precisely, high discrepancies between max pooling and modulus seem to occur when the support of $\widehat{W^{(J)}_l}$ is located in a dark region of Fig. 4. This result corroborates the theoretical study, which states that high discrepancies are expected for certain pathological frequencies, due to the search for a maximum value over a discrete grid-see Fig. 2.

Fig. 4. $\gamma(m\xi)^2$ as a function of the kernel characteristic frequency $\xi \in [-\pi, \pi]^2$. According to Theorem 2, this quantity provides an approximate bound for the expected quadratic error between RGPool and CGMod outputs. The subsampling factor $m$ has been set to 2 as in ResNet (left), and 4 as in AlexNet (right). The bright regions correspond to frequencies for which the two outputs are expected to be similar. However, in the dark regions, pathological cases such as illustrated in Fig. 2 are more likely to occur.

Fig. 5. Empirical estimates $\rho^{(J)\,2}_l$, displayed at the characteristic frequencies $\xi^{(J)}_l$ such as introduced in (86). Since the subsampling factor $m_J$ is equal to $2^{J-1}$, these empirical estimates can be compared with the left and right parts of Fig. 4. The plots are symmetrized to account for the complex conjugate feature maps.
B. Shift invariance
For each input image $X_n$ previously converted to grayscale, two crops of size $224 \times 224$ were extracted, such that the corresponding sequences $X_n$ and $X'_n$ are shifted by one pixel along the x-axis. From these inputs, the following quantity was then computed:

$$\rho^{(J)\,\mathrm{pool}}_{nl} := \|Y'^{(J)\,\mathrm{pool}}_{nl} - Y^{(J)\,\mathrm{pool}}_{nl}\|_2 \,/\, \|Y^{(J)\,\mathrm{mod}}_{nl}\|_2, \tag{91}$$

where $Y^{(J)\,\mathrm{pool}}$, $Y'^{(J)\,\mathrm{pool}}$ and $Y^{(J)\,\mathrm{mod}}$ satisfy (87) and (88) with $X \leftarrow X_n$ or $X'_n$. Finally, for each output channel $l \in \{0 \,..\, L_J - 1\}$, an empirical estimate for $\mathbb{E}[R_{X,\, u}]$ such as introduced in (62), with $u := (1, 0)$, was obtained by averaging $\rho^{(J)\,\mathrm{pool}}_{nl}$ over the whole dataset. We denote by $\rho^{(J)\,\mathrm{pool}}_l$ the corresponding quantity. We point out that shift invariance is measured relatively to the norm of the CGMod output, as explained in Remark 3.
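The metric (91) can be computed for one image pair and one filter as in the following self-contained sketch; it mirrors the earlier CGMod/RGPool helpers (all names are ours, FFT circular convolutions and edge padding are simplifying assumptions).

```python
import numpy as np

def conv_sub(x, w, m):
    return np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(w, s=x.shape))[::m, ::m]

def max_pool2(y, q=1):
    h, w = y.shape
    yp = np.pad(y, q, mode="edge")
    out = np.full((h // 2, w // 2), -np.inf)
    for a in range(2 * q + 1):
        for b in range(2 * q + 1):
            out = np.maximum(out, yp[a : a + h : 2, b : b + w : 2][: h // 2, : w // 2])
    return out

def rho_pool(x, x_shift, w, m):
    """rho_pool for one image pair (x, x_shift) and one complex filter w, cf. (91)."""
    y_pool = max_pool2(conv_sub(x, w.real, m).real)
    y_pool_s = max_pool2(conv_sub(x_shift, w.real, m).real)
    y_mod = np.abs(conv_sub(x, w, 2 * m))
    return np.linalg.norm(y_pool_s - y_pool) / np.linalg.norm(y_mod)
```

Averaging `rho_pool` over all image pairs of the dataset yields the per-channel estimates plotted in Fig. 6 (top).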
On the other hand, the same procedure was applied on the CGMod operators:

$$\rho^{(J)\,\mathrm{mod}}_{nl} := \|Y'^{(J)\,\mathrm{mod}}_{nl} - Y^{(J)\,\mathrm{mod}}_{nl}\|_2 \,/\, \|Y^{(J)\,\mathrm{mod}}_{nl}\|_2, \tag{92}$$

yielding, after averaging over the dataset, the empirical estimates $\rho^{(J)\,\mathrm{mod}}_l$. The results for $\rho^{(J)\,\mathrm{pool}}_l$ and $\rho^{(J)\,\mathrm{mod}}_l$ are provided in Fig. 6. Two observations can be drawn here.
When the filter is horizontally oriented, the corresponding output is highly stable with respect to horizontal shifts. This can be explained by noticing that such kernels perform lowpass filtering along the x axis. The exact transposed phenomenon occurs for vertical shifts (not shown in this paper).
Elsewhere, we observe that high discrepancies between RGPool and CGMod outputs (Fig. 5) are correlated with shift instability of RGPool (Fig. 6, top). This is in line with (61) and (64) in Theorems 2-3. Note that CGMod outputs are nearly shift invariant regardless the characteristic frequency ξ (J) l (Fig. 6, bottom), as predicted by Theorem 1 (20).
VII. CONCLUSION
In this paper, we studied shift invariance properties captured by the max pooling operator, when applied on top of a convolution layer with Gabor-like kernels. More precisely, we established a validity domain for near-shift invariance, and confirmed our predictions with an experimental setting based on the dual-tree complex wavelet packet transform.
As shown in this paper, the CGMod operator acts like a proxy for RGPool, extracting comparable features with higher stability. This result suggests a way to build an architecture sharing the structure and behavior of a standard network, except that shift invariance would be improved. This could be done by considering a DT-CWPT-based twin model such as introduced in our workshop paper [31], and replacing RGPool by CGMod operators as done above in a deterministic context.
Since CNNs generally implement successive blocks of convolution and pooling layers, another line of research would be to extend our results to a cascade of RGPool operators. Our work is thus an important step towards deeper understanding of popular networks, based on the wavelet theory.
APPENDIX A PROOF OF LEMMA 1
Proof: We can show that, for any $\omega \in \mathbb{R}^2$,

$$\widehat{f_0}(\omega) = \widehat{f * \psi}(\omega + \nu) = \mathcal{T}_{-\nu}\big( \widehat{f}\, \widehat{\psi} \big)(\omega).$$

By hypothesis on $\psi$, we have $\operatorname{supp} \widehat{\psi} \subset B_\infty(\nu,\, \varepsilon/2)$, so that $\operatorname{supp} \widehat{f_0} \subset B_\infty(\varepsilon/2)$, which yields the result.
APPENDIX B PROOF OF PROPOSITION 1
Proof: Using the 2D Plancherel formula, we compute

$$\|\mathcal{T}_h f_0 - f_0\|_{L^2}^2 = \frac{1}{4\pi^2} \big\| \widehat{\mathcal{T}_h f_0} - \widehat{f_0} \big\|_{L^2}^2 = \frac{1}{4\pi^2} \int_{B_\infty(\varepsilon/2)} \big| \widehat{f_0}(\omega) \big|^2 \big| e^{-i \langle h,\, \omega \rangle} - 1 \big|^2 \, \mathrm{d}^2\omega = \frac{1}{4\pi^2} \int_{B_\infty(\varepsilon/2)} \big| \widehat{f_0}(\omega) \big|^2 \big( 2 - 2\cos\langle h,\, \omega \rangle \big) \, \mathrm{d}^2\omega.$$

The integral is computed on a compact domain because, according to Lemma 1, $\operatorname{supp} \widehat{f_0} \subset B_\infty(\varepsilon/2)$. Now, we use the Cauchy-Schwarz inequality to compute:

$$\forall \omega \in B_\infty(\varepsilon/2), \quad |\langle h,\, \omega \rangle| \leq \|h\|_1 \cdot \|\omega\|_\infty \leq \frac{\varepsilon}{2} \|h\|_1.$$

By hypothesis on $h$, $\frac{\varepsilon}{2}\|h\|_1 \leq \frac{\pi}{2}$, and thus

$$\|\mathcal{T}_h f_0 - f_0\|_{L^2}^2 \leq \left( 2 - 2\cos\frac{\varepsilon \|h\|_1}{2} \right) \|f_0\|_{L^2}^2.$$

Finally, since $\cos t \geq 1 - \frac{t^2}{2}$, we get (7).
APPENDIX C PROOF OF LEMMA 2
Proof: Expression (9) is obtained by adapting (3.2) and (3.3) in [48] to the 2D case. Then, by combining (9) with the Plancherel formula, we get

$$\|g\|_{L^2}^2 = \frac{1}{4\pi^2} \|\widehat{g}\|_{L^2}^2 = \frac{1}{4\pi^2} \int_{B_\infty(\pi/s)} |\widehat{g}(\omega)|^2 \, \mathrm{d}^2\omega = \frac{1}{4\pi^2} \int_{B_\infty(\pi/s)} \big| s\, \widehat{Y}(s\omega) \big|^2 \, \mathrm{d}^2\omega.$$

The integral is performed on $B_\infty(\pi/s)$ because $g \in U_s$. Then, by applying the change of variable $\omega' \leftarrow s\omega$, we get

$$\|g\|_{L^2}^2 = \frac{1}{4\pi^2} \int_{B_\infty(\pi)} \big| \widehat{Y}(\omega') \big|^2 \, \mathrm{d}^2\omega' = \frac{1}{4\pi^2} \|\widehat{Y}\|_{L^2}^2 = \|Y\|_2^2,$$

hence (10), which concludes the proof.
APPENDIX D PROOF OF PROPOSITION 2
Proof: First, $f_{\mathbf{X}}$ and $\psi_{\mathbf{W}}$ are well defined because $\mathbf{X} \in l^2_{\mathbb{R}}(\mathbb{Z}^2)$ and $\mathbf{W} \in l^2(\mathbb{Z}^2)$. By construction, $\psi_{\mathbf{W}} \in U_s$. Thus, (12) is a direct consequence of the Shannon-Whittaker sampling theorem [48, p. 61], coupled with the orthonormality of $\{\phi^{(s)}_n\}_{n\in\mathbb{Z}^2}$. Therefore, using (9) in Lemma 2, we get, for any $\omega \in B_\infty(\pi/s)$, $\widehat{\psi_{\mathbf{W}}}(\omega) = s\,\widehat{\mathbf{W}}(s\omega)$. Since $\widehat{\psi_{\mathbf{W}}}(\omega) = 0$ outside $B_\infty(\pi/s)$, we deduce that $\psi_{\mathbf{W}} \in V_{\xi/s,\,\kappa/s}$. We now prove (13). For $n \in \mathbb{Z}^2$, we compute:
$$(f_{\mathbf{X}} \ast \psi_{\mathbf{W}})(msn) = \int_{\mathbb{R}^2} f_{\mathbf{X}}(msn - x)\,\psi(x)\, d^2x = \int_{\mathbb{R}^2} \sum_{k\in\mathbb{Z}^2} \mathbf{X}[k]\,\phi^{(s)}_k(msn - x)\,\psi(x)\, d^2x = \sum_{k\in\mathbb{Z}^2} \mathbf{X}[k] \int_{\mathbb{R}^2} \phi^{(s)}_k(msn - x)\,\psi(x)\, d^2x.$$
The sum-integral interchange is possible because $\mathbf{X}$ has a finite support. Then:
$$(f_{\mathbf{X}} \ast \psi_{\mathbf{W}})(msn) = \sum_{k\in\mathbb{Z}^2} \mathbf{X}[k] \int_{\mathbb{R}^2} \psi(x)\,\phi^{(s)}\bigl(s(mn - k) - x\bigr)\, d^2x = \sum_{k\in\mathbb{Z}^2} \mathbf{X}[k]\,\bigl(\psi \ast \phi^{(s)}\bigr)\bigl(s(mn - k)\bigr).$$
Since $\{\phi^{(s)}_n\}_{n\in\mathbb{Z}^2}$ is an orthonormal basis of $U_s$, we can easily show that, for any $k' \in \mathbb{Z}^2$, $\mathbf{W}[k'] = \langle \psi,\, \phi^{(s)}_{k'} \rangle = \bigl(\psi \ast \phi^{(s)}\bigr)(sk')$. Therefore we get
$$(f_{\mathbf{X}} \ast \psi_{\mathbf{W}})(msn) = \sum_{k\in\mathbb{Z}^2} \mathbf{X}[k]\,\mathbf{W}[mn - k] = \bigl(\mathbf{X} \ast \mathbf{W}\bigr)[mn],$$
hence the result.
APPENDIX E PROOF OF LEMMA 3
Proof: Let $u \in \mathbb{R}^2$. By definition of $f_{\mathbf{X}}$, $f_{T_u\mathbf{X}}$ and $T_u\mathbf{X}$,
$$f_{T_u\mathbf{X}} = \sum_{n\in\mathbb{Z}^2} s\,\bigl(T_{su}f_{\mathbf{X}}\bigr)(sn)\,\phi^{(s)}_n. \tag{93}$$
By construction, $f_{\mathbf{X}} \in U_s$. Therefore, $T_{su}f_{\mathbf{X}} \in U_s$. Then, the Shannon-Whittaker theorem [48, p. 61], coupled with the orthonormality of $\{\phi^{(s)}_n\}_{n\in\mathbb{Z}^2}$, implies
$$s\,\bigl(T_{su}f_{\mathbf{X}}\bigr)(sn) = \bigl\langle T_{su}f_{\mathbf{X}},\, \phi^{(s)}_n \bigr\rangle. \tag{94}$$
Finally, plugging (94) into (93) concludes the proof.
APPENDIX F PROOF OF THEOREM 1
Proof: We consider
$$f_0 : x \mapsto (f_{\mathbf{X}} \ast \psi_{\mathbf{W}})(x)\, e^{i\langle \xi/s,\, x\rangle}, \tag{95}$$
with $f_{\mathbf{X}}$ and $\psi_{\mathbf{W}}$ satisfying (11). Then,
$$|f_{\mathbf{X}} \ast \psi_{\mathbf{W}}| = |f_0| \quad\text{and}\quad |T_{su}f_{\mathbf{X}} \ast \psi_{\mathbf{W}}| = |T_{su}f_0|. \tag{96}$$
According to Proposition 2, $\psi_{\mathbf{W}} \in V_{\xi/s,\,\kappa/s}$. Therefore, according to Lemma 1, $\operatorname{supp}\widehat{f_0} \subset B_\infty\bigl(\frac{\kappa}{2s}\bigr)$. Moreover, by hypothesis, $\kappa \le 2\pi/m$; thus, $B_\infty\bigl(\frac{\kappa}{2s}\bigr) \subset B_\infty\bigl(\frac{\pi}{ms}\bigr)$. Therefore, $f_0 \in U_{s'}$, where we have denoted $s' := ms$.
According to (13) (Proposition 2), (18) (Corollary 1) and (96), we get
$$\mathrm{C}_m\mathbf{X}[n] = |f_0(s'n)|; \tag{97}$$
$$\mathrm{C}_m(T_u\mathbf{X})[n] = |(T_{su}f_0)(s'n)|. \tag{98}$$
Then, using the reverse triangular inequality on (97) and (98),
$$\|\mathrm{C}_m(T_u\mathbf{X}) - \mathrm{C}_m\mathbf{X}\|_2^2 \le \sum_{n\in\mathbb{Z}^2} \bigl|T_{su}f_0(s'n) - f_0(s'n)\bigr|^2 = \sum_{n\in\mathbb{Z}^2} |g(s'n)|^2 = \frac{1}{s'^2}\,\|\mathbf{Y}\|_2^2,$$
where we have denoted, for any $n \in \mathbb{Z}^2$, $g := T_{su}f_0 - f_0$ and $\mathbf{Y}[n] := s'\,g(s'n)$. We have $g \in U_{s'}$ since $f_0 \in U_{s'}$. Then, according to (10) in Lemma 2, $\|\mathbf{Y}\|_2 = \|g\|_{L^2}$. Therefore,
$$\|\mathrm{C}_m(T_u\mathbf{X}) - \mathrm{C}_m\mathbf{X}\|_2^2 \le \frac{1}{s'^2}\,\|g\|_{L^2}^2 = \frac{1}{s'^2}\,\|T_{su}f_0 - f_0\|_{L^2}^2.$$
According to Proposition 1 with $\varepsilon \leftarrow \kappa/s$ and $h \leftarrow su$, we then get the following bound:
$$\|\mathrm{C}_m(T_u\mathbf{X}) - \mathrm{C}_m\mathbf{X}\|_2^2 \le \frac{\alpha(\kappa u)^2}{s'^2}\,\|f_0\|_{L^2}^2. \tag{99}$$
Besides, according again to Lemma 2, $\|f_0\|_{L^2}^2 = \|\mathbf{X}_0\|_2^2$, where $\mathbf{X}_0[n] := s'\,f_0(s'n)$ for any $n \in \mathbb{Z}^2$. Therefore, according to (97),
$$\|\mathrm{C}_m\mathbf{X}\|_2 = \frac{1}{s'}\,\|\mathbf{X}_0\|_2 = \frac{1}{s'}\,\|f_0\|_{L^2}. \tag{100}$$
Finally, plugging (100) into (99) completes the proof.
APPENDIX G PROOF OF PROPOSITION 3
Proof: Let $\mathbf{X} \in l^2_{\mathbb{R}}(\mathbb{Z}^2)$ and $s > 0$. We consider $f_0 \in L^2(\mathbb{R}^2)$ as the "low-frequency" function satisfying (95). Again, we introduce $s' := ms$ and $\mathbf{X}_0 \in l^2(\mathbb{Z}^2)$ such that $\mathbf{X}_0[n] := s'\,f_0(s'n)$. Moreover, for any $\mathbf{Y} \in l^2_{\mathbb{R}}(\mathbb{Z}^2)$, we denote:
$$f^{(s')}_{\mathbf{Y}} := \sum_{n\in\mathbb{Z}^2} \mathbf{Y}[n]\,\phi^{(s')}_n. \tag{101}$$
On the one hand, in the proof of Theorem 1, we already got (100). On the other hand, (98) can be rewritten
$$\mathrm{C}_m(T_u\mathbf{X})[n] = \bigl|(T_{s'u'}f_0)(s'n)\bigr|, \tag{102}$$
with $u' := u/m$. Besides, according to the proof of Theorem 1, $f_0 \in U_{s'}$. Thus, by definition of $\mathbf{X}_0$, the Shannon-Whittaker theorem [48, p. 61] implies that $f_0 = f^{(s')}_{\mathbf{X}_0}$ such as defined in (101). Then, using Lemma 3 with $\mathbf{X} \leftarrow \mathbf{X}_0$, $u \leftarrow u'$ and $s \leftarrow s'$, we get
$$f^{(s')}_{T_{u'}\mathbf{X}_0} = T_{s'u'}f^{(s')}_{\mathbf{X}_0} = T_{s'u'}f_0. \tag{103}$$
Then, using (12) in Proposition 2 with $\mathbf{X} \leftarrow T_{u'}\mathbf{X}_0$ and $s \leftarrow s'$, and inserting (103) into the result yields
$$T_{u'}\mathbf{X}_0[n] = s'\,f^{(s')}_{T_{u'}\mathbf{X}_0}(s'n) = s'\,(T_{s'u'}f_0)(s'n). \tag{104}$$
Therefore, (102) and (104) imply
$$\|\mathrm{C}_m(T_u\mathbf{X})\|_2 = \frac{1}{s'}\,\|T_{u'}\mathbf{X}_0\|_2. \tag{105}$$
Moreover, since $f_0 \in U_{s'}$, and according to (104), we can use Lemma 2 with $s \leftarrow s'$, $g \leftarrow T_{s'u'}f_0$ and $\mathbf{Y} \leftarrow T_{u'}\mathbf{X}_0$. We get
$$\|T_{u'}\mathbf{X}_0\|_2 = \|T_{s'u'}f_0\|_{L^2} = \|f_0\|_{L^2}. \tag{106}$$
Finally, (100), (105) and (106) imply $\|\mathrm{C}_m(T_u\mathbf{X})\|_2 = \|\mathrm{C}_m\mathbf{X}\|_2$, which concludes the proof.
APPENDIX H JUSTIFICATION FOR HYPOTHESES 1 AND 2
Given $n \in \mathbb{N}^*$, we define $n$-th order stationarity of a given stochastic process $F$ as in [55, p. 152]: for any $n' \in \{1\,..\,n\}$, $(x_1, \ldots, x_{n'}) \in (\mathbb{R}^2)^{n'}$ and $h \in \mathbb{R}^2$, the joint distribution of $\bigl(F(x_1), \ldots, F(x_{n'})\bigr)$ is identical to the one of $\bigl(F(x_1 + h), \ldots, F(x_{n'} + h)\bigr)$. Besides, strict-sense stationarity is defined as $n$-th order stationarity for any $n \in \mathbb{N}^*$.
We recall that $\nu := \xi/s$. We then state the following results.
Proposition 8: We assume that $F_{\mathbf{X}}$ is first-order stationary. If, for any $x \in \mathbb{R}^2$ and any $h \in B_2(2\pi/\|\nu\|_2)$,
$$(T_h F_{\mathbf{X}} \ast \psi)(x) = e^{i\langle \nu,\, h\rangle}\,(F_{\mathbf{X}} \ast \psi)(x), \tag{107}$$
then Hypothesis 1 is satisfied.
Proof: We show that the probability measure of $Z_{\mathbf{X}}(x)$ is invariant with respect to phase shift. In other words, we show that, for any measurable set $A \subset \mathbb{S}^1$,
$$\forall \omega \in [0, 2\pi],\quad \mu(A) = \mu(e^{i\omega} A), \tag{108}$$
where we have denoted
$$\mu : A \mapsto \mathbb{P}\{Z_{\mathbf{X}}(x) \in A\}.$$
Let $h \in B_2(2\pi/\|\nu\|_2)$. According to (107),
$$Z_{\mathbf{X}}(x) \in A \iff T_h Z_{\mathbf{X}}(x) \in e^{i\langle\nu,\, h\rangle} A.$$
Therefore,
$$\mathbb{P}\{Z_{\mathbf{X}}(x) \in A\} = \mathbb{P}\bigl\{T_h Z_{\mathbf{X}}(x) \in e^{i\langle\nu,\, h\rangle} A\bigr\}.$$
Since $F_{\mathbf{X}}$ is first-order stationary, $Z_{\mathbf{X}}(x)$ and $T_h Z_{\mathbf{X}}(x)$ have the same probability distribution. Thus we get
$$\mathbb{P}\{Z_{\mathbf{X}}(x) \in A\} = \mathbb{P}\bigl\{Z_{\mathbf{X}}(x) \in e^{i\langle\nu,\, h\rangle} A\bigr\}.$$
Let $\omega \in [0, 2\pi]$. Considering $h := \omega\nu/\|\nu\|_2^2$, we have $h \in B_2(2\pi/\|\nu\|_2)$ and $\langle\nu,\, h\rangle = \omega$. Therefore,
$$\forall \omega \in [0, 2\pi],\quad \mathbb{P}\{Z_{\mathbf{X}}(x) \in A\} = \mathbb{P}\bigl\{Z_{\mathbf{X}}(x) \in e^{i\omega} A\bigr\},$$
which yields (108).
Any probability measure defined on $\mathbb{S}^1$ is a Radon measure. Therefore, according to Haar's theorem [56], there exists a unique probability measure on $\mathbb{S}^1$ satisfying (108). Since the uniform probability measure is also invariant to phase shift, we deduce that $Z_{\mathbf{X}}(x)$ is uniformly distributed on $\mathbb{S}^1$.
Proposition 9: We assume the conditions of Proposition 8 are met. If, moreover, $F_{\mathbf{X}}$ is strict-sense stationary, then Hypothesis 2 is satisfied.
Proof: Let $n \in \mathbb{N}^*$ and $x, y_1, \ldots, y_n \in \mathbb{R}^2$. To alleviate notations, we consider the random vector $M = (M_{\mathbf{X}}(y_i))_{i\in\{1..n\}}$ with outcomes in $\mathbb{R}^n_+$. The proof is organized as follows. Using a similar reasoning as Proposition 8, we show that $Z_{\mathbf{X}}$ follows a uniform conditional probability distribution given $M$. Since we already know that $Z_{\mathbf{X}}$ follows a uniform (unconditional) distribution, we deduce that $Z_{\mathbf{X}}$ and $M$ are independent.
Let $A \subset \mathbb{S}^1$ and $S := (S_i)_{i\in\{1..n\}} \subset \mathbb{R}^n_+$ denote measurable sets. Let $h \in B_2(2\pi/\|\nu\|_2)$. According to (107),
$$Z_{\mathbf{X}}(x) \in A \iff T_h Z_{\mathbf{X}}(x) \in e^{i\langle\nu,\, h\rangle} A; \qquad M_{\mathbf{X}}(y_i) \in S_i \iff T_h M_{\mathbf{X}}(y_i) \in S_i \quad \forall i \in \{1\,..\,n\}.$$
Therefore,
$$\mathbb{P}\bigl\{(Z_{\mathbf{X}}(x) \in A)\ \&\ (M \in S)\bigr\} = \mathbb{P}\bigl\{(T_h Z_{\mathbf{X}}(x) \in e^{i\langle\nu,\, h\rangle} A)\ \&\ (T_h M \in S)\bigr\}.$$
Since $F_{\mathbf{X}}$ is strict-sense stationary, the joint probability density of $\bigl(T_h Z_{\mathbf{X}}(x), T_h M_{\mathbf{X}}(y_1), \ldots, T_h M_{\mathbf{X}}(y_n)\bigr)$ is identical to the one of $\bigl(Z_{\mathbf{X}}(x), M_{\mathbf{X}}(y_1), \ldots, M_{\mathbf{X}}(y_n)\bigr)$. Therefore we get
$$\mathbb{P}\bigl\{(Z_{\mathbf{X}}(x) \in A)\ \&\ (M \in S)\bigr\} = \mathbb{P}\bigl\{(Z_{\mathbf{X}}(x) \in e^{i\langle\nu,\, h\rangle} A)\ \&\ (M \in S)\bigr\}.$$
We assume that $\mathbb{P}(M \in S) > 0$. According to the above expression, and similarly to the proof of Proposition 8, we get,
$$\forall \omega \in [0, 2\pi],\quad \mathbb{P}\{Z_{\mathbf{X}}(x) \in A \mid M \in S\} = \mathbb{P}\bigl\{Z_{\mathbf{X}}(x) \in e^{i\omega} A \,\big|\, M \in S\bigr\}.$$
Then, the above conditional probability measure satisfies phase shift invariance (108). Therefore, as in the proof of Proposition 8, Haar's theorem implies that $Z_{\mathbf{X}}(x)$ follows a uniform conditional distribution given $M \in S$.
Moreover, strict-sense implies first-order stationarity, and thus, according to Proposition 8, $Z_{\mathbf{X}}(x)$ follows a uniform (unconditional) distribution. Therefore we get, for any measurable sets $A \subset \mathbb{S}^1$ and $S \subset \mathbb{R}^n_+$ such that $\mathbb{P}(M \in S) > 0$,
$$\mathbb{P}\{Z_{\mathbf{X}}(x) \in A \mid M \in S\} = \mathbb{P}(Z_{\mathbf{X}}(x) \in A),$$
which proves independence between $Z_{\mathbf{X}}(x)$ and $M$.
Strict-sense stationarity suggests that any translated version of a given image is equally likely. In reality, this statement is too strong, for several reasons. First, by construction, $\mathbf{X}$ has all its realizations in $L^2_{\mathbb{R}}(\mathbb{R}^2)$. In that context, a stationary process yields outcomes which are zero almost everywhere. Besides, depending on which category the image belongs to, the pixel distribution is likely to vary across various regions. For instance, we can expect the main subject to be located at the center of the image. More details on statistical properties of natural images can be found in [57]. Nevertheless, this hypothesis will be considered as a reasonable approximation if the shift is much smaller than the image "characteristic" size in the continuous domain; i.e., if $\|h\|_2 \ll sM$, where, as a reminder, $M$ denotes the support size of input images. As it turns out, the proofs of Propositions 8 and 9 only require shifts with $\|h\|_2 \le 2\pi/\|\nu\|_2$. Then, according to (53), $\|h\|_2 \ll sM$, and the stationarity hypothesis holds. Finally, to justify (107), we consider $\varphi_{\mathbf{W}} : x \mapsto \psi_{\mathbf{W}}(x)\,e^{-i\langle\nu,\, x\rangle}$. Similarly to Lemma 1, we can show that $\varphi_{\mathbf{W}}$ is a low-pass filter, with $\operatorname{supp}\widehat{\varphi_{\mathbf{W}}} \subset B_\infty(\varepsilon/2)$. For all $h \in \mathbb{R}^2$ such that $\|h\|_2 \le 2\pi/\|\nu\|_2$, we have
$$(T_h F_{\mathbf{X}} \ast \psi)(x) = \int_{\mathbb{R}^2} T_h F_{\mathbf{X}}(x - y)\,\varphi(y)\,e^{-i\langle\nu,\, y\rangle}\, d^2y = e^{i\langle\nu,\, h\rangle} \int_{\mathbb{R}^2} F_{\mathbf{X}}(x - y')\,\varphi(y' - h)\,e^{-i\langle\nu,\, y'\rangle}\, d^2y'.$$
Since $\operatorname{supp}\widehat{\varphi_{\mathbf{W}}} \subset B_\infty\bigl(\frac{\kappa}{2s}\bigr)$, we can define a "minimal wavelength" $\lambda_{\varphi_{\mathbf{W}}} := 2\pi s/\kappa$. Then, if $\|h\|_2 \ll \lambda_{\varphi_{\mathbf{W}}}$, we can approximate $\varphi(y' - h) \approx \varphi(y')$. This sufficient condition is actually met, because $\|h\|_2 \le 2\pi/\|\nu\|_2$ and, according to (54), $\|\nu\|_2 \gg \kappa/s$. Therefore,
$$(T_h F_{\mathbf{X}} \ast \psi)(x) \approx e^{i\langle\nu,\, h\rangle}\,(F_{\mathbf{X}} \ast \psi)(x). \tag{109}$$
As a result, the conditions for Propositions 8 and 9 are approximately satisfied. We will therefore consider Hypotheses 1 and 2 as reasonable.
APPENDIX I JUSTIFICATION FOR HYPOTHESIS 3
We consider, for any $l \in \mathcal{L}$ and any $k \in \{0\,..\,K-1\}$, the value of $\mu \in \mathbb{R}$ minimizing $\|\mu \mathbf{V}_l - \mathbf{V}_{lk}\|_2^2$, denoted by $\mu_{lk}$. We then denote by $\delta_{lk} := \|\mu_{lk}\mathbf{V}_l - \mathbf{V}_{lk}\|_2^2 / \|\mathbf{V}_{lk}\|_2^2$ the relative quadratic error between $\mathbf{V}_{lk}$ and its projection on $\mathbb{R}\,\mathbf{V}_l$. We get
$$\delta_{lk} = 1 - \frac{\langle \mathbf{V}_l,\, \mathbf{V}_{lk} \rangle^2}{\|\mathbf{V}_l\|_2^2 \cdot \|\mathbf{V}_{lk}\|_2^2}. \tag{110}$$
Expression (67) holds if and only if $\delta_{lk} = 0$ for any $k \in \{0\,..\,K-1\}$. In the case of AlexNet, when $l \in \mathcal{L}$, $\delta_{lk}$ does not exceed $10^{-1}$, and its mean value is around $10^{-2}$. Therefore $\delta_{lk} \ll 1$, and Hypothesis 3 can be considered as a reasonable assumption for any Gabor channel $l \in \mathcal{L}$. Similar observations can be drawn for ResNet.
APPENDIX J PROOF OF PROPOSITION 4
Proof: We consider the Borel $\sigma$-algebra on $\mathbb{S}^1$ generated by $\bigl\{[z,\, z']_{\mathbb{S}^1} \,\big|\, z, z' \in \mathbb{S}^1\bigr\} \cup \{\mathbb{S}^1\}$, on which we have defined the angular measure $\vartheta$ such that $\vartheta(\mathbb{S}^1) := 2\pi$, and
$$\forall z, z' \in \mathbb{S}^1,\quad \vartheta\bigl([z,\, z']_{\mathbb{S}^1}\bigr) := \angle(z^* z').$$
For any $p \in \mathbb{N}^*$, we compute the $p$-th moment of $G^{\max}_{\mathbf{X}}(x)$ defined in (51). By considering
$$h_{\max} : \mathbb{S}^1 \to [-1, 1],\quad z \mapsto \max_{\|k\|_\infty \le q} \operatorname{Re}(z^* z_k),$$
we get $G^{\max}_{\mathbf{X}}(x) = h_{\max}(Z_{\mathbf{X}}(x))$. According to Hypothesis 1, $Z_{\mathbf{X}}(x)$ follows a uniform distribution on $\mathbb{S}^1$. Therefore,
$$\mathbb{E}\bigl[G^{\max}_{\mathbf{X}}(x)^p\bigr] = \frac{1}{2\pi}\int_{\mathbb{S}^1} h_{\max}(z)^p\, d\vartheta(z),$$
which proves that $\mathbb{E}[G^{\max}_{\mathbf{X}}(x)^p]$ does not depend on $x$. Let us split the unit circle $\mathbb{S}^1$ into the arcs $A^{(q)}_0, \ldots, A^{(q)}_{N_q-1}$ such as defined in (47):
$$\mathbb{E}\bigl[G^{\max}_{\mathbf{X}}(x)^p\bigr] = \frac{1}{2\pi}\sum_{i=0}^{N_q-1}\int_{A^{(q)}_i} h_{\max}(z)^p\, d\vartheta(z). \tag{111}$$
Let $i \in \{0\,..\,N_q-1\}$. We can show that
$$\forall z \in A^{(q)}_i,\quad h_{\max}(z) = \max\Bigl(\operatorname{Re}\bigl(z^* z^{(q)}_i\bigr),\, \operatorname{Re}\bigl(z^* z^{(q)}_{i+1}\bigr)\Bigr).$$
Therefore, $h_{\max}$ is symmetric with respect to the center value of $A^{(q)}_i$, denoted by $\bar z^{(q)}_i$, where $h_{\max}$ reaches its minimum. We denote by $A'^{(q)}_i := \bigl[z^{(q)}_i,\, \bar z^{(q)}_i\bigr]_{\mathbb{S}^1}$ the first half of arc $A^{(q)}_i$. Then,
$$\forall z \in A'^{(q)}_i,\quad h_{\max}(z) = \operatorname{Re}\bigl(z^* z^{(q)}_i\bigr).$$
As a consequence, using symmetry, we get
$$\frac{1}{\sigma}\,\mathbb{E}\bigl[\Delta^2_{\mathbf{X}} \,\big|\, S^2_{\mathbf{X}} = \sigma\bigr] = \gamma_q(m\xi)^2.$$
Then, the law of total expectation states that
$$\mathbb{E}\bigl[Q^2_{\mathbf{X}}\bigr] = \mathbb{E}\Bigl[\mathbb{E}\bigl[Q^2_{\mathbf{X}} \,\big|\, S^2_{\mathbf{X}}\bigr]\Bigr] = \gamma_q(m\xi)^2.$$
Finally, according to Conjecture 1, we have:
$$P_{\mathbf{X}} \le \bigl(1 + \beta_q(m\kappa)\bigr)\,Q_{\mathbf{X}},$$
which yields (61).
APPENDIX M PROOF OF THEOREM 3
Proof: Using the triangular inequality, we compute
$$\|\mathrm{R}_{m,q}(T_u\mathbf{X}) - \mathrm{R}_{m,q}\mathbf{X}\|_2 \le \|\mathrm{C}_{2m}(T_u\mathbf{X})\|_2\, P_{T_u\mathbf{X}} + \|\mathrm{C}_{2m}\mathbf{X}\|_2\, P_{\mathbf{X}} + \|\mathrm{C}_{2m}(T_u\mathbf{X}) - \mathrm{C}_{2m}\mathbf{X}\|_2,$$
where $P_{\mathbf{X}}$ and $P_{T_u\mathbf{X}}$ are defined in (55). Since, by hypothesis, $\kappa \le \pi/m$, expression (21) in Proposition 3 states that $\|\mathrm{C}_{2m}(T_u\mathbf{X})\|_2 = \|\mathrm{C}_{2m}\mathbf{X}\|_2$.
Moreover, according to (63), we can apply (20) in Theorem 1 on the third term of the above expression. We get
$$\|\mathrm{R}_{m,q}(T_u\mathbf{X}) - \mathrm{R}_{m,q}\mathbf{X}\|_2 \le \bigl(P_{T_u\mathbf{X}} + P_{\mathbf{X}} + \alpha(\kappa u)\bigr)\,\|\mathrm{C}_{2m}\mathbf{X}\|_2.$$
Then, by linearity of $\mathbb{E}$, we get
$$\mathbb{E}\bigl[R(\mathbf{X},\, u)\bigr] \le \mathbb{E}\bigl[P_{T_u\mathbf{X}}\bigr] + \mathbb{E}\bigl[P_{\mathbf{X}}\bigr] + \alpha(\kappa u). \tag{114}$$
For any stochastic process $\mathbf{X}$ satisfying Hypotheses 1 and 2, expression (61) in Theorem 2 and Jensen's inequality yield:
$$\mathbb{E}\bigl[P_{\mathbf{X}}\bigr] \le \bigl(1 + \beta_q(m\kappa)\bigr)\,\gamma_q(m\xi). \tag{115}$$
Since Hypotheses 1 & 2 are satisfied for $Z_{\mathbf{X}}$ and $M_{\mathbf{X}}$, Lemma 3 implies that they are also true for $Z_{T_u\mathbf{X}}$ and $M_{T_u\mathbf{X}}$. Therefore, (115) is valid for $\mathbf{X} \leftarrow \mathbf{X}$ and $\mathbf{X} \leftarrow T_u\mathbf{X}$, and plugging this expression into (114) concludes the proof.
APPENDIX N PROOF OF PROPOSITION 6
Proof: This proposition is a simple reformulation of the well-known result that two successive convolutions can be written as another convolution with a wider kernel. We introduce the upsampling operator: $(\mathbf{X} \uparrow m)[n] := \mathbf{X}[n/m]$ if $n/m \in \mathbb{Z}^2$, and $0$ otherwise. We also consider the "identity" filter $\mathbf{I} \in l^2(\mathbb{Z}^2)$ such that $\mathbf{I}[0] = 1$ and $\mathbf{I}[n] = 0$ otherwise.
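As a quick numerical sanity check of the composition identity used in the next step, $((\mathbf{U} \downarrow s) \ast \mathbf{V}) \downarrow t = (\mathbf{U} \ast (\mathbf{V} \uparrow s)) \downarrow (st)$, here is a minimal NumPy sketch; a 1D analogue of the 2D statement, with arbitrary random signals, all names illustrative.

```python
import numpy as np

def upsample(v, m):
    """Insert m-1 zeros between consecutive samples: (V ^ m)."""
    out = np.zeros(m * (len(v) - 1) + 1)
    out[::m] = v
    return out

rng = np.random.default_rng(0)
u, v = rng.normal(size=64), rng.normal(size=9)
s, t = 2, 3

lhs = np.convolve(u[::s], v)[::t]              # ((U down s) * V) down t
rhs = np.convolve(u, upsample(v, s))[::s * t]  # (U * (V up s)) down (st)

n = min(len(lhs), len(rhs))
assert np.allclose(lhs[:n], rhs[:n])  # the two schedules agree sample-wise
```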
First, for any $\mathbf{U}, \mathbf{V} \in l^2(\mathbb{Z}^2)$ and any $s, t \in \mathbb{N}^*$, we have $\bigl((\mathbf{U} \downarrow s) \ast \mathbf{V}\bigr) \downarrow t = \bigl(\mathbf{U} \ast (\mathbf{V} \uparrow s)\bigr) \downarrow (st)$. Then, a simple reasoning by induction yields the result, with
H. Leterme and K. Alahari are with Univ. Grenoble Alpes, CNRS, Inria, Grenoble INP, LJK, 38000 Grenoble, France (e-mail: [email protected]). K. Polisano and V. Perrier are with Univ. Grenoble Alpes, CNRS, Grenoble INP, LJK, 38000 Grenoble, France. This work has been partially supported by the LabEx PERSYVAL-Lab (ANR-11-LABX-0025-01) funded by the French program Investissement d'avenir, as well as the ANR grant AVENUE (ANR-18-CE23-0011).
Fig. 1. Spatial (left) and Fourier (right) representations of convolution kernels in the first layer of AlexNet, after training with ImageNet ILSVRC2012. Each kernel connects the 3 RGB input channels to one of the 64 output channels.

Fig. 2. Search for the maximum value of $h \mapsto g_{\mathbf{X}}(x, h)$ over a discrete grid of size $3 \times 3$, i.e., $q = 1$. This figure displays 3 examples with different frequencies $\nu := \xi/s$ and phases $\eta_{\mathbf{X}}(x)$. Hopefully the result will be close to the true maximum (left), but there are some pathological cases in which all points in the grid fall into pits (middle and right).

Fig. 3. Resulting kernels $\mathbf{V}^{(2)}$ for WPT (left) and $\mathbf{W}^{(2)}$ for DT-CWPT (right), computed with Q-shift orthogonal QMFs of length 10 [50]. They have been cropped to size $11 \times 11$ for the sake of visibility. The right part of the figure displays 32 complex filters, alternatively represented by their real and imaginary parts. The feature maps related to 1 and 2 are obtained with two distinct formulas, which are summarized in (82). Illustration from [31].

… and (85), we can apply Theorems 1-3 to the dual-tree framework. More precisely, for any output channel $l \in \{0\,..\,L_J - 1\}$, $m_J := 2^{J-1}$. We recall that $\mathrm{R}_m$ and $\mathrm{C}_{2m}$ have been introduced in (31) with $q \leftarrow 1$, for any $m \in \mathbb{N}^*$. Both RGPool and CGMod outputs are computed with DT-CWPT with $J$ decomposition stages. However, the max pooling operator requires an extra level of subsampling. To counterbalance this, when computing $\mathbf{Y}^{(J)}_{\mathrm{pool}}$, the last stage of DT-CWPT decomposition must be performed without subsampling, at the cost of increased redundancy. This is why $m_J := 2^{J-1}$ instead of $2^J$.

… outputs satisfying (87) with $\mathbf{X} \leftarrow \mathbf{X}_n$. Then, the relative quadratic error between $\mathbf{Y}$ …

Fig. 5. Relative quadratic error between the outputs of RGPool and CGMod. For each wavelet packet channel $l \in \{0\,..\,L_J - 1\}$, $\rho^{(J)}_l$ …

Fig. 6. Empirical measure of horizontal shift invariance of RGPool and CGMod outputs. For each $l \in \{0\,..\,L_J - 1\}$, $\rho^{(J)}_{\mathrm{pool}\,l}$ (top) and $\rho^{(J)}_{\mathrm{mod}\,l}$ (bottom) are represented as a grayscale pixel centered in $\xi^{(J)}_l$, such as introduced in (86).
$\cdots \ast (\mathbf{G}_k \uparrow 2^j)$ for any $l \in \{0\,..\,j-1\}$ and any $k \in \{0\,..\,3\}$.

APPENDIX O PROOF OF PROPOSITION 7
Proof: For any $k \in \{0\,..\,3\}$, the result is obtained by plugging (116) into (82), and by denoting …
TABLE I
PERCENTAGE OF ENERGY WITHIN A FOURIER WINDOW OF SIZE κ × κ, FOR GABOR-LIKE FILTERS $\{\widehat{\mathbf{W}}_l\}_{l\in\mathcal{L}}$

Model     | Nb channels | Size of L | κ   | Mean ratio
AlexNet   | 64          | 26        | π/4 | 67%
ResNet-34 | 64          | 22        | π/2 | 76%
φ (s) is a tensor product of scaled and normalized sinc functions.7 We actually use the 2D formulation, mentioned in p. 82.
Actually, the FB design requires some technicalities which are not described here.
This asymptotic result is not true for "edge filters", i.e., when ξ (J) l ∞ = 1 − 2 −(J+1) π. In this case, a small fraction of the filter's energy remains located at the far end of the Fourier domain[30]. However, this edge effect is ignored and we still consider Hypothesis 5 as reasonable.11 This is similar to the concept of stationary wavelet transform[51].
We refer readers to[58] for a related notion of local stationarity.
REFERENCES
[1] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, no. 7553, pp. 436-444, 2015.
[2] M. Vetterli, "Wavelets, approximation, and compression," IEEE Signal Processing Magazine, vol. 18, no. 5, pp. 59-73, Sep. 2001.
[3] A. Laine and J. Fan, "Texture classification by wavelet packet signatures," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 15, no. 11, pp. 1186-1191, Nov. 1993.
[4] S. Pittner and S. V. Kamarthi, "Feature extraction from wavelet coefficients for pattern recognition tasks," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 21, no. 1, pp. 83-88, Jan. 1999.
[5] G. G. Yen, "Wavelet packet feature extraction for vibration monitoring," IEEE Trans. Industrial Electronics, vol. 47, no. 3, pp. 650-667, Jun. 2000.
[6] K. Huang and S. Aviyente, "Wavelet feature selection for image classification," IEEE Trans. Image Processing, vol. 17, no. 9, pp. 1709-1720, Aug. 2008.
[7] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proc. IEEE, vol. 86, no. 11, pp. 2278-2323, Nov. 1998.
[8] T. Wiatowski and H. Bölcskei, "A mathematical theory of deep convolutional neural networks for feature extraction," IEEE Trans. Information Theory, vol. 64, no. 3, pp. 1845-1866, Mar. 2018.
[9] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," in ICLR, 2015.
[10] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," Communications of the ACM, vol. 60, no. 6, pp. 84-90, May 2017.
[11] J. Bruna and S. Mallat, "Invariant scattering convolution networks," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 35, no. 8, pp. 1872-1886, May 2013.
[12] B. Liao and F. Peng, "Rotation-invariant texture features extraction using dual-tree complex wavelet transform," in Intl. Conf. Information, Networking and Automation (ICINA), 2010.
[13] L. Sifre and S. Mallat, "Rotation, scaling and deformation invariant scattering for texture discrimination," in CVPR, 2013.
[14] A. Bietti and J. Mairal, "Invariance and stability of deep convolutional representations," in NeurIPS, 2017.
[15] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson, "How transferable are features in deep neural networks?" in NeurIPS, 2014.
[16] M. Rai and P. Rivas, "A review of convolutional neural networks and Gabor filters in object recognition," in Intl. Conf. Computational Science and Computational Intelligence (CSCI), 2020.
[17] S.-Y. Chang and N. Morgan, "Robust CNN-based speech recognition with Gabor filter kernels," in INTERSPEECH, 2014.
[18] S. Fujieda, K. Takayama, and T. Hachisuka, "Wavelet convolutional neural networks for texture classification," arXiv:1707.07394, Jul. 2017.
[19] S. S. Sarwar, P. Panda, and K. Roy, "Gabor filter assisted energy efficient fast learning convolutional neural networks," in IEEE/ACM International Symposium on Low Power Electronics and Design, 2017.
[20] S. Luan, C. Chen, B. Zhang, J. Han, and J. Liu, "Gabor convolutional networks," IEEE Trans. Image Processing, vol. 27, no. 9, pp. 4357-4366, May 2018.
[21] P. Liu, H. Zhang, W. Lian, and W. Zuo, "Multi-level wavelet convolutional neural networks," IEEE Access, vol. 7, pp. 74973-74985, Jun. 2019.
[22] M. Ulicny, V. A. Krylov, and R. Dahyot, "Harmonic networks for image classification," in BMVC, 2019.
[23] A. Azulay and Y. Weiss, "Why do deep convolutional networks generalize so poorly to small image transformations?" JMLR, vol. 20, no. 184, pp. 1-25, 2019.
[24] R. Zhang, "Making convolutional networks shift-invariant again," in ICML, 2019.
[25] C. Vasconcelos, H. Larochelle, V. Dumoulin, N. L. Roux, and R. Goroshin, "An effective anti-aliasing approach for residual networks," arXiv:2011.10675, Nov. 2020.
[26] X. Zou, F. Xiao, Z. Yu, and Y. J. Lee, "Delving deeper into anti-aliasing in ConvNets," in BMVC, 2020.
[27] A. Chaman and I. Dokmanic, "Truly shift-invariant convolutional neural networks," in CVPR, 2021.
[28] I. Waldspurger, "Wavelet transform modulus: Phase retrieval and scattering," Doctoral Thesis, École normale supérieure, Paris, 2015.
[29] N. Kingsbury, "Complex wavelets for shift invariant analysis and filtering of signals," Applied and Computational Harmonic Analysis, vol. 10, no. 3, pp. 234-253, May 2001.
[30] I. Bayram and I. W. Selesnick, "On the dual-tree complex wavelet packet and M-band transforms," IEEE Trans. Signal Processing, vol. 56, no. 6, pp. 2298-2310, Jun. 2008.
[31] H. Leterme, K. Polisano, V. Perrier, and K. Alahari, "Modélisation parcimonieuse de CNNs avec des paquets d'ondelettes dual-tree," in ORASIS, 2021.
[32] N. Kingsbury and J. Magarey, "Wavelet transforms in image processing," in Signal Analysis and Prediction. Birkhäuser, 1998, pp. 27-46.
[33] E. Oyallon and S. Mallat, "Deep roto-translation scattering for object classification," in CVPR, 2015.
[34] A. Singh and N. Kingsbury, "Dual-tree wavelet scattering network with parametric log transformation for object classification," in ICASSP, 2017.
[35] D. Zou and G. Lerman, "Graph convolutional neural networks via scattering," Applied and Computational Harmonic Analysis, vol. 49, no. 3, pp. 1046-1074, Nov. 2020.
[36] E. Oyallon, E. Belilovsky, and S. Zagoruyko, "Scaling the scattering transform: Deep hybrid networks," in ICCV, 2017.
[37] E. Oyallon, E. Belilovsky, S. Zagoruyko, and M. Valko, "Compressing the input for CNNs with the first-order scattering transform," in ECCV, 2018.
[38] J. Zarka, L. Thiry, T. Angles, and S. Mallat, "Deep network classification by scattering and homotopy dictionary learning," in ICLR, 2020.
[39] J. Zarka, F. Guth, and S. Mallat, "Separation and concentration in deep networks," in ICLR, 2021.
[40] F. Cotter and N. Kingsbury, "A learnable scatternet: Locally invariant convolutional layers," in ICIP, 2019.
[41] S. Gauthier, B. Thérien, L. Alsène-Racicot, M. Chaudhary, I. Rish, E. Belilovsky, M. Eickenberg, and G. Wolf, "Parametric scattering networks," in CVPR, 2022.
[42] S. Mallat, "Group invariant scattering," Communications on Pure and Applied Mathematics, vol. 65, no. 10, pp. 1331-1398, Jul. 2012.
[43] S. Mallat, "Understanding deep convolutional networks," Philosophical Trans. of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 374, no. 2065, Apr. 2016.
[44] W. Czaja and W. Li, "Analysis of time-frequency scattering transforms," Applied and Computational Harmonic Analysis, vol. 47, no. 1, pp. 149-171, Jul. 2019.
[45] W. Czaja and W. Li, "Rotationally invariant time-frequency scattering transforms," J. Fourier Analysis and Applications, vol. 26, no. 1, p. 4, Jan. 2020.
[46] A. Bietti and J. Mairal, "Group invariance, stability to deformations, and complexity of deep convolutional representations," JMLR, vol. 20, no. 1, pp. 876-924, 2019.
[47] J. Havlicek, J. Havlicek, and A. Bovik, "The analytic image," in ICIP, 1997.
[48] S. Mallat, A Wavelet Tour of Signal Processing: The Sparse Way. Academic Press, 2009.
[49] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," in ICML, 2015.
[50] N. Kingsbury, "Design of Q-shift complex wavelets for image processing using frequency domain energy minimization," in ICIP, 2003.
[51] G. P. Nason and B. W. Silverman, "The stationary wavelet transform and some statistical applications," in Wavelets and Statistics, ser. Lecture Notes in Statistics. Springer, 1995, pp. 281-299.
[52] Y. Meyer, "Principe d'incertitude, bases hilbertiennes et algèbres d'opérateurs," in Séminaire Bourbaki, vol. 662, 1985.
[53] I. W. Selesnick, R. Baraniuk, and N. Kingsbury, "The dual-tree complex wavelet transform," IEEE Signal Processing Magazine, vol. 22, no. 6, pp. 123-151, Nov. 2005.
[54] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, "ImageNet large scale visual recognition challenge," IJCV, vol. 115, no. 3, pp. 211-252, Apr. 2015.
[55] K. I. Park and M. Park, Fundamentals of Probability and Stochastic Processes with Applications to Communications. Springer, 2018.
[56] P. R. Halmos, Measure Theory. Springer, 2013.
[57] A. Torralba and A. Oliva, "Statistics of natural image categories," Network, vol. 14, no. 3, pp. 391-412, Jan. 2003.
[58] M. Tygert, J. Bruna, S. Chintala, Y. LeCun, S. Piantino, and A. Szlam, "A mathematical motivation for complex-valued convolutional networks," Neural Computation, vol. 28, no. 5, pp. 815-825, May 2016.
[59] K. B. Athreya and S. N. Lahiri, Measure Theory and Probability Theory. Springer, 2006, vol. 19.
Deep Learning in the Wild

Thilo Stadelmann (ZHAW Datalab & School of Engineering, Winterthur, Switzerland), Mohammadreza Amirian (ZHAW Datalab & School of Engineering, Winterthur, Switzerland; Institute of Neural Information Processing, Ulm University, Germany), Ismail Arabaci (ARGUS DATA INSIGHTS Schweiz AG, Zürich, Switzerland), Marek Arnold (ZHAW Datalab & School of Engineering, Winterthur, Switzerland; ARGUS DATA INSIGHTS Schweiz AG, Zürich, Switzerland), Gilbert François Duivesteijn (Deep Impact AG, Winterthur, Switzerland), Ismail Elezi (ZHAW Datalab & School of Engineering, Winterthur, Switzerland; DAIS, Ca' Foscari University of Venice, Venezia Mestre, Italy), Melanie Geiger (ZHAW Datalab & School of Engineering, Winterthur, Switzerland; Institut d'Informatique, Université de Neuchâtel, Switzerland), Stefan Lörwald (PricewaterhouseCoopers AG, Zürich, Switzerland), Benjamin Bruno Meier (ARGUS DATA INSIGHTS Schweiz AG, Zürich, Switzerland), Katharina Rombach (ZHAW Datalab & School of Engineering, Winterthur, Switzerland), Lukas Tuggener (ZHAW Datalab & School of Engineering, Winterthur, Switzerland; IDSIA Dalle Molle Institute for Artificial Intelligence, Manno, Switzerland)

Keywords: data availability · deployment · loss & reward shaping · real world tasks

Abstract.
Deep learning with neural networks is applied by an increasing number of people outside of classic research environments, due to the vast success of the methodology on a wide range of machine perception tasks. While this interest is fueled by beautiful success stories, practical work in deep learning on novel tasks without existing baselines remains challenging. This paper explores the specific challenges arising in the realm of real world tasks, based on case studies from research & development in conjunction with industry, and extracts lessons learned from them. It thus fills a gap between the publication of latest algorithmic and methodical developments, and the usually omitted nitty-gritty of how to make them work. Specifically, we give insight into deep learning projects on face matching, print media monitoring, industrial quality control, music scanning, strategy game playing, and automated machine learning, thereby providing best practices for deep learning in practice.
Introduction
Measured for example by the interest and participation of industry at the annual NIPS conference (see https://medium.com/syncedreview/a-statistical-tour-of-nips-2017-438201fb6c8a), it is safe to say that deep learning [49] has successfully transitioned from pure research to application [32]. Major research challenges still exist, e.g. in the areas of model interpretability [39] and robustness [1], or general understanding [53] and stability [67,25] of the learning process, to name a few. Yet, and in addition, another challenge is quickly becoming relevant: in the light of more than 180 deep learning publications per day in the last year (Google scholar counts > 68,000 articles for the year 2017 as of June 11, 2018), the growing number of deep learning engineers as well as prospective researchers in the field need to get educated on best practices and what works and what doesn't "in the wild". This information is usually underrepresented in publications of a field that is very competitive and thus striving above all for novelty and benchmark-beating results [38]. Adding to this fact, with a notable exception [20], the field lacks authoritative and detailed textbooks by leading representatives. Learners are thus left with preprints [37,57], cookbooks [44], code, and older gems [29,28,58] to find much needed practical advice.
In this paper, we contribute to closing this gap between cutting edge research and application in the wild by presenting case-based best practices. Based on a number of successful industry-academic research & development collaborations, we report what specifically enabled success in each case alongside open challenges. The presented findings (a) come from real-world and business case-backed use cases beyond purely academic competitions; (b) go deliberately beyond what is usually reported in our research papers in terms of tips & tricks, thus complementing them by the stories behind the scenes; (c) include also what didn't work despite contrary intuition; and (d) have been selected to be transferable as lessons learned to other use cases and application domains. The intended effect is twofold: more successful applications, and increased applied research in the areas of the remaining challenges.
We organize the main part of this paper by case studies to tell the story behind each undertaking. Per case, we briefly introduce the application as well as the specific (research) challenge behind it; sketch the solution (referring details to elsewhere, as the final model architecture etc. is not the focus of this work); highlight what measures beyond textbook knowledge and published results where necessary to arrive at the solution; and show, wherever possible, examples of the arising difficulties to exemplify the challenges. Section 2 introduces a face matching application and the amount of surrounding models needed to make it practically applicable. Likewise, Section 3 describes the additional amount of work to deploy a state-of-the-art machine learning system into the wider IT system landscape of an automated print media monitoring application. Section 4 discusses interpretability and class imbalance issues when applying deep learning for images-based industrial quality control. In Section 5, measures to cope with the instability of the training process of a complex model architecture for large-scale optical music recognition are presented, and the class imbalance problem has a second appearance. Section 6 reports on practical ways for deep reinforcement learning in complex strategy game play with huge action and state spaces in non-stationary environments. Finally, Section 7 presents first results on comparing practical automated machine learning systems with the scientific state of the art, hinting at the use of simple baseline experiments. Section 8 summarizes the lessons learned and gives an outlook on future work on deep learning in practice.
Face matching
Designing, training and testing deep learning models for application in face recognition comes with all the well known challenges like choosing the architecture, setting hyperparameters, creating a representative training/dev/test dataset, preventing bias or overfitting of the trained model, and more. Anyway, very good results have been reported in the literature [42,50,9]. Although the challenges in lab conditions are not to be taken lightly, a new set of difficulties emerges when deploying these models in a real product. Specifically, during development, it is known what to expect as input in the controlled environment. When the models are integrated in a product that is used "in the wild", however, all kinds of input can reach the system, making it hard to maintain a consistent and reliable prediction. In this section, we report on approaches to deal with related challenges in developing an actual face-ID verification product.

Fig. 1: Schematic representation of a face matching application with ID detection, anti-spoofing and image quality assessment. For any pair of input images (selfie and ID document), the output is the match probability and type of ID document, if no anomaly or attack has been detected. Note that all boxes contain at least one or several deep learning (DL) models with many different (convolutional) architectures.
Although the core functionality of such a product is to quantify the match between a person's face and the photo on the given ID, more functionality is needed to make the system perform its task well, most of it hidden from the user. Thus, in addition to the actual face matching module, the final system contains at least the following machine learnable modules (see Figure 1); a sketch of the first module follows the list.

- Image orientation detection: When a user takes a photo of the ID on a flat surface using a mobile phone, in many cases the image orientation is random. A deep learning method is applied to predict the orientation angle, used to rotate the image in the correct orientation.
- Image quality assessment: Consists of an ensemble of analytical functions and deep learning models to test if the photo quality is sufficient for a reliable match. It also guides the user to improve the picture taking process in case of bad quality.
- User action prediction: Uses deep learning to predict the action performed by the user to guide the system's workflow, e.g. making a selfie, presenting an ID or if the user is doing something wrong during the sequence.
- Anti-spoofing: An essential module that uses various methods to detect if a person is showing his "real" face or tries to fool the system with a photo, video or mask. It consists of an ensemble of deep learning models.
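To make the orientation module concrete, here is a minimal PyTorch sketch that treats orientation as a four-class problem (0°, 90°, 180°, 270°); the tiny architecture and the class granularity are illustrative assumptions, not the production model.

```python
import torch
import torch.nn as nn

class OrientationNet(nn.Module):
    """Tiny CNN that predicts one of four image orientations."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 4)  # 0, 90, 180, 270 degrees

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.classifier(h)

# Training data comes for free: rotate correctly oriented document
# photos by a known angle and use that angle as the label.
model = OrientationNet()
logits = model(torch.randn(8, 3, 224, 224))  # batch of 8 RGB photos
angle = logits.argmax(dim=1) * 90            # predicted rotation in degrees
```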
For a commercial face-ID product, the anti-spoofing module is both most crucial for success, and technically most challenging; thus, the following discussion will focus on anti-spoofing in practice. Face matching and recognition systems are vulnerable to spoofing attacks made by non-real faces, because they are not per se able to detect whether or not a face is "live" or "not-live", given only a single image as input in the worst case. If control over this input is out of the system's reach e.g. for product management reasons, it is then easy to fool the face matching system by showing a photo of a face from screen or print on paper, a video or even a mask. To guard against such spoofing, a secure system needs to be able to do live-ness detection. We'd like to highlight the methods we use for this task, in order to show the additional complexity of applying face recognition in a production environment over lab conditions.

Fig. 2: Samples from the CASIA dataset [66], where photo 1, 2, and 3 on the left hand side show a real face, photo 4 shows a replay attack from a digital screen, and photos 5 and 6 show replay attacks from print.
Most of the many spoofing detection methods proposed in the literature use hand crafted features, followed by shallow learning techniques, e.g. SVM [18,34,30]. These techniques mainly focus on texture differences between real and spoofed images, differences in color space [7], Fourier spectra [30], or optical flow maps [6]. In more recent work, deep learning methods have been introduced [3,64,63,31]. Most methods have in common that they attempt to be a one-size-fits-all solution, classifying all incoming cases with one method. This might be facilitated by the available datasets: to develop and evaluate anti-spoofing tools, amongst others CASIA [66], MSU-USSA [43], and the Replay Attack Database [12] exist. Although these datasets are challenging, they turn out to be too easy compared to the input in a production environment.
The main differences between real cases and training examples from these benchmark databases are that the latter ones have been created with a low variety of hardware devices and only use few different locations and light conditions. Moreover, the quality of images throughout the training sets is quite consistent, which does not reflect real input. In contrast, the images that the system receives "in the wild" cover the widest range of used hardware and environmental conditions, making the anticipation of new cases difficult. Designing a single system that can classify all such cases with high accuracy seems therefore unrealistic.
We thus create an ensemble of experts, forming a final verdict from 3 independent predictions: the first method consists of 2 patch-based CNNs, one for low resolution images, the other one for high resolution images. They operate on fixed-size tiles from the unscaled input image using a sliding window. This technique proves to be effective for low and high quality input. The second method uses over 20 image quality measures as features combined with a classifier. This method is still very effective when the input quality is low. The third method uses a RNN with LSTM cells to conduct a joint prediction over multiple frames (if available). It is effective in discriminating micro movements of a real face against (simple) translations and rotations of a fake face, e.g. from a photo on paper or screen. All methods return a real vs. fake probability. The outputs of all 3 methods are fed as input features to the final decision tree classifier. This ensemble of deep learning models is experimentally determined to be much more accurate than using any known method individually. Note that as attackers are inventive and come up with new ways to fool the system quickly, it is important to update the models with new data quickly and regularly.
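A minimal sketch of the final fusion step could look as follows; the random stand-in scores would be replaced by the "real" probabilities emitted by the patch-based CNNs, the quality-based classifier and the LSTM model on a labeled calibration set (all names and numbers here are illustrative).

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Scores of the three experts on a labeled calibration set;
# random numbers stand in for the actual model outputs here.
rng = np.random.default_rng(42)
expert_scores = rng.uniform(size=(1000, 3))               # one "real" probability each
labels = (expert_scores.mean(axis=1) > 0.5).astype(int)   # 1 = real face

# The final verdict is a decision tree over the three expert outputs.
fusion = DecisionTreeClassifier(max_depth=3).fit(expert_scores, labels)

new_scores = np.array([[0.92, 0.35, 0.88]])               # one new sample
print(fusion.predict(new_scores), fusion.predict_proba(new_scores))
```

Keeping the fusion model this small makes it cheap to refit whenever one of the three experts is retrained on new attack data.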
Print media monitoring
Content-based print media monitoring serves the task of delivering cropped digital articles from printed newspapers to customers based on their pre-formulated information need (e.g., articles about their own coverage in the media). For this form of article-based information retrieval, it is necessary to segment tens of thousands of newspaper pages into articles daily. We successfully developed neural network-based models to learn how to segment pages into their constituting articles and described their details elsewhere [57,35] (see example results in Figure 3a-b). In this section, we present challenges faced and learnings gained from integrating a respective model into a production environment with strict performance and reliability requirements.
Exclusion of non-article pages
A common problem in print segmentation is special pages that contain content that doesn't represent articles in the common sense, for example classified ads, reader's letters, TV program, share prices, or sports results (see Figure 3c). Segmentation rules for such pages can be complicated, subjective, and provide little value for general use cases. We thus utilize a random forest-based classifier on handcrafted features to detect such content and avoid feeding respective pages to the general segmentation system to save compute time.
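A minimal sketch of such a page filter follows; the concrete handcrafted features and their values are illustrative assumptions, not the production feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hand-crafted page features, e.g. text-block count, ad-keyword density,
# table-like line ratio (all illustrative).
X_pages = np.array([
    [12, 0.01, 0.05],   # regular article page
    [88, 0.40, 0.70],   # classified ads
    [15, 0.02, 0.10],
    [95, 0.55, 0.65],
])
y_pages = np.array([0, 1, 0, 1])  # 1 = non-article page, skip segmentation

page_filter = RandomForestClassifier(n_estimators=100).fit(X_pages, y_pages)
if page_filter.predict([[90, 0.5, 0.6]])[0] == 1:
    print("skip page")  # do not feed this page to the segmentation model
```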
Model management
One advantage of an existing manual segmentation pipeline is the abundance of high quality, labeled training data being produced daily. To utilize this constant flow of data, we have started implementing an online learning system [52] where results of the automatic segmentation can be corrected within the regular workflow of the segmentation process and fed back to the system as training data. After training, an important business decision is the final configuration of a model, e.g. determining a good threshold for cuts to weigh between precision and recall, or the decision on how many different models should be used for the production system. We determined experimentally that it is more effective to train different models for different publishers: the same publisher often uses a similar layout even for different newspapers and magazines, while differences between publishers are considerable. To simplify the management of these different models, they are decoupled from the code. This is helpful for rapid development and experimentation.
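The precision/recall trade-off mentioned above can be operationalized on a validation set; the following sketch picks a cut threshold under a minimum-precision constraint. The 95% precision target and the synthetic scores are assumptions for illustration.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

# scores: cut confidences of the segmentation model on a validation set,
# y_true: whether each candidate cut is correct (both synthetic here).
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)
scores = np.clip(y_true * 0.3 + rng.uniform(size=500) * 0.7, 0, 1)

precision, recall, thresholds = precision_recall_curve(y_true, scores)
# Business rule (an assumption): require at least 95% precision,
# then pick the threshold that maximizes recall under that constraint.
ok = precision[:-1] >= 0.95
threshold = thresholds[ok][np.argmax(recall[:-1][ok])] if ok.any() else 0.5
print("operating threshold:", threshold)
```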
Technological integration
For smooth development and operation of the neural network application we have chosen to use a containerized microservices architecture [14] utilizing Docker [62] and RabbitMQ [26]. This decoupled architecture (see Figure 4) brings several benefits especially for machine learning applications: (a) a separation of concerns between research, ops and engineering tasks; (b) decoupling of models/data from code, allowing for rapid experimentation and high flexibility when deploying the individual components of the system. This is further improved by a modern devops pipeline consisting of continuous integration (CI), continuous deployment (CD), and automated testing; (c) infrastructure flexibility, as the entire pipeline can be deployed to an on-premise data center or in the cloud with little effort. Furthermore, the use of Nvidia-docker [62] allows to utilize GPU-computing easily on any infrastructure; (d) precise controlling and monitoring of every component in the system is made easy by data streams that enable the injection and extraction of data such as streaming event arguments, log files, and metrics at any stage of the pipeline; and (e) easy scaling of the various components to fit different use cases (e.g. training, testing, experimenting, production). Every scenario requires a certain configuration of the system for optimal performance and resource utilization.
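As an illustration of how one stage of such a decoupled pipeline can be wired up, here is a minimal RabbitMQ worker sketch using the pika client (1.x API); queue names, message schema and the `segment` call are placeholders, not the actual service code.

```python
import json
import pika

# One pipeline stage: consume page-image messages, run the model,
# publish results, and acknowledge for at-least-once semantics.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq"))
channel = connection.channel()
channel.queue_declare(queue="pages.in", durable=True)
channel.queue_declare(queue="articles.out", durable=True)

def on_message(ch, method, properties, body):
    page = json.loads(body)
    result = {"page_id": page["page_id"], "cuts": segment(page)}  # model call (placeholder)
    ch.basic_publish(exchange="", routing_key="articles.out",
                     body=json.dumps(result))
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_qos(prefetch_count=1)  # one message per worker at a time
channel.basic_consume(queue="pages.in", on_message_callback=on_message)
channel.start_consuming()
```

Because workers only talk to the broker, scaling a slow stage amounts to starting more containers of that one service.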
Visual quality control
Manual inspection of medical products for in-body use like balloon catheters is time-consuming, tiring and thus error-prone. A semi-automatic solution with high precision is thus sought. In this section, we present a case study of deep learning for visual quality control of industrial products. While this seems to be a standard use case for a CNN-based approach, the task differs in several interesting respects from standard image classification settings:
Data collection and labeling are one of the most critical issues in most practical applications. Detectable defects in our case appear as small anomalies on the surface of transparent balloon catheters, such as scratches, inclusions or bubbles. Recognizing such defects on a thin, transparent and reflecting plastic surface is visually challenging even for expert operators that sometimes refer to a microscope to manually identify the defects. Thus, approx. 50% of a 2-year project duration was used on finding and verifying the optimal optical settings for image acquisition. Figure 5 depicts the results of different optical configurations for such photo shootings. Finally, operators have to be trained to produce consistent labels usable for a machine learning system. In our experience, the labeling quality rises if all involved parties have a basic understanding of the methods. This helps considerably to avoid errors like, e.g., labeling a defect only on the first image of a series of shots while rotating a balloon: while this is perfectly reasonable from a human perspective (once spotted, the human easily tracks the defect while the balloon moves), it is a no-go for the episodic application of a CNN.
Network and training design for practical applications faces challenges such as class imbalance, small data regimes, and use case-specific learning targets apart from standard classification settings, making non-standard loss functions necessary (see also Section 5). For instance, in the current application, we are looking for relatively small defects on technical images. Therefore, architectures proposed for large-scale natural image classification such as AlexNet [27], GoogLeNet [59], ResNet [24] and modern variants are not necessarily successful, and respective architectures have to be adapted to learn the relevant task. Potential solutions for the class imbalance problem are for example:
- Down-sampling the majority class
- Up-sampling the minority class via image augmentation [13]
- Using pre-trained networks and applying transfer learning [41]
- Increasing the weight of the minority class in the optimization loss [8]
- Generating synthetic data for the minority class using SMOTE [11] or GANs [21]

Selecting a suitable data augmentation approach for the task is a necessity for its success. For instance, in the present case, axial scratches are more important than radial ones, as they can lead to a tearing of the balloon and its subsequent, potentially lethal remaining in a patient's body. Thus, using 90° rotation for data augmentation could be fatal. Information like this is only gained in close collaboration with domain experts.
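Two of these remedies, loss reweighting and minority up-sampling, can be combined in a few lines. The following is a minimal PyTorch sketch; `train_labels` and `train_dataset` are placeholders for the project's data, not names from the original system.

```python
import torch
import torch.nn as nn
from torch.utils.data import WeightedRandomSampler, DataLoader

# labels: one integer class per training image (0 = ok, 1 = defect, ...)
labels = torch.tensor(train_labels)
class_counts = torch.bincount(labels).float()

# (1) Loss reweighting: rarer classes get proportionally larger weights.
loss_fn = nn.CrossEntropyLoss(weight=class_counts.sum() / class_counts)

# (2) Up-sampling the minority class: draw samples inversely to frequency.
sample_weights = (1.0 / class_counts)[labels]
sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels),
                                replacement=True)
loader = DataLoader(train_dataset, batch_size=32, sampler=sampler)
```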
Interpretability of models has received considerable attention recently, spurring hopes both of users for transparent decisions and of experts for "debugging" the learning process. The latter might, for instance, lead to improved learning from few labeled examples through a semantic understanding of the middle layers and intermediate representations in a network. Figure 6 illustrates some human-interpretable representations of the inner workings of a CNN on the recently published MUsculoskeletal RAdiographs (MURA) dataset [45], which we use here as a proxy for the balloon dataset. We used guided backpropagation [56] and a standard VGG19 network [55] to visualize the feature responses, i.e., the part of the X-ray image on which the network focuses for its decision on "defect" (e.g., broken bone, foreign object) or "ok" (natural and healthy body part). It can be seen that the network mostly decides based on joints and detected defects, strengthening trust in its usefulness. We described elsewhere [2] that this visualization can be extended to an automatic defense against adversarial attacks [21] on deployed neural networks by thresholding the local spatial entropy [10] of the feature response. As Figure 7 depicts, the focus of a model under attack widens considerably, suggesting that it "doesn't know where to look" anymore.

Fig. 7: Input, feature response and local spatial entropy for clean and adversarial images, respectively. We used VGG19 to estimate predictions and the Fast Gradient Sign Attack (FGSM) method [21] to compute the adversarial perturbation.
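A minimal NumPy sketch of this entropy-thresholding idea is given below; the window size, histogram bins, and threshold are illustrative assumptions rather than the values used in [2]:

```python
import numpy as np

def local_spatial_entropy(feature_map, window=16, bins=32):
    """Mean Shannon entropy over non-overlapping windows of a 2D map."""
    h, w = feature_map.shape
    entropies = []
    for i in range(0, h - window + 1, window):
        for j in range(0, w - window + 1, window):
            patch = feature_map[i:i + window, j:j + window]
            hist, _ = np.histogram(patch, bins=bins)
            p = hist / max(hist.sum(), 1)
            p = p[p > 0]
            entropies.append(-(p * np.log2(p)).sum())
    return float(np.mean(entropies))

def looks_adversarial(feature_map, threshold=3.5):
    # A widened, unfocused feature response yields high local entropy.
    return local_spatial_entropy(feature_map) > threshold
```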
Music scanning
Optical music recognition (OMR) [46] is the process of translating an image of a page of sheet music into a machine-readable structured format like MusicXML. Existing products exhibit a symbol recognition error rate that is an order of magnitude too high for automatic transcription under professional standards, and they do not yet leverage the capabilities of deep-learning-based computer vision. In this section, we therefore report on the implementation of a deep learning approach that detects and classifies all musical symbols on a full page of written music in one go, and integrate our model into the open source system Audiveris 4 for the semantic reconstruction of the music. This enables products like digital music stands based on active sheets, as most of today's music is stored in image-based PDF files or on paper. We highlight four typical issues when applying deep learning techniques to practical OMR: (a) the absence of a comprehensive dataset; (b) the extreme class imbalance present in written music with respect to symbols; (c) the issues of state-of-the-art object detectors with music notation (many tiny and compound symbols on large images); and (d) the transfer from synthetic data to real-world examples.
Synthesizing training data
The notorious data hunger of deep learning has led to a strong dependence of results on large, well-annotated datasets such as ImageNet [48] or PASCAL VOC [16]. For music object recognition, no such dataset has been readily available. Since labeling data by hand is not a feasible option, we invested a one-year effort in synthesizing realistic (i.e., semantically and syntactically correct music notation) data and the corresponding labels from renderings of publicly available MusicXML files, and recently open-sourced the resulting DeepScores dataset [60].
Dealing with imbalanced data

While typical academic training datasets are nicely balanced [48,16], this is rarely the case in datasets sourced from real-world tasks. Music notation (and therefore DeepScores) shows an extreme class imbalance (see Figure 8). For example, the most common class (note head black) contains more than 55% of the symbols in the entire dataset, and the top 10 classes contain more than 85% of the symbols. At the other extreme, there is a class which is present only once in the entire dataset, making its detection by pattern recognition methods nearly impossible (a "black swan" is no pattern). However, symbols that are rare are often of high importance in the specific pieces of music where they appear, so simply ignoring the rare symbols in the training data is not an option. A common way to address such imbalance is the use of a weighted loss function, as described in Section 4. This is not enough in our case: first, the imbalance is so extreme that naively reweighing loss components leads to numerical instability; second, the signal of these rare symbols is so sparse that it gets lost in the noise of the stochastic gradient descent method [61], as many symbols will only be present in a tiny fraction of the mini-batches. Our current answer to this problem is data synthesis [37], using a threefold approach to synthesize image patches with rare symbols (cp. Figure 8): (a) we locate rare symbols which are present at least 300 times in the dataset and crop the parts containing those symbols, including their local context (other symbols, staff lines etc.); (b) for rarer symbols, we locate a semantically similar but more common symbol in the dataset (based on some expert-devised notion of symbol similarity), replace this common symbol with the rare symbol, and add the resulting page to the dataset. This way, synthesized sheets still make semantic sense, and the network can learn from syntactically correct context symbols. We then crop patches around the rare symbols as in the previous approach; (c) for rare symbols without similar common symbols, we automatically "compose" music containing those symbols.
Then, during training, we augment each input page in a mini-batch with 12 randomly selected synthesized crops of rare symbols (of size 130 × 80 pixels) by putting them in the margins at the top of the page (see the sketch below). This way, the neural network (in expectation) does not need to wait for more than 10 iterations to see every class which is present in the dataset. Preliminary results show improvement, though more investigation is needed: overfitting on extremely rare symbols is still likely, and questions remain regarding how to integrate the concept of patches (in the margins) with the idea of a full-page classifier that considers all context.
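A possible implementation of this margin-pasting augmentation is sketched below; the function and its arguments are hypothetical and omit the annotation bookkeeping of the real pipeline:

```python
import random

def paste_rare_crops(page, crops, n=12, crop_h=130, crop_w=80):
    """Paste n randomly chosen rare-symbol crops into the top margin of a page.

    page:  2D numpy array (grayscale sheet image)
    crops: list of (crop_image, annotations) pairs for rare symbols
    """
    page = page.copy()
    h, w = page.shape
    for _ in range(n):
        crop, _ann = random.choice(crops)
        crop = crop[:crop_h, :crop_w]
        x = random.randint(0, w - crop_w)  # random position along the top margin
        page[0:crop.shape[0], x:x + crop.shape[1]] = crop
        # A full implementation would shift _ann to position (0, x) and
        # append it to the page's ground-truth annotation list.
    return page
```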
Enabling & stabilizing training
We initially used state-of-the-art object detection models like Faster R-CNN [47] to attempt detection and classification of musical symbols on DeepScores. These algorithms are designed to work well on the prevalent datasets, which are characterized by low-resolution images containing a few big objects. In contrast, DeepScores consists of high-resolution musical sheets containing hundreds of very small objects, amounting to a very different problem [60]. This disconnect led to very poor out-of-the-box performance of said systems.
Fig. 10: Top: part of a synthesized image from DeepScores; middle: the same part, printed on old paper and photographed using a cell phone; bottom: the same image, automatically retrofitted (based on the dark green lines) to the original image coordinates for ground truth matching (ground truth overlayed in neon green boxes).

Region proposal-based systems scale badly with the number of objects present on a given image, by design. Hence, we designed the Deep Watershed Detector as an entirely new object detection system based on the deep watershed transform [4] and described it in detail elsewhere [61]. It detects raw musical symbols (e.g., not a compound note, but note head, stem and flag individually) in their context, with a full sheet music page as input. As depicted in Figure 9, the underlying neural network architecture has three output heads on the last layer, each pertaining to a separate (pixel-wise) task: (a) predicting the underlying symbol's class; (b) predicting the energy level (i.e., the degree of belonging of a given pixel location to an object center, also called "objectness"); and (c) predicting the bounding box of the object.
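A minimal PyTorch sketch of such a three-head output layer, assuming a shared backbone feature map and 1 × 1 convolutions per head (the actual head design in [61] may differ), could look as follows:

```python
import torch.nn as nn

class DeepWatershedHeads(nn.Module):
    """Three per-pixel output heads on a shared feature map (cf. Figure 9)."""

    def __init__(self, in_channels, num_classes, num_energy_levels):
        super().__init__()
        self.class_head = nn.Conv2d(in_channels, num_classes, kernel_size=1)
        self.energy_head = nn.Conv2d(in_channels, num_energy_levels, kernel_size=1)
        self.bbox_head = nn.Conv2d(in_channels, 2, kernel_size=1)  # width, height

    def forward(self, features):              # features: (B, C, M, N)
        return (self.class_head(features),    # symbol class per pixel
                self.energy_head(features),   # "objectness" energy level
                self.bbox_head(features))     # bounding-box size per pixel
```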
Initially, the training was unstable, and we observed that the network did not learn well if it was directly trained on the combined weighted loss. Therefore, we now train the network on each of the three tasks separately. We further observed that while the network is trained on bounding box prediction and classification, the energy level predictions get worse. To avoid this, the network is fine-tuned only for the energy level loss after being trained on all three tasks. Finally, the network is retrained on the combined task (the sum of all three losses, normalized by their respective running means) for a few thousand iterations, giving excellent results on common symbols.
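A minimal sketch of this normalization scheme, assuming scalar task losses and an exponential running mean (the exact averaging scheme is an assumption):

```python
class RunningMeanCombinedLoss:
    """Sum of task losses, each normalized by its own running mean."""

    def __init__(self, momentum=0.99):
        self.momentum = momentum
        self.means = {}  # task name -> running mean of that loss

    def __call__(self, losses):
        total = 0.0
        for name, value in losses.items():
            m = self.means.get(name, float(value))
            m = self.momentum * m + (1 - self.momentum) * float(value)
            self.means[name] = m
            total = total + value / (m + 1e-8)  # keeps gradients of `value`
        return total

# usage: combined = RunningMeanCombinedLoss()
#        loss = combined({"class": l_cls, "energy": l_energy, "bbox": l_bbox})
```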
Generalizing to real-world data
The basic assumption in machine learning that training and test data stem from the same distribution is often violated in field applications. In the present case, domain adaptation is crucial: our training set consists of synthetic sheets created by LilyPond scripts [60], while the final product will work on scans or photographs of printed sheet music. These test pictures can have a wide variety of impairments, such as bad printer quality or torn or stained paper. While some work has been published on the topic of domain transfer [19], the results are unsatisfactory. The core idea to address this problem here is transfer learning [65]: the neural network shall learn the core task in the full complexity of music notation from the synthetic dataset (symbols in context, due to full-page input), and use a much smaller dataset to adapt to the real-world distributions of lighting, printing and defects.
We construct this post-training dataset by carefully choosing several hundred representative musical sheets, printing them with different types of printers on different types of paper, and finally scanning or photographing them. We then use the BFMatcher function from OpenCV to align these images with the original musical sheets, so that all the ground truth annotations of the original sheet carry over to the real-world images (see Figure 10). This way, we get annotated real-looking images "for free" that have much closer statistics to real-world images than images from DeepScores. With careful tuning of the hyperparameters (especially the regularization coefficient), we get promising - but not perfect - results during the inference stage.
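A hedged OpenCV sketch of this retrofitting step is shown below; the choice of ORB descriptors, the match count, and the RANSAC threshold are assumptions, while BFMatcher, findHomography and warpPerspective are standard OpenCV calls:

```python
import cv2
import numpy as np

def retrofit(photo, original):
    """Warp a photographed sheet back into the original page's coordinates."""
    orb = cv2.ORB_create(nfeatures=5000)
    kp1, des1 = orb.detectAndCompute(photo, None)
    kp2, des2 = orb.detectAndCompute(original, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Robust homography estimation, then warp into ground-truth coordinates.
    H, _mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = original.shape[:2]
    return cv2.warpPerspective(photo, H, (w, h))
```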
Game playing
In this case study, deep reinforcement learning (DRL) is applied to an agent in a multi-player business simulation video game of steadily increasing complexity, comparable to StarCraft or SimCity. The agent is expected to compete with human players in this environment, i.e., to continuously adapt its strategy to challenge evolving opponents. Thus, the agent is required to mimic somewhat generally intelligent behavior by transferring knowledge to an increasingly complex environment and adapting its behavior and strategies in a non-stationary, multi-agent environment with large action and state spaces. DRL is a general paradigm, theoretically able to learn any complex task in (almost) any environment. In this section, we share our experiences with applying DRL to the competitive environment described above. Specifically, the performance of a value-based algorithm using Deep Q-Networks (DQN) [36] is compared to a policy gradient method called PPO [51].
Dealing with competitive environments
In recent years, astounding results have been achieved by applying DRL in gaming environments. Examples are Atari games [36] and AlphaGo [54], where agents learn human or superhuman performance purely from scratch. In both examples, the environments are either stationary or, if an evolving opponent is present, it does not act simultaneously in the environment; instead, actions are taken in turns. In our environment, multiple evolving players act simultaneously, making changes to the environment that cannot be explained solely by changes in the agent's own policy. Thus, the environment is perceived as non-stationary from the agent's perspective, resulting in stability issues in RL [33]. Another source of complexity in our setting is a huge action and state space (see below). In our experiments, we observed that DQN had problems learning successful control policies as soon as the environment became more complex in this respect, even without non-stationarity induced by opponents. On the other hand, PPO's performance is generally less sensitive to increasing state and action spaces. The impact of non-stationarity on these algorithms is the subject of ongoing work.
Reward shaping

An obvious reward choice is the current score of the game (or its gain). Yet, in the given environment, scoring and thus any reward based on it is sparse, since it depends on a long sequence of correct actions on the operational, tactical and strategic level. As any rollout of the agent without scoring does not contribute any gain in knowledge, the learning curve is initially flat. To avoid this initial phase of no information gain, intermediate rewards are given to individual actions (see the sketch below), leading to faster learning progress in both DQN and PPO. Additionally, it is not sufficient for the agent to find a control policy eventually; it is crucial to find a good policy quickly, as training times are very long anyhow. Usually, comparable agents for learning complex behaviors in competitive environments are trained using self-play [5], i.e., the agents are always trained with "equally good" competitors to be able to succeed eventually. In our setting, self-play is not a straightforward first option, for several reasons: first, to jump-start learning, it is easier in our setting to play without an opponent first and only learn the art of competition later, when a stable ability to act is reached; second, different from other settings, our agents should be entertaining to human opponents, not necessarily winning. It is thus not desirable to learn completely new strategies that are successful yet frustrating to human opponents. Therefore, we will investigate self-play only after stable initialization from (scripted) human opponents on different levels.
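As an illustration, intermediate rewards can be implemented as a thin wrapper around the environment; the classic gym 4-tuple step API, the environment itself, and the bonus terms below are assumptions, not our actual reward definition:

```python
import gym

class ShapedRewardWrapper(gym.Wrapper):
    """Adds small intermediate rewards on top of the sparse game score."""

    def step(self, action):
        obs, score_gain, done, info = self.env.step(action)
        shaped = score_gain
        # Hypothetical bonus terms exposed via the environment's info dict:
        shaped += 0.1 * info.get("resources_gathered", 0)  # operational progress
        shaped += 0.5 * info.get("units_built", 0)         # tactical progress
        return obs, shaped, done, info
```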
Complex state and action spaces
Taking the screen frame (i.e., pixels) as input to the control policy is not applicable in our case. First, the policy's input needs to be independent of rendering and thus of hardware, game settings, game version etc. Furthermore, a current frame does not satisfy the Markov property, since attributes like "I own item x" are not necessarily visible in it. Instead, some attributes need to be inferred from past experiences. Thus, the state space needs to be encoded into sufficient features, a task we approach with manual pre-engineering.
Next, a post-engineering approach helps to decrease the learning time in the case of DQN by removing unnecessary actions from consideration, as follows: in principle, RL algorithms explore any theoretically possible state-action pair in the environment, i.e., any mathematically possible decision in the Markov Decision Process (MDP). In our environment, the available actions depend on the currently available in-game resources of the player, i.e., on the current state. Thus, exploring currently impossible regions of the action space is not efficient, and it is prevented by a post-engineered decision logic built to block these actions from being selected (see the sketch below). This reduces the size of the action space per time step considerably. These rules were crucial in producing first satisfying learning results in our environment using DQN in a stationary setting of the game. However, when training the agent with PPO, hand-engineered rules were not necessary for proper learning.
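A minimal sketch of this action-blocking logic for DQN, assuming the game exposes a boolean availability mask per state:

```python
import torch

def masked_greedy_action(q_values: torch.Tensor, available: torch.Tensor) -> int:
    """Select the best action among those currently available.

    q_values:  (num_actions,) Q-value estimates from the DQN
    available: (num_actions,) boolean mask from the game's decision logic
    """
    masked = q_values.masked_fill(~available, float("-inf"))
    return int(masked.argmax().item())
```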
The major problem, however, is the huge action and state space, as it leads to ever longer training times and thus long development cycles. It results from the fact that one single action in our environment might consist of a sequence of sub-decisions. Think, e.g., of an action called "attack" in the game of StarCraft, answering the question of WHAT to do (see Figure 11). It is incompletely defined as long as it does not state WHICH opponent is to be attacked using WHICH unit. In other words, each action itself requires a number of different decisions, chosen from different subcategories. To avoid the combinatorial explosion of all possible completely defined actions, we perform another post-processing step on the resource management: WHICH unit to choose against WHICH type of enemy, for example, is hard-coded into heuristic rules.
This case study is work in progress, but what becomes evident already is that the combination of the complexity of the task (i.e., acting simultaneously on the operational, tactical and strategic level with exponentially increasing time horizons, as well as a huge state and action space) and the non-stationary environment prevents successful end-to-end learning as in "Pong from pixels" 5 . Rather, it takes manual pre- and post-engineering to arrive at a first agent that learns, and it does so better with policy-based rather than DQN-based algorithms. A next step will explore an explicitly hierarchical learner to cope with the combinatorial explosion of the action space on the three time scales (operational/tactical/strategic) without using hard-coded rules, but instead factorizing the action space into subcategories.
Automated machine learning
One of the challenging tasks in applying machine learning successfully is to select a suitable algorithm and set of hyperparameters for a given dataset. Recent research in automated machine learning [17,40] and the respective academic challenges [22] aim at finding a solution to this problem for sets of practically relevant use cases. The respective Combined Algorithm Selection and Hyperparameter (CASH) optimization problem is defined as finding the best algorithm A* and set of hyperparameters λ* with respect to an arbitrary cross-validation loss L as follows:
A^*, \lambda^* = \operatorname*{arg\,min}_{A \in \mathcal{A},\, \lambda \in \Lambda_A} \; \frac{1}{K} \sum_{i=1}^{K} \mathcal{L}\left(A_\lambda, D_{\text{train}}^{(i)}, D_{\text{valid}}^{(i)}\right)
where A is a set of algorithms, Λ_A the set of hyperparameters per algorithm A (together they form the hypothesis space), K is the number of cross-validation folds, and D are datasets. In this section, we compare two methods from the scientific state-of-the-art (one uses Bayesian optimization, the other genetic programming) with a commercial automated machine learning prototype based on random search.
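For intuition, a bare-bones random-search CASH solver (the strategy of the commercial prototype evaluated below) might look as follows in scikit-learn; the two-algorithm search space is a toy assumption, much smaller than a real system's hypothesis space:

```python
import random
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Illustrative hypothesis space: algorithm -> hyperparameter sampler.
space = {
    RandomForestClassifier: lambda: {"n_estimators": random.choice([50, 100, 200]),
                                     "max_depth": random.choice([3, 5, 10, None])},
    LogisticRegression:     lambda: {"C": 10 ** random.uniform(-3, 3)},
}

def random_cash_search(X, y, budget=100, k=5):
    """Randomly sample (algorithm, hyperparameters) pairs; keep the best CV score."""
    best = (None, None, -np.inf)
    for _ in range(budget):
        algo = random.choice(list(space))
        params = space[algo]()
        score = cross_val_score(algo(**params), X, y, cv=k).mean()
        if score > best[2]:
            best = (algo, params, score)
    return best
```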
Scientific state-of-the-art
Auto-sklearn [17] is the most successful automated machine learning framework in past competitions [23]. The algorithm starts with extracting meta-features from the given dataset and finds models which perform well on similar datasets (according to the meta-features) in a fixed pool of stored successful machine learning endeavors. Auto-sklearn then performs meta-learning by initializing a set of model candidates with the model and hyperparameter choices of the k nearest neighbors in dataset space; subsequently, it optimizes their hyperparameters and feature preprocessing pipeline using Bayesian optimization. Finally, an ensemble of the optimized models is built using a greedy search. On the other side, the Tree-based Pipeline Optimization Tool (TPOT) [40] is a toolbox based on genetic programming. The algorithm starts with random initial configurations including feature preprocessing, feature selection and a supervised classifier. At every step, the top 20% best models are retained and randomly modified to generate offspring. The offspring competes with the parent, and winning models proceed to the next iteration of the algorithm.
Commercial prototype
The Data Science Machine (DSM) is currently used in-house for data science projects by a business partner. It uses random sampling of the solution space for optimization. Machine learning algorithms in this system are leveraged from Microsoft Azure and scikit-learn, and can be user-enhanced. DSM can be deployed in the cloud, on-premise, or standalone. The pipeline of DSM includes data preparation, feature reduction, automatic model optimization, evaluation and final ensemble creation. The question is: can it prevail against much more sophisticated systems even at this early stage of development?
Evaluation is performed using the protocol of the AutoML challenge [22] for comparability, confined to a subset of ten datasets that is processable for the current DSM prototype (i.e., non-sparse, non-big). It spans the tasks of regression, binary and multiclass classification. For applicability, we constrain the time budget of the searches to the time required by DSM to train 100 models using random algorithm selection. A performance comparison is given in Table 1, suggesting that Bayesian optimization and genetic programming are superior to random search. However, random parameter search led to reasonably good models and useful results as well (also in commercial practice). This suggests room for improvement in actual meta-learning.
Conclusions
Does deep learning work in the wild, in business and industry? In the light of the presented case studies, a better question is: what does it take to make it work? Apparently, the challenges differ from those of academic competitions: instead of a given task and a known (but still arbitrarily challenging) environment, defined by data and an evaluation metric, real-world applications are characterized by (a) data quality and quantity issues; and (b) unprecedented (and thus unclear) learning targets. This reflects the different nature of the problems: competitions provide a controlled but unexplored environment to facilitate the discovery of new methods; real-world tasks, on the other hand, build on the knowledge of a zoo of methods (network architectures, training methods) to solve a specific, yet still (in formal terms) unspecified task, thereby enhancing the method zoo in return in case of success. The following lessons learned can be drawn from our six case studies (section numbers given in parentheses refer to the respective details):
- Data acquisition usually needs much more time than expected (4), yet is the basis for all subsequent success (5).
- Class imbalance and covariate shift are usual (2, 4, 5).
- Understanding of what has been learned and how decisions emerge helps both the user and the developer of neural networks to build trust and improve quality (4, 5).
- Operators and business owners need a basic understanding of the used methods to produce usable ground truth and provide relevant subject matter expertise (4).
- Deployment should include online learning (3) and might involve the buildup of up to dozens of other machine learning models (2, 3) to flank the original core part.
- Loss/reward shaping is usually necessary to enable learning of very complex target functions in the first place (5, 6). This includes encoding expert knowledge manually into the model architecture or training setup (4, 6), and handling special cases separately (3) using some automatic pre-classification.
- Simple baselines do a good job in determining the feasibility as well as the potential of the task at hand when final datasets or novel methods are not yet available (4, 7).
- Increasing the complexity of methods and (toy-)tasks in small increments helps monitoring progress, which is important to effectively debug failure cases (6).
- Specialized models for identifiable sub-problems increase the accuracy in production systems over all-in-one solutions (2, 3), and ensembles of experts help where no single method reaches adequate performance (2).
Best practices are straightforward to extract on the general level ("plan enough resources for data acquisition"), yet quickly become very specific when broken down to technicalities ("prefer policy-based RL given that . . . "). An overarching theme seems to be that the challenges in real-world tasks need similar amounts of creativity and knowledge to be solved as fundamental research tasks, suggesting that they require similar development methodologies, on top of proper engineering and business planning.
We identified specific areas for future applied research: (a) anti-spoofing for face verification; (b) the class imbalance problem in OMR; and (c) the slow learning and poor performance of RL agents in non-stationary environments with large action and state spaces. The latter is partially addressed by new challenges like Dota 2 6 , Pommerman or VizDoom 7 , which, however, do not address hierarchical actions, for example. Generally, future work should include (d) making deep learning more sample-efficient to cope with smaller training sets (e.g., by one-shot learning, data or label generation [15], or architecture learning); (e) finding suitable architectures and loss designs to cope with the complexity of real-world tasks; and (f) improving the stability of training and the robustness of predictions, along with (g) the interpretability of neural nets.
Fig. 3: Good (a) and bad (b) segmentations (blue lines denote crop marks) for realistic pages, depending on the freedom in the layout. Image (c) shows a non-article page that is excluded from automatic segmentation.

Fig. 4: Architecture of the overall pipeline: the actual model is encapsulated in the "FCNN-based article segmentation" block. Several other systems are required to warrant full functionality: (a) the Proxy is responsible to control data input and output from the segmentation model; (b) RabbitMQ controls the workflow as a message broker; (c) MongoDB stores all segmentation results and metrics; (d) the Lectorate UI visualizes results for human assessment and is used to create training data.

Fig. 5: Balloon catheter images taken under different optical conditions, exposing (left to right) high reflections, low defect visibility, strong artifacts, and a good setup.

Fig. 6: Visualizing VGG19 feature responses: the first row contains two negative examples (healthy patient) and the second row positives (containing anomalies). All depicted samples are correctly classified.

Fig. 8: Symbol classes in DeepScores with their relative frequencies (red) in the dataset.

Fig. 9: Schematic of the Deep Watershed Detector model with three distinct output heads. N and M are the height and width of the input image, #classes denotes the number of symbols and #energy_levels is a hyperparameter of the system.

Fig. 11: Heuristic encoding of actions to prevent combinatorial explosion.
Table 1: Comparison of different automated machine learning algorithms (validation / test scores).

Dataset    | Task                      | Metric                       | Auto-Sklearn    | TPOT            | DSM
Cadata     | Regression                | Coefficient of Determination | 0.7913 / 0.7801 | 0.8245 / 0.8017 | 0.7078 / 0.7119
Christine  | Binary Classification     | Balanced Accuracy Score      | 0.7380 / 0.7405 | 0.7435 / 0.7454 | 0.7362 / 0.7146
Digits     | Multiclass Classification | Balanced Accuracy Score      | 0.9560 / 0.9556 | 0.9500 / 0.9458 | 0.8900 / 0.8751
Fabert     | Multiclass Classification | Accuracy Score               | 0.7245 / 0.7193 | 0.7172 / 0.7006 | 0.7112 / 0.6942
Helena     | Multiclass Classification | Balanced Accuracy Score      | 0.3404 / 0.3434 | 0.2654 / 0.2667 | 0.2085 / 0.2103
Jasmine    | Binary Classification     | Balanced Accuracy Score      | 0.7987 / 0.8348 | 0.8188 / 0.8281 | 0.8020 / 0.8371
Madeline   | Binary Classification     | Balanced Accuracy Score      | 0.8917 / 0.8769 | 0.8885 / 0.8620 | 0.7707 / 0.7686
Philippine | Binary Classification     | Balanced Accuracy Score      | 0.7787 / 0.7486 | 0.7839 / 0.7646 | 0.7581 / 0.7406
Sylvine    | Binary Classification     | Balanced Accuracy Score      | 0.9414 / 0.9454 | 0.9512 / 0.9493 | 0.9414 / 0.9233
Volkert    | Multiclass Classification | Accuracy Score               | 0.7174 / 0.7101 | 0.6429 / 0.6327 | 0.5220 / 0.5153
Average    |                           |                              | 0.7678 / 0.7654 | 0.7586 / 0.7497 | 0.7048 / 0.6991
See e.g. https://modelzoo.co/.
4 See http://audiveris.org.
5 Compare http://karpathy.github.io/2016/05/31/rl/.
6 See e.g. https://blog.openai.com/dota-2/.
7 See https://www.pommerman.com/competitions and http://vizdoom.cs.put.edu.pl.
Acknowledgements

We are grateful for the invitation by the ANNPR chairs and the support of our business partners in Innosuisse grants 17719.1 "PANOPTES", 17963.1 "DeepScore", 25256.1 "Libra", 25335.1 "FarmAI", 25948.1 "Ada" and 26025.1 "QualitAI".
References

1. Akhtar, N., Mian, A.: Threat of adversarial attacks on deep learning in computer vision: A survey. arXiv preprint arXiv:1801.00553 (2018)
2. Amirian, M., Schwenker, F., Stadelmann, T.: Trace and detect adversarial attacks on CNNs using feature response maps. In: ANNPR (2018)
3. Atoum, Y., Liu, Y., Jourabloo, A., Liu, X.: Face anti-spoofing using patch and depth-based CNNs. In: IEEE Int. Joint Conference on Biometrics (IJCB) (2017)
4. Bai, M., Urtasun, R.: Deep watershed transform for instance segmentation. In: CVPR (2017)
5. Bansal, T., Pachocki, J., Sidor, S., Sutskever, I., Mordatch, I.: Emergent complexity via multi-agent competition. arXiv preprint arXiv:1710.03748 (2017)
6. Bao, W., Li, H., Li, N., Jiang, W.: A liveness detection method for face recognition based on optical flow field. In: Int. Conference on Image Analysis and Signal Processing (2009)
7. Boulkenafet, Z., Komulainen, J., Hadid, A.: Face anti-spoofing based on color texture analysis. In: Int. Conference on Image Processing (ICIP) (2015)
8. Buda, M., Maki, A., Mazurowski, M.A.: A systematic study of the class imbalance problem in convolutional neural networks. arXiv preprint arXiv:1710.05381 (2017)
9. Cao, Q., Shen, L., Xie, W., Parkhi, O.M., Zisserman, A.: VGGFace2: A dataset for recognising faces across pose and age. arXiv preprint arXiv:1710.08092 (2017)
10. Chanwimaluang, T., Fan, G.: An efficient blood vessel detection algorithm for retinal images using local entropy thresholding. In: Int. Symposium on Circuits and Systems (ISCAS) (2003)
11. Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: SMOTE: synthetic minority over-sampling technique. Journal of Artificial Intelligence Research 16, 321-357 (2002)
12. Chingovska, I., Anjos, A., Marcel, S.: On the effectiveness of local binary patterns in face anti-spoofing. In: BIOSIG (2012)
13. Ciresan, D.C., Meier, U., Schmidhuber, J.: Multi-column deep neural networks for image classification. In: CVPR (2012)
14. Dragoni, N., Lanese, I., Larsen, S.T., Mazzara, M., Mustafin, R., Safina, L.: Microservices: How to make your application scale. In: International Andrei Ershov Memorial Conference on Perspectives of System Informatics. Springer (2017)
15. Elezi, I., Torcinovich, A., Vascon, S., Pelillo, M.: Transductive label augmentation for improved deep network learning. In: ICPR (2018)
16. Everingham, M., Gool, L.J.V., Williams, C.K.I., Winn, J.M., Zisserman, A.: The PASCAL visual object classes (VOC) challenge. Int. Journal of Computer Vision 88(2), 303-338 (2010)
17. Feurer, M., Klein, A., Eggensperger, K., Springenberg, J., Blum, M., Hutter, F.: Efficient and robust automated machine learning. In: NIPS (2015)
18. Galbally, J., Marcel, S., Fiérrez, J.: Image quality assessment for fake biometric detection: Application to iris, fingerprint, and face recognition. IEEE Trans. Image Processing 23(2), 710-724 (2014)
19. Gebru, T., Hoffman, J., Fei-Fei, L.: Fine-grained recognition in the wild: A multi-task domain adaptation approach. In: ICCV (2017)
20. Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press (2016)
21. Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. In: ICLR (2015)
22. Guyon, I., Bennett, K., Cawley, G., Escalante, H.J., Escalera, S., Ho, T.K., Macià, N., Ray, B., Saeed, M., Statnikov, A., Viegas, E.: Design of the 2015 ChaLearn AutoML challenge. In: IJCNN (2015)
23. Guyon, I., Chaabane, I., Escalante, H.J., Escalera, S., Jajetic, D., Lloyd, J.R., Macía, N., Ray, B., Romaszko, L., Sebag, M., Statnikov, A., Treguer, S., Viegas, E.: A brief review of the ChaLearn AutoML challenge. In: AutoML Workshop @ ICML (2016)
24. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR (2016)
25. Irpan, A.: Deep reinforcement learning doesn't work yet. Online: https://www.alexirpan.com/2018/02/14/rl-hard.html (2018)
26. John, V., Liu, X.: A survey of distributed message broker queues. arXiv preprint arXiv:1704.00411 (2017)
27. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: NIPS (2012)
28. Larochelle, H., Bengio, Y., Louradour, J., Lamblin, P.: Exploring strategies for training deep neural networks. JMLR 10(1), 1-40 (2009)
29. LeCun, Y., Bottou, L., Orr, G.B., Müller, K.R.: Efficient backprop. In: Orr, G.B., Müller, K.R. (eds.) Neural Networks: Tricks of the Trade, pp. 9-50. Springer, Berlin, Heidelberg (1998)
30. Li, J., Wang, Y., Tan, T., Jain, A.K.: Live face detection based on the analysis of Fourier spectra. In: Biometric Technology for Human Identification (2004)
31. Li, L., Feng, X., Boulkenafet, Z., Xia, Z., Li, M., Hadid, A.: An original face anti-spoofing approach using partial convolutional neural network. In: Int. Conference on Image Processing Theory, Tools and Applications (IPTA) (2016)
32. Liu, W., Wang, Z., Liu, X., Zeng, N., Liu, Y., Alsaadi, F.E.: A survey of deep neural network architectures and their applications. Neurocomputing 234, 11-26 (2017)
33. Lowe, R., Wu, Y., Tamar, A., Harb, J., Abbeel, O.P., Mordatch, I.: Multi-agent actor-critic for mixed cooperative-competitive environments. In: NIPS (2017)
34. Määttä, J., Hadid, A., Pietikäinen, M.: Face spoofing detection from single images using micro-texture analysis. In: Int. Joint Conference on Biometrics (IJCB) (2011)
35. Meier, B., Stadelmann, T., Stampfli, J., Arnold, M., Cieliebak, M.: Fully convolutional neural networks for newspaper article segmentation. In: ICDAR (2017)
36. Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., Riedmiller, M.: Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602 (2013)
37. Ng, A.: Machine Learning Yearning - Technical Strategy for AI Engineers in the Era of Deep Learning (2018), [to appear]
38. Olah, C., Carter, S.: Research debt. Distill (2017)
39. Olah, C., Satyanarayan, A., Johnson, I., Carter, S., Schubert, L., Ye, K., Mordvintsev, A.: The building blocks of interpretability. Distill (2018)
40. Olson, R.S., Urbanowicz, R.J., Andrews, P.C., Lavender, N.A., Kidd, L.C., Moore, J.H.: Automating biomedical data science through tree-based pipeline optimization. In: European Conference on the Applications of Evolutionary Computation (EvoApplications) (2016)
41. Pan, S.J., Yang, Q.: A survey on transfer learning. IEEE Trans. Knowledge and Data Engineering 22(10), 1345-1359 (2010)
42. Parkhi, O.M., Vedaldi, A., Zisserman, A.: Deep face recognition. In: BMVC (2015)
43. Patel, K., Han, H., Jain, A.K.: Secure face unlock: Spoof detection on smartphones. IEEE Trans. Information Forensics and Security 11(10), 2268-2283 (2016)
44. Perez, C.E.: The Deep Learning AI Playbook - Strategy for Disruptive Artificial Intelligence (2017)
45. Rajpurkar, P., Irvin, J., Bagul, A., Ding, D., Duan, T., Mehta, H., Yang, B., Zhu, K., Laird, D., Ball, R.L., et al.: MURA dataset: Towards radiologist-level abnormality detection in musculoskeletal radiographs. arXiv preprint arXiv:1712.06957 (2017)
46. Rebelo, A., Fujinaga, I., Paszkiewicz, F., Marçal, A.R.S., Guedes, C., Cardoso, J.S.: Optical music recognition: state-of-the-art and open issues. Int. Journal of Multimedia Information Retrieval 1(3), 173-190 (2012)
47. Ren, S., He, K., Girshick, R.B., Sun, J.: Faster R-CNN: towards real-time object detection with region proposal networks. In: NIPS (2015)
48. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A.C., Fei-Fei, L.: ImageNet Large Scale Visual Recognition Challenge. Int. Journal of Computer Vision 115(3), 211-252 (2015)
49. Schmidhuber, J.: Deep learning in neural networks: An overview. Neural Networks 61, 85-117 (2015)
50. Schroff, F., Kalenichenko, D., Philbin, J.: FaceNet: A unified embedding for face recognition and clustering. In: CVPR (2015)
51. Schulman, J., Wolski, F., Dhariwal, P., Radford, A., Klimov, O.: Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347 (2017)
52. Shalev-Shwartz, S.: Online learning and online convex optimization. Foundations and Trends in Machine Learning 4(2), 107-194 (2012)
53. Shwartz-Ziv, R., Tishby, N.: Opening the black box of deep neural networks via information. arXiv preprint arXiv:1703.00810 (2017)
54. Silver, D., Huang, A., Maddison, C.J., Guez, A., Sifre, L., van den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., et al.: Mastering the game of Go with deep neural networks and tree search. Nature 529(7587), 484-489 (2016)
55. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: ICLR (2015)
56. Springenberg, J.T., Dosovitskiy, A., Brox, T., Riedmiller, M.: Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806 (2014)
57. Stadelmann, T., Tolkachev, V., Sick, B., Stampfli, J., Dürr, O.: Beyond ImageNet - deep learning in industrial practice. In: Braschler, M., Stadelmann, T., Stockinger, K. (eds.) Applied Data Science - Lessons Learned for the Data-Driven Business. Springer (2018), [to appear]
58. Sutskever, I., Martens, J., Dahl, G., Hinton, G.: On the importance of initialization and momentum in deep learning. In: ICML (2013)
59. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: CVPR (2015)
60. Tuggener, L., Elezi, I., Schmidhuber, J., Pelillo, M., Stadelmann, T.: DeepScores - a dataset for segmentation, detection and classification of tiny objects. In: ICPR (2018)
61. Tuggener, L., Elezi, I., Schmidhuber, J., Stadelmann, T.: Deep watershed detector for music object recognition. In: ISMIR (2018)
62. Xu, P., Shi, S., Chu, X.: Performance evaluation of deep learning tools in Docker containers. arXiv preprint arXiv:1711.03386 (2017)
63. Xu, Z., Li, S., Deng, W.: Learning temporal features using LSTM-CNN architecture for face anti-spoofing. In: ACPR (2015)
64. Yang, J., Lei, Z., Li, S.Z.: Learn convolutional neural network for face anti-spoofing. arXiv preprint arXiv:1408.5601 (2014)
65. Yosinski, J., Clune, J., Bengio, Y., Lipson, H.: How transferable are features in deep neural networks? In: NIPS (2014)
66. Zhang, Z., Yan, J., Liu, S., Lei, Z., Yi, D., Li, S.Z.: A face antispoofing database with diverse attacks. In: Int. Conference on Biometrics (ICB) (2012)
67. Zheng, S., Song, Y., Leung, T., Goodfellow, I.: Improving the robustness of deep neural networks via stability training. In: CVPR (2016)
| []
|
[
"Automatic Segmentation of the Placenta in BOLD MRI Time Series",
"Automatic Segmentation of the Placenta in BOLD MRI Time Series"
]
| [
"S Mazdak Abulnaga [email protected] \nComputer Science and Artificial Intelligence Lab\nMassachusetts Institute of Technology\n02139CambridgeUSA\n",
"Sean I Young [email protected] \nComputer Science and Artificial Intelligence Lab\nMassachusetts Institute of Technology\n02139CambridgeUSA\n\nMGH/HST Martinos Center for Biomedical Imaging\nHarvard Medical School\n02129BostonMAUSA\n",
"Katherine Hobgood [email protected] \nComputer Science and Artificial Intelligence Lab\nMassachusetts Institute of Technology\n02139CambridgeUSA\n",
"Eileen Pan [email protected] \nComputer Science and Artificial Intelligence Lab\nMassachusetts Institute of Technology\n02139CambridgeUSA\n",
"Clinton J Wang [email protected] \nComputer Science and Artificial Intelligence Lab\nMassachusetts Institute of Technology\n02139CambridgeUSA\n",
"P Ellen Grant \nFetal-Neonatal Neuroimaging and Developmental Science Center\nBoston Children's Hospital\nHarvard Medical School\n02115BostonMAUSA\n",
"Esra Abaci Turk [email protected] \nFetal-Neonatal Neuroimaging and Developmental Science Center\nBoston Children's Hospital\nHarvard Medical School\n02115BostonMAUSA\n",
"Polina Golland [email protected] \nComputer Science and Artificial Intelligence Lab\nMassachusetts Institute of Technology\n02139CambridgeUSA\n"
]
| [
"Computer Science and Artificial Intelligence Lab\nMassachusetts Institute of Technology\n02139CambridgeUSA",
"Computer Science and Artificial Intelligence Lab\nMassachusetts Institute of Technology\n02139CambridgeUSA",
"MGH/HST Martinos Center for Biomedical Imaging\nHarvard Medical School\n02129BostonMAUSA",
"Computer Science and Artificial Intelligence Lab\nMassachusetts Institute of Technology\n02139CambridgeUSA",
"Computer Science and Artificial Intelligence Lab\nMassachusetts Institute of Technology\n02139CambridgeUSA",
"Computer Science and Artificial Intelligence Lab\nMassachusetts Institute of Technology\n02139CambridgeUSA",
"Fetal-Neonatal Neuroimaging and Developmental Science Center\nBoston Children's Hospital\nHarvard Medical School\n02115BostonMAUSA",
"Fetal-Neonatal Neuroimaging and Developmental Science Center\nBoston Children's Hospital\nHarvard Medical School\n02115BostonMAUSA",
"Computer Science and Artificial Intelligence Lab\nMassachusetts Institute of Technology\n02139CambridgeUSA"
]
| []
| Blood oxygen level dependent (BOLD) MRI with maternal hyperoxia can assess oxygen transport within the placenta and has emerged as a promising tool to study placental function. Measuring signal changes over time requires segmenting the placenta in each volume of the time series. Due to the large number of volumes in the BOLD time series, existing studies rely on registration to map all volumes to a manually segmented template. As the placenta can undergo large deformation due to fetal motion, maternal motion, and contractions, this approach often results in a large number of discarded volumes, where the registration approach fails. In this work, we propose a machine learning model based on a U-Net neural network architecture to automatically segment the placenta in BOLD MRI and apply it to segmenting each volume in a time series. We use a boundary-weighted loss function to accurately capture the placental shape. Our model is trained and tested on a cohort of 91 subjects containing healthy fetuses, fetuses with fetal growth restriction, and mothers with high BMI. We achieve a Dice score of 0.83 ± 0.04 when matching with ground truth labels and our model performs reliably in segmenting volumes in both normoxic and hyperoxic points in the BOLD time series. Our code and trained model are available at https://github.com/mabulnaga/automatic-placenta-segmentation. | 10.48550/arxiv.2208.02895 | [
"https://export.arxiv.org/pdf/2208.02895v1.pdf"
]
| 251,371,717 | 2208.02895 | 316ce7e7f0fa748f5f87738d8e06e53a3111996b |
Automatic Segmentation of the Placenta in BOLD MRI Time Series
S Mazdak Abulnaga [email protected]
Computer Science and Artificial Intelligence Lab
Massachusetts Institute of Technology
02139CambridgeUSA
Sean I Young [email protected]
Computer Science and Artificial Intelligence Lab
Massachusetts Institute of Technology
02139CambridgeUSA
MGH/HST Martinos Center for Biomedical Imaging
Harvard Medical School
02129BostonMAUSA
Katherine Hobgood [email protected]
Computer Science and Artificial Intelligence Lab
Massachusetts Institute of Technology
02139CambridgeUSA
Eileen Pan [email protected]
Computer Science and Artificial Intelligence Lab
Massachusetts Institute of Technology
02139CambridgeUSA
Clinton J Wang [email protected]
Computer Science and Artificial Intelligence Lab
Massachusetts Institute of Technology
02139CambridgeUSA
P Ellen Grant
Fetal-Neonatal Neuroimaging and Developmental Science Center
Boston Children's Hospital
Harvard Medical School
02115BostonMAUSA
Esra Abaci Turk [email protected]
Fetal-Neonatal Neuroimaging and Developmental Science Center
Boston Children's Hospital
Harvard Medical School
02115BostonMAUSA
Polina Golland [email protected]
Computer Science and Artificial Intelligence Lab
Massachusetts Institute of Technology
02139CambridgeUSA
Automatic Segmentation of the Placenta in BOLD MRI Time Series
Placenta · Segmentation · BOLD MRI · CNN
Blood oxygen level dependent (BOLD) MRI with maternal hyperoxia can assess oxygen transport within the placenta and has emerged as a promising tool to study placental function. Measuring signal changes over time requires segmenting the placenta in each volume of the time series. Due to the large number of volumes in the BOLD time series, existing studies rely on registration to map all volumes to a manually segmented template. As the placenta can undergo large deformation due to fetal motion, maternal motion, and contractions, this approach often results in a large number of discarded volumes, where the registration approach fails. In this work, we propose a machine learning model based on a U-Net neural network architecture to automatically segment the placenta in BOLD MRI and apply it to segmenting each volume in a time series. We use a boundary-weighted loss function to accurately capture the placental shape. Our model is trained and tested on a cohort of 91 subjects containing healthy fetuses, fetuses with fetal growth restriction, and mothers with high BMI. We achieve a Dice score of 0.83 ± 0.04 when matching with ground truth labels and our model performs reliably in segmenting volumes in both normoxic and hyperoxic points in the BOLD time series. Our code and trained model are available at https://github.com/mabulnaga/automatic-placenta-segmentation.
Introduction
The placenta is an organ that provides oxygen and nutrients to support fetal growth. Placental dysfunction can cause pregnancy complications and can affect fetal development, so there is a critical need to assess placental function in vivo. Blood oxygen level dependent (BOLD) MRI can directly quantify oxygen transport within the placenta [16,3] and has emerged as a promising tool to study placental function. Temporal analysis of BOLD MRI with maternal oxygenation has been used to identify contractions [1,13], biomarkers of fetal growth restriction [7,15], predict placental age [10] and to study congenital heart disease [24,18], among many uses.

Fig. 1: (a) BOLD signals increase during hyperoxia. (b) Placental deformation from fetal motion.
Despite its importance for many downstream clinical research tasks, placental segmentation is often performed manually and can take a significant amount of time, even for a trained expert. For BOLD MRI studies, manual segmentation is rendered more challenging by the sheer number of MRI scans acquired and the rapid signal changes caused by the experimental design. Experiments acquire several hundred whole-uterus MRI scans to observe signal changes in three stages: i) normoxic (baseline), ii) hyperoxic, and iii) return to normoxic. During the hyperoxic stage, the BOLD signals increase rapidly, leading to hyperintensity throughout the placenta. Furthermore, the placental shape can undergo large deformation caused by maternal breathing, contractions, and fetal motion, which can be particularly pronounced during hyperoxia [25]. See Fig. 1 for two examples.
The current practice is to analyze BOLD signals with respect to one template volume. Deformable registration of all volumes in the time series to the template is performed to enable spatiotemporal analysis [2,25]. However, due to significant motion, registration can lead to large errors, requiring outlier detection and possibly rejecting a significant number of volumes [2,25].
To address these challenges, we propose a model to automatically segment the placenta in BOLD MRI time series. Our model is trained on several volumes from each patient during the normoxic and hyperoxic phases, to capture the nuanced placental changes. We apply our model to unseen BOLD MRI volumes to demonstrate consistency in the predicted segmentation label maps. Our method performs favorably against the state of the art on a large dataset with a broad range of gestational ages and pregnancy conditions. Automatic segmentation is necessary for whole-organ signal analysis, and can be used to improve time-series registration to enable localized analysis. Furthermore, it is an essential step in several post-processing tasks, including motion correction [2], reconstruction [21], and mapping to a standardized representation [8,4].
Machine learning segmentation models for the placenta have been previously proposed and include both semi-automatic [23] and automatic [5,19,10,17] approaches. While semi-automatic methods have achieved success in predicting segmentation label maps with high accuracy, these approaches are infeasible for segmenting BOLD MRI time series due to the large number of volumes. The majority of automatic methods focus on segmentation in anatomical images. Alansary et al. [5] proposed a model for segmenting T2-weighted (T2w) images based on a 3D CNN followed by a dense CRF for segmentation refinement and validated it on a singleton cohort that included patients with fetal growth restriction (FGR). Torrents-Barrena et al. [19] proposed a model based on super-resolution and an SVM and validated it on a singleton and twin cohort of T2w MRI. Spektor-Fadida et al. [17] tackled the problem of domain transfer with a self-training model and demonstrated successful segmentation of FIESTA and TRUFI sequences. For a more detailed treatment of segmentation methods in fetal MRI, we refer the reader to the survey by Torrents-Barrena et al. [20].
Functional images of the placenta differ greatly from anatomical images, as they have lower in-plane resolution and the contrast between the placental boundary and surrounding anatomy is less pronounced. Anatomical images may also benefit from super-resolution approaches to increase SNR in the acquired image [21]. Pietsch et al. [10] are the first to consider placental segmentation in functional MRI. They proposed a 2D patch-based U-Net model for functional image segmentation and demonstrated a successful application of age prediction using the estimated T2* values. They focused on a cohort of singleton subjects, and demonstrated success on abnormal pregnancy conditions including preeclampsia. In contrast to their approach that segments derived T2* maps, we evaluate our segmentation model on BOLD MRI time series. Furthermore, our 3D model operates on the entire volume rather than patches, thereby helping to better resolve the boundaries of the placenta.
To capture the large signal changes and placental shape variation in the time series, we train with a random sampling of manual segmentations of several volumes in the BOLD MRI series. We propose a boundary weighted loss function to more easily identify the placental boundary and improve segmentation accuracy. Finally, to evaluate the feasibility of our method for clinical research, we propose additional metrics to evaluate performance on the whole MRI time series, and illustrate a possible clinical research application.
Methods
We aim to find a model F_θ : X → Y that takes a BOLD MRI time series X ∈ R^{T×H×W×D} and predicts a set of placenta segmentation label maps Y ∈ {0, 1}^{T×H×W×D}, one for each time point t ∈ {1, . . . , T}, where T is the total number of time points at which MRI scans were acquired. For a given BOLD time series, we have a small number N_l of frames with ground truth labels (x, y), where x ∈ R^{H×W×D} is an MRI scan and y ∈ {0, 1}^{H×W×D} is the ground truth placenta label map.
Model
We use a 3D U-Net [12] with 4 blocks in the contracting and expanding paths. Each block consists of two sets of 3 × 3 × 3 convolutions with ReLU activations, followed by max pooling (contracting path) or transpose convolution (expanding path), as illustrated in Fig. 2. We augment the images using random affine transforms, flips, whole-image brightness shifts, contrast changes, random noise, and elastic deformations, using TorchIO [11]. We simulate the effects of maternal normoxia and hyperoxia with a constant intensity shift in the placenta.
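For concreteness, the following is a minimal PyTorch sketch of one such block; the exact layer arrangement, including where batch normalization sits (cf. Fig. 2), is our reading of the description rather than the authors' released code:

```python
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    # Two 3x3x3 convolutions with ReLU activations; batch norm after each
    # convolution (Fig. 2). Max pooling (contracting path) or transpose
    # convolution (expanding path) is applied between successive blocks.
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )
```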
To capture the MRI signal and placental shape changes resulting from maternal hyperoxia and fetal motion, we enhance our training with several manually segmented volumes in the normoxic or hyperoxic phase. This allows the model to learn from the realistic variations that arise during maternal oxygenation.
Additive Boundary Loss
The placental boundary can be difficult to distinguish in BOLD MRI scans due to its similar appearance to the surrounding anatomy. To emphasize the boundary details, we construct an additive boundary weighting W for the segmentation loss function L. Given a ground truth placental label map y, we denote its boundary as ∂y. We use a signed distance function f(x) that measures the signed distance, d(x, ∂y), of voxel x ∈ R^3 to the boundary, where f(x) < 0 outside of the placenta and f(x) > 0 inside. The boundary weighting is additive for voxels within δ-distance of ∂y,
$$W_\delta(x) = \begin{cases} w_1 & \text{if } -\delta < f(x) < 0, \\ w_2 & \text{if } 0 \leq f(x) < \delta, \\ 0 & \text{otherwise.} \end{cases} \tag{1}$$
The weighted loss is then
$$L_w(x) = L(x)\left[1 + W_\delta(x)\right]. \tag{2}$$
In practice, we set w_1 > w_2 to penalize outside voxels more heavily and to learn to distinguish the placenta from its surrounding anatomy. To find voxels with |f(x)| < δ, we estimate a 2δ-wide boundary by applying an average pooling filter with kernel size K to y and take the smoothed outputs to lie in the boundary. A larger K produces a wider boundary, penalizing more misclassified voxels.
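The pooling-based approximation can be implemented in a few lines; the sketch below (PyTorch, our own illustration using the paper's default weights) derives the boundary band from the label map and applies Eqs. (1) and (2):

```python
import torch
import torch.nn.functional as F

def boundary_weight(y, w1=40.0, w2=1.0, k=11):
    # Additive weight W_delta of Eq. (1), approximated by average pooling.
    # y: binary label map of shape (B, 1, H, W, D). Voxels whose pooled
    # value lies strictly between 0 and 1 sit inside a band of roughly
    # k/2 voxels around the boundary.
    pooled = F.avg_pool3d(y.float(), kernel_size=k, stride=1, padding=k // 2)
    band = (pooled > 0) & (pooled < 1)
    outside = band & (y < 0.5)   # f(x) < 0: just outside the placenta
    inside = band & (y >= 0.5)   # f(x) >= 0: just inside the placenta
    return w1 * outside.float() + w2 * inside.float()

def boundary_weighted_loss(voxelwise_loss, y):
    # Eq. (2): scale the per-voxel loss by 1 + W_delta(x), then average.
    return (voxelwise_loss * (1.0 + boundary_weight(y))).mean()
```

Here, voxelwise_loss would be an unreduced per-voxel loss, e.g., F.binary_cross_entropy(pred, y.float(), reduction='none').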
Implementation Details
We train using a learning rate η = 10^−4 for 3000 epochs and select the model with the best Dice score on the validation set. For the additive boundary loss, we set w_1 = 40, w_2 = 1, and K = 11. All volumes are normalized by mapping the 90th percentile intensity value to 1. We use a batch size of 8 MRI volumes. We crop or pad all volumes in the dataset to dimension 112×112×80 and train on the entire 3D volume. We augment our data with random translations of up to 10 voxels, rotations of up to 22°, Gaussian noise sampled with µ = 0, σ = 0.25, elastic deformations with 5 control points and a maximum displacement of 10 voxels, whole-volume intensity shifts of up to ±25%, and whole-placenta intensity shifts of ±0.15 normalized intensity values. These values were determined by cross-validation on the training set. When evaluating the model on our test set, we post-processed the predicted label maps by taking the largest connected component to eliminate islands. Our code and trained model are available at https://github.com/mabulnaga/automatic-placenta-segmentation.
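As an illustration, the stated spatial and noise augmentations could be composed with TorchIO roughly as follows. This is a sketch under our assumptions: the whole-volume and whole-placenta intensity shifts are not standard TorchIO transforms and would be implemented as custom transforms in practice.

```python
import torchio as tio

augment = tio.Compose([
    tio.RandomAffine(scales=0, degrees=22, translation=10),  # rotations, shifts
    tio.RandomFlip(axes=(0, 1, 2)),                          # random flips
    tio.RandomNoise(mean=0.0, std=0.25),                     # Gaussian noise
    tio.RandomElasticDeformation(num_control_points=5,
                                 max_displacement=10),       # elastic warps
])
```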
Model Evaluation
Data
Our dataset consists of BOLD MRI scans taken from two clinical research studies. Data was collected from 91 subjects, of which 78 were singleton pregnancies (gestational age (GA) at MRI scan of 23wk5d - 37wk6d) and 13 were monochorionic-diamniotic (Mo-Di) twins (GA at MRI scan of 27wk5d - 34wk5d). Of these, 63 were controls, 16 had fetal growth restriction (FGR), and 12 had high BMI (BMI > 30). Obstetrical ultrasound was used to classify subjects with FGR. For singleton subjects, classification was based on fetuses with an estimated weight below the 10th percentile. For twin subjects, FGR classification was determined by proven monochorionicity and discordance in the estimated fetal weight, with i) growth restriction (<10th percentile) in one or both fetuses and/or ii) growth discordance (≥ 20%) between fetuses. Table 1 shows patient demographics and GA ranges per group. BOLD MRI scans were acquired on a 3T Siemens Skyra scanner (GRE-EPI, interleaved with 3mm isotropic voxels, TR = 5.8-8s, TE = 32-47 ms, FA = 90°). To eliminate intra-volume motion artifacts, we split the acquired interleaved volumes into two separate volumes with spacing 3 × 3 × 6mm, then linearly interpolated to 3 × 3 × 3mm. In our analysis, we only consider one of the two split volumes. Maternal oxygen supply was alternated during the BOLD acquisition. The placenta was manually segmented by a trained observer. Each BOLD MRI time series had 1 to 6 manual segmentations, yielding a total of 176 ground truth labels. The data was split into training, validation, and test sets (65%/15%/20%: 63/11/17 subjects), stratified on pregnancy condition.
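A sketch of the interleave splitting and resampling step is shown below; the assumption that the slice direction is the last array axis is ours, not stated in the text.

```python
import numpy as np
from scipy.ndimage import zoom

def split_interleaved(vol: np.ndarray):
    # Split an interleaved acquisition into its two temporal halves
    # (3 x 3 x 6 mm each), then linearly interpolate (order=1) back to
    # 3 mm isotropic spacing along the (assumed last) slice axis.
    even, odd = vol[:, :, 0::2], vol[:, :, 1::2]
    up = lambda v: zoom(v, (1, 1, 2), order=1)
    return up(even), up(odd)
```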
Each subject in the training set had up to N l = 6 ground truth segmentations in the BOLD time series. To prevent the model from being biased by subjects with more ground truth labels, we train by randomly sampling one of N l ground truth segmentations in each epoch.
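The sampling itself amounts to one random choice per subject per epoch, for example:

```python
import random

def training_pair(subject_frames):
    # subject_frames: list of (volume, label) pairs, one per manually
    # segmented time point (up to N_l = 6). Sampling one pair per epoch
    # keeps subjects with more ground-truth labels from dominating training.
    return random.choice(subject_frames)
```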
Evaluation
We first compare the predicted segmentation label maps to ground truth segmentations. We measure similarity using the Dice score (Dice), the 95th-percentile Hausdorff distance (HD95), and the Average Symmetric Surface Distance (ASSD). To evaluate the feasibility of the produced segmentations for clinical research studying whole-organ signal changes, we evaluate the relative error in the mean BOLD values, defined as |b − b̂|/b, where b and b̂ denote the mean BOLD signal in the ground truth and in the predicted segmentation, respectively.
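Both the Dice score and the relative BOLD error are straightforward to compute from binary masks; a short NumPy sketch with our own helper names:

```python
import numpy as np

def dice(y, y_hat):
    # Dice overlap between two binary label maps.
    inter = np.logical_and(y, y_hat).sum()
    return 2.0 * inter / (y.sum() + y_hat.sum())

def relative_bold_error(image, y, y_hat):
    # |b - b_hat| / b, with the mean BOLD signal taken inside the ground
    # truth mask (b) and inside the predicted mask (b_hat).
    b = image[y > 0].mean()
    b_hat = image[y_hat > 0].mean()
    return abs(b - b_hat) / b
```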
We evaluate several variants of our model using these metrics. We assess the effect of the boundary-weighting (BW) loss term and compare performance using the cross-entropy (CE), Dice [9], and Focal [6] loss functions. We evaluate the generalization ability by comparing with a model trained on only the first of the N_l labeled BOLD frames, without random sampling of labeled segmentations. We evaluate our model's sensitivity to oxygenation by comparing the accuracy of predictions in the normoxic and hyperoxic phases for a given subject. We compute the absolute difference of the similarity metric m between an image in normoxia and one in hyperoxia, |m_normoxic(y, ŷ) − m_hyperoxic(y, ŷ)|, where m_normoxic(y, ŷ) denotes the similarity between our predicted segmentation ŷ and the ground truth y using the metric m for an image in the normoxic phase. We use the Dice score, HD95, ASSD, and relative BOLD error for m.
We assess the consistency of our predictions by applying our model to all volumes in the BOLD time series of the test set. Since our volumes are acquired interleaved and split into two separate volumes, we apply our model to every second volume in the time series, yielding a mean of 111.7 ± 45.3 volumes per subject. We measure consistency by comparing the Dice score, HD95, ASSD, and normalized BOLD difference between consecutive volumes.

Results

Table 2 reports the performance of several variants of our model on the test set. Our best model achieves a Dice score of 0.83 ± 0.04 with an HD95 of 13.36 ± 6.08mm using the BW-CE loss. Further, we achieve a low relative BOLD error (0.051 ± 0.025), indicating that our model's segmentations are suitable for clinical research studies assessing whole-organ signal changes. Similar performance is achieved with the other loss functions. Training the model without the boundary weighting (Eq. (2)) results in a statistically significant drop in performance, achieving a Dice of 0.76 (p < 10^−4 using a paired t-test). Using only the first segmented volume of the BOLD MRI series (N_l = 1) in the normoxic phase also results in a significant drop in performance, achieving a Dice of 0.81 (p < 0.05).
Adding labeled examples in the hyperoxic phase helps generalization, as the placental shape and intensity patterns can change greatly.
Our performance is consistent across pregnancy conditions, as we achieve Dice scores of (0.76, 0.89) on the two subjects with twin pregnancies, 0.83 ± 0.04 on the singletons (N = 15), 0.83 ± 0.07 on the FGR cohort (N = 3), 0.82 ± 0.04 on the controls (N = 12), and (0.84, 0.88) on the two high-BMI cases.
Direct comparison of this work to previous studies is not feasible due to differences in dataset size, patient demographics, imaging protocols, and MRI study design. The current state-of-the-art automatic segmentation method for functional MRI (T2*) achieves a Dice score of 0.58 on a cohort of low- and high-risk singleton subjects with a wide GA range [10]. Their performance was comparable to the inter-rater variability of two radiologists (Dice = 0.68), which represents an upper limit. In their work, they trained on a combination of T2*-weighted and BOLD sequences, while we focus only on BOLD.
Our model performs consistently well in the normoxic and hyperoxic phases. For the 5 subjects with ground truth segmentations in both the normoxic and hyperoxic phases, we achieve a mean absolute difference between predictions in normoxia and hyperoxia of 0.026 ± 0.02 Dice, 5.69 ± 2.33mm HD95, 0.75 ± 0.46mm ASSD, and 0.06 ± 0.04 relative BOLD error. These results suggest that our model is robust to contrast changes in the placenta resulting from maternal hyperoxia and can be used in studies quantifying oxygen transport in the organ. A larger number of subjects is needed to assess statistical significance. Fig. 3 compares the predicted label maps with ground truth on 5 subjects with increasing Dice scores using the BW-CE model. The model accurately identifies the location of the placenta, but in the worst cases misses boundary details. Table 3 presents statistics of the consistency between predicted label maps in consecutive volumes of the MRI time series. Predictions are highly consistent, achieving a Dice of 0.92 ± 0.02. The small differences between the relative mean-BOLD values suggest these produced segmentations may be suitable for research studies assessing placental function. Fig. 4 presents distributions of Dice scores between predicted label maps of consecutive frames in the BOLD time series. Distributions have high medians (> 0.9) for all but one case, with wide density at high Dice scores (> 0.9). Dice differences are highly affected by fetal and maternal motion that causes placental deformation. We visually verified that modest drops in Dice (< 0.9) were mainly due to fetal motion, but large drops (Dice < 0.7) resulted from errors in the produced label maps.
BOLD Time Series Evaluation
Automatic segmentation of each volume in a BOLD MRI time series is advantageous as it can enable whole-organ spatiotemporal analysis without requiring inter-volume motion correction or registration, which may fail in the presence of large motion. We illustrate one possible application by investigating the percentage increase in BOLD signal in response to maternal hyperoxia. We calculate the percentage increase over the baseline period as ∆b = |b_H − b_N|/b_N, where b_N denotes the mean BOLD signal over the baseline period and b_H denotes the mean of the signal in the last 10 frames of the hyperoxic period. Fig. 5 shows a scatter plot of the hyperoxia response for all subjects in the test set and two examples of the BOLD signal time course in the produced placenta segmentation label maps. In the control subjects (N = 12), we observe an increase of 10.2 ± 11.1%. The observed increase for the healthy controls is consistent with previous studies that demonstrated an increase of 12.6 ± 5.4% (N = 21) [15] and from 5% to 20% throughout gestation (N = 49) [14].
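A sketch of this computation from the predicted label maps, with the frame index sets as hypothetical inputs:

```python
import numpy as np

def hyperoxia_response(series, masks, baseline_t, hyperoxic_t):
    # series: list of BOLD volumes; masks: predicted placenta label maps;
    # baseline_t / hyperoxic_t: frame indices of the two experiment phases.
    mean_bold = np.array([series[t][masks[t] > 0].mean()
                          for t in range(len(series))])
    b_n = mean_bold[baseline_t].mean()          # baseline (normoxic) period
    b_h = mean_bold[hyperoxic_t][-10:].mean()   # last 10 hyperoxic frames
    return abs(b_h - b_n) / b_n                 # Delta-b as defined above
```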
Discussion and Conclusion
We developed a model to automatically segment placental scans in BOLD MRI and achieve close matching to ground truth labels with consistent performance in predicting volumes in both the normoxic and hyperoxic phases. Key to our model development is a boundary-weighted loss function and training with labeled volumes obtained at different oxygenation phases in the BOLD MRI time series.
Segmenting each volume in the BOLD MRI time series can be advantageous for clinical research assessing whole-organ changes as it eliminates the need for registration. Registration algorithms are affected by fetal motion and may require discarding a significant number of volumes [2,25], potentially losing important signal information. We illustrate one possible study in assessing placental response during hyperoxia, observing an increase in signal intensity consistent with prior work. However, our cohort is limited, and several factors, including maternal position, gestational age, and contractions, are covariates that were not considered.
Registration, however, is advantageous for localized analysis [2], and relying solely on segmentation would only permit quantifying whole-organ signal changes, for example mean T2* or mean BOLD increase. Placental segmentations can be incorporated into registration methods as spatial priors to improve registration results. Future work will investigate joint segmentation-registration models.
We assessed the consistency of predictions in BOLD MRI time series using our model, and achieved highly consistent predictions (Dice = 0.92). For many subjects, we observed modest drops in Dice (< 0.9), which were often due to fetal motion displacing the placenta. However, in a small number of cases, we observed large drops (Dice < 0.7) that we visually verified were caused by segmentation error. Since we apply the model to each volume in the time series independently, imaging artifacts, such as intensity and geometric artifacts, can affect the predicted segmentations. In future work, we will investigate incorporating temporal consistency between consecutive volumes. We will also investigate applying test-time augmentation on image intensity as this has been shown to reduce uncertainty and improve segmentation robustness [22]. Key to our model performance was maximizing data variability by having manually segmented volumes at different points in the BOLD MRI series. Future work will investigate semi-supervised learning to incorporate all unlabeled volumes. As there are often in the order of 100 unlabeled volumes in each BOLD time series, these approaches can more accurately capture the rapid signal changes resulting from fetal motion and maternal oxygenation.
Future directions of this work will investigate oxygenation dynamics in the placenta. Segmentation of the time series can be used to derive T2* maps and perform whole-organ signal comparisons between differing population groups, thereby enabling quantitative analysis of placental function with the ultimate goal of developing biomarkers of placental and fetal health.
Acknowledgments
This work was supported in part by NIH NIBIB NAC P41EB015902, NIH NICHD R01HD100009, R01EB032708, R21HD106553, MIT-IBM Watson AI Lab, NSERC PGS D, NSF GRFP, and MathWorks Fellowship.
Fig. 1: Example images and placental segmentations: (a) signal brightening during hyperoxia, and (b) shape deformation caused by fetal motion. Placental boundaries are marked in yellow. Areas outside of the placenta are darkened for illustration. Intensity scale is based on the first MRI volume in the time series.

Fig. 2: 3D Placenta Segmentation U-Net. We use a five-level 3D U-Net with max-pooling, skip connections, and convolution-transpose layers. Numbers above vertical bars denote the number of features at various stages of the processing pipeline. Batch norm is employed for normalization (batch size = 8).

Fig. 3: Example predictions on 5 subjects from the test set. Ground truth segmentations are shown in yellow and predictions in red. Dice scores are indicated below each column. Two slices are shown for each subject, spaced 18mm apart.

Fig. 4: Per-subject density distributions of Dice scores between consecutive predictions in BOLD MRI time series. Dots inside distributions indicate the median.

Fig. 5: Example application using our model's produced placenta segmentations in BOLD time series to characterize oxygenation response from maternal hyperoxia. Left: observed increase relative to baseline for the test set. Right: example time series for one singleton control (GA=33wk2d, Dice=0.84, ∆b = 15.7%) and one singleton FGR subject (GA=34wk5d, Dice=0.84, ∆b = 2.9%).
Table 1: Subject demographic information.

Group | Control | FGR | High BMI
Singleton: N subj. | 60 | 6 | 12
GA at MRI | 23wk5d - 37wk6d | 26wk6d - 34wk5d | 26wk4d - 36wk6d

Experiment Phase (N subject = ) | Dice Score | HD95 (mm) | ASSD (mm) | BOLD diff.
All | 0.80 ± 0.04 | 14.20 ± 5.47 | 4.53 ± 1.03 | 0.072 ± 0.049
Normoxic | 0.80 ± 0.05 | 15.36 ± 7.38 | 4.65 ± 1.37 | 0.076 ± 0.035
Hyperoxic | 0.80 ± 0.03 | 13.04 ± 3.10 | 4.40 ± 0.46 | 0.058 ± 0.038

Table 2: Test results produced by our 3D U-Net model trained using different loss functions. Numbers in bold indicate the best result in each column.

Loss | Dice Score | HD95 (mm) | ASSD (mm) | BOLD diff.
BW-CE | 0.83 ± 0.04 | 13.36 ± 6.08 | 4.06 ± 0.97 | 0.051 ± 0.025
BW-CE + Dice | 0.82 ± 0.04 | 13.34 ± 5.43 | 4.16 ± 0.99 | 0.050 ± 0.043
BW-Focal | 0.82 ± 0.04 | 13.52 ± 5.54 | 4.15 ± 0.98 | 0.046 ± 0.033
BW-CE (N_l = 1) | 0.81 ± 0.05 | 13.26 ± 5.98 | 4.38 ± 1.35 | 0.057 ± 0.033
BW-Focal + Dice | 0.78 ± 0.19 | 22.16 ± 36.25 | 11.67 ± 29.55 | 0.103 ± 0.239
CE (no BW) | 0.76 ± 0.07 | 18.26 ± 11.64 | 6.04 ± 2.21 | 0.051 ± 0.027

Table 3: Consistency of predictions in the BOLD time series produced by our best-performing 3D U-Net model (trained using the BW-CE loss function).

Measure | Dice Score | HD95 (mm) | ASSD (mm) | BOLD diff.
Consistency across consecutive frames | 0.92 ± 0.02 | 5.69 ± 2.33 | 1.94 ± 0.05 | 0.021 ± 0.007
MAE normoxic vs. hyperoxic (N = 5) | 0.026 ± 0.025 | 4.63 ± 3.97 | 0.75 ± 0.46 | 0.06 ± 0.04

Finally, we demonstrate a possible application of temporal analysis by measuring increases in mean BOLD signal during hyperoxia.
[1] Abaci Turk, E., Abulnaga, S.M., Luo, J., Stout, J.N., Feldman, H.A., Turk, A., Gagoski, B., Wald, L.L., Adalsteinsson, E., Roberts, D.J., Bibbo, C., Robinson, J.N., Golland, P., Grant, P.E., Barth, W.H.: Placental MRI: Effect of maternal position and uterine contractions on placental BOLD MRI measurements. Placenta 95, 69-77 (2020)
[2] Abaci Turk, E., Luo, J., Gagoski, B., Pascau, J., Bibbo, C., Robinson, J.N., Grant, P.E., Adalsteinsson, E., Golland, P., Malpica, N.: Spatiotemporal alignment of in utero BOLD-MRI series. Journal of Magnetic Resonance Imaging 46(2), 403-412 (2017)
[3] Abaci Turk, E., Stout, J.N., Ha, C., Luo, J., Gagoski, B., Yetisir, F., Golland, P., Wald, L.L., Adalsteinsson, E., Robinson, J.N., Roberts, D., Barth, W.H., Grant, P.E.: Placental MRI: Developing accurate quantitative measures of oxygenation. Topics in Magnetic Resonance Imaging 28(5), 285-297 (2019)
[4] Abulnaga, S.M., Turk, E.A., Bessmeltsev, M., Grant, P.E., Solomon, J., Golland, P.: Volumetric parameterization of the placenta to a flattened template. IEEE Transactions on Medical Imaging 41(4), 925-936 (2022)
[5] Alansary, A., Kamnitsas, K., Davidson, A., Khlebnikov, R., Rajchl, M., Malamateniou, C., Rutherford, M., Hajnal, J.V., Glocker, B., Rueckert, D., Kainz, B.: Fast fully automatic segmentation of the human placenta from motion corrupted MRI. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 589-597. Springer (2016)
[6] Lin, T.Y., Goyal, P., Girshick, R., He, K., Dollár, P.: Focal loss for dense object detection. In: 2017 IEEE International Conference on Computer Vision (ICCV). pp. 2999-3007 (2017)
[7] Luo, J., Turk, E.A., Bibbo, C., Gagoski, B., Roberts, D.J., Vangel, M., Tempany-Afdhal, C.M., Barnewolt, C., Estroff, J., Palanisamy, A., Barth, W.H., Zera, C., Malpica, N., Golland, P., Adalsteinsson, E., Robinson, J.N., Grant, P.: In vivo quantification of placental insufficiency by BOLD MRI: a human study. Scientific Reports 7(1), 3713 (2017)
[8] Miao, H., Mistelbauer, G., Karimov, A., Alansary, A., Davidson, A., Lloyd, D.F., Damodaram, M., Story, L., Hutter, J., Hajnal, J., Rutherford, M., Preim, B., Kainz, B., Gröller, E.: Placenta maps: in utero placental health assessment of the human fetus. IEEE Transactions on Visualization and Computer Graphics 23(6), 1612-1623 (2017)
[9] Milletari, F., Navab, N., Ahmadi, S.A.: V-net: Fully convolutional neural networks for volumetric medical image segmentation. In: 2016 Fourth International Conference on 3D Vision (3DV). pp. 565-571 (2016)
[10] Pietsch, M., Ho, A., Bardanzellu, A., Zeidan, A.M.A., Chappell, L.C., Hajnal, J.V., Rutherford, M., Hutter, J.: APPLAUSE: Automatic prediction of placental health via U-net segmentation and statistical evaluation. Medical Image Analysis 72, 102145 (2021)
[11] Pérez-García, F., Sparks, R., Ourselin, S.: TorchIO: A Python library for efficient loading, preprocessing, augmentation and patch-based sampling of medical images in deep learning. Computer Methods and Programs in Biomedicine 208, 106236 (2021)
[12] Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015. pp. 234-241 (2015)
[13] Sinding, M., Peters, D.A., Frøkjaer, J.B., Christiansen, O.B., Uldbjerg, N., Sørensen, A.: Reduced placental oxygenation during subclinical uterine contractions as assessed by BOLD MRI. Placenta 39, 16-20 (2016)
[14] Sinding, M., Peters, D.A., Poulsen, S.S., Frøkjaer, J.B., Christiansen, O.B., Petersen, A., Uldbjerg, N., Sørensen, A.: Placental baseline conditions modulate the hyperoxic BOLD-MRI response. Placenta 61, 17-23 (2018)
[15] Sørensen, A., Sinding, M., Peters, D.A., Petersen, A., Frøkjaer, J.B., Christiansen, O.B., Uldbjerg, N.: Placental oxygen transport estimated by the hyperoxic placental BOLD MRI response. Physiological Reports 3(10), e12582 (2015)
[16] Sørensen, A., et al.: Changes in human fetal oxygenation during maternal hyperoxia as estimated by BOLD MRI. Prenatal Diagnosis 33(2), 141-145 (2013)
[17] Specktor-Fadida, B., Link-Sourani, D., Ferster-Kveller, S., Ben-Sira, L., Miller, E., Ben-Bashat, D., Joskowicz, L.: A bootstrap self-training method for sequence transfer: State-of-the-art placenta segmentation in fetal MRI. In: Uncertainty for Safe Utilization of Machine Learning in Medical Imaging, and Perinatal Imaging, Placental and Preterm Image Analysis, pp. 189-199. Springer (2021)
[18] Steinweg, J.K., Hui, G.T.Y., Pietsch, M., Ho, A., van Poppel, M.P., Lloyd, D., Colford, K., Simpson, J.M., Razavi, R., Pushparajah, K., Rutherford, M., Hutter, J.: T2* placental MRI in pregnancies complicated with fetal congenital heart disease. Placenta 108, 23-31 (2021)
[19] Torrents-Barrena, J., Piella, G., Masoller, N., Gratacós, E., Eixarch, E., Ceresa, M., Ballester, M.Á.G.: Fully automatic 3D reconstruction of the placenta and its peripheral vasculature in intrauterine fetal MRI. Medical Image Analysis 54, 263-279 (2019)
[20] Torrents-Barrena, J., Piella, G., Masoller, N., Gratacós, E., Eixarch, E., Ceresa, M., Ballester, M.Á.G.: Segmentation and classification in MRI and US fetal imaging: Recent trends and future prospects. Medical Image Analysis 51, 61-88 (2019)
[21] Uus, A., Zhang, T., Jackson, L.H., Roberts, T.A., Rutherford, M.A., Hajnal, J.V., Deprez, M.: Deformable slice-to-volume registration for motion correction of fetal body and placenta MRI. IEEE Transactions on Medical Imaging 39(9), 2750-2759 (2020)
[22] Wang, G., Li, W., Aertsen, M., Deprest, J., Ourselin, S., Vercauteren, T.: Aleatoric uncertainty estimation with test-time augmentation for medical image segmentation with convolutional neural networks. Neurocomputing 338, 34-45 (2019)
[23] Wang, G., Zuluaga, M.A., Pratt, R., Aertsen, M., David, A.L., Deprest, J., Vercauteren, T., Ourselin, S.: Slic-Seg: slice-by-slice segmentation propagation of the placenta in fetal MRI using one-plane scribbles and online learning. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 29-37. Springer (2015)
[24] You, W., Andescavage, N.N., Kapse, K., Donofrio, M.T., Jacobs, M., Limperopoulos, C.: Hemodynamic responses of the placenta and brain to maternal hyperoxia in fetuses with congenital heart disease by using blood oxygen-level dependent MRI. Radiology 294(1), 141-148 (2020)
[25] You, W., Serag, A., Evangelou, I.E., Andescavage, N., Limperopoulos, C.: Robust motion correction and outlier rejection of in vivo functional MR images of the fetal brain and placenta during maternal hyperoxia. In: SPIE Medical Imaging. vol. 9417, pp. 177-189. SPIE (2015)
| [
"https://github.com/mabulnaga/automatic-placenta-segmentation.",
"https://github.com/mabulnaga/automatic-placenta-segmentation."
]
|
[
"An Empirical Evaluation of Competitive Programming AI: A Case Study of AlphaCode",
"An Empirical Evaluation of Competitive Programming AI: A Case Study of AlphaCode"
]
| [
"Sila Lertbanjongngam \nDepartment of Computer Engineering\nFaculty of Engineering\nKasetsart University\nBangkokThailand\n",
"Bodin Chinthanet \nGraduate School of Science and Technology\nNara Institute of Science and Technology\nNaraJapan\n",
"Takashi Ishio \nGraduate School of Science and Technology\nNara Institute of Science and Technology\nNaraJapan\n",
"Raula Gaikovina Kula \nGraduate School of Science and Technology\nNara Institute of Science and Technology\nNaraJapan\n",
"Pattara Leelaprute [email protected] \nDepartment of Computer Engineering\nFaculty of Engineering\nKasetsart University\nBangkokThailand\n",
"Bundit Manaskasemsak \nDepartment of Computer Engineering\nFaculty of Engineering\nKasetsart University\nBangkokThailand\n",
"Arnon Rungsawang [email protected] \nDepartment of Computer Engineering\nFaculty of Engineering\nKasetsart University\nBangkokThailand\n",
"Kenichi Matsumoto [email protected] \nGraduate School of Science and Technology\nNara Institute of Science and Technology\nNaraJapan\n"
]
| [
"Department of Computer Engineering\nFaculty of Engineering\nKasetsart University\nBangkokThailand",
"Graduate School of Science and Technology\nNara Institute of Science and Technology\nNaraJapan",
"Graduate School of Science and Technology\nNara Institute of Science and Technology\nNaraJapan",
"Graduate School of Science and Technology\nNara Institute of Science and Technology\nNaraJapan",
"Department of Computer Engineering\nFaculty of Engineering\nKasetsart University\nBangkokThailand",
"Department of Computer Engineering\nFaculty of Engineering\nKasetsart University\nBangkokThailand",
"Department of Computer Engineering\nFaculty of Engineering\nKasetsart University\nBangkokThailand",
"Graduate School of Science and Technology\nNara Institute of Science and Technology\nNaraJapan"
]
| []
| AlphaCode is a code generation system for assisting software developers in solving competitive programming problems using natural language problem descriptions. Despite the advantages of the code generating system, the open source community expressed concerns about practicality and data licensing. However, there is no research investigating generated codes in terms of code clone and performance. In this paper, we conduct an empirical study to find code similarities and performance differences between AlphaCode-generated codes and human codes. The results show that (i) the generated codes from AlphaCode are similar to human codes (i.e., the average maximum similarity score is 0.56) and (ii) the generated code performs on par with or worse than the human code in terms of execution time and memory usage. Moreover, AlphaCode tends to generate more similar codes to humans for low-difficulty problems (i.e., four cases have the exact same codes). It also employs excessive nested loops and unnecessary variable declarations for highdifficulty problems, which cause low performance regarding our manual investigation. The replication package is available at https:/doi.org/10.5281/zenodo.6820681 | 10.1109/iwsc55060.2022.00010 | [
"https://export.arxiv.org/pdf/2208.08603v2.pdf"
]
| 251,643,500 | 2208.08603 | 9b61de7038290751377b64293baaf42f3e7cf441 |
An Empirical Evaluation of Competitive Programming AI: A Case Study of AlphaCode
Sila Lertbanjongngam
Department of Computer Engineering
Faculty of Engineering
Kasetsart University
BangkokThailand
Bodin Chinthanet
Graduate School of Science and Technology
Nara Institute of Science and Technology
NaraJapan
Takashi Ishio
Graduate School of Science and Technology
Nara Institute of Science and Technology
NaraJapan
Raula Gaikovina Kula
Graduate School of Science and Technology
Nara Institute of Science and Technology
NaraJapan
Pattara Leelaprute [email protected]
Department of Computer Engineering
Faculty of Engineering
Kasetsart University
BangkokThailand
Bundit Manaskasemsak
Department of Computer Engineering
Faculty of Engineering
Kasetsart University
BangkokThailand
Arnon Rungsawang [email protected]
Department of Computer Engineering
Faculty of Engineering
Kasetsart University
BangkokThailand
Kenichi Matsumoto [email protected]
Graduate School of Science and Technology
Nara Institute of Science and Technology
NaraJapan
An Empirical Evaluation of Competitive Programming AI: A Case Study of AlphaCode
10.5281/zenodo.6820681
Index Terms—code generation, code similarity, code performance
AlphaCode is a code generation system for assisting software developers in solving competitive programming problems using natural language problem descriptions. Despite the advantages of the code generating system, the open source community expressed concerns about practicality and data licensing. However, there is no research investigating generated codes in terms of code clones and performance. In this paper, we conduct an empirical study to find code similarities and performance differences between AlphaCode-generated codes and human codes. The results show that (i) the generated codes from AlphaCode are similar to human codes (i.e., the average maximum similarity score is 0.56) and (ii) the generated code performs on par with or worse than the human code in terms of execution time and memory usage. Moreover, AlphaCode tends to generate codes more similar to human codes for low-difficulty problems (i.e., four cases have the exact same codes). It also employs excessive nested loops and unnecessary variable declarations for high-difficulty problems, which, according to our manual investigation, cause low performance. The replication package is available at https://doi.org/10.5281/zenodo.6820681
I. INTRODUCTION
Over the past few years, Artificial Intelligence (AI) applications have become very popular, especially in the software engineering field, as they help software developers to work faster and more efficiently [1]. Examples of tasks in which AI can assist developers during the software development process are (i) software defect prediction [2,3], (ii) cost estimation [4,5], (iii) task prioritization [6], (iv) expert recommendation [7], (v) security vulnerability detection [8,9], and (vi) code generation [10,11]. These AI applications rely on massive amounts of data from software development tools such as version control systems, issue tracking systems, and continuous integration and deployment (CI/CD) systems.
In the case of code generation systems, AI models are expected to synthesize new and unseen codes, while knowing existing programs from their training dataset [12]. Recently, several works have focused on evaluating the code generation systems Codex [10] and GitHub Copilot [13] from security [14] and correctness aspects [15]. These code generation systems require a code comment or a short natural language description as input for solving the given task.
AlphaCode, a transformer-based AI code generation system developed by DeepMind [11], is currently the state of the art for competitive programming AI. The advantage of AlphaCode compared to other code generation systems is that it can generate source codes from competitive programming problem descriptions, which usually require an understanding of algorithms and complex natural languages. However, as the model behind the AI used human codes for the training process, concerns about data-related issues were raised in the open source community, such as data privacy, licensing, and data extraction attacks [16,17]. As there is also no research investigating AlphaCode, it is unclear whether generated codes are clones of human codes. It is also unclear how the generated codes perform, i.e., how many resources (time and memory) would be required to run the programs if they were deployed.
In this paper, we conduct an empirical study to find code similarities and performance differences between AlphaCodegenerated codes and human codes. We define two following research questions:
• (RQ 1) Are generated codes similar to human codes?
• (RQ 2) Can generated codes perform better than human codes?
We collected 44 samples of valid generated codes in C++ and Python languages from the AlphaCode official website [18] that solve 22 problems on Codeforces. We then retrieved 31,736 human codes by using the Codeforces API. For RQ 1 , we compare source code similarity between the generated and human codes using metrics to detect source code reuse [19]. For RQ 2 , we measure the performance of the codes based on an existing performance comparison work [20]. The results show that (i) the generated codes from Al-phaCode are similar to the human codes (i.e., the average maximum similarity score is 0.56 and 0.50 for C++ and Python), while the code fragments in the generated code are comprised of various human codes (i.e., the uniqueness of 3.30% and 8.94% for C++ and Python) and (ii) the generated code performs on par with or worse than the human code in terms of execution time and memory usage. Furthermore, AlphaCode generates code that is more similar to human code for low-difficulty problems (i.e., four cases with the same code) and employs excessive nested loops and unnecessary variable declarations (i.e., using long long instead of int, unused integer list), resulting in poor performance.
Regular Bracket Sequences

A bracket sequence is a string containing only characters "(" and ")". A regular bracket sequence is a bracket sequence that can be transformed into a correct arithmetic expression by inserting characters "1" and "+" between the original characters of the sequence. For example, bracket sequences "()()" and "(())" are regular (the resulting expressions are: "(1)+(1)" and "((1+1)+1)"), and ")(", "(" and ")" are not.

You are given an integer n. Your goal is to construct and print exactly n different regular bracket sequences of length 2n.

Input: The first line contains one integer t (1 ≤ t ≤ 50) — the number of test cases. Each test case consists of one line containing one integer n (1 ≤ n ≤ 50).

Output: For each test case, print n lines, each containing a regular bracket sequence of length exactly 2n. All bracket sequences you output for a test case should be different (though they may repeat in different test cases). If there are multiple answers, print any of them. It can be shown that it is always possible.

Example Input: 3 / 3 / 1 / 3
Example Output: ()()() ((())) (()()) / () / ((())) (())() ()(())

The rest of this paper is organized as follows. Section II describes the background of this research. Section III shows our analysis method. Section IV describes the results of the analysis. Section V discusses limitations and threats to validity. Section VI concludes this paper and suggests future work.
II. BACKGROUND
In this section, we briefly explain terms that have been used throughout the paper.
A. Competitive Programming
Competitive programming is a competition that gives well-known computer science problems to contestants and asks them to solve those problems by writing source code as quickly as possible [22]. To solve a problem, contestants have to analyze the description and use their logical, mathematical, and computer science skills to find the optimal solution, which is usually not directly stated in the description text. There are several well-known programming competitions that are held annually and supported by software organizations, such as Google Code Jam [23], Meta Hacker Cup [24], and ACM ICPC [25]. The format of the competitions can differ depending on the host; for example, in the number of problems, competition times, and available programming languages. In this paper, we focus on Codeforces, one of the most popular competitive programming platforms, which has over one million users and regularly holds weekly competitions [26]. Figure 1 shows an example competitive programming problem, called Regular Bracket Sequences, hosted on Codeforces. The problem description consists of (i) the details of the problem, and (ii) input and output constraints, which include the format and the example test case. In this example, the contestants are asked to generate n different bracket sequences that can be transformed into a correct expression for each test case. There are multiple solutions to this problem, but they have to be executed within limited memory and time.
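For illustration, one simple construction that satisfies this example problem (our own sketch in Python, not AlphaCode's output) nests i bracket pairs and pads the rest with "()":

```python
t = int(input())
for _ in range(t):
    n = int(input())
    for i in range(1, n + 1):
        # i nested pairs followed by n - i flat pairs: each line is a
        # distinct regular bracket sequence of length exactly 2n.
        print("(" * i + ")" * i + "()" * (n - i))
```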
B. AlphaCode: A Competitive-level AI Code Generation
In February 2022, DeepMind introduced AlphaCode, a large-scale transformer-based code generation system. The main target of AlphaCode is to generate code for solving competitive programming problems that require an understanding of algorithms and complex natural languages. One of the highlights of AlphaCode is that it can generate the entire program from a long natural language description compared to Codex, a code generation system used in GitHub Copilot [10,13], which is capable of generating code for a simple task with a short solution (e.g., function, API usage).
Several studies have focused on the evaluation of transformer-based code generation systems. Chen et al. [10] evaluated Codex and found that it has a strong performance on easy interview problems. Nguyen and Nadi [15] evaluated the suggested codes of GitHub Copilot on LeetCode and found that it achieved a correctness score of at most 57%. Pearce et al. [14] found that generated code can introduce security vulnerability issues. In the case of AlphaCode, the evaluation is still limited, as only DeepMind provides such information [11]. They found that AlphaCode achieved an average ranking within the top 54.3% among over 5,000 human contestants. Additionally, AlphaCode achieved 20.36% and 7.75% solve rates (i.e., the pass@k metric) on introductory and competition tasks. However, the understanding of the codes generated by AlphaCode is still unclear, especially in terms of their quality compared to actual human codes. Hence, this study empirically evaluates the code similarities and performance differences between generated codes and human codes, as described in the following sections.
III. EXPERIMENT SETUP
This section presents the dataset and analysis methods to understand how generated codes are similar to human codes.
A. Dataset
Our dataset consists of (i) the generated codes from AlphaCode and (ii) the human codes from Codeforces. The AlphaCode official website [18] provides 141 generated codes for 43 different problems, written in the C++ and Python languages. However, some of these generated codes do not successfully solve the problems. Additionally, some problems have generated code in only one language. In this paper, we consider only problems that have both C++ and Python generated codes that solve the problems correctly. We crawled and collected the 44 generated codes that solve 22 problems, written in the C++ and Python languages.
To empirically evaluate the generated codes, we retrieved human codes from Codeforces using a provided API on May 17, 2022. However, we were able to retrieve at most 1,000 submitted codes per language for each problem due to API limitations. We also collected the difficulty score of each problem. In the end, 21,508 C++ and 10,228 Python human codes were collected for our experiment. The summary of our dataset is described in Table I.

B. Analysis for RQ 1

To analyze the source code similarity between generated and human codes, we make a comparison for each problem at the file-level granularity. We have employed two source code similarity metrics that have been used to measure the degree of source code reuse [19].
$$\mathrm{sim}(f_1, f_2) = \frac{|\mathrm{trigrams}(f_1) \cap \mathrm{trigrams}(f_2)|}{|\mathrm{trigrams}(f_1) \cup \mathrm{trigrams}(f_2)|}, \qquad \mathrm{unique}(f, H) = \frac{\left|\mathrm{trigrams}(f) \setminus \bigcup_{h \in H} \mathrm{trigrams}(h)\right|}{|\mathrm{trigrams}(f)|}$$
where trigrams(f ) is a multiset of token trigrams (three consecutive tokens) extracted from a file f . Table II shows example values of the metrics for a pair of code fragments (i.e., a sample of lines in a file). We calculate source code similarity for each pair of generated and human codes. A higher similarity indicates that a larger amount of source code could be reused from the human code. As Juergens et al. [27] reported that different authors write different source codes, the similarity is expected to be low.
The uniqueness of a generated code is defined as the ratio of trigrams that are unique to the generated code. In other words, it measures the number of trigrams in the generated code that are never included in human codes. A higher uniqueness indicates that the generated code includes only its own code. The uniqueness becomes low if token trigrams in generated codes are also found in human codes.
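Both metrics can be computed directly from token multisets; the following Python sketch (assuming files are already tokenized into lists of strings) mirrors the definitions above:

```python
from collections import Counter

def trigrams(tokens):
    # Multiset of consecutive token triples extracted from a tokenized file.
    return Counter(zip(tokens, tokens[1:], tokens[2:]))

def sim(f1, f2):
    a, b = trigrams(f1), trigrams(f2)
    return sum((a & b).values()) / sum((a | b).values())

def uniqueness(f, human_files):
    a = trigrams(f)
    seen = set()
    for h in human_files:
        seen.update(trigrams(h))
    # A trigram counts as unique only if it never appears in any human code.
    own = sum(c for t, c in a.items() if t not in seen)
    return own / sum(a.values())
```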
C. Analysis for RQ 2
To analyze the performance difference between generated codes and human codes, we implemented a system to measure the execution time and memory usage, similar to Leelaprute et al. [20]. We first generate the input data based on the constraints of the Codeforces problems. Due to the limitations of the Codeforces API, it is impossible to retrieve the largest input data for each problem. Hence, we use the upper-bound constraint described in the problem description as the size of the input in order to simulate the worst-case scenario. We then compile and execute both generated and human codes using our input data. In this case, we use only the human codes that are the most similar to the generated codes for each problem. We repeatedly execute the codes 100 times to reduce the threat from non-determinism and other factors that affect the execution time and memory usage. We also set an execution time limit of five seconds to avoid infinite loop cases.
In our experiment, we use memory-profiler, a Python library for measuring the amount of memory consumed during each code execution, together with the execution timestamps [28]. For the compiler and interpreter setup, we use the GNU G++20 11.2.0 compiler with the -O2 option for C++ code compilation and CPython 3.9.7 for Python code execution. Our machine uses an AMD Ryzen Threadripper 3970X CPU and 128 GB of DDR4 RAM on the Ubuntu 20.04 operating system.
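A rough sketch of one measurement run, assuming memory-profiler's memory_usage helper with child-process tracking (the command lists and function names are placeholders of our own, not the paper's harness):

```python
import subprocess
import time
from memory_profiler import memory_usage

def run_once(cmd, input_path, timeout=5):
    # cmd is a placeholder, e.g. ["./solution"] or ["python3", "solution.py"].
    # Returns (wall-clock seconds, peak memory in MB); a TimeoutExpired
    # exception corresponds to the "time limit exceeded" cases.
    def target():
        with open(input_path) as f:
            subprocess.run(cmd, stdin=f, stdout=subprocess.DEVNULL,
                           timeout=timeout)
    start = time.perf_counter()
    peak = memory_usage((target, (), {}), interval=0.01,
                        include_children=True, max_usage=True)
    return time.perf_counter() - start, peak
```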
To statistically validate the differences between the execution time and memory usage of generated codes and human codes, we use the independent sample t-test, which compares the means between two groups [29]. Our null hypothesis is that the execution time and memory usage of generated and human codes are the same. We also measure the effect size using Cohen's d, which quantifies differences based on the means and standard deviations of two groups [30]. The interpretation is as follows: (i) 0 ≤ d < 0.1: Very small, (ii) 0.1 ≤ d < 0.35: Small, (iii) 0.35 ≤ d < 0.65: Medium, (iv) 0.65 ≤ d < 0.9: Large, or (v) d ≥ 0.9: Very large. In our experiment, we use NumPy [31], SciPy [32], and researchpy [33] for the statistical tests.
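The test and effect size can be computed with SciPy and NumPy, for example:

```python
import numpy as np
from scipy import stats

def compare(gen, hum):
    # Independent two-sample t-test over the 100 repeated measurements,
    # plus Cohen's d from the pooled standard deviation.
    t, p = stats.ttest_ind(gen, hum)
    n1, n2 = len(gen), len(hum)
    s1, s2 = np.var(gen, ddof=1), np.var(hum, ddof=1)
    pooled = np.sqrt(((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2))
    d = abs(np.mean(gen) - np.mean(hum)) / pooled
    return p, d
```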
IV. RESULTS
A. (RQ 1 ) Are generated codes similar to human codes?
Similarity between generated and human codes. Figure 2 shows the similarity score between generated codes and human codes for each problem in C++ and Python. We find that, overall, AlphaCode can generate codes similar to human codes, as the means of the maximum similarity scores for C++ (Figure 2a) and Python (Figure 2b) are 0.56 and 0.50, respectively. In only four cases are the generated codes exactly the same as human codes for both languages. We also find that these four cases have a difficulty score of 800, which is among the lowest scores of all problems in our study.

Uniqueness of generated code fragments. Table III shows the percentage of uniqueness of generated codes compared to human codes. We find that the code fragments in generated codes are common in human codes, as the mean uniqueness is 3.30% for C++ and 8.93% for Python. Considering these two results, this indicates that the AlphaCode model might not directly clone the training data to solve the competitive programming problems. However, AlphaCode generates code that is a mixture of multiple human codes.

Answer to RQ 1: Yes, generated codes from AlphaCode can be similar to human codes (the average maximum similarity is 0.56). The code fragments used by AlphaCode are common across all human codes.
Answer to RQ 1 : Yes, generated codes from Alpha-Code can be similar to human codes (the average maximum similarity is 0.56). The code fragments used by AlphaCode are common across all human codes. our generated input for generated codes and human codes written in C++ and Python, respectively. From our comparison, we find that 11 out of 22 C++ human codes (50%) and 6 out of 22 (27.27%) for Python significantly outperform generated codes (i.e., less execution times and highlighted in yellow). We also find that 8 out of 22 C++ generated codes (36.36%) and 5 (22.73%) for Python cannot execute within 5 seconds (i.e., time limit exceeded and highlighted in gray).
On the other hand, only 3 out of 22 generated codes (13.64%) for both C++ and Python beat human codes, with less than a 0.01 second difference (i.e., highlighted in green). From our manual investigation of the time limit exceeded cases, we find that AlphaCode introduced unnecessary nested loops in generated codes for high-difficulty problems.
Comparison of memory usage. Tables VI and VII show the summary statistics of the memory usage for generated codes and human codes written in C++ and Python, respectively. For the limit exceeded cases (i.e., highlighted in gray), the memory usage does not represent the total memory needed to solve each problem, but the usage before the process was terminated. From our comparison, we find that 4 out of 22 C++ and Python human codes (18.18%) use a significantly smaller amount of memory compared to generated codes (highlighted in yellow). Interestingly, the C++ generated code for problem 1554C uses over 8,000 MB. On the other hand, only one Python generated code significantly beats human codes, with less than 1 MB difference (i.e., highlighted in green). Our manual investigation of the large memory usage cases shows that AlphaCode did not optimize variable types (e.g., using long long instead of int) and allocated unnecessary variables (e.g., a generated list of integers that was never used). Similar to the execution time, these cases are usually high-difficulty problems.
Answer to RQ 2 : No, in terms of execution time and memory usage, generated codes perform on par with or worse than human codes. We find that AlphaCode employs excessive nested loops and unnecessary variable declarations to solve the problems.
V. LIMITATIONS AND THREATS TO VALIDITY
Limitations.
A key limitation of this work is that we do not have access to the AlphaCode model, which prevents us from replicating the entire training and testing process for our research. We also do not have access to a complete list of generated solutions from their official website, which limits our ability to understand the characteristics of generated codes. However, the competitive programming dataset for machine learning is available on GitHub [34], which could be useful for further investigation of the AlphaCode learning process.
Threats to internal validity. The first threat is the correctness of the test cases used for evaluation. Due to the limitations of Codeforces, we cannot retrieve the edge test cases with the maximum input size. Instead, we generated those edge test cases based on the problem descriptions, with a manual check to minimize this threat. The second threat is the selection of generated codes. Due to the limited access to the AlphaCode model, we could only collect the randomly selected generated codes from the AlphaCode official website. Even though those codes were randomly selected and some have performance issues, the results show that the similarity to human codes follows the same trend regardless of performance and language.
Threats to external validity. The main external threat is the generality of our results to other code generation systems. In this study, we investigated AlphaCode, which mainly focuses on generating competitive programming code. As a result, our findings may not apply to other types of code generation systems, such as GitHub Copilot. However, our approaches can be applied to those systems for further evaluation. Another threat is the sample size of the analyzed data. We analyzed only 44 generated codes with 31,736 human codes due to the dataset limitations. This small sample might not represent the population. However, we manually checked those data to ensure quality and reduce bias.
VI. CONCLUSIONS AND FUTURE WORKS
This study conducted an empirical analysis to examine the performance and comparability of the AlphaCode-generated codes to human codes. Our results show that (i) AlphaCode generates similar codes to humans, which are comprised of various human code fragments and (ii) the generated code performs on par with or worse than the human code in terms of execution time and memory utilization. These results indicate that software developers should review the generated codes as they might contain problematic codes and introduce performance issues.
In future work, we want to extend the study to a larger dataset including more languages and problems. While we have used source code similarity, source code naturalness [35] might be an interesting metric to understand the generated codes. We are also interested in other code generation models such as GitHub Copilot [13].
Fig. 1: Example competitive programming problem from Codeforces (1574A - Regular Bracket Sequences) [21].

Fig. 2: Similarity between generated codes and human codes.

Example trigram comparison for the fragments in Table II: sim(a, b) = 5/21 = 0.238; unique(a, {b}) = 8/13 = 0.615.
B. (RQ2) Can generated codes perform better than human codes?

Comparison of execution time. Tables IV and V show the summary statistics of the execution time of generated codes and human codes written in C++ and Python, respectively, when receiving the generated edge test cases.
TABLE I: Summary information of our dataset.

Collected generated codes:
Detail                    Mean    Median  SD      Total
C++     # codes           1       1       1       22
C++     # lines of code   31.772  26      11.688  699
Python  # codes           1       1       1       22
Python  # lines of code   16.318  15.5    6.724   359

Collected human codes:
Detail                    Mean    Median  SD      Total
C++     # codes           977     1000    102.48  21,508
C++     # lines of code   52.117  36.0    48.740  1,120,933
Python  # codes           464     461     352.19  10,228
Python  # lines of code   14.274  11.0    16.990  145,998
B. Analysis for RQ1
TABLE II: An example of similarity value. A bold trigram is unique to a code fragment.

Code fragment a        Code fragment b
if ( x != y ); x++;    if ( y == x ); y++;
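To make the metric concrete, the following sketch extracts token trigrams from the two fragments and computes a Jaccard-style overlap and a uniqueness ratio. The whitespace tokeniser and the normalisation are simplifying assumptions, so the printed values are not expected to reproduce the paper's sim(a, b) = 0.238 and unique(a, {b}) = 0.615.

```python
# Sketch of token-trigram extraction and overlap for two code
# fragments. Tokenisation and normalisation here are simplifying
# assumptions; the paper's exact metric definition may differ.
def trigrams(tokens):
    return {tuple(tokens[i:i + 3]) for i in range(len(tokens) - 2)}

a = "if ( x != y ) ; x + + ;".split()
b = "if ( y == x ) ; y + + ;".split()

ta, tb = trigrams(a), trigrams(b)
similarity = len(ta & tb) / len(ta | tb)   # shared trigrams over the union
uniqueness = len(ta - tb) / len(ta)        # a's trigrams unseen in b

print(f"sim(a, b) = {similarity:.3f}, unique(a, {{b}}) = {uniqueness:.3f}")
```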
TABLE III: Percentage of code fragment uniqueness of generated codes.

Language  Mean   Median  Min  Max     SD
C++       3.30%  2.60%   0%   11.36%  3.21%
Python    8.94%  3.24%   0%   31.75%  10.66%
TABLE IV: Summary statistics and comparison of execution times (in seconds) between generated codes and human codes written in C++. |∆| and Cohen's d are shown only where the generated codes differ significantly (p-value < 0.001).

Problem  Difficulty  AlphaCode Mean / SD  Human Mean / SD   |∆|     Cohen's d
1549A    800         0.0009 / 0.0022      0.0017 / 0.0005   -       -
1549C    1400        5.821  / 0.0069      0.0591 / 0.0191   5.7619  399.22*
1551A    800         5.8216 / 0.0047      0.0133 / 0.0027   5.8083  1501.59*
1551B1   800         0.0067 / 0.0018      0.0065 / 0.0011   -       -
1552A    800         0.004  / 0.0006      0.0041 / 0.0008   -       -
1552B    1500        5.8219 / 0.0057      0.0463 / 0.0017   5.775   1377.4*
1553A    800         0.0019 / 0.0004      0.0019 / 0.0005   -       -
1553H    2900        5.821  / 0.0038      0.0899 / 0.0017   5.731   1961.69*
1554A    800         5.8199 / 0.0048      0.0589 / 0.0025   5.761   1487.15*
1554B    1700        5.82   / 0.0045      0.0203 / 0.0017   5.799   1683.69*
1554C    1800        5.82   / 0.0052      0.009  / 0.0017   5.811   1488.17*
1556A    800         0.0032 / 0.0006      0.0104 / 0.0016   0.0072  5.74*
1557A    800         0.0914 / 0.0047      0.0911 / 0.0031   -       -
1559A    900         0.0024 / 0.0027      0.002  / 0.0005   -       -
1560A    800         0.001  / 0.0026      0.0007 / 0.0005   -       -
1561A    800         0.0038 / 0.0066      0.0034 / 0.0006   -       -
1566A    800         5.8194 / 0.0041      0.0146 / 0.0024   5.8048  1722.22*
1566D1   1100        0.0271 / 0.0038      0.0121 / 0.0018   0.015   5.0*
1567E    2200        5.8213 / 0.0103      1.4461 / 0.0044   4.3752  548.34*
1569A    800         0.0024 / 0.002       0.0049 / 0.0008   0.0025  1.7*
1573C    1800        0.0291 / 0.0033      0.0327 / 0.0036   0.0036  1.05*
1574A    800         0.0038 / 0.0017      0.0021 / 0.0004   0.0017  1.33*
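The |∆| and Cohen's d columns in Tables IV-VII follow the independent-samples t-test [29] and Cohen's d [30], which can be sketched with NumPy [31] and SciPy [32]. In the sketch below the sample arrays are placeholders, not measured data.

```python
# Sketch: compare two samples of execution times (seconds) with an
# independent-samples t-test and Cohen's d. The arrays are placeholders.
import numpy as np
from scipy import stats

alphacode = np.array([0.0291, 0.0288, 0.0295, 0.0293])   # hypothetical runs
human     = np.array([0.0327, 0.0331, 0.0322, 0.0326])

t, p = stats.ttest_ind(alphacode, human)

# Cohen's d with the pooled standard deviation.
n1, n2 = len(alphacode), len(human)
pooled_sd = np.sqrt(((n1 - 1) * alphacode.std(ddof=1) ** 2 +
                     (n2 - 1) * human.std(ddof=1) ** 2) / (n1 + n2 - 2))
d = abs(alphacode.mean() - human.mean()) / pooled_sd

print(f"p-value = {p:.4f}, "
      f"|delta| = {abs(alphacode.mean() - human.mean()):.4f}, d = {d:.2f}")
```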
TABLE V: Summary statistics and comparison of execution times (in seconds) between generated codes and human codes written in Python. |∆| and Cohen's d are shown only where the generated codes differ significantly (p-value < 0.001).

Problem  Difficulty  AlphaCode Mean / SD  Human Mean / SD   |∆|     Cohen's d
1549A    800         0.2069 / 0.0067      0.2082 / 0.0056   -       -
1549C    1400        5.8217 / 0.0035      0.6428 / 0.0084   5.1789  796.38*
1551A    800         0.2271 / 0.0037      0.2272 / 0.0084   -       -
1551B1   800         0.2208 / 0.0052      0.2203 / 0.0067   -       -
1552A    800         0.211  / 0.0044      0.2116 / 0.004    -       -
1552B    1500        0.4594 / 0.0133      0.3199 / 0.0095   0.1395  11.99*
1553A    800         0.2116 / 0.0067      0.2092 / 0.0111   -       -
1553H    2900        5.8183 / 0.0063      -      / -        -       -
1554A    800         0.3241 / 0.0064      0.327  / 0.0089   -       -
1554B    1700        0.2195 / 0.002       -      / -        -       -
1554C    1800        5.8167 / 0.0202      0.2937 / 0.0363   5.523   186.91*
1556A    800         0.2158 / 0.008       0.2297 / 0.0084   0.0139  1.7*
1557A    800         5.8175 / 0.0047      0.2709 / 0.0041   5.5466  1250.18*
1559A    900         0.3236 / 0.0113      0.2099 / 0.0053   0.1137  12.83*
1560A    800         0.2443 / 0.0455      0.2472 / 0.0455   -       -
1561A    800         0.3008 / 0.0151      0.3226 / 0.0172   0.0218  1.34*
1566A    800         0.2296 / 0.0078      0.2292 / 0.0075   -       -
1566D1   1100        0.3863 / 0.0142      0.3776 / 0.0201   -       -
1567E    2200        5.8178 / 0.0078      -      / -        -       -
1569A    800         0.2228 / 0.0044      0.214  / 0.007    0.0088  1.51*
1573C    1800        0.4602 / 0.0041      0.4695 / 0.0063   0.0093  1.74*
1574A    800         0.2041 / 0.0063      0.205  / 0.0101   -       -
TABLE VI: Summary statistics and comparison of memory usages (in MB) between generated codes and human codes written in C++. |∆| and Cohen's d are shown only where the generated codes differ significantly (p-value < 0.001).

Problem  Difficulty  AlphaCode Mean / SD    Human Mean / SD    |∆|        Cohen's d
1549A    800         1.5249 / 0.2347        1.5765 / 0.397     -          -
1549C    1400        14.0974 / 0.065        14.1612 / 0.0914   -          -
1551A    800         1.5235 / 0.0386        1.5093 / 0.1562    -          -
1551B1   800         1.5273 / 0.2525        1.5041 / 0.1566    -          -
1552A    800         1.5762 / 0.3989        1.5416 / 0.3124    -          -
1552B    1500        4.9586 / 0.1488        4.2563 / 0.0446    0.7023     6.36*
1553A    800         1.5039 / 0.1555        1.5303 / 0.2427    -          -
1553H    2900        5.0258 / 0.055         15.0579 / 0.0506   -          -
1554A    800         3.4807 / 0.0491        3.7433 / 0.0508    -          -
1554B    1700        3.5278 / 0.075         3.5279 / 0.0799    -          -
1554C    1800        8194.9658 / 0.1861     1.5427 / 0.1841    8193.4231  44044.6*
1556A    800         1.5289 / 0.241         1.5258 / 0.2391    -          -
1557A    800         3.5021 / 0.0568        3.4982 / 0.0478    -          -
1559A    900         1.5283 / 0.257         1.538  / 0.2529    -          -
1560A    800         1.5284 / 0.2379        1.5052 / 0.1563    -          -
1561A    800         1.5012 / 0.1567        1.5014 / 0.1568    -          -
1566A    800         1.5178 / 0.0381        1.5242 / 0.2357    -          -
1566D1   1100        1.5177 / 0.2495        1.5085 / 0.1564    -          -
1567E    2200        3.7976 / 0.0674        4.5731 / 0.0806    -          -
1569A    800         1.5225 / 0.2495        1.507  / 0.1554    -          -
1573C    1800        13.635 / 1.3726        12.1131 / 1.2187   1.5219     1.17*
1574A    800         1.5523 / 0.3235        0.5532 / 0.062     0.9991     4.27*
TABLE VII: Summary statistics and comparison of memory usages (in MB) between generated codes and human codes written in Python. |∆| and Cohen's d are shown only where the generated codes differ significantly (p-value < 0.001).

Problem  Difficulty  AlphaCode Mean / SD   Human Mean / SD    |∆|     Cohen's d
1549A    800         39.201  / 0.1217      39.4353 / 0.1096   0.2343  2.01*
1549C    1400        83.2158 / 0.1259      78.5995 / 0.1287   4.6163  36.07*
1551A    800         39.1544 / 0.1226      39.188  / 0.1253   -       -
1551B1   800         39.1969 / 0.1079      39.1793 / 0.1105   -       -
1552A    800         39.1747 / 0.122       39.1938 / 0.1168   -       -
1552B    1500        53.8854 / 0.1133      53.885  / 0.1344   -       -
1553A    800         39.1811 / 0.1125      39.1672 / 0.1248   -       -
1553H    2900        110.542 / 0.1253      -       / -        -       -
1554A    800         55.1097 / 0.1666      55.1638 / 0.1779   -       -
1554B    1700        50.6909 / 0.1316      -       / -        -       -
1554C    1800        39.1715 / 0.1258      39.1646 / 0.124    -       -
1556A    800         39.4315 / 0.1109      39.1876 / 0.106    0.2439  2.24*
1557A    800         50.1959 / 0.1177      58.7845 / 0.1173   -       -
1559A    900         39.1851 / 0.126       39.1881 / 0.1183   -       -
1560A    800         39.1883 / 0.1046      39.1852 / 0.1104   -       -
1561A    800         39.1639 / 0.1255      39.1803 / 0.1248   -       -
1566A    800         39.1911 / 0.1101      39.1841 / 0.1201   -       -
1566D1   1100        41.4968 / 0.1182      39.1843 / 0.1131   2.3125  19.88*
1567E    2200        62.6948 / 0.1079      -       / -        -       -
1569A    800         39.1977 / 0.1198      39.1744 / 0.1205   -       -
1573C    1800        74.2036 / 0.1187      65.5114 / 0.1295   8.6922  69.62*
1574A    800         39.1903 / 0.1124      39.1848 / 0.1237   -       -
ACKNOWLEDGMENT

This work has been supported by Japan Society for the Promotion of Science (JSPS) KAKENHI Grant Numbers JP20H05706 and JP20K19774.
REFERENCES

C. K. Tantithamthavorn and J. Jiarpakdee, "Explainable AI for software engineering," in IEEE/ACM International Conference on Automated Software Engineering (ASE), 2021, pp. 1-2.
C. Tantithamthavorn, S. McIntosh, A. E. Hassan, and K. Matsumoto, "Automated parameter optimization of classification techniques for defect prediction models," in International Conference on Software Engineering (ICSE), 2016, pp. 321-332.
A. Okutan and O. T. Yıldız, "Software defect prediction using Bayesian networks," Empirical Software Engineering (EMSE), vol. 19, no. 1, pp. 154-181, 2014.
B. Samson, D. Ellison, and P. Dugard, "Software cost estimation using an Albus perceptron (CMAC)," Information and Software Technology (IST), vol. 39, no. 1, pp. 55-60, 1997.
J. Huang, Y.-F. Li, and M. Xie, "An empirical analysis of data preprocessing for machine learning-based software cost estimation," Information and Software Technology (IST), vol. 67, pp. 108-127, 2015.
A. Perini, A. Susi, and P. Avesani, "A machine learning approach to software requirements prioritization," IEEE Transactions on Software Engineering (TSE), vol. 39, no. 4, pp. 445-461, 2012.
J. Xuan, H. Jiang, Z. Ren, and W. Zou, "Developer prioritization in bug repositories," in International Conference on Software Engineering (ICSE), 2012, pp. 25-35.
Y. Zheng, S. Pujar, B. Lewis, L. Buratti, E. Epstein, B. Yang, J. Laredo, A. Morari, and Z. Su, "D2A: A dataset built for AI-based vulnerability detection methods using differential analysis," in International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP), 2021, pp. 111-120.
T. N. Nguyen and R. Choo, "Human-in-the-loop XAI-enabled vulnerability detection, investigation, and mitigation," in IEEE/ACM International Conference on Automated Software Engineering (ASE), 2021, pp. 1210-1212.
M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. d. O. Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman et al., "Evaluating large language models trained on code," arXiv preprint arXiv:2107.03374, 2021.
Y. Li, D. Choi, J. Chung, N. Kushman, J. Schrittwieser, R. Leblond, T. Eccles, J. Keeling, F. Gimeno, A. D. Lago et al., "Competition-level code generation with AlphaCode," arXiv preprint arXiv:2203.07814, 2022.
M. Allamanis, "The adverse effects of code duplication in machine learning models of code," in ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software (Onward!), 2019, pp. 143-153.
"GitHub Copilot - your AI pair programmer," https://github.com/features/copilot (accessed on 08/04/2022).
H. Pearce, B. Ahmad, B. Tan, B. Dolan-Gavitt, and R. Karri, "Asleep at the keyboard? Assessing the security of GitHub Copilot's code contributions," in IEEE Symposium on Security and Privacy (SP), May 2022, pp. 980-994.
N. Nguyen and S. Nadi, "An empirical evaluation of GitHub Copilot's code suggestions," in IEEE/ACM Mining Software Repositories Conference (MSR), 2022, pp. 1-5.
N. Carlini, F. Tramèr, E. Wallace, M. Jagielski, A. Herbert-Voss, K. Lee, A. Roberts, T. Brown, D. Song, Ú. Erlingsson, A. Oprea, and C. Raffel, "Extracting training data from large language models," in USENIX Security Symposium (USENIX Security 21), Aug. 2021, pp. 2633-2650.
"GitHub Copilot research recitation - The GitHub Blog," https://github.blog/2021-06-30-github-copilot-research-recitation/ (accessed on 08/04/2022).
The AlphaCode team, "AlphaCode attention visualization," https://alphacode.deepmind.com/, Feb. 2022 (accessed on 07/11/2022).
H. Hata, R. G. Kula, T. Ishio, and C. Treude, "Same file, different changes: The potential of meta-maintenance on GitHub," in International Conference on Software Engineering (ICSE), May 2021, pp. 773-784.
P. Leelaprute, B. Chinthanet, S. Wattanakriengkrai, R. G. Kula, P. Jaisri, and T. Ishio, "Does coding in Pythonic zen peak performance? Preliminary experiments of nine Pythonic idioms at scale," in International Conference on Program Comprehension (ICPC), 2022, pp. 575-579.
"Problem - 1574A - Codeforces," https://codeforces.com/problemset/problem/1574/A (accessed on 08/04/2022).
S. Halim, F. Halim, S. S. Skiena, and M. A. Revilla, Competitive Programming 3. Lulu Independent Publish, Morrisville, NC, USA, 2013.
"Code Jam - Google's Coding Competitions" (accessed on 08/04/2022).
"Meta Hacker Cup," https://www.facebook.com/codingcompetitions/hacker-cup (accessed on 08/04/2022).
"The ICPC International Collegiate Programming Contest," https://icpc.global/ (accessed on 08/04/2022).
M. Mirzayanov, "Codeforces: Results of 2020 [Annual Report] - Codeforces," https://codeforces.com/blog/entry/89502, Apr. 2021 (accessed on 08/04/2022).
E. Juergens, F. Deissenboeck, and B. Hummel, "Code similarities beyond copy & paste," in European Conference on Software Maintenance and Reengineering (CSMR), Mar. 2010.
"memory-profiler · PyPI," https://pypi.org/project/memory-profiler/, 2022 (accessed on 08/04/2022).
A. Ross and V. L. Willson, Independent Samples T-Test. Rotterdam: SensePublishers, 2017, pp. 13-16.
J. Cohen, Statistical Power Analysis for the Behavioral Sciences. Routledge, 1988.
C. R. Harris, K. J. Millman, S. J. van der Walt, R. Gommers, P. Virtanen, D. Cournapeau, E. Wieser et al., "Array programming with NumPy," Nature, vol. 585, no. 7825, pp. 357-362, Sep. 2020.
P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski, P. Peterson et al., "SciPy 1.0: Fundamental algorithms for scientific computing in Python," Nature Methods, vol. 17, pp. 261-272, 2020.
"deepmind/code_contests," https://github.com/deepmind/code_contests (accessed on 08/04/2022).
M. Rahman, D. Palani, and P. C. Rigby, "Natural software revisited," in International Conference on Software Engineering (ICSE), May 2019, pp. 37-48.
Imagination Improves Multimodal Translation

Desmond Elliott (ILLC, University of Amsterdam; School of Informatics, University of Edinburgh) and Ákos Kádár (TiCC, Tilburg University)

Proceedings of the 8th International Joint Conference on Natural Language Processing, Taipei, Taiwan, November 27 - December 1, 2017.

Abstract
We decompose multimodal translation into two sub-tasks: learning to translate and learning visually grounded representations. In a multitask learning framework, translations are learned in an attention-based encoder-decoder, and grounded representations are learned through image representation prediction. Our approach improves translation performance compared to the state of the art on the Multi30K dataset. Furthermore, it is equally effective if we train the image prediction task on the external MS COCO dataset, and we find improvements if we train the translation model on the external News Commentary parallel text.
Introduction
Multimodal machine translation is the task of translating sentences in context, such as images paired with a parallel text . This is an emerging task in the area of multilingual multimodal natural language processing. Progress on this task may prove useful for translating the captions of the images illustrating online news articles, and for multilingual closed captioning in international television and cinema.
Initial efforts have not convincingly demonstrated that visual context can improve translation quality. In the results of the First Multimodal Translation Shared Task, only three systems outperformed an off-the-shelf text-only phrase-based machine translation model, and the best performing system was equally effective with or without the visual features. There remains an open question about how translation models should take advantage of visual context. We present a multitask learning model that decomposes multimodal translation into learning a translation model and learning visually grounded representations. This decomposition means that our model can be trained over external datasets of parallel text or described images, making it possible to take advantage of existing resources. Figure 1 presents an overview of our model, Imagination, in which source language representations are shared between tasks through the Shared Encoder. The translation decoder is an attention-based neural machine translation model (Bahdanau et al., 2015), and the image prediction decoder is trained to predict a global feature vector of an image that is associated with a sentence (Chrupała et al., 2015, IMAGINET). This decomposition encourages grounded learning in the shared encoder because the IMAGINET decoder is trained to imagine the image associated with a sentence. It has been shown that grounded representations are qualitatively different from their text-only counterparts (Kádár et al., 2016) and correlate better with human similarity judgements (Chrupała et al., 2015). We assess the success of the grounded learning by evaluating the image prediction model on an image-sentence ranking task to determine if the shared representations are useful for image retrieval (Hodosh et al., 2013). In contrast with most previous work, our model does not take images as input at translation time; rather, it learns grounded representations in the shared encoder.
We evaluate Imagination on the Multi30K dataset using a combination of in-domain and out-of-domain data. In the in-domain experiments, we find that multitasking translation with image prediction is competitive with the state of the art. Our model achieves 55.8 Meteor as a single model trained on multimodal in-domain data, and 57.6 Meteor as an ensemble.
In the experiments with out-of-domain resources, we find that the improvement in translation quality holds when training the IMAGINET decoder on the MS COCO dataset of described images (Chen et al., 2015). Furthermore, if we significantly improve our text-only baseline using out-of-domain parallel text from the News Commentary corpus (Tiedemann, 2012), we still find improvements in translation quality from the auxiliary image prediction task. Finally, we report a state-of-the-art result of 59.3 Meteor on the Multi30K corpus when ensembling models trained on in- and out-of-domain resources.
The main contributions of this paper are:
• We show how to apply multitask learning to multimodal translation. This makes it possible to train models for this task using external resources alongside the expensive triple-aligned source-target-image data.
• We decompose multimodal translation into two tasks: learning to translate and learning grounded representations. We show that each task can be trained on large-scale external resources, e.g. parallel news text or images described in a single language.
• We present a model that achieves state of the art results without using images as an input. Instead, our model learns visually grounded source language representations using an auxiliary image prediction objective. Our model does not need any additional parameters to translate unseen sentences.
Problem Formulation
Multimodal translation is the task of producing a target language translation y, given the source language sentence x and additional context, such as an image v. Let x be a source language sentence consisting of N tokens $x_1, x_2, \dots, x_N$, and let y be a target language sentence consisting of M tokens $y_1, y_2, \dots, y_M$. The training data consists of tuples $D \in (x, y, v)$, where x is a description of image v, and y is a translation of x. Multimodal translation has previously been framed as minimising the negative log-likelihood of a translation model that is additionally conditioned on the image, i.e. $J(\theta) = -\sum_j \log p(y_j \mid y_{<j}, x, v)$. Here, we decompose the problem into learning to translate and learning visually grounded representations. The decomposition is based on sharing parameters $\theta$ between these two tasks, and learning task-specific parameters $\phi$. We learn the parameters in a multitask model with shared parameters in the source language encoder. The translation model has task-specific parameters $\phi_t$ in the attention-based decoder, which are optimized through the translation loss $J_T(\theta, \phi_t)$. Grounded representations are learned through an image prediction model with task-specific parameters $\phi_g$ in the image-prediction decoder by minimizing $J_G(\theta, \phi_g)$. The joint objective is given by mixing the translation and image prediction tasks with the parameter w:
$J(\theta, \phi) = w \, J_T(\theta, \phi_t) + (1 - w) \, J_G(\theta, \phi_g)$   (1)
Our decomposition of the problem makes it straightforward to optimise this objective without paired tuples, e.g. where we have an external dataset of described images $D_{image} \in (x, v)$ or an external parallel corpus $D_{text} \in (x, y)$.
We train our multitask model following the approach of Luong et al. (2016). We define a primary task and an auxiliary task, and a set of parameters θ to be shared between the tasks. A minibatch of updates is performed for the primary task with probability w, and for the auxiliary task with 1−w. The primary task is trained until convergence and weight w determines the frequency of parameter updates for the auxiliary task.
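A minimal sketch of this mixing schedule follows; the batch streams, update functions, and convergence check are all placeholders, not the authors' implementation.

```python
# Sketch of the mixing schedule: with probability w, perform a
# minibatch update for the primary (translation) task; otherwise,
# update the auxiliary (image prediction) task.
import random

def train_multitask(w, translation_batches, imaginet_batches,
                    translation_update, imaginet_update, converged):
    while not converged():
        if random.random() < w:
            translation_update(next(translation_batches))  # primary task
        else:
            imaginet_update(next(imaginet_batches))        # auxiliary task
```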
Imagination Model
Shared Encoder
The encoder network of our model learns a representation of a sequence of N tokens $x_{1 \dots N}$ in the source language with a bidirectional recurrent neural network (Schuster and Paliwal, 1997). This representation is shared between the different tasks. Each token is represented by a one-hot vector $x_i$, which is mapped into an embedding $e_i$ through a learned matrix E:
$e_i = x_i \cdot E$   (2)
A sentence is processed by a pair of recurrent neural networks, where one captures the sequence left-to-right (forward), and the other captures the sequence right-to-left (backward). The initial state of the encoder $h_{-1}$ is a learned parameter:
$\overrightarrow{h}_i = \overrightarrow{\mathrm{RNN}}(\overrightarrow{h}_{i-1}, e_i)$   (3)
$\overleftarrow{h}_i = \overleftarrow{\mathrm{RNN}}(\overleftarrow{h}_{i-1}, e_i)$   (4)
Each token in the source language input sequence is represented by a concatenation of the forward and backward hidden state vectors:
$h_i = [\overrightarrow{h}_i; \overleftarrow{h}_i]$   (5)
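A compact sketch of the shared encoder (Eqns. 2-5) is shown below; PyTorch and the toy batch are assumptions, with sizes taken from the hyperparameters in Section 5.1.

```python
# Sketch of the shared bidirectional GRU encoder (Eqns. 2-5).
# PyTorch is an assumption here; the sizes follow Section 5.1.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=620, hidden_dim=1000):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)        # Eqn. 2
        self.birnn = nn.GRU(emb_dim, hidden_dim,
                            batch_first=True, bidirectional=True)

    def forward(self, tokens):                 # tokens: (B, N)
        e = self.embed(tokens)                 # (B, N, 620)
        h, _ = self.birnn(e)                   # Eqns. 3-4
        return h                               # (B, N, 2000): [fwd; bwd], Eqn. 5

enc = SharedEncoder(vocab_size=10214)
states = enc(torch.randint(0, 10214, (8, 12)))  # toy batch of 8 sentences
```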
Neural Machine Translation Decoder
The translation model decoder is an attention-based recurrent neural network (Bahdanau et al., 2015). Tokens in the decoder are represented by a one-hot vector $y_j$, which is mapped into an embedding $e_j$ through a learned matrix $E_y$:
$e_j = y_j \cdot E_y$   (6)
The inputs to the decoder are the previously predicted token $y_{j-1}$, the previous decoder state $d_{j-1}$, and a timestep-dependent context vector $c_j$ calculated over the encoder hidden states:
$d_j = \mathrm{RNN}(d_{j-1}, y_{j-1}, e_j)$   (7)
The initial state of the decoder $d_{-1}$ is a nonlinear transform of the mean of the encoder states, where $W_{init}$ is a learned parameter:
$d_{-1} = \tanh\left(W_{init} \cdot \frac{1}{N}\sum_{i}^{N} h_i\right)$   (8)
The context vector $c_j$ is a weighted sum over the encoder hidden states, where N denotes the length of the source sentence:
$c_j = \sum_{i=1}^{N} \alpha_{ji} h_i$   (9)
The $\alpha_{ji}$ values are the proportions with which the encoder hidden state vectors $h_{1 \dots N}$ contribute to the decoder hidden state when producing the j-th token in the translation. They are computed by a feed-forward neural network, where $v_a$, $W_a$, and $U_a$ are learned parameters:
$\alpha_{ji} = \frac{\exp(e_{ji})}{\sum_{l=1}^{N} \exp(e_{li})}$   (10)
$e_{ji} = v_a \cdot \tanh(W_a \cdot d_{j-1} + U_a \cdot h_i)$   (11)
From the hidden state $d_j$ the network predicts the conditional distribution of the next token $y_j$, given a target language embedding $e_{j-1}$ of the previous token, the current hidden state $d_j$, and the calculated context vector $c_j$. Note that at training time, $y_{j-1}$ is the true observed token, whereas for unseen data we use the inferred token $\hat{y}_{j-1}$ sampled from the output of the softmax:
$p(y_j \mid y_{<j}, c) = \mathrm{softmax}(\tanh(e_{j-1} + d_j + c_j))$   (12)
The translation model is trained to minimise the negative log likelihood of predicting the target language output:
$J_{NLL}(\theta, \phi_t) = -\sum_{j} \log p(y_j \mid y_{<j}, x)$   (13)
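As an illustration of the attention computation (Eqns. 9-11), the following NumPy sketch uses randomly initialised stand-ins for the learned parameters $W_a$, $U_a$, and $v_a$; it is not the authors' code.

```python
# Sketch of the attention mechanism (Eqns. 9-11) in NumPy.
# W_a, U_a, v_a are random stand-ins for learned weights.
import numpy as np

def attention(d_prev, H, W_a, U_a, v_a):
    # e_ji = v_a . tanh(W_a d_{j-1} + U_a h_i)       (Eqn. 11)
    scores = np.tanh(H @ U_a.T + d_prev @ W_a.T) @ v_a
    # alpha_ji: softmax over source positions         (Eqn. 10)
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()
    # c_j = sum_i alpha_ji h_i                        (Eqn. 9)
    return alpha @ H

N, enc_dim, dec_dim, att_dim = 12, 2000, 1000, 2000
rng = np.random.default_rng(0)
c = attention(rng.normal(size=dec_dim),
              rng.normal(size=(N, enc_dim)),
              rng.normal(size=(att_dim, dec_dim)) * 0.01,
              rng.normal(size=(att_dim, enc_dim)) * 0.01,
              rng.normal(size=att_dim) * 0.01)     # context vector (2000,)
```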
Imaginet Decoder
The image prediction decoder is trained to predict the visual feature vector of the image associated with a sentence (Chrupała et al., 2015). It encourages the shared encoder to learn grounded representations for the source language.
A source language sentence is encoded using the Shared Encoder, as described in Section 3.1. Then we transform the shared encoder representation into a single vector by taking the mean pool over the hidden state annotations, the same way we initialise the hidden state of the translation decoder (Eqn. 8). This sentence representation is the input to a feed-forward neural network that predicts the visual feature vector $\hat{v}$ associated with a sentence:
$\hat{v} = \tanh\left(W_{vis} \cdot \frac{1}{N}\sum_{i}^{N} h_i\right)$   (14)
This decoder is trained to predict the true image vector v with a margin-based objective, parameterised by the minimum margin $\alpha$ and the cosine distance $d(\cdot, \cdot)$. A margin-based objective has previously been used in grounded representation learning (Vendrov et al., 2016; Chrupała et al., 2017). The contrastive examples $v'$ are drawn from the other instances in a minibatch:
$J_{MAR}(\theta, \phi_g) = \sum_{v' \neq v} \max\{0, \alpha - d(\hat{v}, v) + d(\hat{v}, v')\}$   (15)
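A sketch of this objective with in-batch contrastive examples is shown below. PyTorch is an assumption, and $d(\cdot,\cdot)$ is implemented as cosine similarity so that minimising the hinge pulls the true image vector closer than the contrastive ones; the random vectors are placeholders.

```python
# Sketch of the margin-based objective (Eqn. 15): contrastive image
# vectors are the other instances in the minibatch. PyTorch assumed;
# d(.,.) is taken as cosine similarity here.
import torch
import torch.nn.functional as F

def imaginet_loss(v_hat, v, alpha=0.1):
    sim = F.normalize(v_hat, dim=1) @ F.normalize(v, dim=1).t()  # (B, B)
    pos = sim.diag().unsqueeze(1)                 # d(v_hat_k, v_k)
    hinge = (alpha - pos + sim).clamp(min=0.0)    # max{0, a - d(pos) + d(neg)}
    hinge.fill_diagonal_(0.0)                     # exclude v' = v
    return hinge.sum() / v_hat.size(0)

loss = imaginet_loss(torch.randn(8, 2048), torch.randn(8, 2048))
```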
Data
We evaluate our model using the benchmark Multi30K dataset (Elliott et al., 2016), which is the largest collection of images paired with sentences in multiple languages. This dataset contains 31,014 images paired with an English language sentence and a German language translation: 29,000 instances are reserved for training, 1,014 for development, and 1,000 for evaluation. (The Multi30K dataset also contains 155K independently collected descriptions in German and English; to make our experiments more comparable with previous work, we do not make use of this data.) The English and German sentences are preprocessed by normalising the punctuation, lowercasing, and tokenizing the text using the Moses toolkit. We additionally decompound the German text using Zmorge (Sennrich and Kunz, 2014). This results in vocabulary sizes of 10,214 types for English and 16,022 for German.
We also use two external datasets to evaluate our model: the MS COCO dataset of English described images (Chen et al., 2015), and the English-German News Commentary parallel corpus (Tiedemann, 2012). When we perform experiments with the News Commentary corpus, we first calculate a 17,597 sub-word vocabulary using SentencePiece (Schuster and Nakajima, 2012) over the concatenation of the Multi30K and News Commentary datasets. This gives us a shared vocabulary for the external data that reduces the number of out-of-vocabulary tokens.
Images are represented by 2048D vectors extracted from the 'pool5/7x7 s1' layer of the GoogLeNet v3 CNN (Szegedy et al., 2015).
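Such global vectors can be extracted with publicly available pre-trained CNNs (cf. Footnote 4). A sketch with the Keras Inception-V3 model and its average-pooled top layer follows; the image file name is a placeholder, and this is an illustrative approximation of the feature extraction, not the authors' pipeline.

```python
# Sketch: extract a 2048D global image vector with a pre-trained
# Inception-V3 (average-pooled top layer). "image.jpg" is a placeholder.
import numpy as np
from tensorflow.keras.applications.inception_v3 import (
    InceptionV3, preprocess_input)
from tensorflow.keras.preprocessing import image

model = InceptionV3(weights="imagenet", include_top=False, pooling="avg")

img = image.load_img("image.jpg", target_size=(299, 299))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
vec = model.predict(x)[0]        # shape: (2048,)
```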
Experiments
We evaluate our multitasking approach with in- and out-of-domain resources. We start by reporting results of models trained using only the Multi30K dataset. We also report the results of training the IMAGINET decoder with the COCO dataset. Finally, we report results on incorporating the external News Commentary parallel text into our model. Throughout, we report performance of the En→De translation using Meteor (Denkowski and Lavie, 2014) and BLEU (Papineni et al., 2002) against lowercased tokenized references.
Hyperparameters
The encoder is a 1000D Gated Recurrent Unit bidirectional recurrent neural network (Cho et al., 2014, GRU) with 620D embeddings. We share all of the encoder parameters between the primary and auxiliary task. The translation decoder is a 1000D GRU recurrent neural network, with a 2000D context vector over the encoder states, and 620D word embeddings (Sennrich et al., 2017). The Imaginet decoder is a single-layer feed-forward network, where we learn the parameters $W_{vis} \in \mathbb{R}^{2048 \times 2000}$ to predict the true image vector with $\alpha$ = 0.1 for the Imaginet objective (Equation 15). The models are trained using the Adam optimiser with the default hyperparameters (Kingma and Ba, 2015) in minibatches of 80 instances. The translation task is defined as the primary task, and convergence is reached when BLEU has not increased for five epochs on the validation data. Gradients are clipped when their norm exceeds a threshold, and recurrent dropout is applied following Gal and Ghahramani (2016). Translations are decoded using beam search with 12 hypotheses.
In-domain experiments
We start by presenting the results of our multitask model trained using only the Multi30K dataset. We compare against state-of-the-art approaches and text-only baselines. Moses is the phrase-based machine translation model (Koehn et al., 2007) reported in the First Multimodal Translation Shared Task. NMT is a text-only neural machine translation model. Calixto et al. (2017) is a double-attention model over the source language and the image. Calixto and Liu (2017) is a multimodal translation model that conditions the decoder on a semantic image vector extracted from the VGG-19 CNN. Hitschler et al. (2016) uses visual features in a target-side retrieval model for translation. Toyama et al. (2016) is most comparable to our approach: it is a multimodal variational NMT model that infers latent variables to represent the source language semantics from the image and linguistic data. Table 2 shows the results of this experiment. We can see that the combination of the attention-based translation model and the image prediction model is a 1.8 Meteor point improvement over the NMT baseline, but it is 1.1 Meteor points worse than the strong Moses baseline. Our approach is competitive with previous approaches that use visual features as inputs to the decoder and with the target-side reranking model. It is also competitive with Toyama et al. (2016), which also only uses images for training. These results confirm that our multitasking approach uses the image prediction task to improve the encoder of the translation model.

Table 4: Translation results with out-of-domain parallel text and described images. We find further improvements when we multitask with the News Commentary (NC) and COCO datasets.
External described image data
Recall from Section 2 that we are interested in scenarios where x, y, and v are drawn from different sources. We now experiment with separating the translation data from the described image data, using $D_{image}$: the MS COCO dataset of 83K described images (see Footnote 2), and $D_{text}$: the Multi30K parallel text. Table 3 shows the results of this experiment. We find that there is no significant difference between training the IMAGINET decoder on in-domain (Multi30K) or out-of-domain data (COCO). This result confirms that we can separate the parallel text from the described images.
External parallel text data
We now experiment with training our model on a combination of the Multi30K and the News Commentary English-German data. In these experiments, we concatenate the Multi30K and News Commentary datasets into a single $D_{text}$ training dataset, similar to Freitag and Al-Onaizan (2016). We compare our model against Calixto et al. (2017), who pre-train their model on the WMT'15 English-German parallel text and back-translate (Sennrich et al., 2016) additional sentences from the bilingual independent descriptions in the Multi30K dataset (Footnote 2). Table 4 presents the results. The text-only NMT model using sub-words is 1.2 Meteor points lower than decompounding the German text. Nevertheless, the model trained over a concatenation of the parallel texts is a 2.7 Meteor point improvement over this baseline (+ NC) and matches the performance of our multitasking model that uses only in-domain data (Section 5.2). We do not see an additive improvement for the multitasking model with the concatenated parallel text and the in-domain data (+ Imagination) using a training objective interpolation of w = 0.89 (the ratio of the training dataset sizes). This may be because we are essentially learning a translation model and the updates from the IMAGINET decoder are forgotten. Therefore, we experiment with multitasking the concatenated parallel text and the COCO dataset (w = 0.5). We find that balancing the datasets improves over the concatenated text model by 0.4 Meteor (+ Imagination (COCO)). Our multitasking approach improves upon Calixto et al. (2017) by 0.3 Meteor points. Our model can be trained in 48 hours using 240K parallel sentences and 414K described images from out-of-domain datasets. Furthermore, recall that our model does not use images as an input for translating unseen data, which results in 6.2% fewer parameters compared to using the 2048D Inception-V3 visual features to initialise the hidden state of the decoder.
Ensemble results

Table 5 presents the results of ensembling different randomly initialised models. We achieve a state-of-the-art result of 57.6 Meteor for a model trained on only in-domain data. The improvements are more pronounced for the models trained using sub-words and out-of-domain data. An ensemble of baselines trained on sub-words is initially worse than an ensemble trained on Zmorge decompounded words. However, we always see an improvement from ensembling models trained on in- and out-of-domain data. Our best ensemble is trained on the Multi30K parallel text, the News Commentary parallel text, and the COCO descriptions to set a new state-of-the-art result of 59.3 Meteor.
Multi30K 2017 results
We also evaluate our approach against 16 submissions to the WMT Shared Task on Multimodal Translation and Multilingual Image Description (Elliott et al., 2017). This shared task features a new evaluation dataset, Multi30K Test 2017 (Elliott et al., 2017), which contains 1,000 new evaluation images. The shared task submissions are evaluated with Meteor and human direct assessment (Graham et al., 2017). We submitted two systems, based on whether they used only the Multi30K dataset (constrained) or additional external resources (unconstrained). Our constrained submission is an ensemble of three Imagination models trained over only the Multi30K training data; it achieves a Meteor score of 51.2 and a joint 3rd place ranking according to human assessment. Our unconstrained submission is an ensemble of three Imagination models trained with the Multi30K, News Commentary, and MS COCO datasets; it achieves a Meteor score of 53.5 and 2nd place in the human assessment.

Table 6: Examples where our model improves or worsens the translation compared to the NMT baseline. Top: NMT translates the wrong body part; both models skip "pipe". Middle: NMT incorrectly translates the verb and misses several nouns. Bottom: Our model incorrectly translates the preposition.

Source: two children on their stomachs lay on the ground under a pipe
NMT: zwei kinder auf ihren gesichtern liegen unter dem boden auf dem boden
Ours: zwei kinder liegen bäuchlings auf dem boden unter einer schaukel

Source: small dog in costume stands on hind legs to reach dangling flowers
NMT: ein kleiner hund steht auf dem hinterbeinen und läuft , nach links von blumen zu sehen
Ours: ein kleiner hund in einem kostüm steht auf den hinterbeinen , um die blumen zu erreichen

Source: a bird flies across the water
NMT: ein vogel fliegt über das wasser
Ours: ein vogel fliegt durch das wasser
Qualitative examples

Table 6 shows examples of where the multitasking model improves or worsens translation performance compared to the baseline model (we used MT-ComparEval (Klejch et al., 2015) for this comparison). The first example shows that the baseline model makes a significant error in translating the pose of the children, translating "on their stomachs" as "on their faces". The middle example demonstrates that the baseline model translates the dog as walking ("läuft") and then makes grammatical and sense errors after the clause marker. Both models neglect to translate the word "dangling", which is a low-frequency word in the training data. There are instances where the baseline produces better translations than the multitask model: in the bottom example, our model translates a bird flying through the water ("durch") instead of "over" the water.
6 Discussion

6.1 Does the model learn grounded representations?
A natural question to ask is whether the multitask model is actually learning representations that are relevant for the images. We answer this question by evaluating the Imaginet decoder in an image-sentence ranking task. Here the input is a source language sentence, from which we predict its image vector $\hat{v}$. The predicted vector $\hat{v}$ can be compared against the true image vectors v in the evaluation data using the cosine distance to produce a ranked order of the images. Our model returns a median rank of 11.0 for the true image compared to the predicted image vector. Figure 2 shows examples of the nearest neighbours of the images predicted by our multitask model. We can see that the combination of the multitask source language representations and the IMAGINET decoder leads to the prediction of relevant images. This confirms that the shared encoder is indeed learning visually grounded representations.
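A sketch of this median-rank evaluation under cosine distance follows; the random vectors stand in for real predicted and true image vectors.

```python
# Sketch: median rank of the true image under cosine distance.
# predicted: (S, D) image vectors predicted from sentences;
# images:    (S, D) true image vectors, aligned by index.
import numpy as np

def median_rank(predicted, images):
    p = predicted / np.linalg.norm(predicted, axis=1, keepdims=True)
    im = images / np.linalg.norm(images, axis=1, keepdims=True)
    dist = 1.0 - p @ im.T                        # (S, S) cosine distances
    order = np.argsort(dist, axis=1)             # nearest images first
    ranks = (order == np.arange(len(p))[:, None]).argmax(axis=1) + 1
    return float(np.median(ranks))

rng = np.random.default_rng(0)
print(median_rank(rng.normal(size=(100, 2048)),
                  rng.normal(size=(100, 2048))))
```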
6.2 The effect of visual feature vectors
We now study the effect of varying the Convolutional Neural Network used to extract the visual features used in the Imaginet decoder. It has previously been shown that the choice of visual features can affect the performance of vision and language models (Jabri et al., 2016; Kiela et al., 2016). We compare the effect of training the IMAGINET decoder to predict different types of image features, namely: 4096D features extracted from the 'fc7' layer of the VGG-19 model (Simonyan and Zisserman, 2015), 2048D features extracted from the 'pool5/7x7 s1' layer of Inception-V3 (Szegedy et al., 2015), and 2048D features extracted from the 'avg pool' layer of ResNet-50 (He et al., 2016). Table 7 shows the results of this experiment. There is a clear difference between predicting the 2048D vectors (Inception-V3 and ResNet-50) compared to the 4096D vector from VGG-19. This difference is reflected in both the translation Meteor score and the median rank of the images in the validation dataset. This is likely because it is easier to learn the parameters of the image prediction model that has fewer parameters (8.192 million for VGG-19 vs. 4.096 million for Inception-V3 and ResNet-50). However, it is not clear why there is such a pronounced difference between the Inception-V3 and ResNet-50 models (see Footnote 4).
Figure 2: We can interpret the IMAGINET Decoder by visualising the predictions made by our model. (a) Nearest neighbours for "a native woman is working on a craft project." (b) Nearest neighbours for "there is a cafe on the street corner with an oval painting on the side of the building."
Table 7: The type of visual features predicted by the IMAGINET Decoder has a strong impact on the multitask model performance.

Features       Meteor       Median Rank
Inception-V3   56.0 ± 0.1   11.0 ± 0.0
ResNet-50      54.7 ± 0.4   11.7 ± 0.5
VGG-19         53.6 ± 1.8   13.0 ± 0.0
Related work
Initial work on multimodal translation used semantic or spatially-preserving image features as inputs to a translation model. Semantic image features are typically extracted from the final layer of a pre-trained object recognition CNN, e.g. 'pool5/7x7 s1' in GoogLeNet (Szegedy et al., 2015). This type of vector has been used as input to the encoder (Elliott et al., 2015; Huang et al., 2016), the decoder (Libovický et al., 2016), or as features in a phrase-based translation model (Shah et al., 2016; Hitschler et al., 2016). Spatially-preserving image features are extracted from deeper inside a CNN, where the position of a feature is related to its position in the image. These features have been used in "double-attention models", which calculate independent context vectors for the source language and the convolutional image features (Calixto et al., 2016; Caglayan et al., 2016). We use an attention-based translation model, but our multitask model does not use images for translation.
More related to our work is an extension of Variational Neural Machine Translation to infer latent variables to explicitly model the semantics of source sentences from visual and linguistic information (Toyama et al., 2016). They report improvements on the Multi30K data set but their model needs additional parameters in the "neural inferrer" modules. In our model, the grounded semantics are represented implicitly in the shared encoder. They assume Source-Target-Image training data, whereas our approach achieves equally good results if we train on separate Source-Image and Source-Target datasets. Saha et al. (2016) study cross-lingual image description where the task is to generate a sentence in language $L_1$ given the image, using only Image-$L_2$ and $L_1$-$L_2$ training corpora. They propose a Correlational Encoder-Decoder to model the Image-$L_2$ and $L_1$-$L_2$ data, which learns correlated representations for paired Image-$L_2$ data and decodes $L_1$ from the joint representation. Similar to our work, the encoder is trained by minimizing two loss functions: the Image-$L_2$ correlation loss, and the $L_1$ decoding cross-entropy loss. Nakayama and Nishida (2017) consider a zero-resource problem, where the task is to translate from $L_1$ to $L_2$ with only Image-$L_1$ and Image-$L_2$ corpora. Their model embeds the image, $L_1$, and $L_2$ in a joint multimodal space learned by minimizing a multi-task ranking loss between both pairs of examples. In this paper, we focus on enriching source language representations with visual information instead of zero-resource learning.
Multitask Learning improves the generalisability of a model by requiring it to be useful for more than one task (Caruana, 1997). This approach has recently been used to improve the performance of sentence compression using eye gaze as an auxiliary task (Klerke et al., 2016), and to improve shallow parsing accuracy through the auxiliary task of predicting keystrokes in an out-of-domain corpus (Plank, 2016). More recently, Bingel and Søgaard (2017) analysed the beneficial relationships between primary and auxiliary sequential prediction tasks. In the translation literature, multitask learning has been used to learn a one-to-many languages translation model (Dong et al., 2015), a multi-lingual translation model with a single attention mechanism shared across multiple languages (Firat et al., 2016), and in multitask sequence-to-sequence learning without an attention-based decoder (Luong et al., 2016). We explore the benefits of grounded learning in the specific case of multimodal translation. We combine sequence prediction with continuous (image) vector prediction, compared to previous work which multitasks different sequence prediction tasks.
Visual representation prediction has been studied using static images or videos. Lin and Parikh (2015) use a conditional random field to imagine the composition of a clip-art scene for visual paraphrasing and fill-in-the-blank tasks. Chrupała et al. (2015) predict the image vector associated with a sentence using an L2 loss; they found this improves multi-modal word similarity compared to text-only baselines. Gelderloos and Chrupała (2016) predict the image vector associated with a sequence of phonemes using a max-margin loss, similar to our image prediction objective. Collell et al. (2017) learn to predict the visual feature vector associated with a word for word similarity and relatedness tasks. As a video reconstruction problem, Srivastava et al. (2015) propose an LSTM Autoencoder to predict video frames as a reconstruction task or as a future prediction task. Pasunuru and Bansal (2017) propose a multitask model for video description that combines unsupervised video reconstruction, lexical entailment, and video description. They find improvements from using out-of-domain resources for entailment and video prediction, similar to the improvements we find from using out-of-domain parallel text and described images.
Conclusion
We decompose multimodal translation into two sub-problems: learning to translate and learning visually grounded representations. In a multitask learning framework, we show how these sub-problems can be addressed by sharing an encoder between a translation model and an image prediction model (Footnote 5). Our approach achieves state-of-the-art results on the Multi30K dataset without using images for translation. We show that training on separate parallel text and described image datasets does not hurt performance, encouraging future research on multitasking with diverse sources of data. Furthermore, we still find improvements from image prediction when we improve our text-only baseline with the out-of-domain parallel text. Future work includes adapting our decomposition to other NLP tasks that may benefit from out-of-domain resources, such as semantic role labelling, dependency parsing, and question-answering; exploring methods for inputting the (predicted) image into the translation model; experimenting with different image prediction architectures; multitasking different translation languages into a single shared encoder; and multitasking in both the encoder and decoder(s).
Figure 1: The Imagination model learns visually grounded representations by sharing the encoder network between the Translation Decoder and image prediction in the IMAGINET Decoder.
Table 3: Translation results (Meteor and BLEU) when using out-of-domain described images. Our approach is still effective when the image prediction model is trained over the COCO dataset.
Table 5: Ensemble decoding results. Zmorge denotes models trained with decompounded German words; Sub-word denotes joint SentencePiece word splitting (see Section 4 for more details).
Footnote 2: Due to differences in the vocabularies of the respective datasets, we do not train on examples where more than 10% of the tokens are out-of-vocabulary in the Multi30K dataset.
Footnote 4: We used pre-trained CNNs (https://github.com/fchollet/deep-learning-models), which claim equal ILSVRC object recognition performance for both models: 7.8% top-5 error with a single model and single crop.
Footnote 5: Code: http://github.com/elliottd/imagination
Acknowledgments

We are grateful to the anonymous reviewers for their feedback. We thank Joost Bastings for sharing his multitasking Nematus model, Wilker Aziz for discussions about formulating the problem, Stella Frank for finding and explaining the qualitative examples to us, and Afra Alishahi, Grzegorz Chrupała, and Philip Schulz for feedback on earlier drafts of the paper. DE acknowledges the support of an Amazon Research Award, NWO Vici grant nr. 277-89-002 awarded to K. Sima'an, and a hardware grant from the NVIDIA Corporation.
References

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations.
J. Bingel and A. Søgaard. 2017. Identifying beneficial task relations for multi-task learning in deep neural networks. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, pages 164-169.
Ozan Caglayan, Loïc Barrault, and Fethi Bougares. 2016. Multimodal attention for neural machine translation. CoRR, abs/1609.03976.
Iacer Calixto, Desmond Elliott, and Stella Frank. 2016. DCU-UvA multimodal MT system report. In Proceedings of the First Conference on Machine Translation, pages 634-638.
Iacer Calixto and Qun Liu. 2017. Incorporating global visual features into attention-based neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1003-1014.
Iacer Calixto, Qun Liu, and Nick Campbell. 2017. Doubly-attentive decoder for multi-modal neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1913-1924.
Rich Caruana. 1997. Multitask learning. Machine Learning, 28(1):41-75.
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C. Lawrence Zitnick. 2015. Microsoft COCO captions: Data collection and evaluation server. CoRR, abs/1504.00325.
K. Cho, B. van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. Pages 1724-1734.
Grzegorz Chrupała, Lieke Gelderloos, and Afra Alishahi. 2017. Representations of language in a model of visually grounded speech signal. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 613-622.
Grzegorz Chrupała, Ákos Kádár, and Afra Alishahi. 2015. Learning language through pictures. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 112-118.
Guillem Collell, Teddy Zhang, and Marie-Francine Moens. 2017. Imagined visual representations as multimodal embeddings. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17), pages 4378-4384.
Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the EACL 2014 Workshop on Statistical Machine Translation.
D. Dong, H. Wu, W. He, D. Yu, and H. Wang. 2015. Multi-task learning for multiple language translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1723-1732.
Desmond Elliott, Stella Frank, Loïc Barrault, Fethi Bougares, and Lucia Specia. 2017. Findings of the second shared task on multimodal machine translation and multilingual image description. In Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers, pages 215-233, Copenhagen, Denmark. Association for Computational Linguistics.
Desmond Elliott, Stella Frank, and Eva Hasler. 2015. Multi-language image description with neural sequence models. CoRR, abs/1510.04709.
Desmond Elliott, Stella Frank, Khalil Sima'an, and Lucia Specia. 2016. Multi30K: Multilingual English-German image descriptions. In Proceedings of the 5th Workshop on Vision and Language.
O. Firat, K. Cho, and Y. Bengio. 2016. Multi-way, multilingual neural machine translation with a shared attention mechanism. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 866-875.
Markus Freitag and Yaser Al-Onaizan. 2016. Fast domain adaptation for neural machine translation. CoRR, abs/1612.06897.
Yarin Gal and Zoubin Ghahramani. 2016. A theoretically grounded application of dropout in recurrent neural networks. In Advances in Neural Information Processing Systems 29, pages 1019-1027.
Lieke Gelderloos and Grzegorz Chrupała. 2016. From phonemes to images: levels of representation in a recurrent neural model of visually-grounded language learning. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics, pages 1309-1319.
Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2017. Can machine translation systems be evaluated by the crowd alone. Natural Language Engineering, 23(1):3-30.
Deep residual learning for image recognition. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770-778.
Multimodal Pivots for Image Caption Translation. Julian Hitschler, Shigehiko Schamoni, Stefan Riezler, Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. the 54th Annual Meeting of the Association for Computational LinguisticsJulian Hitschler, Shigehiko Schamoni, and Stefan Rie- zler. 2016. Multimodal Pivots for Image Caption Translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics, pages 2399-2409.
Framing image description as a ranking task: Data, models and evaluation metrics. Micah Hodosh, Peter Young, Julia Hockenmaier, Journal of Artificial Intelligence Research. 47Micah Hodosh, Peter Young, and Julia Hockenmaier. 2013. Framing image description as a ranking task: Data, models and evaluation metrics. Journal of Ar- tificial Intelligence Research, 47:853-899.
Attention-based multimodal neural machine translation. Po-Yao Huang, Frederick Liu, Sz-Rung Shiang, Jean Oh, Chris Dyer, Proceedings of the First Conference on Machine Translation. the First Conference on Machine TranslationPo-Yao Huang, Frederick Liu, Sz-Rung Shiang, Jean Oh, and Chris Dyer. 2016. Attention-based multi- modal neural machine translation. In Proceedings of the First Conference on Machine Translation, pages 639-645.
Revisiting visual question answering baselines. Allan Jabri, Armand Joulin, Laurens Van Der Maaten, European conference on computer vision. Allan Jabri, Armand Joulin, and Laurens van der Maaten. 2016. Revisiting visual question answer- ing baselines. In European conference on computer vision, pages 727-739.
Representation of linguistic form and function in recurrent neural networks. Akos Kádár, Grzegorz Chrupała, Afra Alishahi, arXiv:1602.08952arXiv preprintAkos Kádár, Grzegorz Chrupała, and Afra Alishahi. 2016. Representation of linguistic form and func- tion in recurrent neural networks. arXiv preprint arXiv:1602.08952.
Comparing Data Sources and Architectures for Deep Visual Representation Learning in Semantics. Douwe Kiela, Anita L Verő, Stephen Clark, Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP-16). the Conference on Empirical Methods in Natural Language Processing (EMNLP-16)Douwe Kiela, Anita L. Verő, and Stephen Clark. 2016. Comparing Data Sources and Architectures for Deep Visual Representation Learning in Semantics. In Proceedings of the Conference on Empirical Meth- ods in Natural Language Processing (EMNLP-16), pages 447-456.
Adam: A method for stochastic optimization. P Diederik, Jimmy Kingma, Ba, International Conference on Learning Representations. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. International Conference on Learning Representations.
Mt-compareval: Graphical evaluation interface for machine translation development. Ondřej Klejch, Eleftherios Avramidis, The Prague Bulletin of Mathematical Linguistics. 1041Aljoscha Burchardt, and Martin PopelOndřej Klejch, Eleftherios Avramidis, Aljoscha Bur- chardt, and Martin Popel. 2015. Mt-compareval: Graphical evaluation interface for machine transla- tion development. The Prague Bulletin of Mathe- matical Linguistics, 104(1):63-74.
Improving sentence compression by learning to predict gaze. Sigrid Klerke, Yoav Goldberg, Anders Søgaard, Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesSigrid Klerke, Yoav Goldberg, and Anders Søgaard. 2016. Improving sentence compression by learning to predict gaze. In Proceedings of the 2016 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, pages 1528-1533.
Moses: Open source toolkit for statistical machine translation. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Proceedings of the 45th Annual meeting of Association for Computational Linguistics. the 45th Annual meeting of Association for Computational LinguisticsPhilipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Pro- ceedings of the 45th Annual meeting of Association for Computational Linguistics, pages 177-180.
Cuni system for wmt16 automatic post-editing and multimodal translation tasks. Jindřich Libovický, Jindřich Helcl, Marek Tlustý, Ondřej Bojar, Pavel Pecina, Proceedings of the First Conference on Machine Translation. the First Conference on Machine TranslationJindřich Libovický, Jindřich Helcl, Marek Tlustý, Ondřej Bojar, and Pavel Pecina. 2016. Cuni system for wmt16 automatic post-editing and multimodal translation tasks. In Proceedings of the First Con- ference on Machine Translation, pages 646-654.
Don't just listen, use your imagination: Leveraging visual common sense for non-visual tasks. Xiao Lin, Devi Parikh, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionXiao Lin and Devi Parikh. 2015. Don't just listen, use your imagination: Leveraging visual common sense for non-visual tasks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recog- nition, pages 2984-2993.
Multi-task sequence to sequence learning. Minh-Thang Luong, Quoc V Le, Ilya Sutskever, Oriol Vinyals, Lukasz Kaiser, ICLR. Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2016. Multi-task se- quence to sequence learning. In ICLR.
Zeroresource machine translation by multimodal encoder-decoder network with multimedia pivot. Machine Translation. Hideki Nakayama, Noriki Nishida, 31Hideki Nakayama and Noriki Nishida. 2017. Zero- resource machine translation by multimodal encoder-decoder network with multimedia pivot. Machine Translation, 31(1-2):49-64.
Bleu: A method for automatic evaluation of machine translation. Kishore Papineni, Salim Roukos, Todd Ward, Wei-Jing Zhu, Proceedings of the 40th Annual Meeting on Association for Computational Linguistics. the 40th Annual Meeting on Association for Computational LinguisticsKishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: A method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computa- tional Linguistics, pages 311-318.
Multi-Task Video Captioning with Video and Entailment Generation. R Pasunuru, M Bansal, Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. the 55th Annual Meeting of the Association for Computational LinguisticsLong Papers1R. Pasunuru and M. Bansal. 2017. Multi-Task Video Captioning with Video and Entailment Generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1273-1283.
Keystroke dynamics as signal for shallow syntactic parsing. Barbara Plank, 26th International Conference on Computational Linguistics. Barbara Plank. 2016. Keystroke dynamics as sig- nal for shallow syntactic parsing. In 26th Inter- national Conference on Computational Linguistics, pages 609-619.
A correlational encoder decoder architecture for pivot based sequence generation. Amrita Saha, M Mitesh, Sarath Khapra, Janarthanan Chandar, Kyunghyun Rajendran, Cho, 26th International Conference on Computational Linguistics: Technical Papers. Amrita Saha, Mitesh M. Khapra, Sarath Chandar, Ja- narthanan Rajendran, and Kyunghyun Cho. 2016. A correlational encoder decoder architecture for pivot based sequence generation. In 26th International Conference on Computational Linguistics: Techni- cal Papers, pages 109-118.
Japanese and korean voice search. Mike Schuster, Kaisuke Nakajima, 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Mike Schuster and Kaisuke Nakajima. 2012. Japanese and korean voice search. In 2012 IEEE Interna- tional Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5149-5152.
Bidirectional recurrent neural networks. Mike Schuster, K Kuldip, Paliwal, IEEE Transactions on Signal Processing. 4511Mike Schuster and Kuldip K Paliwal. 1997. Bidirec- tional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673-2681.
Nematus: a Toolkit for Neural Machine Translation. R Sennrich, O Firat, K Cho, A Birch, B Haddow, J Hitschler, M Junczys-Dowmunt, S Läubli, A Valerio Miceli Barone, J Mokry, M Nȃdejde, R. Sennrich, O. Firat, K. Cho, A. Birch, B. Haddow, J. Hitschler, M. Junczys-Dowmunt, S. Läubli, A. Va- lerio Miceli Barone, J. Mokry, and M. Nȃdejde. 2017. Nematus: a Toolkit for Neural Machine Translation. pages 65-68.
Improving neural machine translation models with monolingual data. Rico Sennrich, Barry Haddow, Alexandra Birch, Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. the 54th Annual Meeting of the Association for Computational LinguisticsRico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation mod- els with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics, pages 86-96.
Zmorge: A german morphological lexicon extracted from wiktionary. Rico Sennrich, Beat Kunz, Language Resources and Evaluation Conference. Rico Sennrich and Beat Kunz. 2014. Zmorge: A german morphological lexicon extracted from wik- tionary. In Language Resources and Evaluation Conference, pages 1063-1067.
Shef-multimodal: Grounding machine translation on images. Kashif Shah, Josiah Wang, Lucia Specia, Proceedings of the First Conference on Machine Translation. the First Conference on Machine TranslationKashif Shah, Josiah Wang, and Lucia Specia. 2016. Shef-multimodal: Grounding machine translation on images. In Proceedings of the First Conference on Machine Translation, pages 660-665.
Very deep convolutional networks for large-scale image recognition. Karen Simonyan, Andrew Zisserman, Proceedings of the International Conference on Learning Representations. the International Conference on Learning RepresentationsKaren Simonyan and Andrew Zisserman. 2015. Very deep convolutional networks for large-scale image recognition. In Proceedings of the International Conference on Learning Representations.
A shared task on multimodal machine translation and crosslingual image description. Lucia Specia, Stella Frank, Khalil Sima'an, Desmond Elliott, Proceedings of the First Conference on Machine Translation. the First Conference on Machine TranslationLucia Specia, Stella Frank, Khalil Sima'an, and Desmond Elliott. 2016. A shared task on multi- modal machine translation and crosslingual image description. In Proceedings of the First Conference on Machine Translation, pages 543-553.
Unsupervised learning of video representations using LSTMs. Nitish Srivastava, Elman Mansimov, Ruslan Salakhudinov, International Conference on Machine Learning. Nitish Srivastava, Elman Mansimov, and Ruslan Salakhudinov. 2015. Unsupervised learning of video representations using LSTMs. In Interna- tional Conference on Machine Learning, pages 843- 852.
Rethinking the inception architecture for computer vision. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, Zbigniew Wojna, abs/1512.00567CoRRChristian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. 2015. Re- thinking the inception architecture for computer vi- sion. CoRR, abs/1512.00567.
Parallel data, tools and interfaces in opus. Jörg Tiedemann, Eight International Conference on Language Resources and Evaluation (LREC'12). Jörg Tiedemann. 2012. Parallel data, tools and inter- faces in opus. In Eight International Conference on Language Resources and Evaluation (LREC'12).
Neural machine translation with latent semantic of image and text. Joji Toyama, Masanori Misono, Masahiro Suzuki, Kotaro Nakayama, Yutaka Matsuo, abs/1611.08459CoRRJoji Toyama, Masanori Misono, Masahiro Suzuki, Ko- taro Nakayama, and Yutaka Matsuo. 2016. Neural machine translation with latent semantic of image and text. CoRR, abs/1611.08459.
Order-embeddings of images and language. Ivan Vendrov, Ryan Kiros, Sanja Fidler, Raquel Urtasun, ICLRIvan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. 2016. Order-embeddings of images and language. ICLR.
| [
"http://github.com/elliottd/imagination"
]
|
[
"MonoGRNet: A General Framework for Monocular 3D Object Detection",
"MonoGRNet: A General Framework for Monocular 3D Object Detection"
]
| [
"Zengyi Qin ",
"Jinglu Wang ",
"Yan Lu "
]
| []
| []
| Detecting and localizing objects in the real 3D space, which plays a crucial role in scene understanding, is particularly challenging given only a monocular image due to the geometric information loss during imagery projection. We propose MonoGRNet for the amodal 3D object detection from a monocular image via geometric reasoning in both the observed 2D projection and the unobserved depth dimension. MonoGRNet decomposes the monocular 3D object detection task into four sub-tasks including 2D object detection, instance-level depth estimation, projected 3D center estimation and local corner regression. The task decomposition significantly facilitates the monocular 3D object detection, allowing the target 3D bounding boxes to be efficiently predicted in a single forward pass, without using object proposals, post-processing or the computationally expensive pixel-level depth estimation utilized by previous methods. In addition, MonoGRNet flexibly adapts to both fully and weakly supervised learning, which improves the feasibility of our framework in diverse settings. Experiments are conducted on KITTI, Cityscapes and MS COCO datasets. Results demonstrate the promising performance of our framework in various scenarios. | 10.1109/tpami.2021.3074363 | [
"https://arxiv.org/pdf/2104.08797v1.pdf"
]
| 233,296,137 | 2104.08797 | b291ec535d3f1365bedbeb62183575a967c438a6 |
MonoGRNet: A General Framework for Monocular 3D Object Detection
Zengyi Qin
Jinglu Wang
Yan Lu
MonoGRNet: A General Framework for Monocular 3D Object Detection
Index Terms: 3D object detection, monocular, weakly supervised learning
Detecting and localizing objects in the real 3D space, which plays a crucial role in scene understanding, is particularly challenging given only a monocular image due to the geometric information loss during imagery projection. We propose MonoGRNet for the amodal 3D object detection from a monocular image via geometric reasoning in both the observed 2D projection and the unobserved depth dimension. MonoGRNet decomposes the monocular 3D object detection task into four sub-tasks including 2D object detection, instance-level depth estimation, projected 3D center estimation and local corner regression. The task decomposition significantly facilitates the monocular 3D object detection, allowing the target 3D bounding boxes to be efficiently predicted in a single forward pass, without using object proposals, post-processing or the computationally expensive pixel-level depth estimation utilized by previous methods. In addition, MonoGRNet flexibly adapts to both fully and weakly supervised learning, which improves the feasibility of our framework in diverse settings. Experiments are conducted on KITTI, Cityscapes and MS COCO datasets. Results demonstrate the promising performance of our framework in various scenarios.
INTRODUCTION
A crucial task in scene understanding is 3D object detection, which aims to predict the amodal 3D bounding boxes of objects from input sensory data such as LiDAR point clouds and images. Compared to LiDAR-based [1], [2], [3] and multi-view-based [4], [5], [6] 3D object detectors, monocular image-based methods [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17] only take a single-view RGB image as input in inference, which means they have lower requirements on sensors and are less expensive in real-world implementation. If they achieve satisfactory detection performance, they can become an important module in mobile robot perception. It would be even more attractive if monocular 3D detection could be learned without 3D labels, which require intensive labor. However, monocular 3D object detection is an ill-posed problem due to the depth information loss in 2D image planes, let alone the challenges when 3D annotations are not offered.
To accurately detect and localize objects in 3D using a monocular image, recent approaches [10], [11], [12], [18] have been proposed that first predict pixel-level depth and convert the monocular image to a 3D point cloud representation, then apply well-developed LiDAR- or multi-view-based 3D object detectors to the point cloud. These methods with hybrid structures can achieve impressive detection accuracy, but they also have inevitable limitations. They introduce additional expensive computational cost for predicting high-resolution depth maps from images, making them hardly feasible on mobile platforms where energy and computing resources are limited. Furthermore, pixel-level depth estimation does not aim at predicting the depth of objects of the target classes but of all pixels in the whole image. The uncertainty in irrelevant or unreliable pixels could bring precision loss into the final 3D object detection. Apart from the pixel-level depth based approaches, there is another stream of methods [9], [14], [19], [20], [21] that first predict sparse 2D representations such as keypoints and 2D bounding boxes, then use optimization to fit a 3D bounding box, which can be very efficient. However, when the object is truncated by the image boundaries, the sparse 2D representations are partly missing, which imposes significant challenges on fitting the 3D bounding box. In light of this, we will not rely on post-processing in our object detection framework. In addition, we hope that our framework can be free of object proposals to reduce the computational burden and improve simplicity and generality.
In this paper, we present MonoGRNet, a general framework for learning Monocular 3D object detection by Geometric Reasoning. Taking a monocular RGB image as input, MonoGRNet predicts the 3D bounding boxes of objects in a single forward pass and can be implemented in a straightforward way. The proposed framework decouples the 3D object detection task into four progressive sub-tasks that are easy to solve using a monocular image. The four sub-tasks are 2D object detection, instance-level depth estimation, projected 3D center estimation and local corner regression, which are solved in parallel in the proposed unified network shown in Figure 1. The network starts from 2D perception and then extends the geometric reasoning into 3D space. The instance-level depth estimation is crucial for bridging the 2D-to-3D gap and differs from the computationally expensive pixel-level depth estimation utilized by many previous methods. Instance-level depth is defined as the depth of the 3D bounding box center, which can be interpreted as the depth of an object. Using the predicted instance-level depth, we back-project the estimated projected 3D center from the 2D image plane to 3D space to obtain the 3D center location of the object. At the same time, the local corner regression provides the coordinates of the eight corners of the 3D bounding box with respect to its 3D center.

Fig. 1: Network structure. Our monocular 3D object detector consists of four sub-networks for 2D detection (brown), instance-level depth estimation (green), projected 3D center estimation (blue) and local corner regression (yellow). The outputs of the four sub-networks are combined to produce the 3D bounding boxes, which are refined to give the final outputs. Best viewed in color. Note that this overview figure only shows the inference stage; training is achieved via either fully (see Section 4.3) or weakly (see Section 4.4) supervised learning.
The proposed framework is general across various learning schemes since the sub-tasks are loosely decoupled. It can be extended from fully supervised learning to a weakly supervised learning scenario, where ground truth 3D bounding boxes are not available in training but ground truth 2D bounding boxes are available instead. Labeling 2D bounding boxes saves much labor compared to labeling 3D bounding boxes, according to the investigation in [22]. This helps to improve the feasibility and efficiency of applying monocular 3D object detectors to various scenarios. In addition, widely used 3D shape datasets, such as PASCAL3D+ [23] and ShapeNet [24], and their corresponding images make it easy to learn the view angles of centered objects in a local extent. We present a geometry-guided method to learn the 3D location from labeled 2D bounding boxes and unlabeled frames, as well as an object-centric transfer learning method to learn the local corner regression with easily accessible 3D shape datasets. As we will see, the task decomposition that we propose is a crucial enabler of the flexible extension to weakly supervised learning of monocular 3D object detection. The network structure and loss functions mostly remain unchanged in such an extension.
Our experiments are conducted on three public datasets: KITTI [25], Cityscapes [26] and MS COCO [27]. Note that Cityscapes [26] and MS COCO [27] do not provide ground truth 3D bounding boxes. Most existing 3D object detectors require full supervision, so these two datasets have not gained the attention of the 3D object detection community. Our quantitative results on KITTI [25] and qualitative results on all three datasets demonstrate the promising performance of the proposed framework and its strong generalization capability to diverse training and testing scenarios. In summary, our contributions are:
• We propose to decompose the monocular 3D object detection task into four sub-tasks including 2D object detection, instance-level depth estimation, projected 3D center estimation and local corner regression. Such a formulation frees the detector from object proposals and the computationally expensive pixel-level dense depth estimation used by previous methods.
• We propose a unified network to efficiently solve the four sub-tasks in parallel, which can be trained in an end-to-end fashion. We also present a method to train the network using ground truth 2D bounding boxes and easily accessible additional data when the ground truth 3D bounding boxes are unavailable.
• We conduct comprehensive experiments on the KITTI [25], Cityscapes [26] and MS COCO [27] datasets, demonstrating the advantage of the proposed framework in diverse scenarios. We also conduct ablation studies to examine the effectiveness of the crucial components in our framework.
A preliminary version of our MonoGRNet has been published [7] and gained attention. In this manuscript, we make the following improvements. 1) We simplify the structure of MonoGRNet to emphasize the most important concepts that we propose, while at the same time improving the quantitative performance. 2) We extend the framework from fully supervised to weakly supervised learning, and present an effective method to train the network using ground truth 2D bounding boxes and easily accessible additional data when the ground truth 3D bounding boxes are not available during training. 3) We provide extensive quantitative and qualitative experimental results to show the performance of our framework in different settings and examine the effectiveness of the key modules in ablation studies.
RELATED WORK
2D Object Detection
2D object detection deep networks have been extensively studied. Region-proposal-based methods [28], [29] generate impressive results but perform slowly due to complex multi-stage pipelines. Another group of methods [30], [31], [32], [33] focusing on fast training and inference applies single-stage detection. Multi-net [34] introduces an encoder-decoder architecture for real-time semantic reasoning. Its detection decoder combines the fast regression in Yolo [30] with the size-adjusting RoiAlign of Mask-RCNN [35], achieving a satisfying speed-accuracy trade-off. All these methods predict 2D bounding boxes of objects, while no 3D geometric features are considered.
3D object detection
Existing methods range from single-view RGB [8], [16], [36], [37], multi-view RGB [4], [38], to RGB-D [5], [39], [40]. When geometric information in the depth dimension is provided, the 3D detection task is much easier. MV3D [4] generates 3D object proposals from bird's eye view maps given LiDAR point clouds, and then fuses features in RGB images, LiDAR front views and bird's eye views to predict 3D boxes. AVOD [6] fuses the RGB and LiDAR information in the region proposal stage to reduce missed detections. Given RGB-D data, F-PointNet [5] extrudes 2D region proposals to a 3D viewing frustum and then segments out the point cloud of the object of interest. Recently proposed state-of-the-art 3D object detectors include STD [41], Part-A^2 Net [42] and UberATG-MMF [43].
The approaches most related to ours use a monocular RGB image. Information loss in the depth dimension significantly increases the task's difficulty. The performance of such state-of-the-art methods still lags behind RGB-D and multi-view methods by large margins. Mono3D [44] leverages segmentation masks and contextual information to generate 3D object proposals. Mono3D++ [17] exploits pseudo 3D keypoints and shape priors to localize objects in 3D by minimizing a matching loss. [6] combines monocular 3D detection with tracking in autonomous scenarios. MonoGRNet [7] proposes instance-level depth estimation to extend 2D perception to 3D reasoning without redundant intermediate representations. MonoPSR [15] leverages shape reconstruction from monocular images to facilitate 3D detection. Another line of research [10], [11] uses monocular images to generate pseudo point clouds, which are passed to existing point-cloud-based 3D detectors. Nevertheless, since extensive 3D labels are difficult to obtain in practice, these fully supervised methods have limitations in real-world applications.
Monocular Depth Estimation
Pixel-level depth estimation networks [11], [45], [46] have been proposed. However, when regressing pixel-level depth, the loss function takes into account every pixel in the depth map and treats them all without significant difference. In common practice, the loss values from all pixels are summed up as a whole to be optimized. Nevertheless, the pixels lying on an object are typically far fewer than those lying in the background, so a low average error does not indicate that the depth values are accurate at the pixels belonging to an object. In addition, dense depths are often estimated from disparity maps that may produce large errors in far regions, which can drastically degrade the 3D localization performance. Different from pixel-level depth estimation methods, we propose an instance-level depth estimation (IDE) network that predicts the 3D center depth of objects. IDE does not require densely labeled pixel-level depth for training and avoids the computationally expensive pixel-level monocular depth estimation in testing.
Instance-level depth has been studied in [47], [48], [49]. In [47], the depths of objects are regressed via a single fully convolutional neural network. In [48], the instance depth ordering is jointly learned with instance segmentation. In [49], the authors proposed a self-supervised method to learn instance depth from video sequences. To the best of our knowledge, our MonoGRNet is the first to introduce instance-level depth estimation in the context of monocular 3D object detection. The instance-level depth enables the back-projection from 2D to 3D to obtain the 3D center locations of targeted objects.
Weakly supervised object detection
Most existing studies focus on 2D object detection, while weakly supervised 3D detection has not been extensively explored. [50] starts by inferring the geometric structure implied in low-level and middle-level features to describe the objects of interest, then learns from high-level features by iterative training to detect objects in 2D. [51] trains the detection network by iteratively selecting a subset of bounding boxes that are the most reliable. ACoL [52] utilizes the output heatmaps for complementary learning. [53] trains the weakly supervised localization network first on easy examples and then on hard ones. [54] localizes objects by clustering. Different from these previous studies on 2D detection, we aim at bridging the gap between weakly supervised learning and 3D object detection.
Transfer learning on object detection
One popular usage of transfer learning in object detection is model parameter initialization, where networks trained on large-scale image recognition datasets are used as CNN backbones in object detectors [55], [56], [57], [58], [59]. Lim et al. propose to borrow examples from existing classes to learn to detect objects from unseen classes [60]. Lampert et al. propose to use high-level attributes to detect object classes, making it easier to transfer to unseen classes [61]. Tang et al. incorporate the visual and semantic similarities of objects in the transferring process. An important step in transfer learning is narrowing the gap between the source and the target dataset. Previous work has focused on domain adaptation [62], [63], [64] at the image level, while we take a different route by ignoring the background and targeting the object level, which spares us from the gap between the source and target datasets in terms of object sizes and background scenes during transfer learning.
PROBLEM STATEMENT
Given a monocular RGB image, the objective is to detect and localize objects of specific classes in the 3D space. A target object is represented by a class label and a 3D bounding box, which bounds the complete object regardless of occlusion or truncation. A 3D bounding box $B_{3D}$ is defined by a 3D center location $C = (X_c, Y_c, Z_c)$ and eight corners $O = \{O_k\}, k = 1, \ldots, 8$, relative to the center. Figure 2 shows the notations. The 3D location $C$ is defined in the camera coordinate frame and the local corners $O$ are in a local coordinate frame whose origin is $C$. It is clear that $\sum_{k=1}^{8} O_k = 0$ due to symmetry.
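To make this parameterization concrete, here is a minimal sketch of the box representation; the class name and field layout are our own illustration, not code from the paper:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Box3D:
    """3D bounding box: center C in camera coordinates plus eight
    local corners O_k expressed relative to that center."""
    center: np.ndarray         # shape (3,): (X_c, Y_c, Z_c)
    local_corners: np.ndarray  # shape (8, 3); sums to ~0 by symmetry

    def global_corners(self) -> np.ndarray:
        # corners in the camera frame are the local corners offset by C
        return self.local_corners + self.center
```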
Fully supervised learning of monocular 3D object detection means the fully annotated 3D bounding boxes B 3D are provided for all objects of interest in the dataset. For weakly supervised learning, we assume the ground truth B 3D is inaccessible throughout training, while the ground truth 2D bounding boxes B 2D are available. The B 2D provides much weaker supervision than B 3D , since the 3D information is almost lost. It should be noticed that previous work [50], [51], [52], [53], [54] on weakly supervised object detectors focused on learning from image classification labels to predict the B 2D , while we assume that we already have the labeled B 2D and focus on learning to predict B 3D . It is not impossible to learn from image-level classification labels rather than B 2D to predict B 3D , but that is beyond the scope of this paper.
METHOD
We propose MonoGRNet, a general framework for monocular 3D object detection. MonoGRNet takes a monocular image as input and outputs the 3D bounding boxes of objects in a single forward pass, and is free of the object proposal stage and the computationally expensive pixel-level depth prediction. The framework adapts to both fully supervised and weakly supervised learning. Since directly predicting 3D bounding boxes from a 2D monocular image could be difficult due to the dimension reduction, we propose to decompose the task into four progressive sub-tasks that are easier to solve using a monocular image, even when the ground truth 3D bounding boxes are not available. The subtasks are (1) 2D object detection, (2) instance-level depth estimation, (3) projected 3D center estimation and (4) local corner regression. The 3D bounding box prediction result can be directly derived by combining the output of the four sub-tasks. In this section, we first describe the four sub-tasks and the network structure, then detail the learning process in fully and weakly supervised scenarios.
Task Decomposition
The 3D bounding box B 3D is the final goal of prediction. Before predicting the B 3D of an object on the image, the network should be aware of the presence of the object. Therefore, the first task is 2D object detection, which aims to predict the 2D bounding box B 2D of the object and its class.
$B_{2D} = (w_{2D}, h_{2D}, u_b, v_b)$, where $(w_{2D}, h_{2D})$ indicates the size of $B_{2D}$ and $(u_b, v_b)$ represents the center of $B_{2D}$ on the image plane. As stated in Section 3, $B_{3D}$ is parameterized by a 3D center $C = (X_c, Y_c, Z_c)$ in camera coordinates and eight corners $O = \{O_k\}, k = 1, \ldots, 8$, defined in local coordinates.
It can be difficult to directly regress $C$ of an object from the image features because the image itself does not carry explicit depth information. Recent works [10], [11] propose to first predict the depth of each pixel of the monocular image in order to convert the image to a point cloud representation, which is fed to a point-cloud-based 3D object detector. The pixel-level depth prediction demands considerable computational resources, limiting its feasibility in diverse application scenarios such as mobile robot platforms. Moreover, the final prediction of 3D object detection is instance-level rather than pixel-level; the uncertainty in irrelevant background pixels could hamper the prediction accuracy. In light of this, we propose instance-level depth estimation as the second task, which aims to predict the depth $Z_c$ of the 3D center $C$ of the 3D bounding box. Examples of the predicted instance-level depths are illustrated in Figure 3 (c). The third task is projected 3D center estimation, which brings us closer to obtaining $C = (X_c, Y_c, Z_c)$. The projected 3D center is a 2D point $c = (u_c, v_c)$ defined as the projection of $C$ on the image. $X_c$ and $Y_c$ can be derived by:
$$X_c = (u_c - p_u) \cdot Z_c / f_u, \qquad Y_c = (v_c - p_v) \cdot Z_c / f_v \tag{1}$$
where $f_u$ and $f_v$ are the focal lengths along the X and Y axes, and $p_u$ and $p_v$ are the coordinates of the principal point. These are part of the intrinsic parameters of the camera and can be easily obtained when the camera is calibrated. Equation 1
can be interpreted as back-projecting $c$ from 2D to 3D using $Z_c$ and the camera parameters. We have demonstrated how to predict $C$ by estimating $Z_c$ and $c$. $c$ is a 2D point on the image plane, so regressing $c$ is much easier than regressing $X_c$ and $Y_c$. The fourth task is local corner regression, which aims to estimate the eight corners $O = \{O_k\}, k = 1, \ldots, 8$ of the 3D bounding box $B_{3D}$ in the local coordinate system described in Section 3 and Figure 2. To summarize, the first task gives $B_{2D}$ and the object class, the second and third tasks give $C$, and the fourth task gives $O$. $C$ and $O$ completely parameterize $B_{3D}$, which is the final output. Owing to this well-decomposed formulation, our framework can also be extended from fully supervised learning to weakly supervised learning, which will be detailed in Section 4.4.
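As a worked illustration of Equation 1, the back-projection can be written as a short function; the function name and the KITTI-like intrinsic values in the example are our own assumptions:

```python
import numpy as np

def backproject_center(u_c, v_c, Z_c, f_u, f_v, p_u, p_v):
    """Lift the projected 3D center c = (u_c, v_c) back to camera
    coordinates using the instance-level depth Z_c (Equation 1)."""
    X_c = (u_c - p_u) * Z_c / f_u
    Y_c = (v_c - p_v) * Z_c / f_v
    return np.array([X_c, Y_c, Z_c])

# Example with assumed KITTI-like intrinsics: a point 10 px to the
# right of the principal point at 20 m depth lies ~0.28 m to the right.
C = backproject_center(619.5, 187.0, 20.0, 721.5, 721.5, 609.5, 172.8)
```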
Network Structure
We design a unified end-to-end network to efficiently solve the four sub-tasks in parallel, as illustrated in Figure 1. The network takes a monocular image as input and outputs a set of $B_{3D}$ corresponding to the objects on the image. The four sub-tasks share the same backbone and differ only in the head layers, which enables feature reuse and improves inference efficiency. Compared to previous monocular 3D object detectors [10], [11], [15], [16], the proposed network does not require dense pixel-level depth prediction, instance segmentation or ground plane estimation, and is free of extensive object proposals [29].
The input image is divided into an $S_u \times S_v$ grid $G$, where a cell is indicated by $g$. The image is passed to a fully convolutional backbone network, whose output feature map is also of size $S_u \times S_v$, followed by the head layers of the sub-tasks. The head layers do not down-sample the feature maps, so the resolution remains $S_u \times S_v$. Each pixel in the feature map corresponds to a cell in the image grid and predicts the nearest object on the image. A single pixel in the head-layer feature maps, namely a grid cell, can have multiple channels to regress multiple values.
In the 2D object detection branch, each grid cell $g$ outputs the object classification probability $P^g$ and the 2D bounding box $B^g_{2D} = (w^g_{2D}, h^g_{2D}, u^g_b, v^g_b)$, indicated by the superscript $g$. We use a softmax activation for $P^g$ and no activation for $B^g_{2D}$ in the last layer. The regression target for $(w^g_{2D}, h^g_{2D})$ is itself, while the regression target for $(u^g_b, v^g_b)$ is $(\Delta^g_{u_b}, \Delta^g_{v_b}) = (u^g_b - u^g, v^g_b - v^g)$, i.e., the residuals between the central location of $B^g_{2D}$ and $g = (u^g, v^g)$. In the instance-level depth estimation branch, each grid cell $g$ regresses a $Z^g_c$. In the projected 3D center estimation branch, each grid cell predicts a $c^g = (u^g_c, v^g_c)$. The regression target is the residuals $(\Delta^g_{u_c}, \Delta^g_{v_c}) = (u^g_c - u^g, v^g_c - v^g)$. In the local corner regression branch, for each grid cell, we use RoIAlign [35] to crop the features bounded by $B^g_{2D}$ from the feature maps produced by the backbone network. The features are then passed to fully connected layers to regress the eight 3D corners $O^g = \{O^g_k\}, k = 1, \ldots, 8$. The last layers of these three branches have no activation function.
The 3D center $C^g$ is easily calculated from Equation 1 using $Z^g_c$ and $c^g$. $C^g$ and $O^g_k$ define a 3D bounding box predicted at grid cell $g$. Finally, we project the 3D box onto the image plane, obtain a 2D box, and use RoIAlign [35] to extract the features bounded by this 2D box from the feature maps of the backbone network. We then pass the features to fully connected layers to regress $\Delta C^g$ and $\Delta O^g_k$ to refine the prediction. $C^g + \Delta C^g$ and $O^g_k + \Delta O^g_k$, $k = 1, \ldots, 8$ represent the refined 3D bounding box, which is the final output of grid cell $g$. Note that the 3D bounding box refinement is a complementary stage and is not contained in the four fundamental sub-tasks. For a single image, the network gives $S_u \times S_v$ 3D bounding boxes $B_{3D}$ in total. The final prediction of the network is obtained by applying non-maximum suppression to these $B_{3D}$.
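A simplified sketch of how the per-cell head outputs are decoded into scored candidates before non-maximum suppression; the tensor layouts, score threshold and flattened loop are our own assumptions:

```python
import numpy as np

def decode_cells(prob, centers2d, depths, corners, intrinsics, thresh=0.5):
    """prob: (G,) foreground score per cell; centers2d: (G, 2) projected
    3D centers c^g; depths: (G,) instance depths Z_c^g; corners:
    (G, 8, 3) local corners O^g. Returns scored 3D boxes before NMS."""
    f_u, f_v, p_u, p_v = intrinsics
    boxes = []
    for g in np.flatnonzero(prob > thresh):
        u_c, v_c = centers2d[g]
        Z_c = depths[g]
        # Equation 1: back-project the center, then offset the corners
        C = np.array([(u_c - p_u) * Z_c / f_u,
                      (v_c - p_v) * Z_c / f_v,
                      Z_c])
        boxes.append((prob[g], corners[g] + C))
    return boxes  # a standard 3D non-maximum suppression follows
```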
Fully Supervised Learning
In fully supervised learning, the ground truth $B_{3D}$ is provided, and the corresponding $B_{2D}$ is obtained by projecting the $B_{3D}$ onto the 2D image plane. Here we formally formulate the loss functions of the sub-tasks under full supervision. In this subsection, to distinguish between network predictions and ground truth, the ground truth is marked with the $\hat{(\cdot)}$ symbol in the loss functions.
We start by assigning ground truth to each cell $g$ in the $S_u \times S_v$ grid. A ground truth object is assigned to a cell $g$ if the distance between its 2D bounding box and $g$ is less than $\sigma_{scope}$. If a $g$ is assigned multiple objects, we only keep the object with the smallest instance-level depth $Z_c$. In a frame, some cells $g$ do not have any ground truth because they are too far from all objects. These $g$ are considered background, while the remaining cells are foreground. We use $FG$ to denote the set of foreground cells $g$.
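A minimal sketch of this assignment; we approximate the cell-to-box distance by the distance to the 2D box center, which is our own simplification:

```python
import numpy as np

def assign_ground_truth(grid_uv, gt_centers2d, gt_depths, sigma_scope):
    """grid_uv: (G, 2) image coordinates of the cells g; gt_centers2d:
    (N, 2) 2D box centers; gt_depths: (N,) instance depths Z_c.
    Returns per-cell object indices, or -1 for background cells."""
    assignment = -np.ones(len(grid_uv), dtype=int)
    for g, (u, v) in enumerate(grid_uv):
        d = np.hypot(gt_centers2d[:, 0] - u, gt_centers2d[:, 1] - v)
        near = np.flatnonzero(d < sigma_scope)
        if near.size > 0:
            # several objects in range: keep the one closest in depth
            assignment[g] = near[np.argmin(gt_depths[near])]
    return assignment
```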
The classification output is trained using softmax cross entropy (CE) loss and the 2D bounding box regression is trained by L1 distance loss:
$$L_P = \sum_{g \in G} \mathrm{CE}(P^g, \hat{P}^g), \qquad L_{B_{2D}} = \sum_{g \in FG} \left( |w^g_{2D} - \hat{w}^g_{2D}| + |h^g_{2D} - \hat{h}^g_{2D}| + |\Delta^g_{u_b} - \hat{\Delta}^g_{u_b}| + |\Delta^g_{v_b} - \hat{\Delta}^g_{v_b}| \right) \tag{2}$$
The instance-level depth estimation, projected 3D center estimation and local corner regression are trained with L1 losses:
$$L_{Z_c} = \sum_{g \in FG} |Z^g_c - \hat{Z}^g_c| \tag{3}$$
$$L_c = \sum_{g \in FG} \left( |\Delta^g_{u_c} - \hat{\Delta}^g_{u_c}| + |\Delta^g_{v_c} - \hat{\Delta}^g_{v_c}| \right) \tag{4}$$
$$L_O = \sum_{g \in FG} \sum_{k=1}^{8} |O^g_k - \hat{O}^g_k| \tag{5}$$
The 3D bounding box refinement is trained with L1 loss:
$$L_{\Delta C} = \sum_{g \in FG} |\Delta C^g - (\hat{C}^g - C^g)| \tag{6}$$
$$L_{\Delta O} = \sum_{g \in FG} \sum_{k=1}^{8} |\Delta O^g_k - (\hat{O}^g_k - O^g_k)| \tag{7}$$
Finally, we sum up the losses to produce the final loss function to be minimized.
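A compact numpy sketch of the foreground L1 terms and their summation; the dictionary keys and the unweighted sum are our own assumptions, since per-term weights are not stated here:

```python
import numpy as np

def supervised_loss(pred, gt, fg_mask):
    """pred/gt map names to per-cell arrays; fg_mask selects the FG
    cells. Sketch of the L1 terms of Equations 3-7 and their sum; the
    classification and 2D box terms of Equation 2 are added on top."""
    l1 = lambda a, b: np.abs(a - b)[fg_mask].sum()
    L_Zc = l1(pred['Zc'], gt['Zc'])             # Eq. 3
    L_c  = l1(pred['dc'], gt['dc'])             # Eq. 4
    L_O  = l1(pred['O'],  gt['O'])              # Eq. 5
    L_dC = l1(pred['dC'], gt['C'] - pred['C'])  # Eq. 6: center refinement
    L_dO = l1(pred['dO'], gt['O'] - pred['O'])  # Eq. 7: corner refinement
    return L_Zc + L_c + L_O + L_dC + L_dO
```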
Weakly Supervised Learning
Fig. 4: Geometry-guided learning of 3D location. (a) illustrates the projective geometry based on which we calculate $\tilde{Z}_c$ as a pseudo ground truth of $Z_c$ to supervise the instance-level depth estimation. (b) shows the motion of the same object across neighbouring frames, where its acceleration should be no greater than a certain threshold. This acceleration constraint is imposed onto the network predictions to increase the 3D object localization performance.

In weakly supervised learning, we consider the scenario where the training set provides 2D bounding boxes instead of 3D bounding boxes, as stated in Section 3. We also assume that 3D data such as point clouds or depth maps are available neither in training nor in testing. As the ground truth 2D bounding boxes are provided, we use the same loss
functions as in Section 4.3 to train the 2D object detection branch. The main challenge is to learn the remaining three tasks. For instance-level depth estimation and projected 3D center estimation, we propose the geometry-guided learning method, which makes full use of projective geometry to guide the learning process with 2D bounding boxes and unlabeled frames. For local corner regression, we propose the object-centric transfer learning method, which effectively transfers knowledge from another easily accessible dataset to the target object detector to facilitate the learning process.
Geometry-Guided Learning of 3D Location
We consider the second and third tasks, instance-level depth estimation and projected 3D center estimation. Their goal is to obtain the 3D location $C$. Using ground truth 2D bounding boxes as supervision, we propose to leverage projective geometry and unlabeled frames to learn the two tasks. Denote the prior height of the object as $h_{3D}$ and the camera focal length along the v-axis as $f_v$. Since we have the height $h_{2D}$ of the ground truth $B_{2D}$, we can calculate a rough instance-level depth $\tilde{Z}_c = f_v \cdot h_{3D} / h_{2D}$, as illustrated in Figure 4 (a). We regard $\tilde{Z}_c$ as a pseudo ground truth and use it to replace $\hat{Z}_c$ in Equation 3 to train the network. We remove the subscript $g$ for simplicity. Also, we regard $\hat{c} = (u_b, v_b)$, the center of the ground truth $B_{2D}$, as a pseudo ground truth to calculate the regression targets $\Delta^g_{u_c}$ and $\Delta^g_{v_c}$ in Equation 4 to train the network. Given that the pseudo ground truth is only a rough approximation, we will refine the network predictions shortly.
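For example, with a prior car height of 1.5 m, $f_v = 720$ px and a 2D box height of 72 px, the pseudo depth is $720 \cdot 1.5 / 72 = 15$ m. A minimal sketch of these pseudo targets (the function name and box layout are our own):

```python
def pseudo_targets(box2d, h3d_prior, f_v):
    """Weak-supervision targets from a ground-truth 2D box given as
    (u_b, v_b, w_2D, h_2D): a rough instance depth from similar
    triangles (Figure 4a), and the box center as the projected center."""
    u_b, v_b, w_2d, h_2d = box2d
    Z_tilde = f_v * h3d_prior / h_2d  # ~Z_c = f_v * h_3D / h_2D
    return Z_tilde, (u_b, v_b)        # pseudo depth, pseudo center c
```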
A rough 3D location $C$ can be calculated from the estimated $Z_c$ and $c$ of the grid cell using Equation 1. Then we refine this prediction by regressing $\Delta C = (\Delta X_c, \Delta Y_c, \Delta Z_c)$, so that $C + \Delta C$ is closer to the real 3D location. Since the 3D ground truth is absent, we propose to utilize a first-order approximation of $\Delta C$ to train the network. Here we assume we already have the local corners $O$ and will explain how to obtain $O$ in Section 4.4.2. A coarse 3D bounding box $B_{3D}$ can be determined by $C$ and $O$. By projecting $B_{3D}$ onto the image, we obtain a projected 2D bounding box; we then subtract it from the ground truth 2D bounding box to get $\Delta B_{2D} = (\Delta w_{2D}, \Delta h_{2D}, \Delta u_b, \Delta v_b)$. The approximated $\widetilde{\Delta C}$ is formulated as:
$$\widetilde{\Delta C} = \left( \frac{\partial X_c}{\partial b_u} \Delta b_u, \; \frac{\partial Y_c}{\partial b_v} \Delta b_v, \; \frac{\partial Z_c}{\partial h_{2D}} \Delta h_{2D} \right) \tag{8}$$
where the partial derivatives are:
$$\frac{\partial X_c}{\partial b_u} \approx \frac{\partial X_c}{\partial c_u} = \frac{Z_c}{f_u}, \quad \frac{\partial Y_c}{\partial b_v} \approx \frac{\partial Y_c}{\partial c_v} = \frac{Z_c}{f_v}, \quad \frac{\partial Z_c}{\partial h_{2D}} \approx \frac{\partial \tilde{Z}_c}{\partial h_{2D}} = -\frac{f_v \cdot h_{3D}}{h^2_{2D}} \tag{9}$$
In Equation 6 we replace $(\hat{C}^g - C^g)$ with $\widetilde{\Delta C}$ to train the network. The previous method Deep3DBox [9] minimizes the re-projection discrepancy to obtain 3D locations, but it treats the optimization as post-processing and has to predict the 2D bounding boxes before calculating the 3D bounding boxes at inference. Our approach uses the 2D bounding boxes to endow the network with the ability to estimate 3D locations directly from image inputs, which means the pipeline can predict the 3D bounding boxes in an end-to-end fashion.
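The first-order correction of Equations 8-9 can be sketched as follows; the function name and the ordering of $\Delta B_{2D}$ are our own conventions:

```python
def delta_C_approx(Z_c, h3d_prior, h_2d, f_u, f_v, delta_b2d):
    """First-order location correction (Equations 8-9). delta_b2d =
    (d_w, d_h, d_u, d_v) is the ground-truth 2D box minus the
    projection of the coarse 3D box; h_2d is the projected box height."""
    d_w, d_h, d_u, d_v = delta_b2d
    dX = (Z_c / f_u) * d_u                     # dXc/dbu * du_b
    dY = (Z_c / f_v) * d_v                     # dYc/dbv * dv_b
    dZ = -(f_v * h3d_prior / h_2d ** 2) * d_h  # dZc/dh2D * dh_2D
    return dX, dY, dZ
```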
If neighbouring image frames, such as video data, are available, we can impose another regularization based on a real-world kinematics prior that the acceleration of objects should be limited to a certain threshold. Figure 4 (b) shows the motion of an object across frames. We formulate this acceleration constraint as:
$$L_a = \sum_{k=1}^{K} \sum_{n \in FN_k} \mathrm{clip}(|a^k_n| - \alpha_a, 0, \beta_a)$$
$$a^k_n = \frac{v^k_n - v^k_{n+1}}{t_n - t_{n+1}}, \quad v^k_n = \frac{\mathbb{E}_{FG_{k,n}}[C] - \mathbb{E}_{FG_{k,n+1}}[C]}{t_n - t_{n+1}} \tag{10}$$
where $FN_k$ is the set of indices of the frames in which object $k$ is present, $FG_{k,n}$ refers to the set of foreground cells of object $k$ in the $n$-th frame, $a^k_n$ and $v^k_n$ are the instantaneous acceleration and velocity of the object relative to the camera, and $t_n$ indicates the corresponding time. The gradient from $L_a$ back-propagates to $Z_c$ and $c$ through $C$. $L_a$ equals zero if $|a^k_n|$ is less than the threshold $\alpha_a$; $\beta_a$ clips the loss to avoid instability. We choose $\alpha_a = 0.3$ and $\beta_a = 3.0$ by grid search. We do not need the ground truth 2D bounding boxes in frames $2, 3, \ldots, N$, since the foreground regions in these frames are estimated using the bounding boxes in the first frame and the inter-frame optical flow obtained with the off-the-shelf PWC-Net [65]. Note that motion consistency is employed in training, while a single image suffices in inference.
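A minimal sketch of the acceleration regularizer of Equation 10 for one tracked object, written with forward finite differences (equivalent in magnitude); the per-frame centers are assumed to be the means over the object's foreground cells:

```python
import numpy as np

def acceleration_loss(track_centers, times, alpha_a=0.3, beta_a=3.0):
    """track_centers: (N, 3) mean predicted 3D centers C of one object
    across N >= 3 frames; times: (N,) timestamps. Penalizes relative
    accelerations above alpha_a, clipped at beta_a (Equation 10)."""
    dt = np.diff(times)
    v = np.diff(track_centers, axis=0) / dt[:, None]  # velocities v_n
    a = np.diff(v, axis=0) / dt[:-1, None]            # accelerations a_n
    mag = np.linalg.norm(a, axis=1)                   # |a_n|
    return np.clip(mag - alpha_a, 0.0, beta_a).sum()
```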
Note that the geometry-guided learning of 3D location detailed above requires the prior size (or at least the prior height) of the object and the camera intrinsics, including the principal point $(p_u, p_v)$ and the focal lengths $(f_u, f_v)$. Each type of object is assigned a prior size that equals the average size of that type of object. In terms of the camera parameters, since $(p_u, p_v)$ is always very close to the image center, when the camera intrinsics are unknown, such as in the MS COCO [27] dataset, the principal point is set to the image center. For $(f_u, f_v)$, we set $f_u$ to 0.8 times the image width and $f_v = f_u$, which is also adopted by [66]. If the users can provide an accurate size (height denoted as $\hat{h}_{3D}$) of the object and the real focal length $\hat{f}_v$ of the camera, then they can multiply the predicted instance-level depth by the constant $\hat{h}_{3D} \hat{f}_v / (h_{3D} f_v)$ to obtain a more accurate prediction.

Fig. 5: Object-centric transfer learning for local corner estimation. The blue part and the green part are abstractions of the student and teacher networks, respectively. The student network is our monocular 3D object detector. This figure shows how the local corner regression branch is learned using easily accessible additional data, denoted as the source dataset. The teacher network serves as a medium transferring knowledge from the source to the target dataset, saving arduous annotation on the target dataset. The teacher network only deals with local regions containing objects of interest, rather than taking the whole image as input.
Object-Centric Transfer Learning
We consider the fourth task, local corner regression. The ground truth corners $O_k$ are not contained in the labeled $B_{2D}$ and must be learned by introducing another source of knowledge. Note that the corners can be computed from the size and orientation of the $B_{3D}$. For simplicity, we use the prior size of each class of objects, so the unknown parameters are reduced to the orientation. There are many easily accessible datasets (e.g., PASCAL3D+ [23] and ShapeNet [24]) annotated with object view angles. We present an object-centric transfer learning method (see Figure 5) that uses such additional data to train the local corner regression branch, which saves labor-intensive annotation on new datasets. We use PASCAL3D+ [23] as an example source dataset. It should be noted that only the ground truth view angle is required, which means the source dataset need not be annotated with complete 6-DoF poses as in PASCAL3D+ [23]. The first step is training a teacher network on the source dataset [23] using its ground truth view angles. The teacher regresses the cosine and sine values of the view angle. Full annotations of 6-DoF poses are not required. The most critical operation is cropping out the objects and scaling them to a fixed size (we use 64 × 64) before feeding them to the network. In this way, we eliminate the interference of background distributions and object sizes, making each instance scale-invariant. The 2D bounding boxes are obtained via off-the-shelf object detectors [29] if they are not labeled.
The second step is transferring the knowledge to our monocular 3D object detector. Using the ground truth 2D bounding box, we crop out the objects and scale them to the same size on which the teacher was trained. The teacher predicts the view angle of each object online. Using the annotated 2D bounding box $B_{2D} = (w_{2D}, h_{2D}, u_b, v_b)$ and the camera intrinsics, we convert the view angle to an orientation. In our setting, only the orientation on the ground plane is considered. Let $\varphi$ be the view angle around the axis perpendicular to the ground plane; the orientation $\theta$ can be analytically calculated as $\theta = \varphi - \arctan\left(\frac{u_b - u_p}{f_u}\right)$, where $u_p$ is the horizontal principal point and $f_u$ is the horizontal focal length of the calibrated monocular camera. We then use the orientation and the prior size to compute the approximated ground truth $\tilde{O}_k$ to supervise the targeted 3D detector. We replace $\hat{O}_k$ with $\tilde{O}_k$ in Equation 5 to train the network.
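The conversion from view angle to orientation and then to local corners can be sketched as follows; the prior size layout (w, h, l), the axis convention and the corner ordering are our own assumptions:

```python
import numpy as np

def local_corners(phi, u_b, u_p, f_u, size_prior):
    """View angle phi plus the 2D box center u_b give the ground-plane
    orientation theta; the prior size (w, h, l) then yields the eight
    local corners O_k, centered at the origin so they sum to zero."""
    theta = phi - np.arctan((u_b - u_p) / f_u)
    w, h, l = size_prior
    # axis-aligned template corners (x right, y down, z forward)
    x = np.array([ 1,  1, -1, -1,  1,  1, -1, -1]) * w / 2
    y = np.array([ 1,  1,  1,  1, -1, -1, -1, -1]) * h / 2
    z = np.array([ 1, -1, -1,  1,  1, -1, -1,  1]) * l / 2
    # rotate around the vertical axis by theta
    rot = np.array([[ np.cos(theta), 0, np.sin(theta)],
                    [ 0,             1, 0            ],
                    [-np.sin(theta), 0, np.cos(theta)]])
    return (rot @ np.vstack([x, y, z])).T  # (8, 3) local corners O_k
```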
EXPERIMENT
Different from most previous studies on 3D object detection, our evaluation is not limited to datasets that offer ground truth 3D bounding boxes for training. In addition to the popular KITTI [25] dataset, we also experiment on the challenging Cityscapes [26] and MS COCO [27] datasets, where 3D bounding boxes are not provided. We evaluate both our fully supervised MonoGRNet (F) and the weakly supervised version MonoGRNet (W).
Implementation.
The whole framework is implemented in Python [67] and TensorFlow [68]. We employ VGG16 [69] pretrained on ImageNet [70] as the backbone network in MonoGRNet. We remove the original fully connected layers in VGG16 [69] to obtain a fully convolutional backbone. For hyperparameters, we choose $S_u \times S_v = 39 \times 12$, which is the size of the grid $G$, or namely, the feature map resolution of the head layers. The whole network is trained using the Adam [71] optimizer for 40 epochs with a constant learning rate of $10^{-5}$. L2 regularization is applied to the model parameters with a decay weight of $5 \times 10^{-5}$.

Fig. 7: Qualitative results on KITTI. F and W are short for full and weak supervision. The predicted 3D bounding boxes are shown in images and in the 3D space from an oblique view. The proposed method shows satisfactory performance even when the object is far away, occluded, in the shadows or exposed to strong light. 3D point clouds (in gray) are only for reference visualization.
Qualitative Results
Previous work could train 3D object detectors on KITTI [25] but not on Cityscapes [26] and MS COCO [27], since the last two do not offer labeled 3D bounding boxes. Our approach is not limited by 3D labels and can handle all three datasets.
On KITTI Dataset.
The detection results in various scenarios are shown in Figure 7, including residential areas and highways, with objects at short and long distances. The detector is robust even in corner cases such as strong light, shadows, truncation and occlusion. This robustness is crucial in practical use. Compared to the fully supervised MonoGRNet (F), the weakly supervised MonoGRNet (W) exhibits promising qualitative performance.
On MS COCO Dataset.
The MS COCO [27] dataset contains a wide variety of objects, and both indoor and outdoor scenes are included. Although MS COCO does not provide ground truth 3D bounding boxes, the proposed method is not subject to this limitation and can learn from the 2D bounding box labels. The dataset does not offer camera parameters, which cannot simply be estimated from a single RGB image. Fortunately, our method only needs a relative focal length, and thus we use an empirical camera configuration in the experiment. We do not apply the motion consistency loss since neighbouring frames are inaccessible. We only test MonoGRNet (W), since the ground truth required for training MonoGRNet (F) is not available. The visualization results are presented in Figure 8. This experiment shows the potential of our method for general 3D object detection at low cost, i.e., based on cheap cameras, it can be deployed in various scenarios.
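A minimal sketch of this empirical camera configuration, following the fallback described in Section 4.4.1 (the function name is our own):

```python
def assumed_intrinsics(img_width, img_height):
    """Fallback intrinsics for uncalibrated images: the principal point
    at the image center and a focal length of 0.8x the image width."""
    f_u = 0.8 * img_width
    f_v = f_u
    p_u, p_v = img_width / 2.0, img_height / 2.0
    return f_u, f_v, p_u, p_v
```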
On Cityscapes Dataset.
We test MonoGRNet (W) on the Cityscapes [26] dataset without using any Cityscapes labels in training. We first train a Faster R-CNN [29] 2D object detector on the KITTI dataset and apply it directly to Cityscapes to obtain 2D bounding boxes; our network then learns 3D bounding boxes from these detected 2D boxes using our weakly supervised method. The proposed method exhibits stable performance, as shown in Figure 9. This experiment demonstrates that manually labeled 2D bounding boxes are not strictly required when transferring to new datasets.
Quantitative Results
The quantitative experiments are conducted on the KITTI dataset but not on Cityscapes and MS COCO, where ground truth 3D bounding boxes are not available for quantitative evaluation. The evaluation criteria include the commonly used 3D average precision AP_3D, the bird's eye view (BEV) average precision AP_BEV, and the average orientation similarity AOS. AP_3D is based on the intersection over union (IoU) between the ground truth 3D bounding box and the predicted one: if the IoU exceeds a given threshold, the ground truth is successfully recalled, and a higher IoU threshold makes the ground truth more difficult to recall. Different from AP_3D, AP_BEV ignores the vertical dimension and measures the IoU from the bird's eye view. AOS measures the average orientation similarity between the predicted 3D bounding boxes and the ground truth; its calculation is similar to that of average precision, and more details can be found in Section 2.5 of [25]. All three criteria are widely accepted [4], [38] for assessing 3D object detectors. We follow the publicly available train-val split [4], [38] on the KITTI [25] 3D object detection dataset.
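For readers unfamiliar with the metric, the snippet below sketches how an average precision number is produced from ranked detections once each detection has been matched (or not) to a ground truth at the chosen 3D or BEV IoU threshold. We use a simple 11-point recall interpolation; KITTI's official protocol differs in detail (e.g., the number of recall samples) but follows the same idea.

```python
import numpy as np

def average_precision(scores, is_true_positive, num_gt):
    # Rank detections by confidence, then accumulate precision and recall.
    order = np.argsort(-np.asarray(scores, dtype=float))
    tp = np.asarray(is_true_positive, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    cum_fp = np.cumsum(1.0 - tp)
    recall = cum_tp / max(num_gt, 1)
    precision = cum_tp / np.maximum(cum_tp + cum_fp, 1e-9)
    # 11-point interpolation over recall levels 0.0, 0.1, ..., 1.0.
    ap = 0.0
    for r in np.linspace(0.0, 1.0, 11):
        mask = recall >= r
        ap += (precision[mask].max() if mask.any() else 0.0) / 11.0
    return ap

print(average_precision([0.9, 0.8, 0.7], [1, 0, 1], num_gt=2))  # ~0.85
```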
Comparison to Fully Supervised Methods
To the best of our knowledge, this is a pioneering work unifying the fully and weakly supervised learning of monocular 3D object detection. Most published work [7], [9], [44] uses only fully annotated 3D bounding boxes as supervision. Figure 6 compares speed and average precision, showing that MonoGRNet (F) requires the least computational time while achieving promising average precision.
The weakly supervised MonoGRNet (W) even demonstrates average precision similar to the recent fully supervised method FQNet [72]. Figure 10, Table 1, and Table 2 compare AP_3D and AP_BEV, and Figure 12 illustrates the localization error with respect to distance. Our performance is promising compared to these approaches. Even though a gap remains between the weakly and fully supervised methods, we believe the potentially large amount of weakly labeled data can further narrow it. In addition, 3D labels for large-scale real datasets are difficult to obtain in practice, so our method has the potential to be an alternative approach that saves annotation cost and promotes the feasibility of monocular 3D object detection in future applications.
In Table 4, we also compare to the most recent state-of-the-art fully supervised methods [11], [15], [73]. Although these methods demonstrate better AP_3D than the fully supervised MonoGRNet, our results are noteworthy for three reasons. First, these methods [11], [15], [73] utilize 3D LiDAR point clouds as powerful supervision to learn dense 3D reconstructions that improve detection performance, while MonoGRNet targets a more general scenario in which we do not assume the existence of point clouds or dense depth data in any part of the training phase; in fact, a very large proportion of monocular images in the real world are acquired without 3D point clouds. Second, the computational cost and time consumption of these methods [11], [15], [73] are much higher than those of MonoGRNet: the 3D reconstruction module [74] used by [73] reportedly takes ∼500 ms per frame during inference, whereas MonoGRNet only requires ∼60 ms under the same GPU configuration. Third, one of the main contributions of this paper is that we unify fully and weakly supervised monocular object detection in a single framework that can flexibly adapt to various scenarios regardless of whether fully annotated 3D bounding boxes are available in training. To the best of our knowledge, this work is the first to achieve this.
Comparison to Weakly Supervised Baselines
We also examine our approach by comparing to weakly supervised baselines. As no weakly supervised monocular 3D object detector has been published, we borrow ideas from previous transfer learning methods [76] and state-of-the-art monocular 3D object detectors [7], [9] to construct baseline approaches that only require annotated 2D bounding boxes in training. In our object-centric transfer learning of orientation, the background is mostly ignored and the teacher network mainly focuses on the object itself; previous work [76] on transfer learning usually takes the whole image as input, performing image-level transfer learning.
Other work [77] on 3D object detection also leverages the background information around objects to improve detection performance. In the baseline methods, we enlarge the 2D bounding boxes before cropping the image, as shown in Figure 5, allowing more background into the crop. The baselines for orientation prediction are named Background-Aware (BA) × λ, where the height and width of the 2D bounding box are both enlarged (λ + 1) times, i.e., the original B_2D = (w_2D, h_2D, b_u, b_v) is expanded to B_2D = [(λ + 1)w_2D, (λ + 1)h_2D, b_u, b_v] and truncated by the image boundaries. As λ grows, the baseline becomes increasingly similar to methods that feed the whole image as input.
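The enlargement rule is simple enough to state in code; the following is a minimal sketch under the definitions above (box given as width, height, and center coordinates).

```python
def background_aware_crop(w2d, h2d, bu, bv, lam, img_w, img_h):
    # BA x lambda: enlarge both sides (lambda + 1) times around the same
    # center, then truncate the crop window at the image boundaries.
    w, h = (lam + 1) * w2d, (lam + 1) * h2d
    x0, y0 = max(bu - w / 2, 0), max(bv - h / 2, 0)
    x1, y1 = min(bu + w / 2, img_w), min(bv + h / 2, img_h)
    return x0, y0, x1, y1

print(background_aware_crop(100, 60, 320, 180, lam=1, img_w=1242, img_h=375))
```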
In our geometry-guided learning of 3D location, the network learns from the 3D-to-2D projective geometric constraint and the motion consistency. Inspired by Deep3DBox [9], we develop another baseline that obtains the 3D location from the 2D bounding box: we first detect the 2D bounding boxes in the image, then find the corresponding 3D bounding box locations by minimizing their projection error with respect to the associated 2D boxes. This baseline approach to recovering the 3D location is denoted MinProjErr in our experiments. Results are shown in Table 3 and Table 5. Our geometry-guided learning method clearly brings significant performance improvements on all listed criteria, and the object-centric transfer learning performs better than its background-aware counterparts.
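A minimal sketch of the MinProjErr idea follows: with dimensions and yaw held fixed (e.g., predicted by the network), search for the 3D center whose projected box extremes best match the detected 2D box. The corner layout and projection follow the common KITTI camera convention; this is our illustration of the baseline, not the exact solver of [9].

```python
import numpy as np
from scipy.optimize import least_squares

def box_corners(center, dims, yaw):
    # Eight corners of an upright box with size (h, w, l), rotated by yaw
    # about the vertical axis, translated to the (bottom-)center.
    h, w, l = dims
    x = np.array([1, 1, -1, -1, 1, 1, -1, -1]) * l / 2.0
    y = np.array([0, 0, 0, 0, -1, -1, -1, -1]) * h
    z = np.array([1, -1, -1, 1, 1, -1, -1, 1]) * w / 2.0
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return (R @ np.vstack([x, y, z])).T + np.asarray(center)

def reproj_residual(center, dims, yaw, box2d, K):
    pts = box_corners(center, dims, yaw) @ K.T
    u, v = pts[:, 0] / pts[:, 2], pts[:, 1] / pts[:, 2]
    proj = np.array([u.min(), v.min(), u.max(), v.max()])
    return proj - np.asarray(box2d, dtype=float)  # match projected extremes

def min_proj_err(dims, yaw, box2d, K, init=(0.0, 1.0, 20.0)):
    return least_squares(reproj_residual, init,
                         args=(dims, yaw, box2d, K)).x  # recovered 3D center

K = np.array([[721.5, 0, 609.6], [0, 721.5, 172.9], [0, 0, 1.0]])  # KITTI-like
print(min_proj_err((1.5, 1.6, 3.9), 0.3, (500, 150, 700, 300), K))
```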
Recently, the work of [78], [79] proposed self-supervised methods to learn dense monocular depth predictors. The absolute relative error (AbsRel) achieved by these methods is 0.07 m for [78] and 0.11 m for [79], while the AbsRel of our weakly supervised instance-level depth estimation is only 0.05 m. Besides, the dense pixel-level depth predicted by these methods is the depth of object surfaces, while the instance depth predicted by MonoGRNet is the depth of an object's 3D center. The latter can be directly used to compute the 3D center coordinates of the targeted objects via Equation 1, whereas the former requires additional steps that could bring extra computational cost.
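For clarity, the AbsRel numbers quoted above are the mean of |predicted − true| / true over the evaluated depths; a short sketch:

```python
import numpy as np

def abs_rel(pred, gt):
    # Absolute relative error over instance center depths (or per-pixel depths).
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    return float(np.mean(np.abs(pred - gt) / gt))

print(abs_rel([20.4, 35.1], [20.0, 34.0]))  # ~0.026
```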
Ablation Study on Loss Functions
The loss functions proposed in Section 4.4.1 are crucial for supervising the 3D location using labeled 2D bounding boxes. To examine their effectiveness, we experiment with different loss configurations and present the results in Table 6. If none of L_Zc, L_c, or L_ΔC is applied, we obtain C by minimizing the reprojection error. The results show that L_Zc and L_c bring a considerable performance gain, while L_ΔC further refines the prediction.
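Operationally, the ablation amounts to toggling terms of a summed objective on and off, as sketched below. The term definitions live in Section 4.4.1; the unit weighting here is an illustrative assumption.

```python
def weak_supervision_loss(L_Zc, L_c, L_dC, L_a,
                          use_Zc=True, use_c=True, use_dC=True, use_a=True):
    # Each flag corresponds to one loss column of Table 6.
    total = 0.0
    if use_Zc: total += L_Zc   # instance-depth term
    if use_c:  total += L_c    # projected 3D center term
    if use_dC: total += L_dC   # center-refinement term
    if use_a:  total += L_a    # remaining term of Table 6 (see Section 4.4.1)
    return total
```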
CONCLUSION
We have presented the MonoGRNet framework for 3D object detection from monocular images. MonoGRNet decomposes monocular 3D object detection into four subtasks: 2D object detection, instance-level depth estimation, projected 3D center estimation, and local corner regression. This task decomposition allows the 3D bounding boxes to be predicted in a single forward pass, without an object-proposal stage or computationally expensive pixel-level depth prediction. We also demonstrate that the framework can flexibly adapt to both fully and weakly supervised learning without changing the network structure or most of the loss functions. Extensive experiments are conducted on three public datasets: KITTI, Cityscapes, and MS COCO. Although the last two do not offer ground truth 3D bounding boxes, our framework remains trainable in such a setting. Qualitative and quantitative results show the promising performance of our method in diverse scenarios.
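To summarize the decomposition in code form, the sketch below shows how the subtask outputs can be assembled into one 3D box in a single pass (the 2D detection supplies the region from which the other quantities are predicted). The back-projection is the standard pinhole relation the paper refers to as Equation 1; all names here are ours.

```python
import numpy as np

def assemble_3d_box(instance_depth, projected_center, local_corners, K):
    # Back-project the projected 3D center (u, v) at the instance depth Z_c.
    u, v = projected_center
    fx, fy, cu, cv = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    C = np.array([(u - cu) * instance_depth / fx,
                  (v - cv) * instance_depth / fy,
                  instance_depth])
    # Regressed local corners are offsets in the object frame; shift them by C.
    return np.asarray(local_corners) + C  # (8, 3) corners in camera coordinates
```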
Fig. 2: Notations. (a) illustrates the relationship between the camera coordinate system and the local coordinate system from the bird's eye view. (b) shows the 3D bounding box corners defined in the local coordinate system: the center C = (X_c, Y_c, Z_c) in the global context and eight corners O = {O_k}, k = 1, ..., 8, in the local context.
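The Fig. 2 convention can be written down directly: a corner O_k in the local frame maps to the camera frame through a yaw rotation about the vertical axis plus the global center C. The rotation convention below (y-axis pointing down, yaw about y) is the usual KITTI-style assumption.

```python
import numpy as np

def local_to_camera(O_k, C, yaw):
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, 0, s],
                  [0, 1, 0],
                  [-s, 0, c]])        # rotation about the vertical (y) axis
    return R @ np.asarray(O_k) + np.asarray(C)

print(local_to_camera([1.95, 0.0, 0.8], C=[2.0, 1.0, 15.0], yaw=0.3))
```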
Fig. 3: Instance-level depth. (a) Each grid cell g is assigned to the nearest object whose 2D bounding box center b_i lies within a distance σ; when several objects qualify, the one closer to the camera is assigned, to handle occlusion (here Z_c^1 < Z_c^2). (b) An image with detected 2D bounding boxes. (c) Predicted instance depth for each cell.
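The assignment rule of Fig. 3(a) is reproduced below as a small routine: a cell takes the depth of an object whose 2D box center lies within σ, and the object closer to the camera wins when several qualify. The data layout is assumed for illustration.

```python
import numpy as np

def assign_cells(cell_centers, box_centers_2d, instance_depths, sigma):
    cell_centers = np.asarray(cell_centers, float)      # (n_cells, 2)
    box_centers_2d = np.asarray(box_centers_2d, float)  # (n_objects, 2)
    instance_depths = np.asarray(instance_depths, float)
    assigned = np.full(len(cell_centers), -1)
    for i, g in enumerate(cell_centers):
        d = np.linalg.norm(box_centers_2d - g, axis=1)
        candidates = np.where(d <= sigma)[0]
        if candidates.size:                              # closer object wins
            assigned[i] = candidates[np.argmin(instance_depths[candidates])]
    return assigned  # per-cell object index, -1 for background cells

print(assign_cells([[10, 10], [50, 50]], [[12, 9], [48, 52]], [30.0, 12.0], 8))
```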
Fig. 6: Efficiency comparison. The inference time of our method MonoGRNet (F/W) is 0.06 s on a single GeForce GTX Titan X GPU on the KITTI [25] dataset. F and W denote fully and weakly supervised, respectively.
Fig. 8: Qualitative results on MS COCO. Our approach demonstrates potential for general 3D object detection from a monocular image. We use the weakly supervised MonoGRNet (W) on MS COCO, which does not provide the ground truth 3D bounding boxes needed to train the fully supervised MonoGRNet (F).
Fig. 9: Qualitative results on Cityscapes. The experiment was conducted using MonoGRNet (W).
Fig. 10: Recall-precision curves of 3D and BEV object detection on the KITTI val set. F and W are short for full and weak supervision. The 3D IoU threshold is 0.3. Although MonoGRNet (W) is learned from the weak supervision of 2D bounding boxes, it exhibits promising performance compared to methods with full supervision from 3D bounding boxes.
Fig. 11: Recall-precision curves of 3D object localization on the KITTI val set. F and W are short for full and weak supervision. The distance thresholds are set to 1 m, 2 m, and 3 m, corresponding to the first, second, and third rows.

Fig. 12: 3D localization error with respect to the ground truth distance on the KITTI val set. F and W are short for full and weak supervision. MonoGRNet (F/W) demonstrates robust localization performance.
TABLE 1: Average precision of 3D object detection on the KITTI val set compared to previous fully supervised methods. A ground truth object is successfully recalled if the 3D intersection over union (IoU) between its true 3D bounding box and the predicted box is no less than a certain threshold; recalling an object in 3D is much more difficult than in a 2D image, given the extensive search space and the lack of 3D data.

Method         | SP   | AP_3D (IoU=0.1)    | AP_3D (IoU=0.2)    | AP_3D (IoU=0.3)    | AP_3D (IoU=0.5)    | AP_3D (IoU=0.7)
               |      | Easy  Mod.  Hard   | Easy  Mod.  Hard   | Easy  Mod.  Hard   | Easy  Mod.  Hard   | Easy  Mod.  Hard
Mono3D [44]    | Full | 54.71 43.67 39.59  | 41.21 33.40 28.89  | 28.29 23.21 19.49  | 25.19 18.20 15.22  |  2.53  2.31  2.31
Deep3DBox [9]  | Full | 82.57 69.74 60.86  | 69.97 56.62 48.59  | 54.30 43.42 36.57  | 27.04 20.55 15.88  |  5.85  4.10  3.84
MF3D [36]      | Full |   -     -     -    |   -     -     -    |   -     -     -    | 47.88 29.48 26.44  | 10.53  5.69  5.39
FQNet [72]     | Full |   -     -     -    |   -     -     -    |   -     -     -    | 28.16 21.02 19.91  |  5.98  5.50  4.75
ROI-10D [75]   | Full |   -     -     -    |   -     -     -    |   -     -     -    | 37.59 25.14 21.83  |  6.91  6.63  6.29
MonoGRNet (F)  | Full | 88.10 76.89 67.76  | 82.18 64.54 55.63  | 73.66 61.74 47.75  | 43.66 36.20 30.22  | 13.88 10.19  7.62
MonoGRNet (W)  | Weak | 80.70 64.42 55.42  | 69.60 53.70 45.45  | 56.16 42.61 35.36  | 25.66 21.57 17.40  |  6.92  5.63  4.89
TABLE 2: Average precision of bird's eye view (BEV) object detection on the KITTI val set compared to previous fully supervised methods. A ground truth object is successfully recalled if the BEV intersection over union (IoU) between its true 3D bounding box and the predicted box is no less than a certain threshold.

Method         | SP   | AP_BEV (IoU=0.1)   | AP_BEV (IoU=0.2)   | AP_BEV (IoU=0.3)   | AP_BEV (IoU=0.5)   | AP_BEV (IoU=0.7)
               |      | Easy  Mod.  Hard   | Easy  Mod.  Hard   | Easy  Mod.  Hard   | Easy  Mod.  Hard   | Easy  Mod.  Hard
Mono3D [44]    | Full | 55.47 44.25 40.14  | 46.26 35.22 34.23  | 32.76 25.15 23.65  | 30.50 22.39 19.16  |  5.22  5.19  4.13
Deep3DBox [9]  | Full | 84.29 70.40 61.49  | 71.27 58.88 49.92  | 57.14 47.20 39.06  | 30.17 23.77 18.84  |  9.99  7.71  5.30
MF3D [36]      | Full |   -     -     -    |   -     -     -    |   -     -     -    | 55.02 36.73 31.27  | 22.03 13.63 11.60
FQNet [72]     | Full |   -     -     -    |   -     -     -    |   -     -     -    | 32.57 24.60 21.25  |  9.50  8.02  7.71
ROI-10D [75]   | Full |   -     -     -    |   -     -     -    |   -     -     -    | 46.85 34.05 30.46  | 14.50  9.91  8.73
MonoGRNet (F)  | Full | 88.22 77.09 67.91  | 83.27 65.13 56.18  | 74.92 63.22 54.65  | 52.52 39.98 33.14  | 24.97 19.44 16.30
MonoGRNet (W)  | Weak | 81.12 64.76 55.76  | 71.38 60.11 46.27  | 58.61 48.75 41.49  | 32.23 26.88 22.47  | 12.54  9.67  8.25
TABLE 3: Average precision of 3D and bird's eye view (BEV) object detection on the KITTI val set compared to weakly supervised baselines. BA and OC are short for background-aware and object-centric, respectively. MinProjErr denotes the baseline method derived from [9] to recover the 3D location, as described in Section 5.3.2. GeoGL indicates the proposed geometry-guided learning of 3D location. Each cell reports AP_3D / AP_BEV.

Orientation | 3D Location  | AP_3D / AP_BEV (IoU=0.3)                      | AP_3D / AP_BEV (IoU=0.5)
            |              | Easy           Moderate       Hard            | Easy           Moderate       Hard
BA × 2      | MinProjErr   | 37.67 / 42.80  28.50 / 35.69  23.60 / 29.99   | 15.38 / 18.63  13.19 / 15.70  11.41 / 14.80
BA × 1      | MinProjErr   | 39.30 / 43.92  29.80 / 36.39  24.40 / 30.53   | 16.81 / 21.47  14.10 / 18.38  11.78 / 15.16
BA × 2      | GeoGL (Ours) | 50.10 / 54.31  39.16 / 41.24  32.57 / 34.35   | 22.35 / 28.13  18.72 / 21.65  15.85 / 17.45
BA × 1      | GeoGL (Ours) | 54.44 / 57.36  41.40 / 47.59  34.39 / 35.77   | 25.03 / 31.15  20.85 / 25.97  17.10 / 21.37
OC (Ours)   | GeoGL (Ours) | 56.16 / 58.61  42.61 / 48.75  35.36 / 41.49   | 25.66 / 32.23  21.57 / 26.88  17.40 / 22.47
TABLE 4: Comparison to fully supervised methods involving LiDAR or only monocular information during training. The numbers are reported on the KITTI val set.

Method          | Time    | Train data  | Eval data | AP_3D (IoU=0.5)
                |         |             |           | Easy   Mod.   Hard
P-LiDAR [11]    | > 0.5 s | LiDAR+Mono  | Mono      | 66.3   42.3   38.5
MonoPSR [15]    | 0.2 s   | LiDAR+Mono  | Mono      | 49.65  41.71  29.95
Ma et al. [73]  | > 0.5 s | LiDAR+Mono  | Mono      | 68.86  49.19  42.24
FQNet [72]      | > 0.4 s | Mono        | Mono      | 28.16  21.02  19.91
MonoGRNet (F)   | 0.06 s  | Mono        | Mono      | 43.66  36.20  30.22
TABLE 5: Average orientation similarity on the KITTI val set. The three approaches use GeoGL to predict the 3D location; the only difference is how they learn object orientations.

Method     | AOS (IoU=0.3)           | AOS (IoU=0.5)
           | Easy   Moderate  Hard   | Easy   Moderate  Hard
BA × 2     | 64.23  62.86     49.96  | 64.14  56.93     49.85
BA × 1     | 83.58  80.54     64.13  | 83.56  73.09     63.97
OC (Ours)  | 90.00  88.78     70.89  | 89.94  80.31     70.76
TABLE 6: Ablation study on loss functions for the weakly supervised method on the KITTI val set. The loss functions are crucial in learning the 3D location of objects using their ground truth 2D bounding boxes and the motion consistency across unlabeled frames.

Loss configuration        | AP_3D (IoU=0.3)
L_Zc   L_c   L_ΔC   L_a   | Easy   Moderate  Hard
                          | 38.28  28.56     24.13
                          | 44.50  32.33     29.10
                          | 52.72  40.59     33.71
                          | 54.21  41.19     34.19
                          | 56.16  42.61     35.36
REFERENCES
B. Yang, W. Luo, and R. Urtasun, "Pixor: Real-time 3d object detection from point clouds," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 7652-7660.
Y. Zhou and O. Tuzel, "Voxelnet: End-to-end learning for point cloud based 3d object detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 4490-4499.
A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom, "Pointpillars: Fast encoders for object detection from point clouds," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 12697-12705.
X. Chen, H. Ma, J. Wan, B. Li, and T. Xia, "Multi-view 3d object detection network for autonomous driving," in IEEE CVPR, vol. 1, no. 2, 2017, p. 3.
C. R. Qi, W. Liu, C. Wu, H. Su, and L. J. Guibas, "Frustum pointnets for 3d object detection from rgb-d data," arXiv preprint arXiv:1711.08488, 2017.
H.-N. Hu, Q.-Z. Cai, D. Wang, J. Lin, M. Sun, P. Krähenbühl, T. Darrell, and F. Yu, "Joint monocular 3d vehicle detection and tracking," arXiv preprint arXiv:1811.10742, 2018.
Z. Qin, J. Wang, and Y. Lu, "Monogrnet: A geometric reasoning network for monocular 3d object localization," AAAI, 2019.
F. Chabot, M. Chaouch, J. Rabarisoa, C. Teulière, and T. Chateau, "Deep manta: A coarse-to-fine many-task network for joint 2d and 3d vehicle analysis from monocular image," in Computer Vision and Pattern Recognition (CVPR), 2017, pp. 2040-2049.
A. Mousavian, D. Anguelov, J. Flynn, and J. Košecká, "3d bounding box estimation using deep learning and geometry," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2017, pp. 5632-5640.
Y. You, Y. Wang, W.-L. Chao, D. Garg, G. Pleiss, B. Hariharan, M. Campbell, and K. Q. Weinberger, "Pseudo-lidar++: Accurate depth for 3d object detection in autonomous driving," arXiv preprint arXiv:1906.06310, 2019.
Y. Wang, W.-L. Chao, D. Garg, B. Hariharan, M. Campbell, and K. Weinberger, "Pseudo-lidar from visual depth estimation: Bridging the gap in 3d object detection for autonomous driving," in CVPR, 2019.
X. Weng and K. M. Kitani, "Monocular 3d object detection with pseudo-lidar point cloud," arXiv preprint arXiv:1903.09847, 2019.
M. Ding, Y. Huo, H. Yi, Z. Wang, J. Shi, Z. Lu, and P. Luo, "Learning depth-guided convolutions for monocular 3d object detection," arXiv preprint arXiv:1912.04799, 2019.
E. Jörgensen, C. Zach, and F. Kahl, "Monocular 3d object detection and box fitting trained end-to-end using intersection-over-union loss," arXiv preprint arXiv:1906.08070, 2019.
J. Ku, A. D. Pon, and S. L. Waslander, "Monocular 3d object detection leveraging accurate proposals and shape reconstruction," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 11867-11876.
X. Chen, K. Kundu, Z. Zhang, H. Ma, S. Fidler, and R. Urtasun, "Monocular 3d object detection for autonomous driving," in Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 2147-2156.
T. He and S. Soatto, "Mono3d++: Monocular 3d vehicle detection with two-scale 3d hypotheses and task priors," arXiv preprint arXiv:1901.03446, 2019.
Y. Cai, B. Li, Z. Jiao, H. Li, X. Zeng, and X. Wang, "Monocular 3d object detection with decoupled structured polygon estimation and height-guided depth estimation," arXiv preprint arXiv:2002.01619, 2020.
Monocular 3d object detection via geometric reasoning on keypoints. I Barabanau, A Artemov, E V Burnaev, V Y Murashkin, abs/1905.05618ArXiv. 1I. Barabanau, A. Artemov, E. V. Burnaev, and V. Y. Murashkin, "Monocular 3d object detection via geometric reasoning on key- points," ArXiv, vol. abs/1905.05618, 2019. 1
Rtm3d: Real-time monocular 3d detection from object keypoints for autonomous driving. P.-X Li, H Zhao, P Liu, F Cao, abs/2001.03343ArXiv. 1P.-X. Li, H. Zhao, P. Liu, and F. Cao, "Rtm3d: Real-time monocular 3d detection from object keypoints for autonomous driving," ArXiv, vol. abs/2001.03343, 2020. 1
Extreme clicking for efficient object annotation. D P Papadopoulos, J R Uijlings, F Keller, V Ferrari, Proceedings of the IEEE International Conference on Computer Vision. the IEEE International Conference on Computer VisionD. P. Papadopoulos, J. R. Uijlings, F. Keller, and V. Ferrari, "Ex- treme clicking for efficient object annotation," in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 4930- 4939. 2
Beyond pascal: A benchmark for 3d object detection in the wild. Y Xiang, R Mottaghi, S Savarese, IEEE Winter Conference on Applications of Computer Vision (WACV). 7Y. Xiang, R. Mottaghi, and S. Savarese, "Beyond pascal: A bench- mark for 3d object detection in the wild," in IEEE Winter Conference on Applications of Computer Vision (WACV), 2014. 2, 7
. A X Chang, T A Funkhouser, L J Guibas, P Hanrahan, Q.-X , A. X. Chang, T. A. Funkhouser, L. J. Guibas, P. Hanrahan, Q.-X.
Shapenet: An information-rich 3d model repository. Z Huang, S Li, M Savarese, S Savva, H Song, J Su, L Xiao, F Yi, Yu, abs/1512.03012ArXiv. 27Huang, Z. Li, S. Savarese, M. Savva, S. Song, H. Su, J. Xiao, L. Yi, and F. Yu, "Shapenet: An information-rich 3d model repository," ArXiv, vol. abs/1512.03012, 2015. 2, 7
Are we ready for autonomous driving? the kitti vision benchmark suite. A Geiger, P Lenz, R Urtasun, Computer Vision and Pattern Recognition (CVPR). IEEE210A. Geiger, P. Lenz, and R. Urtasun, "Are we ready for autonomous driving? the kitti vision benchmark suite," in Computer Vision and Pattern Recognition (CVPR). IEEE, 2012, pp. 3354-3361. 2, 7, 8, 10
The cityscapes dataset for semantic urban scene understanding. M Cordts, M Omran, S Ramos, T Rehfeld, M Enzweiler, R Benenson, U Franke, S Roth, B Schiele, Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)89M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Be- nenson, U. Franke, S. Roth, and B. Schiele, "The cityscapes dataset for semantic urban scene understanding," in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. 2, 7, 8, 9
Microsoft coco: Common objects in context. T Y Lin, M Maire, S Belongie, J Hays, P Perona, D Ramanan, P Dollár, C L Zitnick, 7T. Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, "Microsoft coco: Common objects in context," 2014. 2, 6, 7, 8
R. Girshick, "Fast r-cnn," in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1440-1448.
S. Ren, K. He, R. Girshick, and J. Sun, "Faster r-cnn: Towards real-time object detection with region proposal networks," IEEE Transactions on Pattern Analysis & Machine Intelligence, 2017.
J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 779-788.
J. Redmon and A. Farhadi, "Yolo9000: Better, faster, stronger," in Computer Vision and Pattern Recognition (CVPR), IEEE, 2017, pp. 6517-6525.
W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, "Ssd: Single shot multibox detector," in European Conference on Computer Vision (ECCV), Springer, 2016, pp. 21-37.
C.-Y. Fu, W. Liu, A. Ranga, A. Tyagi, and A. C. Berg, "Dssd: Deconvolutional single shot detector," arXiv preprint arXiv:1701.06659, 2017.
M. Teichmann, M. Weber, M. Zoellner, R. Cipolla, and R. Urtasun, "Multinet: Real-time joint semantic reasoning for autonomous driving," arXiv preprint arXiv:1612.07695, 2016.
K. He, G. Gkioxari, P. Dollár, and R. Girshick, "Mask r-cnn," arXiv preprint arXiv:1703.06870, 2017.
B. Xu and Z. Chen, "Multi-level fusion based 3d object detection from monocular images," in Computer Vision and Pattern Recognition (CVPR), 2018, pp. 2345-2353.
W. Kehl, F. Manhardt, F. Tombari, S. Ilic, and N. Navab, "Ssd-6d: Making rgb-based 3d detection and 6d pose estimation great again," in Proceedings of the International Conference on Computer Vision (ICCV 2017), Venice, Italy, 2017, pp. 22-29.
X. Chen, K. Kundu, Y. Zhu, A. G. Berneshawi, H. Ma, S. Fidler, and R. Urtasun, "3d object proposals for accurate object class detection," in Advances in Neural Information Processing Systems, 2015, pp. 424-432.
S. Song and J. Xiao, "Deep sliding shapes for amodal 3d object detection in rgb-d images," in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
J. Liu, J. Wang, T. Fang, C.-L. Tai, and L. Quan, "Higher-order crf structural segmentation of 3d reconstructed surfaces," in IEEE International Conference on Computer Vision, 2015.
Z. Yang, Y. Sun, S. Liu, X. Shen, and J. Jia, "Std: Sparse-to-dense 3d object detector for point cloud," in Proceedings of the IEEE International Conference on Computer Vision, 2019, pp. 1951-1960.
S. Shi, Z. Wang, X. Wang, and H. Li, "Part-a^2 net: 3d part-aware and aggregation neural network for object detection from point cloud," arXiv preprint arXiv:1907.03670, 2019.
M. Liang, B. Yang, Y. Chen, R. Hu, and R. Urtasun, "Multi-task multi-sensor fusion for 3d object detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 7345-7353.
X. Chen, K. Kundu, Z. Zhang, H. Ma, S. Fidler, and R. Urtasun, "Monocular 3d object detection for autonomous driving," in IEEE CVPR, 2016.
H. Fu, M. Gong, C. Wang, K. Batmanghelich, and D. Tao, "Deep ordinal regression network for monocular depth estimation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
H. Ren, M. Elkhamy, and J. Lee, "Deep robust single image depth estimation neural network using scene understanding," pp. 37-45, 2019.
A. J. Afifi and O. Hellwich, "Object depth estimation from a single image using fully convolutional neural network," in 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA), IEEE, 2016, pp. 1-7.
Z. Zhang, A. G. Schwing, S. Fidler, and R. Urtasun, "Monocular object instance segmentation and depth ordering with cnns," in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 2614-2622.
S. Lee, S. Im, S. Lin, and I. S. Kweon, "Instance-wise depth and motion learning from monocular videos," arXiv preprint arXiv:1912.09351, 2019.
J. Han, D. Zhang, G. Cheng, L. Guo, and J. Ren, "Object detection in optical remote sensing images based on weakly supervised learning and high-level feature learning," IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 6, pp. 3325-3337, 2015.
E. Sangineto, M. Nabi, D. Culibrk, and N. Sebe, "Self paced deep learning for weakly supervised object detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 3, pp. 712-725, 2019.
X. Zhang, Y. Wei, J. Feng, Y. Yang, and T. S. Huang, "Adversarial complementary learning for weakly supervised object localization," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 1325-1334.
X. Zhang, J. Feng, H. Xiong, and Q. Tian, "Zigzag learning for weakly supervised object detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 4262-4270.
P. Tang, X. Wang, S. Bai, W. Shen, X. Bai, W. Liu, and A. L. Yuille, "Pcl: Proposal cluster learning for weakly supervised object detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018.
P. Li, X. Chen, and S. Shen, "Stereo r-cnn based 3d object detection for autonomous driving," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 7644-7652.
Z. Qin, J. Wang, and Y. Lu, "Triangulation learning network: From monocular to stereo 3d object detection," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
M. Braun, Q. Rao, Y. Wang, and F. Flohr, "Pose-rcnn: Joint object detection and pose estimation using 3d object proposals," in 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), IEEE, 2016, pp. 1546-1551.
D. Pathak, R. Girshick, P. Dollár, T. Darrell, and B. Hariharan, "Learning features by watching objects move," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 2701-2710.
J. Dai, Y. Li, K. He, and J. Sun, "R-fcn: Object detection via region-based fully convolutional networks," in Advances in Neural Information Processing Systems, 2016, pp. 379-387.
J. J. Lim, R. R. Salakhutdinov, and A. Torralba, "Transfer learning by borrowing examples for multiclass object detection," in Advances in Neural Information Processing Systems, 2011, pp. 118-126.
C. H. Lampert, H. Nickisch, and S. Harmeling, "Learning to detect unseen object classes by between-class attribute transfer," in 2009 IEEE Conference on Computer Vision and Pattern Recognition, IEEE, 2009, pp. 951-958.
F. Gustafsson and E. Linder-Norén, "Automotive 3d object detection without target domain annotations," 2018.
J. Hoffman, S. Guadarrama, E. S. Tzeng, R. Hu, J. Donahue, R. Girshick, T. Darrell, and K. Saenko, "Lsda: Large scale detection through adaptation," in Advances in Neural Information Processing Systems, 2014, pp. 3536-3544.
L. Spinello and K. O. Arras, "Leveraging rgb-d data: Adaptive fusion and domain adaptation for object detection," in 2012 IEEE International Conference on Robotics and Automation, IEEE, 2012, pp. 4469-4474.
D. Sun, X. Yang, M.-Y. Liu, and J. Kautz, "PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume," 2018.
D. Mehta, O. Sotnychenko, F. Mueller, W. Xu, S. Sridhar, G. Pons-Moll, and C. Theobalt, "Single-shot multi-person 3d pose estimation from monocular rgb," in 2018 International Conference on 3D Vision (3DV), IEEE, 2018, pp. 120-130.
G. Van Rossum and F. L. Drake Jr, Python Tutorial. Centrum voor Wiskunde en Informatica, Amsterdam, The Netherlands, 1995.
M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, "TensorFlow: Large-scale machine learning on heterogeneous systems," 2015.
M. D. Zeiler and R. Fergus, "Visualizing and understanding convolutional networks," in European Conference on Computer Vision, 2014, pp. 818-833.
O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, "ImageNet large scale visual recognition challenge," International Journal of Computer Vision (IJCV), vol. 115, no. 3, pp. 211-252, 2015.
D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," in International Conference on Learning Representations, 2015.
L. Liu, J. Lu, C. Xu, Q. Tian, and J. Zhou, "Deep fitting degree scoring network for monocular 3d object detection," in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
X. Ma, Z. Wang, H. Li, P. Zhang, W. Ouyang, and X. Fan, "Accurate monocular 3d object detection via color-embedded 3d reconstruction for autonomous driving," in Proceedings of the IEEE International Conference on Computer Vision, 2019, pp. 6851-6860.
H. Fu, M. Gong, C. Wang, K. Batmanghelich, and D. Tao, "Deep ordinal regression network for monocular depth estimation," in Computer Vision and Pattern Recognition (CVPR), 2018.
F. Manhardt, W. Kehl, and A. Gaidon, "Roi-10d: Monocular lifting of 2d detection to 6d pose and metric shape," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 2069-2078.
L. Torrey and J. Shavlik, "Transfer learning," in Handbook of Research on Machine Learning Applications and Trends: Algorithms, Methods, and Techniques, IGI Global, 2010, pp. 242-264.
X. Chen, K. Kundu, Y. Zhu, H. Ma, S. Fidler, and R. Urtasun, "3d object proposals using stereo imagery for accurate object class detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
V. Guizilini, R. Ambrus, S. Pillai, A. Raventos, and A. Gaidon, "3d packing for self-supervised monocular depth estimation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 2485-2494.
C. Godard, O. Mac Aodha, M. Firman, and G. J. Brostow, "Digging into self-supervised monocular depth estimation," in Proceedings of the IEEE International Conference on Computer Vision, 2019, pp. 3828-3838.
| []
|
[
"Tuning the Topology of a Two-Dimensional Catenated DNA Network",
"Tuning the Topology of a Two-Dimensional Catenated DNA Network"
]
| [
"Indresh Yadav \nDepartment of Chemical Engineering\nMassachusetts Institute of Technology\n02139CambridgeMassachusettsUnited States\n",
"Dana Al Sulaiman \nDivision of Physical Science and Engineering\nKing Abdullah University of Science and Technology\n23955-6900ThuwalSaudi Arabia\n",
"Patrick S Doyle \nDepartment of Chemical Engineering\nMassachusetts Institute of Technology\n02139CambridgeMassachusettsUnited States\n"
]
| [
"Department of Chemical Engineering\nMassachusetts Institute of Technology\n02139CambridgeMassachusettsUnited States",
"Division of Physical Science and Engineering\nKing Abdullah University of Science and Technology\n23955-6900ThuwalSaudi Arabia",
"Department of Chemical Engineering\nMassachusetts Institute of Technology\n02139CambridgeMassachusettsUnited States"
]
| []
| Molecular topology of polymers plays a key role in determining their physical properties. We studied herein the topological effects on the static and dynamic properties of a 2D catenated network of DNA rings called a kinetoplast. Restriction enzymes, that cleave DNA at sequence-specific sites, are used to selectively cut and remove rings from the network and hence tune the molecular topology while maintaining overall structural integrity. We find that topology has minimal effects over the spatial extension of the 2D network, however it significantly affects the relaxation behavior. The shape fluctuations of the network are governed by two distinct characteristic time scales attributed to the thermal fluctuations and confinement of the network. The relationship between the time constant of thermal relaxation and the amplitude of anisotropy fluctuations yields a universal scaling. Interestingly, this scaling is independent of the detailed arrangements of rings and/or perforation within the catenated networks. This study provides a route to tune the elastic properties of 2D catenated DNA networks by modifying the underlying topology in a rational and highly controllable manner. | 10.1103/physrevresearch.5.013141 | [
"https://export.arxiv.org/pdf/2209.04486v1.pdf"
]
| 252,199,597 | 2209.04486 | e593088c8e8233005a982ded25b71b6d495497ab |
Tuning the Topology of a Two-Dimensional Catenated DNA Network
Indresh Yadav
Department of Chemical Engineering
Massachusetts Institute of Technology
02139 Cambridge, Massachusetts, United States
Dana Al Sulaiman
Division of Physical Science and Engineering
King Abdullah University of Science and Technology
23955-6900 Thuwal, Saudi Arabia
Patrick S Doyle
Department of Chemical Engineering
Massachusetts Institute of Technology
02139 Cambridge, Massachusetts, United States
Molecular topology of polymers plays a key role in determining their physical properties. We studied herein the topological effects on the static and dynamic properties of a 2D catenated network of DNA rings called a kinetoplast. Restriction enzymes, which cleave DNA at sequence-specific sites, are used to selectively cut and remove rings from the network and hence tune the molecular topology while maintaining overall structural integrity. We find that topology has minimal effects on the spatial extension of the 2D network; however, it significantly affects the relaxation behavior. The shape fluctuations of the network are governed by two distinct characteristic time scales attributed to the thermal fluctuations and confinement of the network. The relationship between the time constant of thermal relaxation and the amplitude of anisotropy fluctuations yields a universal scaling. Interestingly, this scaling is independent of the detailed arrangement of rings and/or perforation within the catenated networks. This study provides a route to tune the elastic properties of 2D catenated DNA networks by modifying the underlying topology in a rational and highly controllable manner.
The topology of polymer molecules plays an important role in determining their static (e.g., size and shape) and dynamic (e.g., relaxation and diffusion) properties [1,2]. Tuning the topology of molecules is thus imperative in controlling the physicochemical properties tailored for desired applications [3]. Recently, a class of mechanically interlinked polymer molecules called polycatenanes has received considerable attention because of their unique rheological, mechanical, and thermal properties [4][5][6]. Catenated ring networks, or Olympic gels [7], have been developed with muscle-like properties [8] and reconfigurable topologies [9,10]. To understand the physics of catenated ring networks and hence engineer them for a desired application, a robust model system is required wherein topology can be tuned and studied at the single-molecule level. Nature provides such a model system in the form of giant 2D catenated ring networks called kinetoplasts [11,12].
Kinetoplast DNA (kDNA) from the trypanosomatid Crithidia fasciculata is a natural Olympic gel wherein approximately 5000 minicircles (∼ 2.5 kbp) and 25 maxicircles (∼ 40 kbp) are topologically interlocked in a quasi-2D plane [12]. The catenation valency of minicircles is approximately three [12]. Maxicircles are also linked to each other; however, their catenation valency is unknown. While the networks of maxicircles and minicircles are interlocked with each other, each network can be sustained independently [13]. The nucleotide sequence of minicircles and maxicircles is different, and this property can be exploited to selectively digest maxicircles or minicircles in a controllable manner. Moreover, there are two different classes of minicircles, namely the major class (accounting for about ∼ 90% of the network) and the minor class (accounting for about ∼ 10% of the network), where each class is homogeneous in its base pair sequence but different from the other class [14]. There is currently no understanding of how these various classes of rings affect the kDNA conformation, polymer dynamics or mechanical properties.
Recently our group has studied various aspects of kDNA and proposed it as a model system for 2D catenated polymers [15][16][17][18][19]. For example, it has been shown that in response to constriction or electric field stretching, kDNA behaves as an elastic sheet [15,16]. Its properties as a 2D polyelectrolyte have also been studied in response to the degree of confinement and ionic strength [17,18]. The unsatisfactory match of scaling laws based on a generalized Flory approach for a 2D polymer indicates that the physics of catenated polymers is yet to be fully realized. Furthermore, our group has demonstrated that the character of the polymer-and-salt-induced phase transition of kDNA is quite different from that of linear DNA [19].
In this Letter, we study the equilibrium size, shape and relaxation spectrum of kDNA as a function of its molecular topology. A number of restriction enzymes were used to selectively remove rings from the catenated network. Time-controlled digestion was also performed to randomly remove rings from the network to varying degrees in a controllable manner. Using this approach, we could selectively tune the topology of the kDNA while maintaining its overall molecular integrity, and thus generated a library of molecules with different topologies. Our results show that while topology has minimal effects on the kDNA size and shape, it significantly affects the relaxation behavior owing to the mechanical strength of the kDNA network. We envision that this strategy can provide opportunities to selectively tune the physical properties of catenated DNA networks and shed light on the much debated physics of 2D polymers. The results also provide rich information to guide the design of future 2D materials with specific properties.
The heterogeneity in base pair sequencing of different classes of rings and the precise action of restriction enzymes have been harnessed to tune the topology of kDNAs. A summary of the experimental methodology is presented in Figure 1. The restriction enzyme EcoRI has been used to cut the minor class of minicircles [14,20], PstI to cut the maxicircles [21], and a combination of EcoRI and PstI to cut both the minor class of minicircles and maxicircles. Next, we used MluI to digest the major class of minicircles as well as maxicircles [14], where fractional digestion of the network was controlled by digestion time. Single molecule experiments were conducted in straight microfluidic channels with a 2 µm height, 40 µm width and a 1 cm length. Our prior work [15,19] showed that this channel height orients kDNAs for ease of imaging and only moderately confines them. See Supplemental Material (SM) for experimental details [22].

Figure 2 presents a montage of fluorescence microscopy images of kDNAs. The ionic strength of the solution is 32.3 mM [17], which corresponds to a Debye length of 1.69 nm compared to the 2 nm bare width of dsDNA. Figure 2(a) shows representative images of control (pristine) kDNA and kDNA remodeled using various enzymes. The image displays a single frame (frame number 1137, chosen arbitrarily). Each row displays the typical variability in shape and size of kDNA molecules. All the molecules have a nearly elliptical shape with a clearly identified outline. The brightness of the outline (or edge) is attributed to the dense fibril network of rings at the periphery of kDNAs [12]. This observation is consistent with previous work from our group [15,18]. These molecules have a topologically flat conformation with positive Gaussian curvature [19,23]. Interestingly, we observe qualitatively that there is no major difference in the shape and size of the molecules after removal of the minor class of minicircles, the maxicircles, or a combination of both.
The influence of molecular topology was further quantified by measuring the radius of gyration (R_g) from the 2D projection of the fluorescence intensity of each kDNA, which maps the kDNA's extension in a plane [16,19]. Here, R_g was determined as the square root of the trace of the radius of gyration tensor [24]. Violin plots, representing the distribution of R_g, are presented in Figure 2(c). Each violin plot maps the probability distribution of R_g, where the width of a plot corresponds to the frequency of the data points in each region. The central solid line represents the median, while dotted lines represent the ends of the first and third quartiles. Statistical testing comparing the different distributions with the control gives p > 0.05, which implies that modifying kDNA topology by removing the minor class of minicircles and/or the maxicircles does not significantly influence the molecule's size. This non-trivial result implies, interestingly, that among the components of the kDNA network, it is the major class of the minicircles that plays the most important role in dictating the overall molecular size. This is also evidence that catenated polymers offer great variability in their topology but a very robust geometrical structure.
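As a concrete illustration of this measurement, the sketch below computes an intensity-weighted radius of gyration from a single 2D fluorescence image. It is a minimal example only, not the analysis code used for the paper; the pixel size and background threshold are hypothetical placeholders.

```python
import numpy as np

def radius_of_gyration(image, pixel_size_um=0.1, threshold=0.1):
    """Intensity-weighted R_g (in microns) from a 2D projection.

    image: 2D array of fluorescence intensities for one molecule.
    pixel_size_um and threshold are illustrative values only.
    """
    # Suppress background below an assumed fraction of the peak intensity.
    img = np.where(image > threshold * image.max(), image, 0.0)
    total = img.sum()
    ys, xs = np.indices(img.shape)
    # Intensity-weighted centroid.
    xc = (xs * img).sum() / total
    yc = (ys * img).sum() / total
    # Trace of the 2D gyration tensor: <dx^2> + <dy^2>.
    rg2 = (((xs - xc) ** 2 + (ys - yc) ** 2) * img).sum() / total
    return np.sqrt(rg2) * pixel_size_um
```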
To investigate this result further, we chose to tune the amount of the major class of minicircles. While there is no enzyme which can independently digest this type of ring, we used MluI, which digests both the major class of minicircles and the maxicircles. Representative images of molecules corresponding to different degrees of digestion, which is a function of digestion time, are shown in Figure 2(b). Molecules corresponding to a digestion time of ≥ 10 minutes show a change in shape compared to the control. For a reaction time of 20 minutes, below which molecular integrity is intact, significant visual deviation in the shape of the molecules can be seen. The R_g distributions are presented in Figure 2(c). Again, with regards to the molecular size of kDNA, statistical testing shows no significant difference in the R_g distribution. Another interesting feature is that we did not observe crumpling of the molecules even with the maximum digestion we could achieve while still maintaining the molecular integrity.
Irrespective of their topology, kDNA molecules undergo small shape fluctuations (Movies S1 and S2), and for a given environmental condition the fluctuation in shape is stationary. Here, thermal fluctuation is measured in terms of the variation in anisotropy as a function of time for individual molecules. The shape of a kDNA can be described as a wrinkled bowl with an elliptical cross-section [15]. Principal eigenvalues of the gyration tensor calculated from the projected fluorescence intensity give the lengths of the minor and major axes of the ellipse. The anisotropy then is defined as the ratio of the minor axis to the major axis. The anisotropy measured in this way provides information about symmetry, where a value of 1 represents a circular conformation and a value of 0 a straight line. Hence, studying the temporal behaviour of anisotropy not only provides the degree of thermal fluctuation but also the robustness/flexibility of the shape of the network manifold. The distribution of anisotropy for molecules in different samples is presented in Figure S3. Though there is significant molecule-to-molecule variation in anisotropy within a sample, the average value for different samples remains almost the same (p > 0.05). This also implies that the shape of the molecule does not change drastically by tuning its topology. The amplitude of fluctuation relative to its mean value, however, is significantly affected by tuning the topology. The variation in anisotropy as a function of time for quintessential molecules for the control and a MluI-digested molecule is presented in Figure 3(a). The amplitude of anisotropy fluctuation for a given molecule is quantified in terms of the standard deviation of anisotropy (σ) over a given time series. It is clear that the σ value increases after removing a fraction of rings from the network. The ensemble-averaged standard deviation (σ) of the anisotropy, calculated using equation S1, for different samples is presented in Table S1. To study the dynamic behavior of the kDNA network, we calculated the autocorrelation function C(τ) of the anisotropy A measured over T camera frames
$$C(\tau) = \frac{1}{\sigma^2 T}\sum_{t=1}^{T-\tau}\left(A(t) - \bar{A}\right)\left(A(t+\tau) - \bar{A}\right), \qquad (1)$$
where σ² is the anisotropy variance and Ā is the average value of anisotropy over a given time series. An ensemble of autocorrelation functions and their average for control molecules is presented in Figure 3(b). Qualitatively, the autocorrelation for each molecule has a fast decay at short lag times followed by a tail at longer lag times. Accordingly, the ensemble-averaged autocorrelation is fitted with the sum of two exponentials [SM, Equation S2]. The fast time scale (τ1) was found to be 0.177 ± 0.018 s (SD), which is of the order of the relaxation timescale for the much smaller λ-DNA [24,25]. The characteristic timescale of the longer time tail (τ2) was found to be 5.90 ± 0.81 s (SD).

Figure 4(a) presents the autocorrelation of anisotropy corresponding to the digestion of the minor class of minicircles and/or maxicircles. The inset of the figure shows a zoomed-in view of the autocorrelation for short lag times. It is clear from the inset that as rings are systematically removed from the network, the time constant for fast relaxation consequently increases. For the slower relaxation, however, the autocorrelation decreases relatively faster as we remove rings from the network, i.e., the time constant for slow relaxation decreases. Similar opposing trends between τ1 and τ2, but with more significant differences in magnitude, are obtained for molecules after time-dependent digestion of rings using MluI (Figure 4(b)).
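The anisotropy analysis described above can be sketched end-to-end as follows, assuming a segmented image stack (one 2D intensity array per camera frame) and an illustrative frame rate; this is a minimal reimplementation of the described pipeline, not the authors' code.

```python
import numpy as np
from scipy.optimize import curve_fit

def anisotropy(image):
    """Minor/major axis ratio from the intensity-weighted gyration tensor."""
    total = image.sum()
    ys, xs = np.indices(image.shape)
    xc, yc = (xs * image).sum() / total, (ys * image).sum() / total
    dx, dy = xs - xc, ys - yc
    gxx = (dx * dx * image).sum() / total
    gyy = (dy * dy * image).sum() / total
    gxy = (dx * dy * image).sum() / total
    eigvals = np.linalg.eigvalsh([[gxx, gxy], [gxy, gyy]])  # ascending order
    return np.sqrt(eigvals[0] / eigvals[1])  # axis lengths ~ sqrt(eigenvalues)

def autocorrelation(a):
    """Normalized autocorrelation C(tau) of an anisotropy series, as in Eq. (1)."""
    a = np.asarray(a, dtype=float)
    d = a - a.mean()
    T = len(a)
    return np.array([np.sum(d[:T - tau] * d[tau:]) / (d.var() * T)
                     for tau in range(T)])

def two_exponentials(t, f, tau1, tau2):
    """Double-exponential decay used to extract the fast and slow timescales."""
    return f * np.exp(-t / tau1) + (1 - f) * np.exp(-t / tau2)

# Example usage with an assumed frame rate of 30 fps:
# A = [anisotropy(frame) for frame in movie]   # movie: list of 2D arrays
# C = autocorrelation(A)
# lags = np.arange(len(C)) / 30.0              # lag times in seconds
# popt, _ = curve_fit(two_exponentials, lags[:300], C[:300],
#                     p0=(0.5, 0.2, 5.0), maxfev=10000)
```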
The first time constant (τ1), related to the faster decay of the autocorrelation function, corresponds to a conformational decorrelation, i.e., the time over which the network forgets its initial configuration. For a linear polymer, it also corresponds to the timescale over which a molecule diffuses a distance equivalent to its radius of gyration and is the relaxation time for the lowest mode in the Rouse model [26,27]. For free draining, the Rouse model considers the polymer as a set of beads connected by Gaussian springs and predicts the relaxation time as τ1 = ζ/k_tot, where ζ is the total viscous drag on the chain and k_tot (= k_BT/Δx²) is the effective spring constant, with Δx being the extension/contraction of the Gaussian springs from their equilibrium size; k_BT is the thermal energy. The topologically 2D network of kDNA can be modelled as a 2D network of self-avoiding beads connected with Gaussian springs (Figure S4) [28,29]. Hydrodynamic interactions become decorrelated at length scales proportional to the channel height [30]; accordingly, Rouse dynamics will dominate the shape fluctuations. Extending the Rouse formalism to a 2D network, it can be shown that τ1 ∼ ζσ²/k_BT [22].
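Written out, the estimate above amounts to the following short chain, where the identification of the spring extension Δx with the anisotropy fluctuation amplitude σ is an assumption made here for illustration:

```latex
\tau_1 \;=\; \frac{\zeta}{k_{\mathrm{tot}}}
\;=\; \frac{\zeta\,\Delta x^{2}}{k_B T}
\;\sim\; \frac{\zeta\,\sigma^{2}}{k_B T},
\qquad k_{\mathrm{tot}} = \frac{k_B T}{\Delta x^{2}},
\quad \Delta x \propto \sigma .
```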
The scaling of τ1 with σ for molecules digested with the two different classes of enzymes is shown in Figure S5. The slopes of the regression lines for the two sets of samples are very close, and they fall onto a master plot with a universal scaling (τ1 ∼ σ^0.96) (Figure 4(c)). The nature of this master plot implies that the scaling relation between τ1 and σ is independent of the detailed arrangements of the rings and/or perforation within the catenated network. This observation is in accord with studies of perforated sheets wherein the crumpling critical temperature depends on the perforation area, but is independent of the detailed arrangement of perforated holes within the sheet [31]. The relaxation time also encrypts the mechanical strength of the network. It has been shown that the bending rigidity (κ) and relaxation time (τ1) for vesicles are related as κ = ηr³/(πτ1), where η is the solvent viscosity and r is the equilibrium radius [32]. For a given η and r, κ ∼ τ1⁻¹. Hence, removing rings softens the network. The calculated values of κ for different samples are listed in Table S1.
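As a back-of-the-envelope check of this relation, the snippet below evaluates κ for assumed, order-of-magnitude inputs (water viscosity, a micron-scale equilibrium radius, and the control value of τ1 quoted above); these numbers are illustrative, not the paper's fitted values.

```python
import numpy as np

# Illustrative inputs only (not the paper's fitted values):
eta = 1.0e-3      # solvent viscosity, Pa*s (water)
r = 2.5e-6        # equilibrium radius, m (assumed micron-scale kDNA)
tau1 = 0.177      # fast relaxation time, s (control value quoted above)
kBT = 4.11e-21    # thermal energy at room temperature, J

kappa = eta * r**3 / (np.pi * tau1)   # bending rigidity, J
print(f"kappa ~ {kappa:.2e} J ~ {kappa / kBT:.1f} kBT")
```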
The deviation from the Rouse prediction (τ1 ∼ σ²) could be due to the oversimplistic nature of the spring model of kDNA. While the size of the molecules remains invariant, the number of beads in the modelled network should reduce in proportion to the degree of digestion, and hence so should the effective drag. The influence of digestion on the effective drag is discussed in the SM. The inset in Figure 4(c) and Figure S6 show that when we account for this modification in drag, the scaling exponent increases to 1.37 − 1.52, which is still smaller than the Rouse prediction. It should also be noted that though there are many theoretical and simulation studies available on 2D polymers [23,29,31,33–35], experimental results are scarce, and the predictions of the statistical models for 2D polymers have not been tested. To the best of our knowledge, this is the first experimental data presenting a universal scaling between two measured quantities for a 2D polymer.
The time constant (τ2) associated with the second mode of relaxation decreases as we remove rings from the network, in contrast to τ1. This trend cannot be rationalized by the spring model of the network in bulk. Our prior work on pristine kDNA conjectured that the longer time constant is the result of microfluidic confinement, which suppresses out-of-plane motion and can lead to long-lived local folds along the edge of the molecule [15]. Examples are shown in Figures S7 and S8 [22]. The energy barrier of conformational fold transitions is related to the κ of the network, which decreases monotonically with the fraction of digestion (Table S1). The decline of κ will also weaken the sense of confinement for the molecules and facilitate out-of-plane motion. Together, these contributions lead to a reduction of τ2 upon ring digestion.
In this Letter, we tuned the topology of a 2D catenated DNA network to understand its influence on equilibrium conformations and shape fluctuations. Though the molecular topology has minimal influence on the spatial extension of this 2D network, it significantly affects shape fluctuations. Remarkably, irrespective of the details of the molecular topology, the relationship between the time constant of shape fluctuations and the variance of shape anisotropy shows a universal scaling. Our results provide a route to selectively tune the physical properties of 2D catenated DNA networks, and in doing so, we discovered unanticipated trends in properties related to topological modifications which will hopefully spur further studies of this emergent class of polymers.
FIG. 1. (a) Schematic diagram of a kDNA, a catenated 2D network (Olympic gel) made of thousands of connected rings of circular DNA. (b) With the precise action of restriction enzymes, either a ring class or a combination of classes of rings can be removed from the network, hence selectively tuning the topology. Super-resolution confocal microscopy images (c) showing the pristine kDNA shape as a wrinkled bowl with positive Gaussian curvature, and (d) a representative molecule after digestion by the MluI enzyme for 10 minutes, where digestion creates observable perforations within the network. Scale bar is 5 µm.
FIG. 2. (a) and (b) Fluorescence microscopy images of kDNAs corresponding to the control and kDNAs remodelled with restriction enzymes. (c) Violin plots of the size distribution of kDNAs corresponding to the control and kDNAs remodelled with restriction enzymes. Scale bar is 5 µm.
FIG. 3. (a) The upper panel shows the anisotropy fluctuation for control (pristine) kDNA and the bottom panel for a molecule digested with MluI for 20 min. Though the mean value of anisotropy remains almost the same (∼ 0.9), the standard deviation (σ) varies for different topologies. (b) Semilogarithmic plot of the temporal autocorrelation function of anisotropy for the control molecules (N=49) and their ensemble average (black).
FIG. 4. Ensemble-averaged time autocorrelation function with (a) well-controlled removal of minicircles/maxicircles and (b) random removal of minicircles/maxicircles from the kDNA network. (c) Master curve (τ1 versus σ) showing the universal nature of τ1 ∼ σ^0.96. (d) τ2 versus σ showing the monotonic decrease of τ2. The inset in (c) presents the influence of the drag changing with degree of digestion (d) on the scaling exponent.
[1] D. M. Wulstein, K. E. Regan, J. Garamella, R. J. McGorty, and R. M. Robertson-Anderson, Topology-dependent anomalous dynamics of ring and linear DNA are sensitive to cytoskeleton crosslinking, Science Advances 5, eaay5912 (2019).
[2] B. W. Soh, A. R. Klotz, R. M. Robertson-Anderson, and P. S. Doyle, Long-Lived Self-Entanglements in Ring Polymers, Physical Review Letters 123, 048002 (2019).
[3] M. D. Frank-Kamenetskii and A. V. Vologodskii, Topological aspects of the physics of polymers: The theory and its biophysical applications, Soviet Physics Uspekhi 24, 679 (1981).
[4] L. F. Hart, J. E. Hertzog, P. M. Rauscher, B. W. Rawe, M. M. Tranquilli, and S. J. Rowan, Material properties and applications of mechanically interlocked polymers, Nature Reviews Materials 6, 508 (2021).
[5] G. Liu, P. M. Rauscher, B. W. Rawe, M. M. Tranquilli, and S. J. Rowan, Polycatenanes: synthesis, characterization, and physical understanding, Chemical Society Reviews (2022).
[6] M. Lang, J. Fischer, M. Werner, and J.-U. Sommer, Swelling of olympic gels, Physical Review Letters 112, 238001 (2014).
[7] E. Raphaël, C. Gay, and P. G. de Gennes, Progressive construction of an "Olympic" gel, Journal of Statistical Physics 89, 111 (1997).
[8] P. Hu, J. Madsen, Q. Huang, and A. L. Skov, Elastomers without Covalent Cross-Linking: Concatenated Rings Giving Rise to Elasticity, ACS Macro Letters 10, 1458 (2020).
[9] Y. S. Kim, B. Kundukad, A. Allahverdi, L. Nordensköld, P. S. Doyle, and J. R. van der Maarel, Gelation of the genome by topoisomerase II targeting anticancer agents, Soft Matter 9, 1656 (2013).
[10] B. A. Krajina, A. Zhu, S. C. Heilshorn, and A. J. Spakowitz, Active DNA Olympic Hydrogels Driven by Topoisomerase Activity, Physical Review Letters 121, 148001 (2018).
[11] T. A. Shapiro and P. T. Englund, The Structure and Replication of Kinetoplast DNA, Annual Review of Microbiology 49, 117 (1995).
[12] J. Chen, C. A. Rauch, J. H. White, P. T. Englund, and N. R. Cozzarelli, The topology of the kinetoplast DNA network, Cell 80, 61 (1995).
[13] T. A. Shapiro, Kinetoplast DNA maxicircles: Networks within networks, Proceedings of the National Academy of Sciences of the United States of America 90, 7809 (1993).
[14] L. Birkenmeyer, H. Sugisaki, and D. S. Ray, The majority of minicircle DNA in Crithidia fasciculata strain CF-C1 is of a single class with nearly homogeneous DNA sequence, Nucleic Acids Research 13, 7107 (1985).
[15] A. R. Klotz, B. W. Soh, and P. S. Doyle, Equilibrium structure and deformation response of 2D kinetoplast sheets, Proceedings of the National Academy of Sciences of the United States of America 117, 121 (2020).
[16] B. W. Soh and P. S. Doyle, Deformation Response of Catenated DNA Networks in a Planar Elongational Field, ACS Macro Letters 9, 944 (2020).
[17] B. W. Soh, A. Khorshid, D. Al Sulaiman, and P. S. Doyle, Ionic Effects on the Equilibrium Conformation of Catenated DNA Networks, Macromolecules 53, 8502 (2020).
[18] B. W. Soh and P. S. Doyle, Equilibrium Conformation of Catenated DNA Networks in Slitlike Confinement, ACS Macro Letters 10, 880 (2021).
[19] I. Yadav, D. Al Sulaiman, B. W. Soh, and P. S. Doyle, Phase Transition of Catenated DNA Networks in Poly(ethylene glycol) Solutions, ACS Macro Letters 10, 1429 (2021).
[20] J. H. J. Hoeijmakers, B. Schoutsen, and P. Borst, Kinetoplast DNA in the Insect Trypanosomes Crithidia luciliae and Crithidia fasciculata, Plasmid 7, 199 (1982).
[21] L. R. Carpenter and P. T. Englund, Kinetoplast maxicircle DNA replication in Crithidia fasciculata and Trypanosoma brucei, Molecular and Cellular Biology 15, 6794 (1995).
[22] See Supplemental Material for experimental details, data analysis and statistics, theoretical derivations, representative movies, additional data for scaling laws, and a discussion of the second relaxation time (2022).
[23] J. M. Polson, E. J. Garcia, and A. R. Klotz, Flatness and intrinsic curvature of linked-ring membranes, Soft Matter 17, 10505 (2021), arXiv:2110.13111.
[24] C. C. Hsieh, A. Balducci, and P. S. Doyle, An experimental study of DNA rotational relaxation time in nanoslits, Macromolecules 40, 5196 (2007).
[25] W. Reisner, K. J. Morton, R. Riehn, Y. M. Wang, Z. Yu, M. Rosen, J. C. Sturm, S. Y. Chou, E. Frey, and R. H. Austin, Statics and dynamics of single DNA molecules confined in nanochannels, Physical Review Letters 94, 196101 (2005).
[26] M. Doi and S. F. Edwards, The Theory of Polymer Dynamics (Oxford University Press, New York, 1988).
[27] I. Yadav, W. Rosencrans, R. Basak, J. A. van Kan, and J. R. C. van der Maarel, Intramolecular dynamics of dsDNA confined to a quasi-one-dimensional nanochannel, Physical Review Research 2, 013294 (2020).
[28] Y. Kantor, M. Kardar, and D. R. Nelson, Statistical mechanics of tethered surfaces, Physical Review Letters 57, 791 (1986).
[29] Y. Kantor and D. R. Nelson, Crumpling Transition in Polymerized Membranes, Physical Review Letters 58, 2774 (1987).
[30] J. J. Jones, J. R. van der Maarel, and P. S. Doyle, Intrachain dynamics of large dsDNA confined to slitlike channels, Physical Review Letters 110, 068101 (2013).
[31] D. Yllanes, S. S. Bhabesh, D. R. Nelson, and M. J. Bowick, Thermal crumpling of perforated two-dimensional sheets, Nature Communications 8, 1 (2017), arXiv:1705.07379.
[32] H. Zhou, B. B. Gabilondo, W. Losert, and W. van de Water, Stretching and relaxation of vesicles, Physical Review E 83, 011905 (2011).
[33] W. Helfrich, Elastic Properties of Lipid Bilayers: Theory and Possible Experiments, Zeitschrift für Naturforschung C 28, 693 (1973).
[34] M. Plischke and D. Boal, Absence of a crumpling transition in strongly self-avoiding tethered membranes, Physical Review A 38, 4943 (1988).
[35] M. J. Bowick, A. Cacciuto, G. Thorleifsson, and A. Travesset, Universality classes of self-avoiding fixed-connectivity membranes, European Physical Journal E 5, 149 (2001).
| []
|
[
"Warm β-exponential inflation and the swampland conjectures",
"Warm β-exponential inflation and the swampland conjectures",
"Warm β-exponential inflation and the swampland conjectures",
"Warm β-exponential inflation and the swampland conjectures"
]
| [
"F B M Dos Santos \nDepartamento de Física\nUniversidade Federal do Rio Grande do Norte\n59072-970Natal -RNBrasil\n",
"R Silva \nDepartamento de Física\nUniversidade Federal do Rio Grande do Norte\n59072-970Natal -RNBrasil\n\nDepartamento de Física\nUniversidade do Estado do Rio Grande do Norte\n59610-210MossoróBrasil\n",
"S Santos Da Costa \nIstituto Nazionale di Fisica Nucleare (INFN) Sezione di Pisa\nLargo B. Pontecorvo 356127PisaItaly\n",
"M Benetti \nScuola Superiore Meridionale\nLargo San Marcellino 1080138NapoliItaly\n\nIstituto Nazionale di Fisica Nucleare (INFN) Sezione di Napoli\nComplesso Universitario di Monte Sant'Angelo\nEdificio G, Via CinthiaI-80126NapoliItaly\n",
"J S Alcaniz \nDepartamento de Astronomia\nObservatório Nacional\nRua General José Cristino\n20921-400Rio de Janeiro-RJBrasil\n",
"F B M Dos Santos \nDepartamento de Física\nUniversidade Federal do Rio Grande do Norte\n59072-970Natal -RNBrasil\n",
"R Silva \nDepartamento de Física\nUniversidade Federal do Rio Grande do Norte\n59072-970Natal -RNBrasil\n\nDepartamento de Física\nUniversidade do Estado do Rio Grande do Norte\n59610-210MossoróBrasil\n",
"S Santos Da Costa \nIstituto Nazionale di Fisica Nucleare (INFN) Sezione di Pisa\nLargo B. Pontecorvo 356127PisaItaly\n",
"M Benetti \nScuola Superiore Meridionale\nLargo San Marcellino 1080138NapoliItaly\n\nIstituto Nazionale di Fisica Nucleare (INFN) Sezione di Napoli\nComplesso Universitario di Monte Sant'Angelo\nEdificio G, Via CinthiaI-80126NapoliItaly\n",
"J S Alcaniz \nDepartamento de Astronomia\nObservatório Nacional\nRua General José Cristino\n20921-400Rio de Janeiro-RJBrasil\n"
]
| [
"Departamento de Física\nUniversidade Federal do Rio Grande do Norte\n59072-970Natal -RNBrasil",
"Departamento de Física\nUniversidade Federal do Rio Grande do Norte\n59072-970Natal -RNBrasil",
"Departamento de Física\nUniversidade do Estado do Rio Grande do Norte\n59610-210MossoróBrasil",
"Istituto Nazionale di Fisica Nucleare (INFN) Sezione di Pisa\nLargo B. Pontecorvo 356127PisaItaly",
"Scuola Superiore Meridionale\nLargo San Marcellino 1080138NapoliItaly",
"Istituto Nazionale di Fisica Nucleare (INFN) Sezione di Napoli\nComplesso Universitario di Monte Sant'Angelo\nEdificio G, Via CinthiaI-80126NapoliItaly",
"Departamento de Astronomia\nObservatório Nacional\nRua General José Cristino\n20921-400Rio de Janeiro-RJBrasil",
"Departamento de Física\nUniversidade Federal do Rio Grande do Norte\n59072-970Natal -RNBrasil",
"Departamento de Física\nUniversidade Federal do Rio Grande do Norte\n59072-970Natal -RNBrasil",
"Departamento de Física\nUniversidade do Estado do Rio Grande do Norte\n59610-210MossoróBrasil",
"Istituto Nazionale di Fisica Nucleare (INFN) Sezione di Pisa\nLargo B. Pontecorvo 356127PisaItaly",
"Scuola Superiore Meridionale\nLargo San Marcellino 1080138NapoliItaly",
"Istituto Nazionale di Fisica Nucleare (INFN) Sezione di Napoli\nComplesso Universitario di Monte Sant'Angelo\nEdificio G, Via CinthiaI-80126NapoliItaly",
"Departamento de Astronomia\nObservatório Nacional\nRua General José Cristino\n20921-400Rio de Janeiro-RJBrasil"
]
| []
| We investigate theoretical and observational aspects of a warm inflation scenario driven by the β-exponential potential, which generalizes the well-known power law inflation. In such a scenario, the decay of the inflaton field into radiation happens during the inflationary phase. In our study, we consider a dissipation coefficient (Γ) with cubic dependence on the temperature (T ) and investigate the consequences in the inflationary dynamics, focusing on the impact on the spectral index n s , its running n run and tensor-to-scalar ratio r. We find it possible to realize inflation in agreement with current cosmic microwave background data in weak and strong dissipation regimes. We also investigate theoretical aspects of the model in light of the swampland conjectures, as warm inflation in the strong dissipation regime has been known as a way to satisfy the three conditions currently discussed in the literature. We find that when Γ ∝ T 3 , the β-exponential model can be accommodated into the conjectures. arXiv:2209.06153v2 [astro-ph.CO] 1 Dec 2022 | 10.1140/epjc/s10052-023-11329-w | [
"https://export.arxiv.org/pdf/2209.06153v2.pdf"
]
| 252,211,891 | 2209.06153 | 80c2580c41d627f56f704d85bddda3eb867f0f4d |
Warm β-exponential inflation and the swampland conjectures
F B M Dos Santos
Departamento de Física
Universidade Federal do Rio Grande do Norte
59072-970Natal -RNBrasil
R Silva
Departamento de Física
Universidade Federal do Rio Grande do Norte
59072-970Natal -RNBrasil
Departamento de Física
Universidade do Estado do Rio Grande do Norte
59610-210MossoróBrasil
S Santos Da Costa
Istituto Nazionale di Fisica Nucleare (INFN) Sezione di Pisa
Largo B. Pontecorvo 356127PisaItaly
M Benetti
Scuola Superiore Meridionale
Largo San Marcellino 1080138NapoliItaly
Istituto Nazionale di Fisica Nucleare (INFN) Sezione di Napoli
Complesso Universitario di Monte Sant'Angelo
Edificio G, Via CinthiaI-80126NapoliItaly
J S Alcaniz
Departamento de Astronomia
Observatório Nacional
Rua General José Cristino
20921-400Rio de Janeiro-RJBrasil
Warm β-exponential inflation and the swampland conjectures
We investigate theoretical and observational aspects of a warm inflation scenario driven by the β-exponential potential, which generalizes the well-known power law inflation. In such a scenario, the decay of the inflaton field into radiation happens during the inflationary phase. In our study, we consider a dissipation coefficient (Γ) with cubic dependence on the temperature (T) and investigate the consequences in the inflationary dynamics, focusing on the impact on the spectral index n_s, its running n_run and the tensor-to-scalar ratio r. We find it possible to realize inflation in agreement with current cosmic microwave background data in weak and strong dissipation regimes. We also investigate theoretical aspects of the model in light of the swampland conjectures, as warm inflation in the strong dissipation regime has been known as a way to satisfy the three conditions currently discussed in the literature. We find that when Γ ∝ T³, the β-exponential model can be accommodated into the conjectures.
I. INTRODUCTION
The ΛCDM model, combined with the idea of primordial inflation, constitutes a remarkable description of the universe's evolution from very early to late times. In particular, inflation solves some of the problems that arise in the big bang theory by assuming a rapid expansion of the universe while generating initial conditions for the subsequent cosmic evolution [1][2][3][4][5][6]. In the inflationary framework, the inflaton field is responsible for the early accelerated expansion, whose evolution is driven by a specific potential function. Naturally, over the years, many possible candidates have appeared [7], some of which are viewed as viable models, as they agree with current observations provided by cosmic microwave background (CMB) experiments [8,9].
After inflation, however, one needs to direct the attention to a reheating period that connects the inflationary era to radiation dominance [10,11]. In this epoch, the inflaton couples to other fields such that the remaining energy is converted to create new particles that compose the radiation energy density. While much progress has been made in the description of this era and its connection with CMB data [12][13][14][15][16][17][18][19][20][21][22][23][24][25], the exact mechanism is still unknown, since many factors may appear, and they can be very dependent on the inflationary model in consideration. In this scenario of cold inflation, the coupling to other fields is neglected until inflation ends. On the other hand, an alternative is to consider that this coupling is relevant during inflation, which characterizes the warm inflation picture [26][27][28]. The coupling of the inflaton to other fields creates a thermal bath in which the production of relativistic particles reheats the universe, so that the universe can go smoothly to a radiation-dominated era by the end of inflation. Indeed, a dissipative term in the equations of motion provides extra friction, which implies a modification in the description of the accelerated expansion and, consequently, in the observational predictions. The warm inflation picture has been widely studied in recent literature. In general, from a phenomenological perspective, models driven by potentials that are disfavored by data in the cold inflation picture may become viable, as, for example, scenarios described by monomial potentials [29][30][31][32][33][34] (see [35,36] for the predictions of other known models). This happens because, as the dissipation coefficient introduces another term of friction in the equation of motion of the inflaton, an extra factor appears in the slow-roll parameters, so that they can be suppressed more effectively, even when the potential is steep [37,38]. From a more fundamental point of view, warm inflation might arise from concrete particle physics scenarios [39][40][41][42], being able to sustain particle production leading to a 'graceful exit' to the radiation era.
This work investigates the warm inflation scenario driven by a class of β-exponential potentials that generalizes the well-known power law inflation [43]. As shown in [44], such a model can arise from brane dynamics and showed a good agreement with Planck 2015 data. An updated analysis with Planck 2018 and clustering data showed that the model with a non-minimal coupling of the field with gravity seems to be a more viable approach for this class of models [45]. Here we investigate how the predictions of the β-exponential inflationary model change when considering the warm inflation picture, as recent studies have investigated the viability of exponential potentials in this context. For example, in [46], a pure exponential potential was considered in the strong dissipation regime, with a dissipative coefficient Γ ∝ T³; a coupling of this type was motivated in [41], where the assumption is that the scalar field has an axionic coupling to gauge bosons. The application to an exponential potential, as investigated in [46], showed that either inflation still would not end by violation of the slow-roll conditions or the predicted spectral index was too red-tilted in the strong dissipative regime. The distortion in the exponential form caused by the β-exponential function may address both issues. It is worth noting that another generalization of the exponential function was considered in [37,47], showing that runaway-type potentials are also an option in the warm inflation picture. In particular, the tensor-to-scalar ratio becomes significantly suppressed if one wants to achieve the central Planck value for n_s.
Another point of investigation concerns the recently proposed swampland conjectures [48][49][50][51]. In this regard, some works constrain scalar field theories based on the assumption that they can be embedded in more general theories, such as string theory [52][53][54][55][56][57]. The discussion started from the difficulty in obtaining de Sitter vacua in these theories [48,58,59]; therefore, in order for a model to be theoretically consistent, it should obey certain limits to stay in the landscape of well-motivated scenarios. In particular, it was shown in [37,46,55,60–64] that warm inflation realized in the strong dissipative regime makes it possible for all current conjectures to be satisfied. The warm inflation idea combined with extensions of the canonical picture, as done in Refs. [61,65,66], can also be considered to recover concordance with observations. Adding to these recent interesting studies, in this work we want to determine how far from a simple exponential form one may go while still providing reasonable predictions. In this way, we study whether the β-exponential model can be another option in a warm inflation construction in which the strong regime can be realized while being consistent with CMB data, and assess its impact on the swampland conjectures.
This work is organized in the following manner: in Section II, we review warm inflation and the respective slow-roll equations. In Section III, we introduce the β-exponential model into the warm inflation framework, while in Section IV, we discuss whether the model is consistent with the swampland conjectures. To conclude, in Section V, we present our considerations.
II. WARM INFLATION
The dissipation of the inflaton into other particles is often modeled by the presence of a dissipation coefficient Γ in the equation of motion of the scalar field. This means that the Klein-Gordon equation for φ becomes
$$\ddot{\phi} + (3H + \Gamma)\dot{\phi} + V_{,\phi} = 0, \qquad (1)$$
during inflation. Here, a dot denotes a derivative with respect to time, while the subscript ,φ represents a derivative with respect to the field. We see that the additional term with Γ constitutes an additional source of friction added to the Hubble one. Since the field decays into radiation, the energy density evolution comes from the conservation of the energy-momentum tensor as
$$\dot{\rho}_r + 4H\rho_r = \Gamma\dot{\phi}^2, \qquad \dot{\rho}_\phi + 3H(\rho_\phi + P_\phi) = -\Gamma\dot{\phi}^2, \qquad (2)$$
where we note that the term proportional to Γ represents the energy transferred to the radiation particles from the inflaton.
To close the set of equations, we need the Friedmann equation, which gives us the background expansion
$$3H^2 M_p^2 = \frac{1}{2}\dot{\phi}^2 + V + \rho_r, \qquad (3)$$
with M_p = 1/√(8πG) being the reduced Planck mass. The usual procedure in single scalar field inflation is to apply the slow-roll approximation, in which the field slowly rolls down the potential; this is achieved by neglecting higher-order derivatives in the equations of motion and assuming that the potential dominates the energy budget of the field. As a consequence, Eqs. (1)–(3) reduce to

$$\dot{\phi} \simeq -\frac{V_{,\phi}}{3H(1+Q)}, \qquad H^2 \simeq \frac{V}{3M_p^2}, \qquad 4H\rho_r \simeq \Gamma\dot{\phi}^2. \qquad (4)$$
Here, we have introduced the ratio Q ≡ Γ/(3H), as is standard practice, and we have neglected ρ̇_r by assuming that the thermal equilibrium of the bath is quickly achieved. We note that the value of Q determines how effective the dissipation is: for Γ < H, we have Q < 1, characterizing the weak dissipative regime; on the other hand, if Γ > H, then Q > 1, and inflation proceeds in the strong dissipative regime.
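To make the dynamics concrete, here is a minimal numerical sketch (not the authors' code) that integrates the exact background system, Eqs. (1)–(3), in units M_p = 1 for a toy quadratic potential and a constant Γ; all parameter values are placeholders chosen to sit in the weak dissipative regime.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy potential and constant dissipation coefficient; illustrative only.
m2 = 2.0e-12                       # assumed mass^2 in Planck units
V = lambda phi: 0.5 * m2 * phi**2
dV = lambda phi: m2 * phi
Gamma = 1.0e-7                     # constant Gamma, weak-regime illustration

def rhs(t, y):
    phi, v, rho_r, N = y
    H = np.sqrt((0.5 * v**2 + V(phi) + rho_r) / 3.0)   # Eq. (3)
    return [v,
            -(3.0 * H + Gamma) * v - dV(phi),          # Eq. (1)
            -4.0 * H * rho_r + Gamma * v**2,           # Eq. (2)
            H]                                         # e-fold counter dN/dt = H

y0 = [16.0, 0.0, 1e-20, 0.0]       # assumed initial data
sol = solve_ivp(rhs, (0.0, 2e7), y0, rtol=1e-10, atol=1e-25)
phi, v, rho_r, N = sol.y
H = np.sqrt((0.5 * v**2 + V(phi) + rho_r) / 3.0)
Q = Gamma / (3.0 * H)              # dissipation ratio along the trajectory
```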
In the same manner as in the cold inflation picture, one can derive slow-roll parameters expressed as

$$\epsilon_W \equiv \frac{\epsilon_V}{1+Q} = \frac{M_p^2}{2(1+Q)}\left(\frac{V_{,\phi}}{V}\right)^2, \qquad \eta_W \equiv \frac{\eta_V}{1+Q} = \frac{M_p^2}{1+Q}\frac{V_{,\phi\phi}}{V}, \qquad \beta_W \equiv \frac{M_p^2}{1+Q}\frac{\Gamma_{,\phi}V_{,\phi}}{\Gamma V}, \qquad (5)$$
and during inflation, ε_W, |η_W|, |β_W| ≪ 1. One interesting aspect of the slow-roll parameters in Eq. (5) is that the slow-roll regime can be properly achieved even for steep potentials. When Q is relevant, all three parameters can take smaller values, thus making the slow-roll regime possible. The dissipation coefficient Γ is usually dependent on the temperature of the bath; therefore, let us determine it, especially since it is connected to the final temperature that starts the radiation era. By assuming quick thermalization, the radiation energy density can be written in terms of its temperature as
$$\rho_r = \tilde{g}\, T^4, \qquad (6)$$
where g̃ ≡ π²g_*/30 and g_* is the number of relativistic degrees of freedom of the fields during inflation. This thermal equilibrium implies that T > H which, along with the slow-roll conditions, is necessary for warm inflation. We can combine the expression for ρ_r in Eq. (4) with the one in Eq. (6) to obtain the ratio T/H as a function of Q and φ:
$$\frac{T}{H} = \left[\frac{9\,Q\,M_p^6\,V_{,\phi}^2}{4\tilde{g}\,(1+Q)^2\,V^3}\right]^{1/4}, \qquad (7)$$
where we have also used the equations for φ̇ and H in Eq. (4). The scalar power spectrum is also affected by dissipation during inflation. It has the form [29,67]

$$\Delta^2_{\mathcal{R}} = \left(\frac{H_*^2}{2\pi\dot{\phi}_*}\right)^2\left[1 + 2n_{BE} + \frac{2\sqrt{3}\,\pi Q_*}{\sqrt{3 + 4\pi Q_*}}\,\frac{T_*}{H_*}\right] G(Q_*), \qquad (8)$$

where n_BE = 1/(e^{H_*/T_*} − 1) is the Bose-Einstein distribution function, and G(Q_*) is an enhancement term that has been argued to be present depending on the dependence of Γ on the temperature [39,67,68]; it arises from the interaction of the inflaton with radiation. In this work, we consider a cubic dependence on T; a numerical fit of G(Q) for this case has been found as [39,68]

$$G_{\rm cubic}(Q_*) = 1 + 4.981\,Q_*^{1.946} + 0.127\,Q_*^{4.330}. \qquad (9)$$
We note that by taking the limits T → 0, Q → 0 in Eqs. (8) and (9), one recovers the cold inflation limit. All quantities are computed at the pivot scale k = k_*, at which the CMB scale leaves the horizon, and for which the amplitude of the scalar power spectrum is estimated as log(10¹⁰ Δ²_R) = 3.044 ± 0.014 [8]. As for the tensor power spectrum, it is argued that we can approximate it as having the same form as in cold inflation [35,67]:
$$\Delta^2_h = \frac{2H_*^2}{\pi^2 M_p^2}, \qquad (10)$$
so that we can readily write the tensor-to-scalar ratio as
$$r = \frac{\Delta^2_h}{\Delta^2_{\mathcal{R}}}. \qquad (11)$$
The spectral index n_s and its running n_run can be derived from Eq. (8) as
$$n_s - 1 = \frac{d\log \Delta^2_{\mathcal{R}}}{d\log k} \simeq \frac{d\log \Delta^2_{\mathcal{R}}}{dN}, \qquad n_{\rm run} = \frac{d^2\log \Delta^2_{\mathcal{R}}}{d\log k^2} \simeq \frac{d^2\log \Delta^2_{\mathcal{R}}}{dN^2}, \qquad (12)$$
where N is the number of e-folds. In Appendices A and B we show a general way of deriving an expression for n_s and n_run for a given dissipation coefficient Γ.
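A schematic implementation of Eqs. (8), (9) and (12) is given below, assuming one already has the horizon-crossing quantities H, φ̇, Q and T along a trajectory parameterized by N; it is a sketch for orientation, not the pipeline used to produce the figures.

```python
import numpy as np

def G_cubic(Q):
    """Enhancement factor for Gamma ∝ T^3, Eq. (9)."""
    return 1.0 + 4.981 * Q**1.946 + 0.127 * Q**4.330

def scalar_power(H, dphi, Q, T):
    """Warm-inflation scalar power spectrum, Eq. (8); inputs at horizon crossing."""
    n_BE = 1.0 / np.expm1(H / T)               # Bose-Einstein occupation
    bracket = (1.0 + 2.0 * n_BE
               + (2.0 * np.sqrt(3.0) * np.pi * Q
                  / np.sqrt(3.0 + 4.0 * np.pi * Q)) * (T / H))
    return (H**2 / (2.0 * np.pi * dphi))**2 * bracket * G_cubic(Q)

def spectral_index_and_running(N, ln_Delta2):
    """n_s and n_run via finite differences of ln(Δ²_R) in N, as in Eq. (12)."""
    d1 = np.gradient(ln_Delta2, N)
    d2 = np.gradient(d1, N)
    return 1.0 + d1, d2
```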
III. WARM β-INFLATIONARY MODEL
The application we consider in this work is to the β-exponential model [43]:
$$V(\phi) = V_0\left[1 - \lambda\beta\,\frac{\phi}{M_p}\right]^{1/\beta}, \qquad (13)$$
where λ is a dimensionless constant, and β is another constant that controls the deviation from the pure exponential function, recovered for β → 0. We note that β should not be confused with β_W, which is one of the slow-roll parameters in Eq. (5).
This model was first proposed as a phenomenological generalization of the exponential potential V = V_0 e^{−λφ/M_p}, for which dissipative effects were studied in [69].
The β-exponential potential is able to achieve the breakdown of the slow-roll regime, with the end of inflation determined by ε_V = 1, and is able to predict the low values of the tensor-to-scalar ratio r observed by recent experiments as the β value increases [43]. Further investigations of the model resulted in a concordance at the 2σ level with the Planck 2015 n_s − r data and reasonably favored results when the model was statistically compared with the ΛCDM one [44]. Also, a more fundamental theoretical motivation for the model was found from brane dynamics; as a result, the ratio β/λ becomes associated with the brane tension σ. This relation imposes a constraint on both β and λ, where essentially β must be larger than λ, with β ≥ 1/2. In [44,45], this limit is satisfied in the priors and numerical analysis results. Our analysis will also check if this constraint is still respected when computing the inflationary observables.
The slow-roll parameters for the model are
$$\epsilon_w = \frac{\lambda^2}{2(1+Q)\left(1 - \lambda\beta\,\phi/M_p\right)^2}, \qquad \eta_w = \frac{\lambda^2(1-\beta)}{(1+Q)\left(1 - \lambda\beta\,\phi/M_p\right)^2}, \qquad \beta_w = \epsilon_w\left[1 + \frac{2\left(1 - \lambda\beta\,\phi/M_p\right)}{\lambda}\,\frac{M_p\,Q_{,\phi}}{Q}\right]. \qquad (14)$$
We determine the end of inflation for both φ and Q. The ratio T/H is computed from Eq. (7) as

$$\frac{T}{H} = \left[\frac{9}{4\tilde{g}}\,\frac{Q}{(1+Q)^2}\,\frac{M_p^4}{V_0}\,\frac{\lambda^2}{\left(1 - \lambda\beta\,\phi/M_p\right)^{1/\beta+2}}\right]^{1/4}, \qquad (15)$$

where the dependence on Γ is implicit in Q. Hereafter, we shall apply the model to a dissipation coefficient with cubic dependence on the temperature. The general strategy is as follows: we first isolate an expression for Q_end from the slow-roll condition by setting ε_W = 1. Then, by finding the general relation between Q and φ for each Γ, it is possible to compute a value for φ_end numerically for each set of parameters. Next, we use Q_end, φ_end as initial conditions for the first-order differential equations that are evolved back to N_*, so that we obtain Q_* and φ_* and finally compute n_s, n_run and r for a given λ, β (Appendix A). We will then see the impact of considering different λ and β in the n_s − r and n_s − n_run planes and on the temperature at the end of inflation. Also, we will see whether inflation with this model is favored in the weak or strong regime, to find out if the model addresses the swampland conjectures to be discussed in Section IV.
The dissipation coefficient we will consider has a cubic dependence on the temperature, with the form
$$\Gamma = C_\phi\,\frac{T^3}{\phi^2}. \qquad (16)$$
This form can be motivated from a supersymmetric setting, where the corresponding superpotential involves interacting Φ, X, and Y superfields [70][71][72][73][74]. It comes from the possibility of a two-stage decay φ → χ → yy, in which the field χ acts as an intermediate to decay into light particles y. The potential obtained from the general superpotential generates a mass for the bosonic and fermionic components of the field X, denoted by χ, from which it is possible to derive a low-temperature approximation (T ≪ m_χ) for Γ, as studied in [71]. For large field multiplicities, and by knowing that m_χ² = 2g²φ², one arrives at the form given by Eq. (16) with C_φ ≃ h²N_Y N_X/(16π) [74]. Due to the coupling of the inflaton with radiation, one would expect corrections to the inflationary potential to appear, altering its form and potentially spoiling the slow-roll regime. For the setting described, it is possible to estimate both fermionic and bosonic contributions to the inflationary potential due to these couplings. In [70], this issue was discussed, showing that while these corrections might be relevant, the effect on the slope of the potential is properly suppressed in the low-temperature regime. Thus, inflationary dynamics under Eq. (16) can be well approximated as unaffected by radiative and thermal effects [29].
Proceeding with the predictions of the β-exponential model, one can find that the relation between Q and φ is given by
$$Q(1+Q)^6 = A\,\lambda^6\left(\frac{M_p}{\phi}\right)^{8}\left(1 - \lambda\beta\,\frac{\phi}{M_p}\right)^{1/\beta - 6}, \qquad (17)$$
with
$$A \equiv \frac{C_\phi^4}{576\,\tilde{g}^3}\,\frac{V_0}{M_p^4}, \qquad (18)$$
being another constant that encapsulates the dependence of C_φ on the amplitude V_0. The differential equation for the evolution of Q in this case is
$$\frac{dQ}{dN} = \frac{\lambda^2 Q}{(1+7Q)\left(1 - \lambda\beta\phi/M_p\right)^2}\left[6\beta - 1 - \left(1 - \lambda\beta\,\frac{\phi}{M_p}\right)\frac{M_p}{\lambda\phi}\right], \qquad (19)$$
meaning that the growth of Q with N is ensured while

$$\frac{\phi}{M_p} > \frac{1}{\lambda(7\beta - 1)}. \qquad (20)$$

We then use Eq. (17) along with the slow-roll condition at the end of inflation, ε_W = 1, to compute φ_end and Q_end, and evolve the differential equations for φ, in Eq. (4), and for Q, in Eq. (19).
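The strategy described above can be sketched as follows; the parameter values (λ, β, A, N_*) are placeholders, units are M_p = 1, and the slow-roll relation dφ/dN = λ/[(1+Q)(1−λβφ/M_p)] follows from Eq. (4) with the β-exponential potential. This is an illustrative implementation, not the authors' code.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Illustrative parameters (M_p = 1); lam, beta, A, N_star are assumptions.
lam, beta, A, N_star = 0.05, 0.25, 1e-12, 55.0
u = lambda phi: 1.0 - lam * beta * phi          # shorthand for 1 - λβφ/M_p

def Q_of_phi(phi):
    """Solve Q(1+Q)^6 = A λ^6 (M_p/φ)^8 u^{1/β-6}  (Eq. 17) for Q."""
    rhs = A * lam**6 * phi**-8 * u(phi)**(1.0 / beta - 6.0)
    return brentq(lambda Q: Q * (1.0 + Q)**6 - rhs, 1e-30, 1e12)

def eps_W(phi, Q):
    """Slow-roll parameter of Eq. (14)."""
    return lam**2 / (2.0 * (1.0 + Q) * u(phi)**2)

# End of inflation: eps_W = 1 with Q tied to φ through Eq. (17).
phi_end = brentq(lambda p: eps_W(p, Q_of_phi(p)) - 1.0,
                 1e-3, (1.0 - 1e-6) / (lam * beta))
Q_end = Q_of_phi(phi_end)

# Integrate φ and Q backwards by N_star e-folds (Eqs. 4 and 19).
def rhs(N, y):
    phi, Q = y
    dphi = lam / ((1.0 + Q) * u(phi))           # slow-roll dφ/dN
    dQ = (lam**2 * Q / ((1.0 + 7.0 * Q) * u(phi)**2)
          * (6.0 * beta - 1.0 - u(phi) / (lam * phi)))   # Eq. (19)
    return [dphi, dQ]

sol = solve_ivp(rhs, (0.0, -N_star), [phi_end, Q_end], rtol=1e-8)
phi_star, Q_star = sol.y[0][-1], sol.y[1][-1]   # values at the pivot scale
```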
The predictions of the n_s and r parameters are shown in Fig. 1. Choosing values of β in the interval 0.2 − 0.7, we have checked for any dependence on the λ chosen; we see immediately that λ greatly impacts the parameters. Considering the figure at the upper left, where λ = 0.05, we note that larger values of β are favored in the weak dissipative regime, as seen in the β = 0.3 curve (blue dotted line). An effect of choosing higher values of λ is to shift the curves to lower n_s, favoring higher values of β (dotted curves on the upper right panel, where we have chosen λ = 0.07). The importance of λ is then clear, as it can completely change the character of a curve for a given β. When we look at the strong dissipative regime, represented by the solid lines, a lower value of β is preferred for both choices of λ. This happens because, as a higher Q_* results in an increase of the spectral index for the dissipation coefficient we are considering, we need a value of β that results in a low enough n_s in the weak dissipation regime so that the curve will enter the confidence regions when Q_* > 1. From this, we can conclude that a small deviation from the exponential form is enough for inflation to end while also happening in the strong regime. Also, in Fig. 1, we plot the model's predictions for the running of the spectral index n_run (lower panels). The effect of increasing Q_* is known [35]: the running can become positive for a larger Q_*, but this results in a spectral index that is too large, as seen in the figures. On the other hand, if β (the deviation from the exponential potential) is small enough, a strong dissipation is allowed by data, and we can have values for the running that are well within the Planck constraints. However, we also note that while it is not possible to achieve a positive running for the smallest values of β considered, a negative n_run is estimated by the most recent Planck results, n_run = −0.0045 ± 0.0067 (Planck TT,TE,EE+lowE+lensing).
The difference between choices of λ in the spectral index is more clearly shown in Fig. 2, where we plot the n_s − Q_* plane. Four values of β are chosen, and we note how the curves change when we increase or decrease λ. When β is low, it is challenging to achieve concordance with data in the weak dissipative regime, as the resulting n_s is below the 2σ confidence limit, as seen in the β = 0.2 plot (upper left panel). As we go towards the strong dissipative regime, however, it is possible for the curves to reach the confidence region, since there is an increase of n_s. By slightly increasing β, we can accommodate both regimes into the constraints for n_s, depending on the λ chosen (upper right panel). This results in an upward shift of the curves for the β = 0.25 choice, in which the shape of the curves for each λ is essentially the same. This changes, however, for higher values of β. Choosing β = 0.5 (lower
In Fig. 3, we plot the dependence of the temperature at the end of inflation with Q end (left panel) and the Q − Q end plane (right panel). Looking at T , we first note that while the dependence on β is more evident, the impact of λ in the curves is not as strong as it is in the n s − r plane: we have almost the same predictions for both values of λ, indicated by the solid and dotted curves. The general behavior is as follows: if inflation ends in the weak regime, the temperature is high, varying from T ∼ 10 13 − 10 15 GeV, and an increase with Q end is noticeable. However, as soon as Q end ∼ 1, the picture changes completely. In the interval Q end ∼ 1 − 1000, the temperature decreases by many order of magnitude, being able to reach T ∼ 10 9 GeV for β = 0.2 and Q end ∼ 10 3 . As for the impact of β, we see that higher β will correspond to higher T , but there seems to be a maximum value for the temperature for each Q end as the curves become closer to one another as β increases. Looking now at the Q − Q end plane in the right panel, it is possible to see the relation between the values of Q with inflation proceeding totally in the weak/strong regime or when a transition between regimes happens. Curves inside the grey region correspond to a state where inflation happens entirely in the weak dissipation regime, while the yellow region corresponds to inflation in the strong regime. Curves represent a transition from weak to strong regimes are in the blue region, where we note that Q must be at least of order 10 −2 , and the higher β increase the parameter space for which the transition can happen.
Finally, we show in Fig. 4 how the coefficient C φ (left panel) and the potential amplitude V 0 (left panel) vary for a wide range of Q . We remember that C φ can be estimated from Eq. (18), while V 0 is obtained by fixing the amplitude of scalar perturbations in Eq. (8). In a supersymmetric implementation, C φ is interpreted as directly proportional to the product of multiplicities of chiral fields X and Y present in the model [74]. We note that a huge multiplicity is present in warm inflation models, where in particular, the number increases sharply in the strong dissipation regime, of order N X N Y ∼ 10 15 . This result was pointed out in [37], which is also confirmed here, and this can be seen as a problem associated with the warm inflation scenario. The different behavior between values of Q is also noticed when one estimates the amplitude of the potential V 0 , shown in the left panel of Fig. 4. A decrease of several orders of magnitude is seen when inflation occurs in the strong regime, ranging from V 0 ∼ 10 −23 to V 0 ∼ 10 −31 for the values of β chosen.
IV. IS THIS MODEL IN ACCORDANCE WITH THE SWAMPLAND CONJECTURES?
Since the establishment of quantum gravity theories, such as string theory, efforts have been made towards finding scenarios consistent within this context, including cosmological models. These continuous attempts ended up giving rise to what is known today as the swampland conjectures, which are translated into conditions that can establish bounds in many models, including scalar field theory-based ones. The first conjecture was determined from the search for a stable de-Sitter vacuum in string theory [48,58,59] resulting in the de Sitter (dS) Swampland criterion. This discussion is important in cosmology since we know two periods of accelerated universe expansion: inflation and dark energy domi- Figure 4. The coefficient C φ is shown as a function of Q , for the same values of β as in fig. 1 (left panel). On the right panel, we have the amplitude of the potential V 0 also as a function of Q . Once more, solid lines correspond to λ = 0.05, while dashed lines correspond to λ = 0.07.
nation. Both lead the universe to a state close to a de Sitter one. As more developments were made, other swampland criteria were proposed. The swampland distance conjecture [50] gives an upper limit on the excursion of the scalar field along the potential during inflation. These limits are important in the context of a universe whose acceleration is described by the evolution of a scalar field, which is (possibly) the case of inflation, and possible candidates for late-time acceleration, such as quintessence models. A third condition also has appeared, which restricts sub-Planckian modes from leaving the horizon during inflation, the Transplanckian Censorship conjecture (TCC) [75,76]. These conjectures have been used to restrict inflationary models in the past years. However, it was seen that the usual picture of canonical, single-field inflation that leaves a cold universe afterwards is in direct conflict with all of the conjectures. In this way, if one wants to conciliate slow-roll inflation with more general theories, it might be necessary to consider significant changes in how we view the early universe's evolution. It is possible to summarize the three criteria as conditions imposed on an inflationary potential. They are expressed as • dS conjecture: this criterion limits the slope of a given scalar potential in a way that the gradient of V must follow
M p |∇ φ V| V > a(21)
with a O(1). We see here the first problem: during inflation, both slow-roll parameters must be much less than one, so the simplest picture of inflation cannot satisfy this conjecture regardless of the model. This condition favors steep potentials, which are usually unsuitable for inflation; at the same time, it excludes potentials with plateaus, which are the most favored by data, such as the Starobinsky model [55].
• Distance conjecture: this conjecture gives a restriction on the excursion of the scalar field during inflation; essentially
|∆φ| M p < b,(22)
with b being another constant of order one. It restricts classes of inflationary models in which inflation happens. The field excursion is not super-Planckian; in particular, this conjecture tells that small field models of inflation might be favored, while large field models, such as those given by monomial potentials, are disfavored.
• TCC: by placing a limit on the scales that leave the horizon during inflation, one gets a limit on the energy scale of inflation itself
V 1/4 < 3 × 10 −10 M p .(23)
This bound comes from the imposition that scales of the order of Planck length should not leave the horizon during inflation. As a result, such limitation results in a tensor-to-scalar ratio of order r ∼ 10 −30 , which might be a problem from the observational standpoint, while stating that the primordial gravitational wave spectrum is significantly weaker when compared to the one produced by scalar perturbations.
Recent works in the literature attempted to fulfill the three conditions in extensions of the usual inflationary scenario. In particular, the idea of warm inflation comes as a possibility to realize inflation while satisfying the swampland conditions. For instance, by looking at (5), we see that due to the (1 + Q) factor, in the strong dissipative regime, it is possible to have V , η V > 1, as imposed by the dS conjecture, at the same time that W , |η W | 1, which are the conditions for slowroll inflation. Also, the presence of the Γφ term in Eq. (1) means that extra friction is being added as the field rolls down the potential; as a consequence, it can be possible for a larger class of models to be consistent with the distance conjecture (Eq. 22). Fig. 5 shows how the warm β-exponential model behaves when confronted with the swampland conjectures. We have considered the same range of β as described in the n s − r plane while fixing λ = 0.05, 0.07, as we have found that the results are very similar when other values are considered. We first plot |V φ /V| in the left panel. While in the weak dissipation regime (Q < 1), the smallness of the quantity is guaranteed, as soon as Q ≈ 1, it starts rising to values that can be larger than one. We note that smaller β leads to |V /V| = 1 being reached for a smaller Q . In the strong regime where Q ≥ 10, the dS conjecture is easily satisfied for all values of β considered, so we have the freedom to choose some β that are in concordance with the Planck constraints on n s and r. Next, we check if the model satisfies the distance conjecture by showing the field excursion ∆φ/M p as a function of Q . In the weak dissipation, the excursion is super-Planckian, but again, when Q approaches unity, ∆φ/M p decreases, so that when Q ≥ 10, the field excursion becomes sub-Planckian, as the dissipation is large enough to make even a steep potential to behave like a small-field model. As for the last conjecture, TCC is showed on the right panel of Fig.5, where we can see that V 1/4 end /M p is almost constant in the weak dissipative regime, but decreases sharply in the strong regime by many orders of magnitude, especially for the cubic coefficient case; here, the TCC is respected for Q ≥ 10 3 .
V. DISCUSSION AND CONCLUSIONS
The warm inflation picture has attracted much attention in the past years as the reheating process is surrounded by many questions on its realization. This is mainly due to the fact that it is a difficult epoch to probe and construct consistent connections with the inflation era from a microscopical perspec-tive. Also, well-motivated inflationary models, such as those given by potentials of monomial and exponential form, suffer from inconsistency with CMB data. Indeed, when one looks to inflationary parameters n s and r, some extension that allows concordance with data to be restored is usually required. While for instance, a non-minimal coupling of the inflaton with gravity might alleviate much of these issues [45,77], it is interesting (and plausible) to consider a situation where the energy stored in the inflaton is converted to other particles as inflation happens, resulting in a warm inflationary universe that can lead towards a radiation-dominated universe afterwards. Another issue related to inflationary models is their agreement with the recently proposed swampland conjectures [46,60], which result from attempts to incorporate models based on field theory into general ones such as string theory. In general, minimally-coupled scalar field inflationary models are inconsistent with these conjectures due to the essential smallness of the slow-roll parameters for inflation or a large energy scale of inflation. Recent works [37,46,61] have looked into the conjectures from a warm inflation perspective. It was found that the three proposed conjectures can be simultaneously satisfied when inflation takes place in a strong dissipation regime. At the same time, cosmological parameters such as the spectral index and tensor-to-scalar ratio can be driven into the Planck constraints.
In this work, we have considered how the β-exponential inflationary model [43][44][45] behaves when we allow dissipation of the inflaton energy into radiation in the warm inflation picture. In the original model, for N = 50 − 60, it is not possible to have a low enough r while satisfying the constraints for n s [45]. In contrast, in the warm inflation scenario, we can easily have concordance with data when inflation happens in the weak dissipation regime for a restricted interval of β. We have considered a dissipation coefficient, in which a cubic dependence on the temperature exists. It is found that both weak and strong dissipation regimes are allowed, and in particular, the strong regime leads n s to a constant value as Q increases, with little dependency on the λ parameter. This case is then viewed favorably in light of the swampland conjectures, as we have found (see Fig. 5) that all three conjectures are satisfied in the strong regime for the values of β considered.
This topic is far from over and further investigations are underway. In fact, considering a recently proposed warm inflation scenario where brane effects are considered for an exponential potential [61], in which inflation ends more naturally, the swampland conjectures are satisfied while also providing a connection with the late-time universe as a quintessence model. We consider it interesting to see how a deviation from the exponential form might affect the results. Another interesting and important aspect of our investigation is the study of how these models, especially in the strong dissipation regime, affect the predictions for the CMB power spectra. All these questions are being investigated and will be discussed in an upcoming paper.
Q = Q(10 V − 6η V + 8σ V ) 1 + 7Q + Q 2 Q − 7Q 2 1 + 7Q ,(B1)V = 2 V 1 + Q (2 V − η V ) ,(B2)η V = 1 1 + Q 2 V η V − ζ 2 , ζ 2 ≡ M 2 p V ,φφφ V ,φ V 2 ,(B3)σ V = 1 1 + Q 2 V σ V − η V σ + σ 2 V (B4) T H = 2 (T/H) 1 + 7Q − 7Q (T/H) (1 + 7Q) 2 (2 + 4Q) V 1 + Q − η V + 1 − Q 1 + Q σ V + 2 (T/H) 1 + 7Q 2 + 4Q 1 + Q V + 4 V 1 + Q Q − (2 + 4Q) V (1 + Q) 2 Q − η V − σ V 1 + Q Q + 1 − Q 1 + Q σ V − (1 − Q)σ V (1 + Q) 2 Q .(B5)
As we approximate the running as n run d 2 log ∆ 2 R dN 2 , from A1, we compute
n run = d 2 log P R dN 2 + d 2 log F dN 2 + d 2 log G dN 2 ,(B6)F ≡ 1 + 2n BE + f (Q) T H ,(B7)
where the individual terms can be obtained as
d 2 log P R dN 2 = (6 V − 2η V − 2Q ) (1 + Q) Q + 2η V − 6 V + 2Q 1 + Q (B8) d 2 log G dN 2 = Q 2 G d 2 G dQ 2 + dG dQ Q G − (Q dG dQ ) 2 G 2 (B9) d 2 log F dN 2 = F F − F F
Figure 1 .
1Predictions of n s − r and n s − n run planes for the β-exponential model with Γ = C φ T 3 /φ 2 . We have chosen fixed values of β while considering λ = 0.05 (left panels) and λ = 0.07 (right panels), with the curves varying with Q . The dashed lines represent the weak dissipation regime, while the solid lines represent the strong regime; the dotted lines correspond to a regime where T/H < 1. We have set N = 55.
Figure 2 .
2The spectral index n s as a function of Q , for fixed values of β and λ = 0.03, 0.05, 0.07, 0.08. As for the values of β, we have β = 0.2 (upper left), β = 0.25 (upper right), β = 0.5 (lower left) and β = 0.7 (lower right). The grey horizontal lines denote the 68%,95% limits of n s for Planck TT,TE,EE+lowl+lowE+lensing, n s = 0.9649 +0.0042+0.0082 −0.0042−0.0082[9].
Figure 3 .
3On the left panel, we plot the temperature of the thermal bath in GeV as a function of Q end , for the same values of β as inFig. 1. On the right panel, we have the Q − Q end plane, indicating inflation in the weak regime (grey region), strong regime (yellow region), and a transition from weak to the strong regime during inflation (blue region). Solid lines correspond to λ = 0.05, while dashed lines correspond to λ = 0.07. left panel), it is clear how the choice of λ affects the curves compared to the two previous choices, and the same goes for β = 0.7 (lower right panel).
Figure 5 .
5The ratio M p |V ,φ |/V, the field excursion ∆φ (left panel) and the potential V 1/4 end /M p (right panel) as a function of Q for the βexponential model when the dissipation coefficient is given by Eq.(16). We have fixed λ = 0.05 (solid lines), λ = 0.07 (dashed lines) and have considered different β in the range β = 0.2 − 0.7.
Appendix A: Deriving n s for a given Γ To obtain the spectral index n s by following Eq. (12), we can first write Eq.(8)in the formwhereand differentiate everything with respect to N, denoted by the primes. For the first term of (A1), with the help of eq. (??), we find thatFor the second term, we note thatin whichFinally, from the last term of (A1) we have simplyGathering all the terms together, we find thatTo compute n s for a specific Γ, we use the evolution equations[39,72]for φ and Q that must be substituted in (A8):with σ V ≡ M 2 p V ,φ /(φV).Appendix B: Deriving n run for a given Γ As the running involves second derivatives, it is useful to determine first the differential equations for Q , V , η V , σ V and (T/H) :
A New Type of Isotropic Cosmological Models Without Singularity. Alexei A Starobinsky, 10.1016/0370-2693(80)90670-XPhys. Lett. B. 91Alexei A. Starobinsky, "A New Type of Isotropic Cosmological Models Without Singularity," Phys. Lett. B 91, 99-102 (1980).
First Order Phase Transition of a Vacuum and Expansion of the Universe. K Sato, Mon. Not. Roy. Astron. Soc. 195K. Sato, "First Order Phase Transition of a Vacuum and Expan- sion of the Universe," Mon. Not. Roy. Astron. Soc. 195, 467- 479 (1981).
Inflationary universe: A possible solution to the horizon and flatness problems. Alan H Guth, 10.1103/physrevd.23.347Physical Review D. 23Alan H. Guth, "Inflationary universe: A possible solution to the horizon and flatness problems," Physical Review D 23, 347- 356 (1981).
A New Inflationary Universe Scenario: A Possible Solution of the Horizon, Flatness, Homogeneity, Isotropy and Primordial Monopole Problems. Andrei D Linde, 10.1016/0370-2693(82)91219-9Phys. Lett. B. 108Andrei D. Linde, "A New Inflationary Universe Scenario: A Possible Solution of the Horizon, Flatness, Homogeneity, Isotropy and Primordial Monopole Problems," Phys. Lett. B 108, 389-393 (1982).
Chaotic Inflation. Andrei D Linde, 10.1016/0370-2693(83)90837-7Phys. Lett. B. 129Andrei D. Linde, "Chaotic Inflation," Phys. Lett. B 129, 177- 181 (1983).
Inflationary cosmology: First 30+ years. Katsuhiko Sato, Jun'ichi Yokoyama, 10.1142/S0218271815300256Int. J. Mod. Phys. D. 241530025Katsuhiko Sato and Jun'ichi Yokoyama, "Inflationary cosmol- ogy: First 30+ years," Int. J. Mod. Phys. D 24, 1530025 (2015).
Encyclopaedia Inflationaris. Jerome Martin, Christophe Ringeval, Vincent Vennin, 10.1016/j.dark.2014.01.003arXiv:1303.3787Phys. Dark Univ. 5. 6astro-ph.COJerome Martin, Christophe Ringeval, and Vincent Vennin, "En- cyclopaedia Inflationaris," Phys. Dark Univ. 5-6, 75-235 (2014), arXiv:1303.3787 [astro-ph.CO].
Planck 2018 results. VI. Cosmological parameters. N Aghanim, Planck10.1051/0004-6361/201833910arXiv:1807.06209Astron. Astrophys. 641astro-ph.CON. Aghanim et al. (Planck), "Planck 2018 results. VI. Cos- mological parameters," Astron. Astrophys. 641, A6 (2020), arXiv:1807.06209 [astro-ph.CO].
Planck 2018 results. X. Constraints on inflation. Y Akrami, Planck10.1051/0004-6361/201833887arXiv:1807.06211Astron. Astrophys. 641astro-ph.COY. Akrami et al. (Planck), "Planck 2018 results. X. Con- straints on inflation," Astron. Astrophys. 641, A10 (2020), arXiv:1807.06211 [astro-ph.CO].
Particle Production in the New Inflationary Cosmology. L F Abbott, Edward Farhi, Mark B Wise, 10.1016/0370-2693(82)90867-XPhys. Lett. B. 11729L. F. Abbott, Edward Farhi, and Mark B. Wise, "Particle Pro- duction in the New Inflationary Cosmology," Phys. Lett. B 117, 29 (1982).
Reheating an Inflationary Universe. Andreas Albrecht, Paul J Steinhardt, Michael S Turner, Frank Wilczek, 10.1103/PhysRevLett.48.1437Phys. Rev. Lett. 481437Andreas Albrecht, Paul J. Steinhardt, Michael S. Turner, and Frank Wilczek, "Reheating an Inflationary Universe," Phys. Rev. Lett. 48, 1437 (1982).
ON PARTICLE CREATION BY A TIME DEPENDENT SCALAR FIELD. A D Dolgov, D P Kirilova, Sov. J. Nucl. Phys. 51A. D. Dolgov and D. P. Kirilova, "ON PARTICLE CREATION BY A TIME DEPENDENT SCALAR FIELD," Sov. J. Nucl. Phys. 51, 172-177 (1990).
Particle Production During Out-of-equilibrium Phase Transitions. Jennie H Traschen, Robert H Brandenberger, 10.1103/PhysRevD.42.2491Phys. Rev. D. 42Jennie H. Traschen and Robert H. Brandenberger, "Particle Pro- duction During Out-of-equilibrium Phase Transitions," Phys. Rev. D 42, 2491-2504 (1990).
Structure of resonance in preheating after inflation. Patrick B Greene, Lev Kofman, Andrei D Linde, Alexei A Starobinsky, 10.1103/PhysRevD.56.6175arXiv:hep-ph/9705347Phys. Rev. D. 56Patrick B. Greene, Lev Kofman, Andrei D. Linde, and Alexei A. Starobinsky, "Structure of resonance in preheating after inflation," Phys. Rev. D 56, 6175-6192 (1997), arXiv:hep- ph/9705347.
Reheating after inflation. Lev Kofman, Andrei D Linde, Alexei A Starobinsky, 10.1103/PhysRevLett.73.3195arXiv:hep-th/9405187Phys. Rev. Lett. 73Lev Kofman, Andrei D. Linde, and Alexei A. Starobinsky, "Re- heating after inflation," Phys. Rev. Lett. 73, 3195-3198 (1994), arXiv:hep-th/9405187.
Towards the theory of reheating after inflation. Lev Kofman, Andrei D Linde, Alexei A Starobinsky, 10.1103/PhysRevD.56.3258arXiv:hep-ph/9704452Phys. Rev. D. 56Lev Kofman, Andrei D. Linde, and Alexei A. Starobinsky, "To- wards the theory of reheating after inflation," Phys. Rev. D 56, 3258-3295 (1997), arXiv:hep-ph/9704452.
Inflaton decay and heavy particle production with negative coupling. Brian R Greene, Tomislav Prokopec, Thomas G Roos, 10.1103/PhysRevD.56.6484arXiv:hep-ph/9705357Phys. Rev. D. 56Brian R. Greene, Tomislav Prokopec, and Thomas G. Roos, "Inflaton decay and heavy particle production with negative coupling," Phys. Rev. D 56, 6484-6507 (1997), arXiv:hep- ph/9705357.
Preheating with trilinear interactions: Tachyonic resonance. Jean Francois Dufaux, Gary N Felder, L Kofman, M Peloso, D Podolsky, 10.1088/1475-7516/2006/07/006arXiv:hep-ph/0602144JCAP. 076Jean Francois Dufaux, Gary N. Felder, L. Kofman, M. Peloso, and D. Podolsky, "Preheating with trilinear interactions: Tachy- onic resonance," JCAP 07, 006 (2006), arXiv:hep-ph/0602144.
Tachyonic Resonance Preheating in Expanding Universe. Ali Akbar Abolhasani, Hassan Firouzjahi, M M Sheikh-Jabbari, 10.1103/PhysRevD.81.043524arXiv:0912.1021Phys. Rev. D. 8143524hepthAli Akbar Abolhasani, Hassan Firouzjahi, and M. M. Sheikh- Jabbari, "Tachyonic Resonance Preheating in Expanding Uni- verse," Phys. Rev. D 81, 043524 (2010), arXiv:0912.1021 [hep- th].
Equation-of-State Parameter for Reheating. B Julian, Marc Munoz, Kamionkowski, 10.1103/PhysRevD.91.043521arXiv:1412.0656Phys. Rev. D. 9143521astro-ph.COJulian B. Munoz and Marc Kamionkowski, "Equation-of-State Parameter for Reheating," Phys. Rev. D 91, 043521 (2015), arXiv:1412.0656 [astro-ph.CO].
Reheating constraints to inflationary models. Liang Dai, Marc Kamionkowski, Junpu Wang, 10.1103/PhysRevLett.113.041302arXiv:1404.6704Phys. Rev. Lett. 11341302astro-ph.COLiang Dai, Marc Kamionkowski, and Junpu Wang, "Reheat- ing constraints to inflationary models," Phys. Rev. Lett. 113, 041302 (2014), arXiv:1404.6704 [astro-ph.CO].
Reheating predictions in single field inflation. Jessica L Cook, Emanuela Dimastrogiovanni, Damien A Easson, Lawrence M Krauss, 10.1088/1475-7516/2015/04/047arXiv:1502.04673JCAP. 0447astroph.COJessica L. Cook, Emanuela Dimastrogiovanni, Damien A. Eas- son, and Lawrence M. Krauss, "Reheating predictions in single field inflation," JCAP 04, 047 (2015), arXiv:1502.04673 [astro- ph.CO].
CMB and reheating constraints to αattractor inflationary models. Mehdi Eshaghi, Moslem Zarei, Nematollah Riazi, Ahmad Kiasatpour, 10.1103/PhysRevD.93.123517arXiv:1602.07914Phys. Rev. D. 93123517astro-ph.COMehdi Eshaghi, Moslem Zarei, Nematollah Riazi, and Ahmad Kiasatpour, "CMB and reheating constraints to α- attractor inflationary models," Phys. Rev. D 93, 123517 (2016), arXiv:1602.07914 [astro-ph.CO].
CMB constraints on the inflaton couplings and reheating temperature in α-attractor inflation. Marco Drewes, Jin U Kang, Ui Ri Mun, 10.1007/JHEP11(2017)072arXiv:1708.01197JHEP. 1172astro-ph.COMarco Drewes, Jin U Kang, and Ui Ri Mun, "CMB con- straints on the inflaton couplings and reheating temperature in α-attractor inflation," JHEP 11, 072 (2017), arXiv:1708.01197 [astro-ph.CO].
Accounting for the time evolution of the equation of state parameter during reheating. Pankaj Saha, Sampurn Anand, L Sriramkumar, 10.1103/PhysRevD.102.103511arXiv:2005.01874Phys. Rev. D. 102103511astro-ph.COPankaj Saha, Sampurn Anand, and L. Sriramkumar, "Ac- counting for the time evolution of the equation of state pa- rameter during reheating," Phys. Rev. D 102, 103511 (2020), arXiv:2005.01874 [astro-ph.CO].
Thermally induced density perturbations in the inflation era. Arjun Berera, Li-Zhi Fang, 10.1103/PhysRevLett.74.1912arXiv:astro-ph/9501024Phys. Rev. Lett. 74Arjun Berera and Li-Zhi Fang, "Thermally induced density per- turbations in the inflation era," Phys. Rev. Lett. 74, 1912-1915 (1995), arXiv:astro-ph/9501024.
Warm inflation. Arjun Berera, 10.1103/PhysRevLett.75.3218arXiv:astro-ph/9509049Phys. Rev. Lett. 75Arjun Berera, "Warm inflation," Phys. Rev. Lett. 75, 3218-3221 (1995), arXiv:astro-ph/9509049.
Is warm inflation possible?. Junichi Yokoyama, Andrei D Linde, 10.1103/PhysRevD.60.083509arXiv:hep-ph/9809409Phys. Rev. D. 6083509Junichi Yokoyama and Andrei D. Linde, "Is warm infla- tion possible?" Phys. Rev. D 60, 083509 (1999), arXiv:hep- ph/9809409.
The importance of being warm (during inflation). Sam Bartrum, Mar Bastero-Gil, Arjun Berera, Rafael Cerezo, Rudnei O Ramos, Joao G Rosa, 10.1016/j.physletb.2014.03.029arXiv:1307.5868Phys. Lett. B. 732hep-phSam Bartrum, Mar Bastero-Gil, Arjun Berera, Rafael Cerezo, Rudnei O. Ramos, and Joao G. Rosa, "The importance of being warm (during inflation)," Phys. Lett. B 732, 116-121 (2014), arXiv:1307.5868 [hep-ph].
Revisiting CMB constraints on warm inflation. Richa Arya, Arnab Dasgupta, Gaurav Goswami, Jayanti Prasad, Raghavan Rangarajan, 10.1088/1475-7516/2018/02/043arXiv:1710.11109JCAP. 0243astro-ph.CORicha Arya, Arnab Dasgupta, Gaurav Goswami, Jayanti Prasad, and Raghavan Rangarajan, "Revisiting CMB constraints on warm inflation," JCAP 02, 043 (2018), arXiv:1710.11109 [astro-ph.CO].
Constraining Warm Inflation with CMB data. Mar Bastero-Gil, Sukannya Bhattacharya, Koushik Dutta, Mayukh Raj Gangopadhyay, 10.1088/1475-7516/2018/02/054arXiv:1710.10008JCAP. 0254astro-ph.COMar Bastero-Gil, Sukannya Bhattacharya, Koushik Dutta, and Mayukh Raj Gangopadhyay, "Constraining Warm Inflation with CMB data," JCAP 02, 054 (2018), arXiv:1710.10008 [astro-ph.CO].
Study of warm inflationary models and their parameter estimation from CMB. Richa Arya, Raghavan Rangarajan, 10.1142/S0218271820500558arXiv:1812.03107Int. J. Mod. Phys. D. 292050055astroph.CORicha Arya and Raghavan Rangarajan, "Study of warm infla- tionary models and their parameter estimation from CMB," Int. J. Mod. Phys. D 29, 2050055 (2020), arXiv:1812.03107 [astro- ph.CO].
Gravity waves and primordial black holes in scalar warm little inflation. Mar Bastero, - Gil, Marta Subías Díaz- Blanco, 10.1088/1475-7516/2021/12/052arXiv:2105.08045JCAP. 1252hep-phMar Bastero-Gil and Marta Subías Díaz-Blanco, "Gravity waves and primordial black holes in scalar warm little infla- tion," JCAP 12, 052 (2021), arXiv:2105.08045 [hep-ph].
Warm Little Inflaton becomes Dark Energy. G João, Luís B Rosa, Ventura, 10.1016/j.physletb.2019.134984arXiv:1906.11835Phys. Lett. B. 798134984hep-phJoão G. Rosa and Luís B. Ventura, "Warm Little Inflaton becomes Dark Energy," Phys. Lett. B 798, 134984 (2019), arXiv:1906.11835 [hep-ph].
Warm inflation dissipative effects: predictions and constraints from the Planck data. Micol Benetti, Rudnei O Ramos, 10.1103/PhysRevD.95.023517arXiv:1610.08758Phys. Rev. D. 9523517astroph.COMicol Benetti and Rudnei O. Ramos, "Warm inflation dissipa- tive effects: predictions and constraints from the Planck data," Phys. Rev. D 95, 023517 (2017), arXiv:1610.08758 [astro- ph.CO].
Warm-assisted natural inflation. Yakefu Reyimuaji, Xinyi Zhang, 10.1088/1475-7516/2021/04/077arXiv:2012.07329JCAP. 0477astroph.COYakefu Reyimuaji and Xinyi Zhang, "Warm-assisted natu- ral inflation," JCAP 04, 077 (2021), arXiv:2012.07329 [astro- ph.CO].
Runaway potentials in warm inflation satisfying the swampland conjectures. Suratna Das, Rudnei O Ramos, 10.1103/PhysRevD.102.103522arXiv:2007.15268Phys. Rev. D. 102103522hep-thSuratna Das and Rudnei O. Ramos, "Runaway potentials in warm inflation satisfying the swampland conjectures," Phys. Rev. D 102, 103522 (2020), arXiv:2007.15268 [hep-th].
Dirac-Born-Infeld warm inflation realization in the strong dissipation regime. Meysam Motaharfar, Rudnei O Ramos, 10.1103/PhysRevD.104.043522arXiv:2105.01131Phys. Rev. D. 10443522hep-thMeysam Motaharfar and Rudnei O. Ramos, "Dirac-Born-Infeld warm inflation realization in the strong dissipation regime," Phys. Rev. D 104, 043522 (2021), arXiv:2105.01131 [hep-th].
Warm Little Inflaton. Mar Bastero-Gil, Arjun Berera, Rudnei O Ramos, Joao G Rosa, 10.1103/PhysRevLett.117.151301arXiv:1604.08838Phys. Rev. Lett. 117151301hep-phMar Bastero-Gil, Arjun Berera, Rudnei O. Ramos, and Joao G. Rosa, "Warm Little Inflaton," Phys. Rev. Lett. 117, 151301 (2016), arXiv:1604.08838 [hep-ph].
Warm inflation within a supersymmetric distributed mass model. Mar Bastero-Gil, Arjun Berera, Rafael Hernández-Jiménez, João G Rosa, 10.1103/PhysRevD.99.103520arXiv:1812.07296Phys. Rev. D. 99103520hep-phMar Bastero-Gil, Arjun Berera, Rafael Hernández-Jiménez, and João G. Rosa, "Warm inflation within a supersymmet- ric distributed mass model," Phys. Rev. D 99, 103520 (2019), arXiv:1812.07296 [hep-ph].
Minimal Warm Inflation. Kim V Berghaus, Peter W Graham, David E Kaplan, 10.1088/1475-7516/2020/03/034arXiv:1910.07525JCAP. 0334hep-phKim V. Berghaus, Peter W. Graham, and David E. Ka- plan, "Minimal Warm Inflation," JCAP 03, 034 (2020), arXiv:1910.07525 [hep-ph].
Warm inflation, neutrinos and dark matter: a minimal extension of the Standard Model. Miguel Levy, João G Rosa, Luis B Ventura, 10.1007/JHEP12(2021)176arXiv:2012.03988JHEP. 12176hep-phMiguel Levy, João G. Rosa, and Luis B. Ventura, "Warm inflation, neutrinos and dark matter: a minimal extension of the Standard Model," JHEP 12, 176 (2021), arXiv:2012.03988 [hep-ph].
Beta-exponential inflation. S Jailson, F C Alcaniz, Carvalho, 10.1209/0295-5075/79/39001arXiv:astro-ph/0612279EPL. 79Jailson S. Alcaniz and F. C. Carvalho, "Beta-exponential infla- tion," EPL 79, 39001 (2007), arXiv:astro-ph/0612279.
CMB constraints on β-exponential inflationary models. M A Santos, Micol Benetti, Jailson Alcaniz, F A Brito, R Silva, 10.1088/1475-7516/2018/03/023arXiv:1710.09808JCAP. 0323astro-ph.COM. A. Santos, Micol Benetti, Jailson Alcaniz, F. A. Brito, and R. Silva, "CMB constraints on β-exponential inflationary mod- els," JCAP 03, 023 (2018), arXiv:1710.09808 [astro-ph.CO].
Constraining non-minimally coupled β-exponential inflation with CMB data. Felipe Bruno Medeiros Dos Santos, Simony Santos Da Costa, Raimundo Silva, Micol Benetti, Jailson Alcaniz, 10.1088/1475-7516/2022/06/001arXiv:2110.14758JCAP. 061astroph.COFelipe Bruno Medeiros dos Santos, Simony Santos da Costa, Raimundo Silva, Micol Benetti, and Jailson Alcaniz, "Con- straining non-minimally coupled β-exponential inflation with CMB data," JCAP 06, 001 (2022), arXiv:2110.14758 [astro- ph.CO].
Swampland, axions, and minimal warm inflation. Suratna Das, Gaurav Goswami, Chethan Krishnan, 10.1103/PhysRevD.101.103529arXiv:1911.00323Phys. Rev. D. 101103529hep-thSuratna Das, Gaurav Goswami, and Chethan Krishnan, "Swampland, axions, and minimal warm inflation," Phys. Rev. D 101, 103529 (2020), arXiv:1911.00323 [hep-th].
Unified early and late Universe cosmology through dissipative effects in steep quintessential inflation potential models. B F Gustavo, Rudnei O Lima, Ramos, 10.1103/PhysRevD.100.123529arXiv:1910.05185Phys. Rev. D. 100123529astro-ph.COGustavo B. F. Lima and Rudnei O. Ramos, "Unified early and late Universe cosmology through dissipative effects in steep quintessential inflation potential models," Phys. Rev. D 100, 123529 (2019), arXiv:1910.05185 [astro-ph.CO].
De Sitter vacua in string theory. Shamit Kachru, Renata Kallosh, Andrei D Linde, Sandip P Trivedi, 10.1103/PhysRevD.68.046005arXiv:hep-th/0301240Phys. Rev. D. 6846005Shamit Kachru, Renata Kallosh, Andrei D. Linde, and Sandip P. Trivedi, "De Sitter vacua in string theory," Phys. Rev. D 68, 046005 (2003), arXiv:hep-th/0301240.
De Sitter Space and the Swampland. Georges Obied, Hirosi Ooguri, Lev Spodyneiko, Cumrun Vafa, arXiv:1806.08362hep-thGeorges Obied, Hirosi Ooguri, Lev Spodyneiko, and Cum- run Vafa, "De Sitter Space and the Swampland," (2018), arXiv:1806.08362 [hep-th].
On the Geometry of the String Landscape and the Swampland. Hirosi Ooguri, Cumrun Vafa, 10.1016/j.nuclphysb.2006.10.033arXiv:hep-th/0605264Nucl. Phys. B. 766Hirosi Ooguri and Cumrun Vafa, "On the Geometry of the String Landscape and the Swampland," Nucl. Phys. B 766, 21- 33 (2007), arXiv:hep-th/0605264.
The Swampland: Introduction and Review. Eran Palti, 10.1002/prop.201900037arXiv:1903.06239Fortsch. Phys. 671900037hep-thEran Palti, "The Swampland: Introduction and Review," Fortsch. Phys. 67, 1900037 (2019), arXiv:1903.06239 [hep-th].
On the Cosmological Implications of the String Swampland. Prateek Agrawal, Georges Obied, Paul J Steinhardt, Cumrun Vafa, 10.1016/j.physletb.2018.07.040arXiv:1806.09718Phys. Lett. B. 784hep-thPrateek Agrawal, Georges Obied, Paul J. Steinhardt, and Cumrun Vafa, "On the Cosmological Implications of the String Swampland," Phys. Lett. B 784, 271-276 (2018), arXiv:1806.09718 [hep-th].
Bounds on Slow Roll and the de Sitter Swampland. K Sumit, Chethan Garg, Krishnan, 10.1007/JHEP11(2019)075arXiv:1807.05193JHEP. 1175hep-thSumit K. Garg and Chethan Krishnan, "Bounds on Slow Roll and the de Sitter Swampland," JHEP 11, 075 (2019), arXiv:1807.05193 [hep-th].
The string swampland constraints require multi-field inflation. Ana Achúcarro, Gonzalo A Palma, 10.1088/1475-7516/2019/02/041arXiv:1807.04390JCAP. 0241hep-thAna Achúcarro and Gonzalo A. Palma, "The string swampland constraints require multi-field inflation," JCAP 02, 041 (2019), arXiv:1807.04390 [hep-th].
Warm inflation as a way out of the swampland. Meysam Motaharfar, Vahid Kamali, Rudnei O Ramos, 10.1103/PhysRevD.99.063513arXiv:1810.02816Phys. Rev. D. 9963513astro-ph.COMeysam Motaharfar, Vahid Kamali, and Rudnei O. Ramos, "Warm inflation as a way out of the swampland," Phys. Rev. D 99, 063513 (2019), arXiv:1810.02816 [astro-ph.CO].
Reheating After Swampland Conjecture. Vahid Kamali, 10.1007/JHEP01(2020)092arXiv:1902.00701JHEP. 0192gr-qcVahid Kamali, "Reheating After Swampland Conjecture," JHEP 01, 092 (2020), arXiv:1902.00701 [gr-qc].
Swampland conjecture in f (R) gravity by the Noether Symmetry Approach. Micol Benetti, Salvatore Capozziello, Leila Lobato Graef, 10.1103/PhysRevD.100.084013arXiv:1905.05654Phys. Rev. D. 10084013gr-qcMicol Benetti, Salvatore Capozziello, and Leila Lobato Graef, "Swampland conjecture in f (R) gravity by the Noether Symmetry Approach," Phys. Rev. D 100, 084013 (2019), arXiv:1905.05654 [gr-qc].
What if string theory has no de Sitter vacua?. H Ulf, Thomas Danielsson, Van Riet, 10.1142/S0218271818300070arXiv:1804.01120Int. J. Mod. Phys. D. 271830007hep-thUlf H. Danielsson and Thomas Van Riet, "What if string the- ory has no de Sitter vacua?" Int. J. Mod. Phys. D 27, 1830007 (2018), arXiv:1804.01120 [hep-th].
Obstacles to Constructing de Sitter Space in String Theory. Michael Dine, Jamie A P Law-Smith, Shijun Sun, Duncan Wood, Yan Yu, 10.1007/JHEP02(2021)050arXiv:2008.12399JHEP. 0250hep-thMichael Dine, Jamie A. P. Law-Smith, Shijun Sun, Duncan Wood, and Yan Yu, "Obstacles to Constructing de Sitter Space in String Theory," JHEP 02, 050 (2021), arXiv:2008.12399 [hep-th].
Note on single-field inflation and the swampland criteria. Suratna Das, 10.1103/PhysRevD.99.083510arXiv:1809.03962Phys. Rev. D. 9983510hep-thSuratna Das, "Note on single-field inflation and the swampland criteria," Phys. Rev. D 99, 083510 (2019), arXiv:1809.03962 [hep-th].
Warm brane inflation with an exponential potential: a consistent realization away from the swampland. Vahid Kamali, Meysam Motaharfar, Rudnei O Ramos, 10.1103/PhysRevD.101.023535arXiv:1910.06796Phys. Rev. D. 10123535gr-qcVahid Kamali, Meysam Motaharfar, and Rudnei O. Ramos, "Warm brane inflation with an exponential potential: a consis- tent realization away from the swampland," Phys. Rev. D 101, 023535 (2020), arXiv:1910.06796 [gr-qc].
Trans-Planckian censorship and other swampland bothers addressed in warm inflation. Arjun Berera, Jaime R Calderón, 10.1103/PhysRevD.100.123530arXiv:1910.10516Phys. Rev. D. 100123530hep-phArjun Berera and Jaime R. Calderón, "Trans-Planckian censor- ship and other swampland bothers addressed in warm inflation," Phys. Rev. D 100, 123530 (2019), arXiv:1910.10516 [hep-ph].
Distance, de Sitter and Trans-Planckian Censorship conjectures: the status quo of Warm Inflation. Suratna Das, 10.1016/j.dark.2019.100432arXiv:1910.02147Phys. Dark Univ. 27100432hep-thSuratna Das, "Distance, de Sitter and Trans-Planckian Censor- ship conjectures: the status quo of Warm Inflation," Phys. Dark Univ. 27, 100432 (2020), arXiv:1910.02147 [hep-th].
Strengthening the de Sitter swampland conjecture in warm inflation. Robert Brandenberger, Vahid Kamali, Rudnei O Ramos, 10.1007/JHEP08(2020)127arXiv:2002.04925JHEP. 08127hep-thRobert Brandenberger, Vahid Kamali, and Rudnei O. Ramos, "Strengthening the de Sitter swampland conjecture in warm in- flation," JHEP 08, 127 (2020), arXiv:2002.04925 [hep-th].
Observational Constraints on Warm Inflation in Loop Quantum Cosmology. Micol Benetti, Leila Graef, Rudnei O Ramos, 10.1088/1475-7516/2019/10/066arXiv:1907.03633JCAP. 1066astro-ph.COMicol Benetti, Leila Graef, and Rudnei O. Ramos, "Observa- tional Constraints on Warm Inflation in Loop Quantum Cosmol- ogy," JCAP 10, 066 (2019), arXiv:1907.03633 [astro-ph.CO].
Warm tachyon inflation and swampland criteria. Abolhassan Mohammadi, Tayeb Golanbari, Haidar Sheikhahmadi, Kosar Sayar, Lila Akhtari, M A Rasheed, Khaled Saaidi, 10.1088/1674-1137/44/9/095101arXiv:2001.10042Chin. Phys. C. 4495101gr-qcAbolhassan Mohammadi, Tayeb Golanbari, Haidar Sheikhah- madi, Kosar Sayar, Lila Akhtari, M. A. Rasheed, and Khaled Saaidi, "Warm tachyon inflation and swampland criteria," Chin. Phys. C 44, 095101 (2020), arXiv:2001.10042 [gr-qc].
Power spectrum for inflation models with quantum and thermal noises. O Rudnei, L A Ramos, Silva, 10.1088/1475-7516/2013/03/032arXiv:1302.3544JCAP. 0332astro-ph.CORudnei O. Ramos and L. A. da Silva, "Power spectrum for in- flation models with quantum and thermal noises," JCAP 03, 032 (2013), arXiv:1302.3544 [astro-ph.CO].
Shear viscous effects on the primordial power spectrum from warm inflation. Mar Bastero-Gil, Arjun Berera, Rudnei O Ramos, 10.1088/1475-7516/2011/07/030arXiv:1106.0701JCAP. 0730astro-ph.COMar Bastero-Gil, Arjun Berera, and Rudnei O. Ramos, "Shear viscous effects on the primordial power spectrum from warm in- flation," JCAP 07, 030 (2011), arXiv:1106.0701 [astro-ph.CO].
On the Dynamics of the Power Law Inflation Due to an Exponential Potential. Junichi Yokoyama, Kei-Ichi Maeda, 10.1016/0370-2693(88)90880-5Phys. Lett. B. 207Junichi Yokoyama and Kei-ichi Maeda, "On the Dynamics of the Power Law Inflation Due to an Exponential Potential," Phys. Lett. B 207, 31-35 (1988).
Thermal effects on pure and hybrid inflation. M H Lisa, Ian G Hall, Moss, 10.1103/PhysRevD.71.023514arXiv:hep-ph/0408323Phys. Rev. D. 7123514Lisa M H Hall and Ian G Moss, "Thermal effects on pure and hybrid inflation," Phys. Rev. D 71, 023514 (2005), arXiv:hep- ph/0408323.
Dissipation coefficients for supersymmetric inflatonary models. G Ian, Chun Moss, Xiong, arXiv:hep-ph/0603266Ian G Moss and Chun Xiong, "Dissipation coefficients for supersymmetric inflatonary models," (2006), arXiv:hep- ph/0603266.
Warm inflation model building. Mar Bastero, - Gil, Arjun Berera, 10.1142/S0217751X09044206arXiv:0902.0521Int. J. Mod. Phys. A. 24hep-phMar Bastero-Gil and Arjun Berera, "Warm inflation model building," Int. J. Mod. Phys. A 24, 2207-2240 (2009), arXiv:0902.0521 [hep-ph].
Dissipation coefficients from scalar and fermion quantum field interactions. Mar Bastero-Gil, Arjun Berera, Rudnei O Ramos, 10.1088/1475-7516/2011/09/033arXiv:1008.1929JCAP. 0933hep-phMar Bastero-Gil, Arjun Berera, and Rudnei O. Ramos, "Dissi- pation coefficients from scalar and fermion quantum field inter- actions," JCAP 09, 033 (2011), arXiv:1008.1929 [hep-ph].
General dissipation coefficient in low-temperature warm inflation. Mar Bastero-Gil, Arjun Berera, Rudnei O Ramos, Joao G Rosa, 10.1088/1475-7516/2013/01/016arXiv:1207.0445JCAP. 0116hep-phMar Bastero-Gil, Arjun Berera, Rudnei O. Ramos, and Joao G. Rosa, "General dissipation coefficient in low-temperature warm inflation," JCAP 01, 016 (2013), arXiv:1207.0445 [hep-ph].
Trans-Planckian Censorship and the Swampland. Alek Bedroya, Cumrun Vafa, 10.1007/JHEP09(2020)123arXiv:1909.11063JHEP. 09123hep-thAlek Bedroya and Cumrun Vafa, "Trans-Planckian Censorship and the Swampland," JHEP 09, 123 (2020), arXiv:1909.11063 [hep-th].
Trans-Planckian Censorship and Inflationary Cosmology. Alek Bedroya, Robert Brandenberger, Marilena Loverde, Cumrun Vafa, 10.1103/PhysRevD.101.103502arXiv:1909.11106Phys. Rev. D. 101103502hep-thAlek Bedroya, Robert Brandenberger, Marilena Loverde, and Cumrun Vafa, "Trans-Planckian Censorship and Infla- tionary Cosmology," Phys. Rev. D 101, 103502 (2020), arXiv:1909.11106 [hep-th].
Testing non-minimally coupled inflation with CMB data: a Bayesian analysis. Marcela Campista, Micol Benetti, Jailson Alcaniz, 10.1088/1475-7516/2017/09/010arXiv:1705.08877JCAP. 0910astro-ph.COMarcela Campista, Micol Benetti, and Jailson Alcaniz, "Testing non-minimally coupled inflation with CMB data: a Bayesian analysis," JCAP 09, 010 (2017), arXiv:1705.08877 [astro-ph.CO].
| []
|
[
"Stable Relationships",
"Stable Relationships"
]
| [
"Sam Ganzfried [email protected] ",
"Ganzfried Research "
]
| []
| []
| We study a dynamic model of the relationship between two people, A and B. The amount that they each "like" each other depends on the amount they liked each other at the previous timestep as well as the "power" in the relationship. Let A(t) denote the amount that A likes B at time t, and B(t) the amount that B likes A. Let P (t) denote the power of player A at time t (−P (t) denotes the power of player B).We assume that γ > 0, and that either α = 0 or β = 0 (if α = β = 0 then neither player's value depends on the power and the model is trivial). In general we can assume that γ = 1, and can substitute in α ′ = αγ, β ′ = βγ without affecting our analysis. This will reduce the number of parameters and simplify the model. However, we will choose to keep γ in the model, as the model is not very complex and this permits a more intuitive interpretation of the parameters.We can substitute P (t) into the other expressions to obtain the following system:Proposition 1. (A * , B * ) is an equilibrium point of the system defined by Equation 1 if and only if A * = B * .Proof. In an equilibrium, we have thatSimilarly, we have βB * = βA * .By assumption, either α = 0 or β = 0 (or both). In either case, we have that A * = B * . | 10.48550/arxiv.2206.06468 | [
"https://export.arxiv.org/pdf/2206.06468v4.pdf"
]
| 249,642,600 | 2206.06468 | 51eee0426ddc7db2459522101d45214d679c9742 |
Stable Relationships
15 Nov 2022
Sam Ganzfried [email protected]
Ganzfried Research
Stable Relationships
15 Nov 2022
We study a dynamic model of the relationship between two people, A and B. The amount that they each "like" each other depends on the amount they liked each other at the previous timestep as well as the "power" in the relationship. Let A(t) denote the amount that A likes B at time t, and B(t) the amount that B likes A. Let P (t) denote the power of player A at time t (−P (t) denotes the power of player B).We assume that γ > 0, and that either α = 0 or β = 0 (if α = β = 0 then neither player's value depends on the power and the model is trivial). In general we can assume that γ = 1, and can substitute in α ′ = αγ, β ′ = βγ without affecting our analysis. This will reduce the number of parameters and simplify the model. However, we will choose to keep γ in the model, as the model is not very complex and this permits a more intuitive interpretation of the parameters.We can substitute P (t) into the other expressions to obtain the following system:Proposition 1. (A * , B * ) is an equilibrium point of the system defined by Equation 1 if and only if A * = B * .Proof. In an equilibrium, we have thatSimilarly, we have βB * = βA * .By assumption, either α = 0 or β = 0 (or both). In either case, we have that A * = B * .
In order to ascertain the stability of the system, we calculate the eigenvalues of the matrix M corresponding to the system defined by Equation 1:
M = (1 − αγ) αγ βγ (1 − βγ)
The eigenvalues are
λ 1 = 1 λ 2 = 1 − αγ − βγ
The corresponding eigenvectors are
v 1 = (1, 1) v 2 = − α β , 1
The central concept in analysis of dynamic systems is that of stability [1]. A system is stable if |λ L | < 1, where λ L is the eigenvalue with largest absolute value. If a system is stable, then the dynamics will converge to an equilibrium point. If |λ L | > 1, then the system is unstable, and will diverge to ±∞. If |λ L | = 1, then the system is marginally stable. Marginal stability is often associated with being a middle ground between these two extremes: "A marginal system, sometimes referred to as having neutral stability, is between these two types [asymptotically stable and unstable]: when displaced, it does not return to near a common steady state, nor does it go away from where it started without limit" [2]. It turns out that a marginally stable system can actually exhibit a wide range of behavior such as oscillating between several points, but also converging to an equilibrium (as stable systems) and diverging to infinity (as an unstable system). Essentially, if a system is marginally stable this means that further investigation is needed to determine asymptotic behavior.
Stability characterization
Since λ 1 = 1, the system is either marginally stable or unstable depending on whether |λ 2 | > 1. If |λ 2 | > 1, then the system is unstable, and if |λ 2 | ≤ 1 it is marginally stable.
Proposition 2. The system is marginally stable if and only if
−α ≤ β ≤ 2 γ − α. Proof. λ 2 > 1 if and only if −αγ − βγ + 1 > 1 ↔ −αγ − βγ > 0 ↔ β < −α. λ 2 < −1 if and only if −αγ − βγ + 1 < −1 ↔ −αγ − βγ < −2 ↔ α + β > 2 γ
So the system is marginally stable if and only if
−α ≤ β ≤ 2 γ − α.
Note that α > 0 means that player A prefers more power, and β > 0 means that B prefers more power. Say that a player is dominant if they prefer more power, and submissive if they prefer less power. The dominance of A is α, and the submissiveness of A is −α; similarly, the dominance of B is β, and the submissiveness of B is −β.
We now analyze the implications of Proposition 2 on the different cases of dominance/submissiveness for the players.
1. Both A and B are dominant:
We have α ≥ 0, β ≥ 0, and furthermore:
− 2 γ ≤ −α ≤ 0 ≤ β ≤ 2 γ − α
This is equivalent to the constraint that the sum of the dominances is at most 2 γ .
Both
A and B are submissive:
We have α ≤ 0, β ≤ 0: 0 ≤ −α ≤ β ≤ 2 γ − α
This can only hold if α = β = 0, which is excluded by assumption in our model.
A is dominant and B is submissive:
We have α ≥ 0, β ≤ 0:
−α ≤ β ≤ 2 γ − α.
This means that B cannot be more submissive than A is dominant, and A cannot be more dominant than B's submissiveness plus 2 γ .
B is dominant and A is submissive:
We have α ≤ 0, β ≥ 0:
0 ≤ −α ≤ β ≤ 2 γ − α
This means that A cannot be more submissive than B is dominant, and B cannot be more dominant than A's submissiveness plus 2 γ .
In summary, we can achieve a marginally stable system in which both players are dominant, with the sum of dominance bounded by 2 γ . We can never reach a situation where both players are submissive. We can also reach situations in which A is more (or equally) dominant than B is submissive by at most 2 γ , and situations where B is more (or equally) dominant than A is submissive by at most 2 γ .
Further analysis of marginally stable behavior
The eigendecomposition of transition matrix M is M = QΛQ −1 , where
Q = 1 − α β 1 1 Λ = 1 0 0 1 − αγ − βγ We have that M t = (QΛQ −1 ) t = QΛ t Q −1 . M t = 1 α + β β + α(1 − αγ − βγ) t α − α(1 − αγ − βγ) t β − β(1 − αγ − βγ) t α + β(1 − αγ − βγ) t(2)
Asymptotic behavior of the system is determined by lim t→∞ M t .
1. 1 − αγ − βγ = 1:
At first glance the system is stable, since we have that M t = I. However, in this case the matrix M is not actually diagonalizable (the matrix Q is not invertible), so we cannot draw any conclusions based on the eigendecomposition. In Proposition 3, we show that the system is actually unstable.
2. 0 ≤ |1 − αγ − βγ| < 1: We have lim t→∞ M t = 1 α + β β α β α So we have that lim t→∞ A(t) = lim t→∞ B(t) = βA(0) + αB(0) α + β .
So the system is stable and converges to the equilibrium point (A * , B * ) with
A * = B * = βA(0) + αB(0) α + β . 3. 1 − αγ − βγ = −1: For t even, we have lim t→∞ M t = I. So lim t→∞ A(t) = A(0), lim t→∞ B(t) = B(0).
For t odd, we have
lim t→∞ M t = 1 α + β β − α 2α 2β α − β So lim t→∞ A(t) = (β − α)A(0) + 2αB(0) α + β lim t→∞ B(t) = (α − β)B(0) + 2βA(0) α + β
So the system will oscillate between two points:
P 1 = (A 1 , B 1 ) = (A(0), B(0)) P 2 = (A 2 , B 2 ) = (β − α)A(0) + 2αB(0) α + β , (α − β)B(0) + 2βA(0) α + β
If A(0) = B(0), then the system trivially just stays at the initial point since it is an equilibrium. Otherwise, the system will oscillate between P 1 and P 2 , neither of which are equilibrium points. However, the average of P 1 and P 2 is an equilibrium point:
(A * , B * ) = βA(0) + αB(0) α + β , βA(0) + αB(0) α + β
Proposition 3. If 1 − αγ − βγ = 1, then the system is unstable.
Proof. Note that the transition matrix is
M = (1 − αγ) αγ βγ (1 − βγ) = (1 − αγ) αγ −αγ (1 + αγ)
Since this matrix is not diagonalizable, we cannot construct the eigendecomposition. However, we can compute lim t→∞ M t by instead using the Jordan decomposition. We have M = SJS −1 where
S = 1 − 1 αγ 1 0 J = 1 1 0 1 S −1 = 0 1 −αγ αγ By Lemma 1, J t = 1 t 0 1 M t = SJ t S −1 = 1 − tαγ tαγ −tαγ tαγ
We can see that the system is clearly unstable, with
A(t) = A(0) + tαγ(B(0) − A(0)) B(t) = tαγ(B(0) − A(0)) Lemma 1. For all t ≥ 1, J t = 1 t 0 1
Proof. The statement clearly holds for t = 1. Suppose it holds for t = k for some k ≥ 1.
J k+1 = JJ k = 1 1 0 1 1 k 0 1 = 1 k + 1 0 1
Further analysis of unstable behavior
We have seen several scenarios under which the system is unstable. In this section we explore the asymptotic behavior of these scenarios. . Note that α and β cannot both be positive in this situation since α + β < 0. We are also excluding the case that α = 0 or β = 0.
3. 1 − αγ − βγ > 1, α = 0:
We must have β < 0. We have:
lim t→∞ M t = 1 β β 0 −sign(β)∞ sign(β)∞ = 1 0 −∞ ∞
This implies that: If both α and β are positive, then the system will alternate between (∞, −∞) and (−∞, ∞). If one of them is positive and the other is negative, the system will alternate between (∞, ∞) and (−∞, −∞). Note that we can not have both α and β negative under this case, since 1 − αγ − βγ < −1 implies that α + β > 2 γ .
6. 1 − αγ − βγ < −1, α = 0:
We must have β > 0. For even t: By analogous reasoning to case 6, the system will alternate between (∞, B(0)) and (−∞, B(0)).
lim t→∞ M t = 1 β β 0 −sign(β)∞ sign(β)∞ = 1 0 −∞ ∞ lim t→∞ A(t) = A(0) lim t→∞ B(t) = sign(B(0) − A(0))∞ For odd t: lim t→∞ M t = 1 β β 0 sign(β)∞ −sign(β)∞ = 1 0 ∞ −∞
Conclusions
We have seen that the system can exhibit a wide range of behavior depending on the value of λ 2 = 1 − αγ − βγ as well as the signs of α, β, and B(0) − A(0). While analysis of the eigenvalues shows that the system is either marginally stable or unstable, this does not tell the full story. Within the set of marginally stable scenarios we have shown that the system can be stable, unstable, or oscillatory. In particular, we show that stable relationships are possible under our dynamics under certain sets of conditions. The first is that both people are dominant, but not too dominant. They do not need to be equally dominant, but the sum of the dominances must be strictly below 2 γ . The second is that one person is dominant and the other is submissive, and the dominance of the dominant person strictly exceeds the submissiveness of the submissive person by less than 2 γ . Note that the magnitudes of the dominance and submissiveness must be similar, but both can be very small or large in absolute terms. Finally, if the sum of the dominances exactly equals 2 γ , then the system will oscillate between two points whose average is an equilibrium. Interestingly, while it is possible to have a stable relationship between two people who are both dominant, it is not possible if they are both submissive. It is necessary that at least one person is dominant, and that the amount of dominance outweighs the amount of submissiveness of the other. Relationships that do not satisfy the conditions described in the preceding paragraph are unstable, and the state of at least one person will diverge to ±∞. One special case is if the sum of the dominances is exactly zero. This situation may appear to be a "perfect match," where one player is exactly as dominant as the other is submissive. We have shown that in this situation the system will diverge either to (∞, ∞) or (−∞, −∞), depending on the initial conditions. The (∞, ∞) case actually seems to represent "too perfect" of a match.
We have also seen that the system can diverge in various ways, including both players going to ∞, both going to −∞, one going to ∞ and the other going to −∞, and both alternating between ∞ and −∞. In general the system is not robust to changes in the initial conditions. If A(0) exceeds B(0) by a small amount ǫ, the system may behave drastically differently than if B(0) exceeds A(0) by ǫ.
While our model is motivated by social or romantic relationships, it can also be applied to professional or business relationships as well as diplomatic relationships between nations. In all of these settings it is natural to assume that one party will obtain more "power" if the other "likes" or is reliant on them more. The model could also apply to certain biological interactions between organisms and between automated agents or robots.
1. 1
1− αγ − βγ = 1: lim t→∞ A(t) = sign(α)sign(B(0) − A(0))∞ lim t→∞ B(t) = sign(α)sign(B(0) − A(0))∞ = sign(β)sign(A(0) − B(0))∞Note that in this case we have α = −β, so by assumption we do not have α = β = 0. So α = 0. The asymptotic behavior depends on the sign of α and on the sign of the difference in initial states B(0) − A(0). If α > 0 and B(0) > A(0), or if α < 0 and A(0) > B(0), then the system will diverge to (∞, ∞). Otherwise, the system diverges to (−∞, −∞). Thus, the system diverges to (∞, ∞) if the dominant player has a lower initial state value, and diverges to (−∞, −∞) if the dominant player has a higher initial state value.2. 1 − αγ − βγ > 1, α = 0, β = 0:Recall the formula for M t from Equation 2. We have:Note that 1 − αγ − βγ > 1 implies that α + β < 0. So the denominator is negative. So therefore,lim t→∞ M t = −sign(α)∞ sign(α)∞ sign(β)∞ −sign(β)∞This implies that:lim t→∞ A(t) = sign(α)sign(B(0) − A(0))∞ lim t→∞ B(t) = sign(β)sign(A(0) − B(0))∞ If α and β have different signs, then the system will diverge to (∞, ∞) or (−∞, −∞) depending on the sign of B(0) − A(0). If α and β are both negative, then the system will diverge to (∞, −∞) or (−∞, ∞) depending on the sign of B(0) − A(0)
A
So the system will diverge to (A(0), ∞) or (A(0), −∞) depending on the sign of B(0) − A(0).4. 1 − αγ − βγ > 1, β = 0:By analogous reasoning to the previous case, we have:lim t→∞ A(t) = sign(A(0) − B(0))∞ lim t→∞ B(t) = B(0)So the system will diverge to (∞, B(0)) or (−∞, B(0)) depending on the sign of A(0) − B(0).5.1 − αγ − βγ < −1, α = 0, β = 0: From Equation 2, we see that the behavior will alternate between ∞ and −∞ for both players depending on the parity of t.For even t: (t) = sign(α)sign(B(0) − A(0))∞ lim t→∞ B(t) = sign(β)sign(A(0) − B(0))∞
t) = sign(A(0) − B(0))∞ So the system will alternate between (A(0), ∞) and (A(0), −∞). 7. 1 − αγ − βγ < −1, β = 0:
Introduction to Dynamic Systems: Theory, Models, and Applications. David G Luenberger, John Wiley & Sons, IncNew York, NYDavid G. Luenberger. Introduction to Dynamic Systems: Theory, Models, and Applications. John Wiley & Sons, Inc., New York, NY, 1979.
Wikipedia contributors. Marginal stability -Wikipedia, the free encyclopedia. 12Wikipedia contributors. Marginal stability -Wikipedia, the free encyclopedia, 2021. [Online; accessed 12-June-2022].
| []
|
[
"Possibility of Over-spinning Kerr Blackhole",
"Possibility of Over-spinning Kerr Blackhole"
]
| [
"Dishari Malakar \nDepartment of Physical Sciences\nThe Center of Excellence in Space Sciences India (CESSI\nIndian Institute of Science Education and Research Kolkata\n741246MohanpurWest BengalIndia\n",
"K Rajesh Nayak \nDepartment of Physical Sciences\nThe Center of Excellence in Space Sciences India (CESSI\nIndian Institute of Science Education and Research Kolkata\n741246MohanpurWest BengalIndia\n"
]
| [
"Department of Physical Sciences\nThe Center of Excellence in Space Sciences India (CESSI\nIndian Institute of Science Education and Research Kolkata\n741246MohanpurWest BengalIndia",
"Department of Physical Sciences\nThe Center of Excellence in Space Sciences India (CESSI\nIndian Institute of Science Education and Research Kolkata\n741246MohanpurWest BengalIndia"
]
| []
| We have developed a methodology to test the age-old cosmic censorship hypothesis in Kerr geometry. We have shown that the Kerr black hole can be overspun by particles captured from the innermost stable circular orbit. However, it appears that this does not happen for particles coming from infinity. We have also observed that overspinning becomes possible only when the black hole is near-extremal, and that this threshold moves closer to extremality as we increase the black hole's mass. Our study demonstrates that an extremal black hole cannot be overspun. However, our methodology neglects backreaction and self-force effects, which could affect the overspinning. | null | [
"https://export.arxiv.org/pdf/2303.11232v1.pdf"
]
| 257,631,667 | 2303.11232 | c61c403042138fc7fd175f6d87fd9b1e417cb3bc |
Possibility of Over-spinning Kerr Blackhole
Dishari Malakar
Department of Physical Sciences
The Center of Excellence in Space Sciences India (CESSI
Indian Institute of Science Education and Research Kolkata
741246MohanpurWest BengalIndia
K Rajesh Nayak
Department of Physical Sciences
The Center of Excellence in Space Sciences India (CESSI
Indian Institute of Science Education and Research Kolkata
741246MohanpurWest BengalIndia
Possibility of Over-spinning Kerr Blackhole
PACS numbers: 04.20.Dw, 04.20.Cv, 04.70.Bw
We have developed a methodology to test the age-old cosmic censorship hypothesis in Kerr geometry. We have shown that the Kerr black hole can be overspun by particles captured from the innermost stable circular orbit. However, it appears that this does not happen for particles coming from infinity. We have also observed that overspinning becomes possible only when the black hole is near-extremal, and that this threshold moves closer to extremality as we increase the black hole's mass. Our study demonstrates that an extremal black hole cannot be overspun. However, our methodology neglects backreaction and self-force effects, which could affect the overspinning.
I. INTRODUCTION
According to Einstein's theory of relativity, matter can undergo a catastrophic collapse to a point-like region where both the density of the matter and the curvature of the spacetime diverge to infinity, referred to as the singularity. By Raychaudhuri's equation, within the framework of Einstein's general relativity such singularities are often unavoidable [1]. The Cosmic Censorship Conjecture says that the singularity is hidden from the outside region by an event horizon [2]. According to Penrose, there should exist some physical principle, not yet understood, that excludes naked singularities as solutions to the equations of general relativity. When the conjecture is violated, naked singularities are exposed to faraway observers. In the absence of a rigorous proof, we look for counterexamples and for possible violations of the conjecture. One such method is to perturb the blackhole with a test particle and look for over-spinning or over-charging. In this paper, we have considered the Kerr spacetime around a rotating, uncharged, axially-symmetric blackhole. We say that the blackhole has over-spun when a > M, rendering the event horizon non-existent. Here, M and a are the blackhole's mass and angular momentum parameters. Similarly, one can describe the over-charging of charged blackholes like Reissner-Nordström.
The first approach of this kind was developed by Wald, who showed that it was not possible to over-spin or over-charge an extremal blackhole when perturbed by test particles or fields [3,4]. Similar results were obtained by studying scalar and electromagnetic fields in extremal Kerr-Newman blackholes [5,6]. A recent study has also shown that it is not possible to destroy an extremal blackhole with test fields [7]. In recent times, Hubeny introduced the concept of a nearly extremal blackhole in place of an extremal one and showed that it is possible to overcharge a near-extremal Reissner-Nordström blackhole [8]. However, de Felice and Yu proved that an extremal Reissner-Nordström blackhole can be evolved to a naked singularity by absorption of a neutral particle [9]. A quantum manifestation of the same was shown by Matsas and Silva [10]. Following Hubeny's methodology, Jacobson and Sotiriou have argued that when a point test particle is brought into the Kerr blackhole, it can overspin the black hole with specific energy and angular momentum, neglecting the backreaction and self-force effects [11]. The cosmic censorship gets restored when the backreaction effects are taken into account [12,13]. There have been many studies where Barausse et al., Colleoni et al., and Zimmerman et al. have argued that the self-force might act as the cosmic censor [14][15][16]. There are several other studies that have discussed the cosmic censorship in various spacetimes using classical as well as quantum gravity [10,[17][18][19]. But it has not yet been possible to attain a satisfying answer to this interesting problem.
In this paper, we have considered the Kerr spacetime to validate the cosmic censorship. We work closely along the lines of Jacobson's approach by taking Kerr blackholes with all possible rotational parameters [11]. Using the test particle approximation, we accrete a beam of particles into the blackhole at a steady rate. It is also assumed that the mass and angular momentum of each particle are absorbed by the blackhole after it is captured. In section II, we present a detailed formalism of our approach. In sections III and IV, two cases are considered: in the first case, particles arriving from infinity are captured, and in the second case, the blackhole captures particles from the innermost stable circular orbit (ISCO). These particles are absorbed by the blackhole, which leads to a change in its mass and angular momentum. A critical factor is the ratio of absorbed energy and angular momentum, which determines the possibility of over-spinning. Finally, we close the article with brief concluding remarks in section V.
II. OUR APPROACH TO OVER-SPIN
Jacobson and Sotiriou had shown in their work that it was possible to overspin a near-extremal Kerr blackhole within the test particle approximation [11]. We extend their result by considering a wider class of Kerr blackholes, that is, blackholes whose angular-momentum-to-mass ratio can range anywhere between 0 and 1. Our idea for overspinning starts with a blackhole of mass M 0 and angular momentum parameter a 0, into which we send a beam of particles of energy E, rest mass µ and angular momentum L. We assume that the particles follow equatorial geodesics and cross the outer event horizon to fall inside the blackhole. Moreover, each particle is considered to be a test particle. We have also assumed that the mass and the angular momentum of the particle are absorbed by the blackhole, which in turn changes the blackhole's parameters adiabatically.
The goal is to see if it is possible to make the angular momentum of the composite object greater than its mass. Thus, after the addition of n particles, the solution should represent a Kerr spacetime with M n < a n , which implies the event horizon will cease to exist. Hence, we are left with a naked singularity. In this way, we can violate the weak cosmic censorship conjecture. Our approach is based on pure classical general relativity ignoring the back reaction effects -we have not considered any quantum processes as done in some recent works [10]. From here on, we will use the natural units i.e. G = c = 1.
A. Spin up Mechanism
The Kerr metric in standard Boyer-Lindquist coordinates (t, r, θ, φ) in (−, +, +, +) signature is given by:

$$ds^2 = -dt^2 + \Sigma\left(\frac{dr^2}{\Delta} + d\theta^2\right) + \left(r^2 + a^2\right)\sin^2\theta\, d\phi^2 + \frac{2Mr}{\Sigma}\left(a\sin^2\theta\, d\phi - dt\right)^2, \tag{2.1}$$
where $\Delta = r^2 + a^2 - 2Mr$ and $\Sigma = r^2 + a^2\cos^2\theta$. The metric describes a stationary, axially-symmetric vacuum solution, with mass parameter M and angular momentum per unit mass a. The Kerr spacetime admits a timelike Killing vector $\xi^a = (1, 0, 0, 0)$ and a spacelike Killing vector $\eta^a = (0, 0, 0, 1)$. The timelike Killing vector $\xi^a$ is not surface-forming, and hence we define a vector field $\chi^a$ which is timelike and hypersurface-orthogonal:
$$\chi^a = \xi^a - \frac{\xi_b\,\eta^b}{\eta_c\,\eta^c}\,\eta^a \tag{2.2}$$
$\chi^a$ becomes Killing on the null surface and hence is called the horizon-generating Killing vector. The horizon is given by the condition $\Delta = 0$, and the explicit solutions for the radial position in terms of M and a are given by

$$r_\pm = M \pm \sqrt{M^2 - a^2}.$$
Clearly, the outer and inner horizons, $r_+$ and $r_-$, merge as $M \to a$, and the event horizon ceases to exist for $a > M$, with the naked singularity exposed to an outside observer.
When the particle goes inside the horizon, the mass and the angular momentum it carries into the blackhole are related to the stress-energy tensor $T_{ab}$ of the particle via:

$$\delta M = \lim_{r\to r_+}\int T_{ab}\,\chi^a\, d\Sigma^b, \qquad \delta J = \lim_{r\to r_+}\Omega_H\int T_{ab}\,\eta^a\, d\Sigma^b. \tag{2.3}$$
In the above equation, $\Omega_H = -a/(2Mr_+)$ is the angular velocity of the event horizon, and the surface element of the null hypersurface is:

$$d\Sigma^a = \chi^a\,\frac{(r^2 + a^2)^2 - \Delta a^2\sin^2\theta}{\Sigma}\,d\phi. \tag{2.4}$$
We assume that the in-going particle moves in the equatorial plane. These expressions provide the flux for a single particle. By the method of induction, we compute the total mass and angular momentum gained by the blackhole from n particles coming from infinity and from the ISCO, as well as the possibility of overspinning the blackhole, in the following sections.
B. Stress Energy Tensor:
We start with the stress-energy tensor for particles moving along a geodesic, approximated by a pressureless perfect fluid. In terms of the four-velocity $u^a$ and rest mass µ, we have $T^{ab} = \mu\, u^a u^b$. The four-velocity $u^a = (\dot t, \dot r, 0, \dot\phi)$ for geodesic motion in Kerr spacetime is described in terms of the conserved quantities, energy E and angular momentum L:
$$\dot t = \frac{1}{\mu\Delta}\left[\left(r^2 + a^2 + \frac{2Ma^2}{r}\right)E - \frac{2Ma}{r}L\right], \qquad \dot\phi = \frac{1}{\mu\Delta}\left[\left(1 - \frac{2M}{r}\right)L + \frac{2Ma}{r}E\right],$$
$$\mu^2 r^2\dot r^2 = -\mu^2\Delta + r^2E^2 + \frac{2M}{r}(L - aE)^2 - \left(L^2 - a^2E^2\right). \tag{2.5}$$
Note that, for simplicity, here we take $\theta = \pi/2$, $\dot\theta = 0$ and $\ddot\theta = 0$.
III. PARTICLES FROM INFINITY
In this section, we consider particles with rest mass µ, energy E and angular momentum L that follow equatorial geodesics and fall into the blackhole. It is important to note that not all geodesics coming from infinity enter the blackhole. We consider the maximum angular momentum of the particle (L = aE) for which it crosses the event horizon [21]. Using this condition, we find the non-zero components of the stress-energy tensor from the equations of motion (2.5). Thus, the mass and angular momentum flux carried into the blackhole by one particle per unit angle is computed using equations (2.3):
$$\delta M = \frac{\pi E^2 r_+^2}{\mu M}, \qquad \delta J = \frac{\pi J^2 E^2}{\mu M^3}. \tag{3.1}$$
By the method of induction, the final mass and angular momentum of the blackhole are found after n particles have entered the blackhole.
$$M_n = M_{n-1} + \frac{E^2\, r_{(n-1)+}^2}{2\mu M_{n-1}}, \qquad a_n = a_{n-1} + \frac{E^2\, a_{n-1}}{2\mu M_{n-1}^2}\left(a_{n-1} - r_{(n-1)+}^2\right). \tag{3.2}$$
Note that here we calculate $a_n = J_n/M_n$ by binomial expansion. These equations are plotted against the number of particles, and it is seen that the angular momentum parameter a decreases with the entry of more and more particles, while the mass of the blackhole increases, as shown in Figure 1. Thus, there is no region where a overtakes M, making overspinning impossible with particles coming from infinity.
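Recursion (3.2) is straightforward to iterate numerically. The following sketch mirrors the setup quoted in the caption of Figure 1 (M_0 = 1.0, a_0 = 0.5, E = µ = 0.01) and illustrates the behavior described above: a shrinks while M grows.

import numpy as np

# Iterate recursion (3.2) for particles captured from infinity;
# r_+ = M + sqrt(M^2 - a^2) is the outer horizon radius at each step.
M, a = 1.0, 0.5
E = mu = 0.01
for n in range(1, 21):
    r_plus = M + np.sqrt(M**2 - a**2)
    dM = E**2 * r_plus**2 / (2 * mu * M)
    da = (E**2 * a / (2 * mu * M**2)) * (a - r_plus**2)
    M, a = M + dM, a + da
    print(f"n={n:2d}  M={M:.5f}  a={a:.5f}  a/M={a/M:.5f}")
# a decreases while M increases, so a/M never reaches 1: no overspinning.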
IV. PARTICLES FROM ISCO
In this section, we investigate the possibility of overspinning the blackhole by absorbing particles moving along the innermost stable circular orbit. Once again we confine ourselves to equatorial orbits. For such orbits, the effective radial potential is given by [22]:
$$V(r) = \left(1 - E^2\right)r^4 - 2Mr^3 + \left[a^2\left(1 - E^2\right) + L_z^2\right]r^2 - 2M\left(aE - L_z\right)^2 r. \tag{4.1}$$
Here, E = E/µ is the energy per unit mass and $L_z = L_z/\mu$ the angular momentum per unit mass of the particle. They are given by:

$$E = \frac{1 - \frac{2M}{r} + \frac{aM^{1/2}}{r^{3/2}}}{\left(1 - \frac{3M}{r} + \frac{2aM^{1/2}}{r^{3/2}}\right)^{1/2}}, \qquad L_z = \frac{M^{1/2}r^{1/2} - \frac{2aM}{r} + \frac{a^2M^{1/2}}{r^{3/2}}}{\left(1 - \frac{3M}{r} + \frac{2aM^{1/2}}{r^{3/2}}\right)^{1/2}}. \tag{4.2}$$

We obtain the location of the innermost stable circular orbit by solving $dE/dr = 0$, which gives:

$$1 - \frac{6M}{r} + \frac{8aM^{1/2}}{r^{3/2}} - \frac{3a^2}{r^2} = 0. \tag{4.3}$$
The value of r satisfying equation (4.3) gives the minimum-energy stable orbit around the black hole; this is also the minimum-angular-momentum stable orbit. We use equation (4.3) to find the ISCO, which in turn provides, via equation (4.2), the energy and angular momentum being added to the blackhole. From equation (4.2), it can be seen clearly that while the energy of the particle saturates at 1, the angular momentum of the particle at the ISCO increases with increasing mass. With the above equations, we compute the mass and angular momentum flux into the blackhole for a single particle:
$$\delta M = \frac{1}{2M}\left(2ME - \frac{aL}{r_+}\right)^2, \qquad \delta J = \frac{JL}{2M^2 r_+}\left(2ME - \frac{JL}{Mr_+}\right). \tag{4.4}$$
Hence the recursion relations for $M_n$ and $a_n = J_n/M_n$ after n particles are added are given by:
$$M_n = M_{n-1} + \frac{1}{2M_{n-1}}\left(2M_{n-1}E_{n-1} - \frac{a_{n-1}L_{n-1}}{r_{+(n-1)}}\right)^2,$$
$$a_n = a_{n-1} + \frac{a_{n-1}}{2M_{n-1}^2}\left(2M_{n-1}E_{n-1} - \frac{a_{n-1}L_{n-1}}{r_{+(n-1)}}\right)\left[\frac{L_{n-1}}{r_{+(n-1)}}\left(1 + a_{n-1}\right) - 2M_{n-1}E_{n-1}\right]. \tag{4.5}$$
We numerically find the optimum value of the initial angular momentum parameter, $a_{i0}$, depending on the mass of the blackhole, for which it over-spins. Below this optimum value, the angular momentum of the blackhole decreases with the entry of the particles, whereas the mass keeps increasing; thus, overspinning cannot be achieved for such blackholes. We also see that in the extremal case, i.e. $M_i = a_i$, the blackhole does not over-spin, as the ISCO coincides with the horizon. For $M_i = 1.0$, the blackhole over-spins for $a_{i0} = 0.76632$ with the entry of the 20th particle. For all values of $a_i$ ranging between $a_i = a_{i0}$ and $a_i \to M_i$, i.e. the nearly extremal case, the blackhole does over-spin. It has also been seen that as the initial mass of the blackhole increases, the optimum value of the initial angular momentum increases and $a_{i0}/M_i \to 1$. Thus, with increasing mass, only the near-extremal blackholes can be overspun.
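The whole ISCO pipeline (solve (4.3) for the ISCO radius, evaluate (4.2) there, then iterate (4.5)) can be sketched as follows. The starting value a_i = 0.9 is an illustrative near-extremal choice, and the root bracket assumes a safely sub-extremal blackhole; this is a sketch of the procedure, not the authors' exact code.

import numpy as np
from scipy.optimize import brentq

def r_isco(M, a):
    # ISCO condition (4.3); bracket from just outside the horizon to 10 M.
    f = lambda r: 1 - 6*M/r + 8*a*np.sqrt(M)/r**1.5 - 3*a**2/r**2
    r_hor = M + np.sqrt(max(M**2 - a**2, 0.0))
    return brentq(f, 1.001 * r_hor, 10 * M)

def E_L(M, a, r):
    # Energy and angular momentum per unit mass at radius r, Eq. (4.2).
    den = np.sqrt(1 - 3*M/r + 2*a*np.sqrt(M)/r**1.5)
    E = (1 - 2*M/r + a*np.sqrt(M)/r**1.5) / den
    L = (np.sqrt(M*r) - 2*a*M/r + a**2*np.sqrt(M)/r**1.5) / den
    return E, L

M, a, n = 1.0, 0.9, 0
while a < M and n < 100:
    r = r_isco(M, a)
    E, L = E_L(M, a, r)
    rp = M + np.sqrt(M**2 - a**2)
    k = 2*M*E - a*L/rp                      # common factor in Eq. (4.5)
    M, a = M + k**2 / (2*M), a + (a/(2*M**2)) * k * (L/rp*(1 + a) - 2*M*E)
    n += 1
print(f"after {n} particles: a = {a:.5f}, M = {M:.5f}, a/M = {a/M:.5f}")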
To see the reason behind this, we have plotted the change in angular momentum (∆a) and the change in mass (∆M) against the initial angular momentum. The result is interesting. At low $a_i$, the value of ∆a is less than that of ∆M, and it overtakes ∆M at a point near the optimum angular momentum. We also see that as the mass of the blackhole is increased, the crossing point of ∆a and ∆M moves toward the near-extremal value [Figure 2]. Thus, with our methodology, we have been able to overspin the Kerr blackhole when it captures particles from the ISCO.

V. CONCLUSION

In this work, we have tried to violate the weak cosmic censorship conjecture by throwing particles into the Kerr blackhole. We have considered two cases: in the first, the particles come from infinity, and in the second, the particles are captured by the blackhole from the innermost stable circular orbit. It was observed that the particles from infinity could not overspin the Kerr blackhole, whereas the particles from the ISCO were successful in doing so.
The results from the ISCO case are quite interesting. As the initial mass of the blackhole increases, the window for overspinning shrinks to the near-extremal limit, as suggested by Jacobson and Sotiriou [11]. Moreover, the extremal blackhole could not be overspun, in agreement with Wald's result [3].
In our work, we have used the test-particle approximation, which neglects the backreaction effects to some degree. But previous studies have already shown that self-force effects can indeed enforce the cosmic censorship [14,15]. Moreover, we do not know how the particle might behave inside the event horizon: our assumption that the captured particle's mass and angular momentum are directly absorbed by the blackhole might not hold in all cases. A detailed study keeping these points in mind can be done to test the conjecture more thoroughly.
VI. ACKNOWLEDGMENT
DM would like to thank IISER Kolkata and CESSI for the hospitality during the work.
FIG. 1: Plot of the mass (M) and angular momentum (a) added to the Kerr blackhole for n particles coming from infinity; M_0 = 1.0, a_0 = 0.5 and E = µ = 0.01.

FIG. 2: Plot of ∆M and ∆a against the initial angular momentum for different M_i.
[1] A. Raychaudhuri, Phys. Rev. 98, 1123 (1955).
[2] R. Penrose, Nuovo Cimento 1, 252 (1969).
[3] R. Wald, Ann. Phys. (N.Y.) 82, 548 (1974).
[4] R. M. Wald, in Black Holes, Gravitational Radiation and the Universe (Springer, 1999), pp. 69-86.
[5] I. Semiz, Gen. Relativ. Gravit. 43, 833 (2011).
[6] G. Z. Tóth, Gen. Relativ. Gravit. 44, 2019 (2012).
[7] J. Natário, L. Queimada, and R. Vicente, Class. Quantum Grav. 33, 175002 (2016).
[8] V. E. Hubeny, Phys. Rev. D 59, 064013 (1999).
[9] F. de Felice and Y. Yunqiang, Class. Quantum Grav. 18, 1235 (2001).
[10] G. E. Matsas and A. R. Da Silva, Phys. Rev. Lett. 99, 181301 (2007).
[11] T. Jacobson and T. P. Sotiriou, Phys. Rev. Lett. 103, 141101 (2009).
[12] S. Hod, Phys. Rev. Lett. 100, 121101 (2008).
[13] S. Shaymatov, M. Patil, B. Ahmedov, and P. S. Joshi, Phys. Rev. D 91, 064025 (2015).
[14] E. Barausse, V. Cardoso, and G. Khanna, Phys. Rev. Lett. 105, 261102 (2010).
[15] M. Colleoni, L. Barack, A. G. Shah, and M. van de Meent, Phys. Rev. D 92, 084044 (2015).
[16] P. Zimmerman, I. Vega, E. Poisson, and R. Haas, Phys. Rev. D 87, 041501 (2013).
[17] L. Ford and T. A. Roman, Phys. Rev. D 41, 3662 (1990).
[18] S. Hod, Phys. Rev. D 60, 104031 (1999).
[19] T. P. Singh, J. Astrophys. Astron. 20, 221 (1999).
[20] L. Lehner, R. C. Myers, E. Poisson, and R. D. Sorkin, Phys. Rev. D 94, 084046 (2016).
[21] S. Chandrasekhar, The Mathematical Theory of Black Holes, Vol. 69 (Oxford University Press, 1998).
[22] C. M. Hirata, Kerr black holes: II. Precession, circular orbits, and stability, URL http://www.tapir.caltech.edu/~chirata/ph236/2011-12/lec27.pdf.
| []
|
[
"RIEMANNIAN ADAPTIVE OPTIMIZATION METHODS",
"RIEMANNIAN ADAPTIVE OPTIMIZATION METHODS"
]
| [
"Gary Bécigneul [email protected] \nDepartment of Computer Science\nETH Zürich\nSwitzerland\n",
"Octavian-Eugen Ganea [email protected] \nDepartment of Computer Science\nETH Zürich\nSwitzerland\n"
]
| [
"Department of Computer Science\nETH Zürich\nSwitzerland",
"Department of Computer Science\nETH Zürich\nSwitzerland"
]
| []
| Several first order stochastic optimization methods commonly used in the Euclidean domain such as stochastic gradient descent (SGD), accelerated gradient descent or variance reduced methods have already been adapted to certain Riemannian settings. However, some of the most popular of these optimization tools − namely ADAM, ADAGRAD and the more recent AMSGRAD − remain to be generalized to Riemannian manifolds. We discuss the difficulty of generalizing such adaptive schemes to the most agnostic Riemannian setting, and then provide algorithms and convergence proofs for geodesically convex objectives in the particular case of a product of Riemannian manifolds, in which adaptivity is implemented across manifolds in the cartesian product. Our generalization is tight in the sense that choosing the Euclidean space as Riemannian manifold yields the same algorithms and regret bounds as those that were already known for the standard algorithms. Experimentally, we show faster convergence and to a lower train loss value for Riemannian adaptive methods over their corresponding baselines on the realistic task of embedding the WordNet taxonomy in the Poincaré ball. Our contributions. In this work we (i) explain why generalizing these adaptive schemes to the most agnostic Riemannian setting in an intrinsic manner is compromised, and (ii) propose generalizations of the algorithms together with their convergence analysis in the particular case of a product of manifolds where each manifold represents one "coordinate" of the adaptive scheme. Finally, we (iii) empirically support our claims on the realistic task of hyperbolic taxonomy embedding. Our initial motivation. The particular application that motivated us in developing Riemannian versions of ADAGRAD and ADAM was the learning of symbolic embeddings in non-Euclidean spaces. As an example, the GloVe algorithm (Pennington et al., 2014) − an unsupervised method for learning Euclidean word embeddings capturing semantic/syntactic relationships − benefits significantly from optimizing with ADAGRAD compared to using SGD, presumably because different words are sampled at different frequencies. Hence the absence of Riemannian adaptive algorithms could constitute a significant obstacle to the development of competitive optimization-based Riemannian embedding methods. In particular, we believe that the recent rise of embedding methods in hyperbolic spaces | null | [
"https://arxiv.org/pdf/1810.00760v1.pdf"
]
| 52,898,806 | 1810.00760 | 0c406033ce0b53f3b1cfa4d84223e1a8a4c6cb6f |
RIEMANNIAN ADAPTIVE OPTIMIZATION METHODS
Gary Bécigneul [email protected]
Department of Computer Science
ETH Zürich
Switzerland
Octavian-Eugen Ganea [email protected]
Department of Computer Science
ETH Zürich
Switzerland
RIEMANNIAN ADAPTIVE OPTIMIZATION METHODS
Under review as a conference paper at ICLR 2019
Several first order stochastic optimization methods commonly used in the Euclidean domain such as stochastic gradient descent (SGD), accelerated gradient descent or variance reduced methods have already been adapted to certain Riemannian settings. However, some of the most popular of these optimization tools − namely ADAM, ADAGRAD and the more recent AMSGRAD − remain to be generalized to Riemannian manifolds. We discuss the difficulty of generalizing such adaptive schemes to the most agnostic Riemannian setting, and then provide algorithms and convergence proofs for geodesically convex objectives in the particular case of a product of Riemannian manifolds, in which adaptivity is implemented across manifolds in the cartesian product. Our generalization is tight in the sense that choosing the Euclidean space as Riemannian manifold yields the same algorithms and regret bounds as those that were already known for the standard algorithms. Experimentally, we show faster convergence and to a lower train loss value for Riemannian adaptive methods over their corresponding baselines on the realistic task of embedding the WordNet taxonomy in the Poincaré ball. Our contributions. In this work we (i) explain why generalizing these adaptive schemes to the most agnostic Riemannian setting in an intrinsic manner is compromised, and (ii) propose generalizations of the algorithms together with their convergence analysis in the particular case of a product of manifolds where each manifold represents one "coordinate" of the adaptive scheme. Finally, we (iii) empirically support our claims on the realistic task of hyperbolic taxonomy embedding. Our initial motivation. The particular application that motivated us in developing Riemannian versions of ADAGRAD and ADAM was the learning of symbolic embeddings in non-Euclidean spaces. As an example, the GloVe algorithm (Pennington et al., 2014) − an unsupervised method for learning Euclidean word embeddings capturing semantic/syntactic relationships − benefits significantly from optimizing with ADAGRAD compared to using SGD, presumably because different words are sampled at different frequencies. Hence the absence of Riemannian adaptive algorithms could constitute a significant obstacle to the development of competitive optimization-based Riemannian embedding methods. In particular, we believe that the recent rise of embedding methods in hyperbolic spaces
INTRODUCTION
Developing powerful stochastic gradient-based optimization algorithms is of major importance for a variety of application domains. In particular, for computational efficiency, it is common to opt for a first order method when the number of parameters to be optimized is large enough. Such cases have recently become ubiquitous in engineering and computational sciences, from the optimization of deep neural networks to learning embeddings over large vocabularies.
This new need resulted in the development of empirically very successful first order methods such as ADAGRAD (Duchi et al., 2011), ADADELTA (Zeiler, 2012), ADAM (Kingma & Ba, 2015) or its recent update AMSGRAD (Reddi et al., 2018).
Note that these algorithms are designed to optimize parameters living in a Euclidean space R n , which has often been considered as the default geometry to be used for continuous variables. However, a recent line of work has been concerned with the optimization of parameters lying on a Riemannian manifold, a more general setting allowing non-Euclidean geometries. This family of algorithms has already found numerous applications, including for instance solving Lyapunov equations (Vandereycken & Vandewalle, 2010), matrix factorization (Tan et al., 2014), geometric programming (Sra & Hosseini, 2015), dictionary learning (Cherian & Sra, 2017) or hyperbolic taxonomy embedding (Nickel & Kiela, 2017;Ganea et al., 2018a;De Sa et al., 2018;Nickel & Kiela, 2018).
A few first order stochastic methods have already been generalized to this setting (see section 6), the seminal one being Riemannian stochastic gradient descent (RSGD) (Bonnabel, 2013), along with new methods for their convergence analysis in the geodesically convex case.
Indeed, the adaptivity of these algorithms can be thought of as assigning one learning rate per coordinate of the parameter vector. However, on a Riemannian manifold, one is generally not given an intrinsic coordinate system, rendering meaningless the notions of sparsity or coordinate-wise updates.
Manifold, tangent space, Riemannian metric. A manifold M of dimension n is a space that can locally be approximated by a Euclidean space $\mathbb R^n$, and which can be understood as a generalization to higher dimensions of the notion of surface. For instance, the sphere $\mathbb S := \{x \in \mathbb R^n \mid \|x\|_2 = 1\}$ embedded in $\mathbb R^n$ is an (n − 1)-dimensional manifold. In particular, $\mathbb R^n$ is a very simple n-dimensional manifold, with zero curvature. At each point x ∈ M, one can define the tangent space $T_x\mathcal M$, which is an n-dimensional vector space and can be seen as a first order local approximation of M around x. A Riemannian metric ρ is a collection $\rho := (\rho_x)_{x\in\mathcal M}$ of inner-products $\rho_x(\cdot, \cdot) : T_x\mathcal M \times T_x\mathcal M \to \mathbb R$ on $T_x\mathcal M$, varying smoothly with x. It defines the geometry locally on M. For x ∈ M and $u \in T_x\mathcal M$, we also write $\|u\|_x := \sqrt{\rho_x(u, u)}$. A Riemannian manifold is a pair (M, ρ).
Induced distance function, geodesics. Notice how a choice of a Riemannian metric ρ induces a natural global distance function on M. Indeed, for x, y ∈ M, we can set d(x, y) to be equal to the infimum of the lengths of smooth paths between x and y in M, where the length ℓ(c) of a path c is given by integrating the size of its speed vector $\dot c(t) \in T_{c(t)}\mathcal M$ in the corresponding tangent space: $\ell(c) := \int_{t=0}^{1} \|\dot c(t)\|_{c(t)}\, dt$. A geodesic γ in (M, ρ)
is a smooth curve γ : (a, b) → M which locally has minimal length. In particular, a shortest path between two points in M is a geodesic.
Exponential and logarithmic maps. Under some assumptions, one can define at point x ∈ M the exponential map exp x : T x M → M. Intuitively, this map folds the tangent space on the manifold. Locally, if v ∈ T x M, then for small t, exp x (tv) tells us how to move in M as to take a shortest path from x with initial direction v. In R n , exp x (v) = x + v. In some cases, one can also define the logarithmic map log x : M → T x M as the inverse of exp x .
Parallel transport. In the Euclidean space, if one wants to transport a vector v from x to y, one simply translates v along the straight-line from x to y. In a Riemannian manifold, the resulting transported vector will depend on which path was taken from x to y. The parallel transport P x (v; w) of a vector v from a point x in the direction w and in a unit time, gives a canonical way to transport v with zero acceleration along a geodesic starting from x, with initial velocity w.
RIEMANNIAN OPTIMIZATION
Consider performing an SGD update of the form
$$x_{t+1} \leftarrow x_t - \alpha g_t, \tag{1}$$
where g t denotes the gradient of objective f t 1 and α > 0 is the step-size. In a Riemannian manifold (M, ρ), for smooth f : M → R, Bonnabel (2013) defines Riemannian SGD by the following update:
$$x_{t+1} \leftarrow \exp_{x_t}(-\alpha g_t), \tag{2}$$
where $g_t \in T_{x_t}\mathcal M$ denotes the Riemannian gradient of $f_t$ at $x_t$. Note that when (M, ρ) is the Euclidean space $(\mathbb R^n, I_n)$, these two match, since we then have $\exp_x(v) = x + v$.
Intuitively, applying the exponential map enables to perform an update along the shortest path in the relevant direction in unit time, while remaining in the manifold.
In practice, when exp x (v) is not known in closed-form, it is common to replace it by a retraction map R x (v), most often chosen as R x (v) = x + v, which is a first-order approximation of exp x (v).
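Retractions and exponential maps are easiest to contrast on a manifold where both are available in closed form. The sketch below, on the unit sphere, is purely illustrative (not an algorithm from this paper): it runs RSGD with the exact exponential map, and the commented-out alternative swaps in the projection-style retraction (x + v)/‖x + v‖, the sphere-appropriate normalization of the generic x + v retraction mentioned above.

import numpy as np

def riemannian_grad(x, egrad):
    # Project the Euclidean gradient onto the tangent space at x.
    return egrad - np.dot(egrad, x) * x

def exp_map(x, v):
    # Closed-form exponential map on the sphere: exp_x(v) = cos|v| x + sin|v| v/|v|.
    nv = np.linalg.norm(v)
    return x if nv == 0 else np.cos(nv) * x + np.sin(nv) * v / nv

def retraction(x, v):
    y = x + v
    return y / np.linalg.norm(y)

# Minimize f(x) = x^T A x on the sphere; RSGD converges to an eigenvector
# associated with the smallest eigenvalue of A.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); A = (A + A.T) / 2
x = rng.standard_normal(5); x /= np.linalg.norm(x)
for _ in range(200):
    g = riemannian_grad(x, 2 * A @ x)   # Euclidean gradient of x^T A x is 2 A x
    x = exp_map(x, -0.05 * g)           # or: x = retraction(x, -0.05 * g)
print(x @ A @ x, np.linalg.eigvalsh(A)[0])  # the two values should be close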
AMSGRAD, ADAM, ADAGRAD
Let's recall here the main algorithms that we are taking interest in.
ADAGRAD. Introduced by Duchi et al. (2011), the standard form of its update step is defined as 2
$$x^i_{t+1} \leftarrow x^i_t - \alpha\, g^i_t \Big/ \sqrt{\sum_{k=1}^{t} (g^i_k)^2}. \tag{3}$$
Such updates rescaled coordinate-wise depending on the size of past gradients can yield huge improvements when gradients are sparse, or in deep networks where the size of a good update may depend on the layer. However, the accumulation of all past gradients can also slow down learning.
ADAM. Proposed by Kingma & Ba (2015), the ADAM update rule is given by
$$x^i_{t+1} \leftarrow x^i_t - \alpha\, m^i_t \Big/ \sqrt{v^i_t}, \tag{4}$$

where $m_t = \beta_1 m_{t-1} + (1 - \beta_1)g_t$ can be seen as a momentum term and $v^i_t = \beta_2 v^i_{t-1} + (1 - \beta_2)(g^i_t)^2$ is an adaptivity term. When $\beta_1 = 0$, one essentially recovers the unpublished method RMSPROP (Tieleman & Hinton, 2012), the only difference to ADAGRAD being that the sum is replaced by an exponential moving average, hence past gradients are forgotten over time in the adaptivity term $v_t$. This circumvents the issue of ADAGRAD that learning could stop too early when the sum of accumulated squared gradients is too significant. Let us also mention that the momentum term introduced by ADAM for $\beta_1 \neq 0$ has been observed to often yield huge empirical improvements.
AMSGRAD. More recently, Reddi et al. (2018) identified a mistake in the convergence proof of ADAM. To fix it, they proposed to either modify the ADAM algorithm with
$$x^i_{t+1} \leftarrow x^i_t - \alpha\, m^i_t \Big/ \sqrt{\hat v^i_t}, \quad \text{where } \hat v^i_t = \max\{\hat v^i_{t-1}, v^i_t\}, \tag{5}$$
which they coin AMSGRAD, or to choose an increasing schedule for β 2 , making it time dependent, which they call ADAMNC (for non-constant).
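For reference, the three Euclidean update rules (3)-(5) can be written compactly as below. This is a minimal sketch in which `state` is a dict of per-parameter accumulators, and the eps stabilizer of footnote 2 is included.

import numpy as np

def adagrad_step(x, g, state, alpha=0.1, eps=1e-8):
    state["G"] = state.get("G", 0.0) + g**2                # running sum, Eq. (3)
    return x - alpha * g / (np.sqrt(state["G"]) + eps)

def amsgrad_step(x, g, state, alpha=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    state["m"] = b1 * state.get("m", 0.0) + (1 - b1) * g   # momentum
    state["v"] = b2 * state.get("v", 0.0) + (1 - b2) * g**2
    state["vh"] = np.maximum(state.get("vh", 0.0), state["v"])  # max of Eq. (5)
    # Using state["v"] instead of state["vh"] below recovers ADAM, Eq. (4).
    return x - alpha * state["m"] / (np.sqrt(state["vh"]) + eps)

# Toy usage on f(x) = x^2 / 2, whose gradient is x:
x, state = np.array([5.0]), {}
for _ in range(500):
    x = amsgrad_step(x, x, state)
print(x)  # slowly approaches the minimizer 0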
ADAPTIVE SCHEMES IN RIEMANNIAN MANIFOLDS
THE DIFFICULTY OF DESIGNING ADAPTIVE SCHEMES IN THE GENERAL SETTING
Intrinsic updates. It is easily understandable that writing any coordinate-wise update requires the choice of a coordinate system. However, on a Riemannian manifold (M, ρ), one is generally not provided with a canonical coordinate system. The formalism only allows to work with certain local coordinate systems, also called charts, and several different charts can be defined around each point x ∈ M. One usually says that a quantity defined using a chart is intrinsic to M if its definition does not depend on which chart was used. For instance, it is known that the Riemannian gradient gradf of a smooth function f : M → R can be defined intrinsically to (M, ρ), but its Hessian is only intrinsically defined at critical points. It is easily seen that the RSGD update of Eq. (2) is intrinsic, since it only involves exp and grad, which are objects intrinsic to (M, ρ). However, it is unclear whether it is possible at all to express either of Eqs. (3, 4, 5) in a coordinate-free or intrinsic manner.
A tempting solution. Note that since an update is defined in a tangent space, one could be tempted to fix a canonical coordinate system e := (e (1) , ..., e (n) ) in the tangent space T x0 M R d at the initialization x 0 ∈ M, and parallel-transport e along the optimization trajectory, adapting Eq. (3) to:
$$x_{t+1} \leftarrow \exp_{x_t}(\Delta_t), \qquad e_{t+1} \leftarrow P_{x_t}(e_t; \Delta_t), \qquad \text{with } \Delta_t := -\alpha\, g_t \oslash \sqrt{\sum_{k=1}^{t} (g_k)^2}, \tag{6}$$

where $\oslash$ and $(\cdot)^2$ denote coordinate-wise division and square respectively, these operations being taken relative to the coordinate system $e_t$. In the Euclidean space, parallel transport between two points x and y does not depend on the path it is taken along, because the space has no curvature. However, in a general Riemannian manifold, not only does it depend on the chosen path but curvature will also give to parallel transport a rotational component³, which will almost surely break the sparsity of the gradients and hence the benefit of adaptivity. Besides, the interpretation of adaptivity as optimizing different features (i.e. gradient coordinates) at different speeds is also completely lost here, since the coordinate system used to represent gradients depends on the optimization path. Finally, note that the techniques we used to prove our theorems would not apply to updates defined in the vein of Eq. (6).
ADAPTIVITY IS POSSIBLE ACROSS MANIFOLDS IN A PRODUCT
From now on, we assume additional structure on (M, ρ), namely that it is the cartesian product of n Riemannian manifolds $(\mathcal M_i, \rho_i)$, where ρ is the induced product metric:

$$\mathcal M := \mathcal M_1 \times \cdots \times \mathcal M_n, \qquad \rho := \rho_1 \oplus \cdots \oplus \rho_n. \tag{7}$$
Product notations. The induced distance function d on M is known to be given by $d(x, y)^2 = \sum_{i=1}^n d_i(x^i, y^i)^2$, where $d_i$ is the distance in $\mathcal M_i$. The tangent space at $x = (x^1, ..., x^n)$ is given by $T_x\mathcal M = T_{x^1}\mathcal M_1 \oplus \cdots \oplus T_{x^n}\mathcal M_n$, and the Riemannian gradient g of a smooth function f : M → R at point x ∈ M is simply the concatenation $g = ((g^1)^T \cdots (g^n)^T)^T$ of the Riemannian gradients $g^i \in T_{x^i}\mathcal M_i$ of each partial map $f^i : y \in \mathcal M_i \mapsto f(x^1, ..., x^{i-1}, y, x^{i+1}, ..., x^n)$.
Similarly, the exponential, log map and the parallel transport in M are the concatenations of those in each M i .
Riemannian ADAGRAD. We just saw in the above discussion that designing meaningful adaptive schemes − intuitively corresponding to one learning rate per coordinate − in a general Riemannian manifold was difficult, because of the absence of intrinsic coordinates. Here, we propose to see each component x i ∈ M i of x as a "coordinate", yielding a simple adaptation of Eq. (3) as
$$x^i_{t+1} \leftarrow \exp^i_{x^i_t}\left(-\alpha\, g^i_t \Big/ \sqrt{\sum_{k=1}^{t} \|g^i_k\|^2_{x^i_k}}\right). \tag{8}$$
On the adaptivity term. Note that we take (squared) Riemannian norms $\|g^i_t\|^2_{x^i_t} = \rho^i_{x^i_t}(g^i_t, g^i_t)$ in the adaptivity term rescaling the gradient. In the Euclidean setting, this quantity is simply a scalar $(g^i_t)^2$, which is related to the size of an SGD update of the i-th coordinate, rescaled by the learning rate (see Eq. (1)): $|g^i_t| = |x^i_{t+1} - x^i_t|/\alpha$. By analogy, note that the size of an RSGD update in $\mathcal M_i$ (see Eq. (2)) is given by $d_i(x^i_{t+1}, x^i_t) = d_i(\exp^i_{x^i_t}(-\alpha g^i_t), x^i_t) = \|-\alpha g^i_t\|_{x^i_t}$, hence we also recover $\|g^i_t\|_{x^i_t} = d_i(x^i_{t+1}, x^i_t)/\alpha$, which indeed suggests replacing the scalar $(g^i_t)^2$ by $\|g^i_t\|^2_{x^i_t}$ when transforming a coordinate-wise adaptive scheme into a manifold-wise adaptive one.

4 RAMSGRAD, RADAMNC: CONVERGENCE GUARANTEES
In section 2, we briefly presented ADAGRAD, ADAM and AMSGRAD. Intuitively, ADAM can be described as a combination of ADAGRAD with a momentum (of parameter β 1 ), with the slight modification that the sum of the past squared-gradients is replaced with an exponential moving average, for an exponent β 2 . Let's also recall that AMSGRAD implements a slight modification of ADAM, allowing to correct its convergence proof. Finally, ADAMNC is simply ADAM, but with a particular non-constant schedule for β 1 and β 2 . On the other hand, what is interesting to note is that the schedule initially proposed by Reddi et al. (2018) for β 2 in ADAMNC, namely β 2t := 1 − 1/t, lets v t recover the sum of squared-gradients of ADAGRAD. Hence, ADAMNC without momentum (i.e. β 1t = 0) yields ADAGRAD.
Assumptions and notations. For 1 ≤ i ≤ n, we assume $(\mathcal M_i, \rho_i)$ is a geodesically complete Riemannian manifold with sectional curvature lower bounded by $\kappa_i \le 0$. As written in Eq. (7), let (M, ρ) be the product manifold of the $(\mathcal M_i, \rho_i)$'s. For each i, let $\mathcal X_i \subset \mathcal M_i$ be a compact, geodesically convex set and define $\mathcal X := \mathcal X_1 \times \cdots \times \mathcal X_n$, the set of feasible parameters. Define $\Pi_{\mathcal X_i} : \mathcal M_i \to \mathcal X_i$ to be the projection operator, i.e. $\Pi_{\mathcal X_i}(x)$ is the unique $y \in \mathcal X_i$ minimizing $d_i(y, x)$. Denote by $P^i$, $\exp^i$ and $\log^i$ the parallel transport, exponential and log maps in $(\mathcal M_i, \rho_i)$, respectively. For f : M → R, if g = gradf(x) for x ∈ M, denote by $x^i \in \mathcal M_i$ and by $g^i \in T_{x^i}\mathcal M_i$ the corresponding components of x and g. In the sequel, let $(f_t)$ be a family of differentiable, geodesically convex functions from M to R. Assume that each $\mathcal X_i \subset \mathcal M_i$ has a diameter bounded by $D_\infty$ and that for all 1 ≤ i ≤ n, t ∈ [T] and x ∈ X, $\|(\mathrm{grad} f_t(x))^i\|_{x^i} \le G_\infty$. Finally, our convergence guarantees will bound the regret, defined at the end of T rounds as $R_T = \sum_{t=1}^T f_t(x_t) - \min_{x\in\mathcal X}\sum_{j=1}^T f_j(x)$, so that $R_T = o(T)$.
Finally, $\varphi^i_{x^i \to y^i}$ denotes any isometry from $T_{x^i}\mathcal M_i$ to $T_{y^i}\mathcal M_i$, for $x^i, y^i \in \mathcal M_i$. Following the discussion in section 3.2 and especially Eq. (8), we present Riemannian AMSGRAD in Figure 1a. For comparison, we show next to it the standard AMSGRAD algorithm in Figure 1b.
Algorithm 1a (RAMSGRAD in $\mathcal M_1 \times \cdots \times \mathcal M_n$):
Require: $x_1 \in \mathcal X$, $\{\alpha_t\}_{t=1}^T$, $\{\beta_{1t}\}_{t=1}^T$, $\beta_2$
Set $m_0 = 0$, $\tau_0 = 0$, $v_0 = 0$ and $\hat v_0 = 0$
for t = 1 to T do
    $g_t = \mathrm{grad} f_t(x_t)$
    $m^i_t = \beta_{1t}\,\tau^i_{t-1} + (1 - \beta_{1t})\,g^i_t$
    $v^i_t = \beta_2\, v^i_{t-1} + (1 - \beta_2)\,\|g^i_t\|^2_{x^i_t}$
    $\hat v^i_t = \max\{\hat v^i_{t-1}, v^i_t\}$
    $x^i_{t+1} = \Pi_{\mathcal X_i}\!\left(\exp^i_{x^i_t}\!\left(-\alpha_t\, m^i_t/\sqrt{\hat v^i_t}\right)\right)$
    $\tau^i_t = \varphi^i_{x^i_t \to x^i_{t+1}}(m^i_t)$
end for

Algorithm 1b (AMSGRAD in $\mathbb R^n$):
Require: $x_1 \in \mathcal X$, $\{\alpha_t\}_{t=1}^T$, $\{\beta_{1t}\}_{t=1}^T$, $\beta_2$
Set $m_0 = 0$, $v_0 = 0$ and $\hat v_0 = 0$
for t = 1 to T do
    $g_t = \mathrm{grad} f_t(x_t)$
    $m^i_t = \beta_{1t}\, m^i_{t-1} + (1 - \beta_{1t})\, g^i_t$
    $v^i_t = \beta_2\, v^i_{t-1} + (1 - \beta_2)\,(g^i_t)^2$
    $\hat v^i_t = \max\{\hat v^i_{t-1}, v^i_t\}$
    $x^i_{t+1} = \Pi_{\mathcal X_i}\!\left(x^i_t - \alpha_t\, m^i_t/\sqrt{\hat v^i_t}\right)$
end for

Write $h^i_t := -\alpha_t m^i_t/\sqrt{\hat v^i_t}$. As a natural choice for $\varphi^i$, one could first parallel-transport⁴ $m^i_t$ from $x^i_t$ to $\bar x^i_{t+1} := \exp_{x^i_t}(h^i_t)$ using $P^i(\cdot\,; h^i_t)$, and then from $\bar x^i_{t+1}$ to $x^i_{t+1}$ along a minimizing geodesic. As can be seen, if $(\mathcal M_i, \rho_i) = \mathbb R$ for all i, RAMSGRAD and AMSGRAD coincide: we then have $\kappa_i = 0$, $d_i(x^i, y^i) = |x^i - y^i|$, $\varphi^i = \mathrm{Id}$, $\exp^i_{x^i}(v^i) = x^i + v^i$, $\mathcal M_1 \times \cdots \times \mathcal M_n = \mathbb R^n$, and $\|g^i_t\|^2_{x^i_t} = (g^i_t)^2 \in \mathbb R$.

From these algorithms, RADAM and ADAM are obtained simply by removing the max operations, i.e. replacing $\hat v^i_t = \max\{\hat v^i_{t-1}, v^i_t\}$ with $\hat v^i_t = v^i_t$.
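One step of Algorithm 1a can be rendered schematically in Python as below. The manifold objects and their method names (exp, norm2, proj, transport) are illustrative assumptions rather than a fixed API; transport stands in for the isometry $\varphi^i$.

import math

def ramsgrad_step(manifolds, xs, taus, vs, vhats, grads, t, alpha, b1t, b2):
    # One pass over the product manifold M_1 x ... x M_n (Algorithm 1a).
    for i, Mi in enumerate(manifolds):
        m = b1t * taus[i] + (1.0 - b1t) * grads[i]
        vs[i] = b2 * vs[i] + (1.0 - b2) * Mi.norm2(xs[i], grads[i])
        vhats[i] = max(vhats[i], vs[i])
        alpha_t = alpha / math.sqrt(t)              # schedule used in Theorem 1
        x_new = Mi.proj(Mi.exp(xs[i], -alpha_t / math.sqrt(vhats[i]) * m))
        taus[i] = Mi.transport(xs[i], x_new, m)     # tau^i_t = phi^i(m^i_t)
        xs[i] = x_new
    return xs, taus, vs, vhats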
The convergence guarantee that we obtain for RAMSGRAD is presented in Theorem 1, where the quantity ζ is defined as

$$\zeta(\kappa, c) := \frac{\sqrt{|\kappa|}\, c}{\tanh\left(\sqrt{|\kappa|}\, c\right)} = 1 + \frac{c^2}{3}|\kappa| + O_{\kappa\to 0}(\kappa^2). \tag{9}$$
For comparison, we also show the convergence guarantee of the original AMSGRAD in appendix C. Note that when $(\mathcal M_i, \rho_i) = \mathbb R$ for all i, the convergence guarantees of RAMSGRAD and AMSGRAD coincide as well. Indeed, the curvature dependent quantity $(\zeta(\kappa_i, D_\infty) + 1)/2$ in the Riemannian case then becomes equal to 1, recovering the convergence theorem of AMSGRAD. It is also interesting to understand at which speed the regret bound worsens when the curvature is small but non-zero: by a multiplicative factor of approximately $1 + D_\infty^2|\kappa|/6$ (see Eq. (9)). Similar remarks hold for RADAMNC, whose convergence guarantee is shown in Theorem 2. Finally, notice that $\beta_1 := 0$ in Theorem 2 yields a convergence proof for RADAGRAD, whose update rule we defined in Eq. (8).
Theorem 1 (Convergence of RAMSGRAD). Let $(x_t)$ and $(v_t)$ be the sequences obtained from Algorithm 1a, $\alpha_t = \alpha/\sqrt t$, $\beta_1 = \beta_{11}$, $\beta_{1t} \le \beta_1$ for all $t \in [T]$ and $\gamma = \beta_1/\sqrt{\beta_2} < 1$. We then have:

$$R_T \le \frac{\sqrt T\, D_\infty^2}{2\alpha(1 - \beta_1)}\sum_{i=1}^n \sqrt{\hat v^i_T} + \frac{D_\infty^2}{2(1 - \beta_1)}\sum_{i=1}^n\sum_{t=1}^T \frac{\beta_{1t}\sqrt{\hat v^i_t}}{\alpha_t} + \frac{\alpha\sqrt{1 + \log T}}{(1 - \beta_1)^2(1 - \gamma)\sqrt{1 - \beta_2}}\sum_{i=1}^n \frac{\zeta(\kappa_i, D_\infty) + 1}{2}\sqrt{\sum_{t=1}^T \|g^i_t\|^2_{x^i_t}}. \tag{10}$$
Proof. See appendix A.
Theorem 2 (Convergence of RADAMNC). Let $(x_t)$ and $(v_t)$ be the sequences obtained from RADAMNC, $\alpha_t = \alpha/\sqrt t$, $\beta_1 = \beta_{11}$, $\beta_{1t} = \beta_1\lambda^{t-1}$, $\lambda < 1$, $\beta_{2t} = 1 - 1/t$. We then have:

$$R_T \le \sum_{i=1}^n\left(\frac{D_\infty^2}{2\alpha(1 - \beta_1)} + \frac{\alpha\left(\zeta(\kappa_i, D_\infty) + 1\right)}{(1 - \beta_1)^3}\right)\sqrt{\sum_{t=1}^T \|g^i_t\|^2_{x^i_t}} + \frac{\beta_1 D_\infty^2 G_\infty n}{2\alpha(1 - \beta_1)(1 - \lambda)^2}. \tag{11}$$

Proof. See appendix B.
The role of convexity. Note how the notion of convexity in Theorem 5 got replaced by the notion of geodesic convexity in Theorem 1. Let us compare the two definitions: the differentiable functions $f : \mathbb R^n \to \mathbb R$ and $g : \mathcal M \to \mathbb R$ are respectively convex and geodesically convex if for all $x, y \in \mathbb R^n$, $u, v \in \mathcal M$:

$$f(x) - f(y) \le \langle \mathrm{grad} f(x),\, x - y\rangle, \qquad g(u) - g(v) \le \rho_u\left(\mathrm{grad}\, g(u),\, -\log_u(v)\right). \tag{12}$$
But how does this come into play in the proofs? Regret bounds for convex objectives are usually obtained by bounding $\sum_{t=1}^T f_t(x_t) - f_t(x^*)$ using Eq. (12) for any $x^* \in \mathcal X$, which boils down to bounding each $\langle g_t, x_t - x^*\rangle$. In the Riemannian case, this term becomes $\rho_{x_t}(g_t, -\log_{x_t}(x^*))$.
The role of the cosine law. How does one obtain a bound on $\langle g_t, x_t - x^*\rangle$? For simplicity, let us look at the particular case of an SGD update, from Eq. (1). Using a cosine law, this yields

$$\langle g_t, x_t - x^*\rangle = \frac{1}{2\alpha}\left(\|x_t - x^*\|^2 - \|x_{t+1} - x^*\|^2\right) + \frac{\alpha}{2}\,\|g_t\|^2. \tag{13}$$
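Concretely, Eq. (13) is obtained by expanding the squared distance after the SGD update of Eq. (1):

$$\|x_{t+1} - x^*\|^2 = \|x_t - \alpha g_t - x^*\|^2 = \|x_t - x^*\|^2 - 2\alpha\,\langle g_t,\, x_t - x^*\rangle + \alpha^2\|g_t\|^2,$$

and solving this identity for the inner product $\langle g_t, x_t - x^*\rangle$.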
One now has two terms to bound: (i) when summing over t, the first one simplifies as a telescopic summation; (ii) the second term $\sum_{t=1}^T \alpha_t\|g_t\|^2$ will require a well chosen decreasing schedule for α. In Riemannian manifolds, this step is generalized using the analogue lemma 6 introduced by , valid in all Alexandrov spaces, which includes our setting of geodesically convex subsets of Riemannian manifolds with lower bounded sectional curvature. The curvature dependent quantity ζ of Eq. (10) appears from this lemma, letting us bound $\rho^i_{x^i_t}(g^i_t, -\log^i_{x^i_t}(x^i_*))$.
The benefit of adaptivity. Let us also mention that the above bounds significantly improve for sparse (per-manifold) gradients. In practice, this could happen for instance for algorithms embedding each word i (or node of a graph) in a manifold M i and when just a few words are updated at a time.
On the choice of ϕ i . The fact that our convergence theorems (see lemma 3) do not require specifying ϕ i suggests that the regret bounds could be improved by exploiting momentum/acceleration in the proofs for a particular ϕ i . Note that this remark also applies to AMSGRAD (Reddi et al., 2018).
EXPERIMENTS
We empirically assess the quality of the proposed algorithms: RADAM, RAMSGRAD and RADAGRAD compared to the non-adaptive RSGD method (Eq. 2). For this, we follow (Nickel & Kiela, 2017) and embed the transitive closure of the WordNet noun hierarchy (Miller et al., 1990) in the n-dimensional Poincaré model D n of hyperbolic geometry which is well-known to be better suited to embed tree-like graphs than the Euclidean space (Gromov, 1987;De Sa et al., 2018). In this case, each word is embedded in the same space of constant curvature −1, thus M i = D n , ∀i. The choice of the Poincaré model is justified by the access to closed form expressions for all the quantities used in Alg. 1a:
• Metric tensor: $\rho_x = \lambda_x^2 I_n$, $\forall x \in \mathbb D^n$, where $\lambda_x = \frac{2}{1 - \|x\|^2}$ is the conformal factor.
• Riemannian gradients are rescaled Euclidean gradients: $\mathrm{grad} f(x) = (1/\lambda_x^2)\,\nabla^E f(x)$.
• Distance function and geodesics (Nickel & Kiela, 2017; Ungar, 2008; Ganea et al., 2018b).
• Exponential and logarithmic maps: $\exp_x(v) = x \oplus \left(\tanh\!\left(\frac{\lambda_x\|v\|}{2}\right)\frac{v}{\|v\|}\right)$, where ⊕ is the generalized Mobius addition (Ungar, 2008; Ganea et al., 2018b); see also the sketch after this list.
• Parallel transport along the unique geodesic from x to y: $P_{x\to y}(v) = \frac{\lambda_x}{\lambda_y}\cdot \mathrm{gyr}[y, -x]v$. This formula was derived from (Ungar, 2008; Ganea et al., 2018b), gyr being given in closed form in (Ungar, 2008, Eq. (1.27)).
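The first, second and fourth bullets above translate directly into code; the snippet below is a small sketch of these closed-form operations on the unit ball (curvature −1), not the authors' implementation.

import numpy as np

def lam(x):
    # Conformal factor lambda_x = 2 / (1 - ||x||^2).
    return 2.0 / (1.0 - np.dot(x, x))

def mobius_add(x, y):
    # Generalized Mobius addition on the Poincare ball.
    xy, x2, y2 = np.dot(x, y), np.dot(x, x), np.dot(y, y)
    num = (1 + 2 * xy + y2) * x + (1 - x2) * y
    return num / (1 + 2 * xy + x2 * y2)

def exp_map(x, v):
    # exp_x(v) = x (+) tanh(lambda_x ||v|| / 2) v / ||v||, as in the bullet above.
    nv = np.linalg.norm(v)
    return x if nv == 0 else mobius_add(x, np.tanh(lam(x) * nv / 2) * v / nv)

def rgrad(x, egrad):
    # Riemannian gradient as a rescaled Euclidean gradient.
    return egrad / lam(x) ** 2

x = np.array([0.1, 0.2]); v = np.array([0.05, -0.03])
y = exp_map(x, v)
print(y, np.linalg.norm(y))  # the result stays inside the unit ball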
Dataset & Model. The transitive closure of the WordNet taxonomy graph consists of 82,115 nouns and 743,241 hypernymy Is-A relations (directed edges E). These words are embedded in $\mathbb D^n$ such that the distance between words connected by an edge is minimized, while being maximized otherwise. We minimize the same loss function as (Nickel & Kiela, 2017), which is similar to a log-likelihood but approximates the partition function using sampling of negative word pairs (non-edges), fixed to 10 in our case. Note that this loss does not use the direction of the edges in the graph⁵:
$$\mathcal L(\theta) = \sum_{(u,v)\in E}\log\frac{e^{-d_{\mathbb D}(u,v)}}{\sum_{u'\in\mathcal N(v)} e^{-d_{\mathbb D}(u',v)}} \tag{14}$$
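The per-edge term of Eq. (14) is a sampled softmax over distances; the sketch below is a hypothetical illustration with placeholder distances, assuming the usual negative-log-softmax reading of the minimized loss.

import numpy as np

def edge_loss(d_true, d_negs):
    # Score of the true pair is -d(u, v); the sampled negatives approximate
    # the partition function (10 sampled non-edges in the paper).
    logits = -np.concatenate(([d_true], d_negs))
    return -(logits[0] - np.log(np.exp(logits).sum()))

print(edge_loss(0.5, np.array([2.0, 3.0, 1.5])))  # smaller d_true => smaller loss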
Metrics. We report both the loss value and the mean average precision (MAP) (Nickel & Kiela, 2017): for each directed edge (u, v), we rank its distance d(u, v) among the full set of ground truth negative examples {d(u , v)|(u , v) / ∈ E}. We use the same two settings as (Nickel & Kiela, 2017), namely: reconstruction (measuring representation capacity) and link prediction (measuring generalization). For link prediction we sample a validation set of 2% edges from the set of transitive closure edges that contain no leaf node or root. We only focused on 5-dimensional hyperbolic spaces.
Training details. For all methods we use the same "burn-in phase" described in (Nickel & Kiela, 2017) for 20 epochs, with a fixed learning rate of 0.03 and using RSGD with retraction as explained in Sec. 2.2. Solely during this phase, we sampled negative words based on their graph degree raised at power 0.75. This strategy improves all metrics. After that, when different optimization methods start, we sample negatives uniformly.
Optimization methods. Experimentally we obtained slightly better results for RADAM over RAMSGRAD, so we will mostly report the former. Moreover, we unexpectedly observed convergence to lower loss values when replacing the true exponential map with its first order approximation − i.e. the retraction $R_x(v) = x + v$ − in both RSGD and in our adaptive methods from Alg. 1a. One possible explanation is that retraction methods need fewer steps and smaller gradients to "escape" points sub-optimally collapsed on the ball border of $\mathbb D^n$ compared to fully Riemannian methods. As a consequence, we report "retraction"-based methods in a separate setting, as they are not directly comparable to their fully Riemannian analogues.

Results. We show in Figures 2 and 3 results for "exponential" based and "retraction" based methods. We ran all our methods with different learning rates from the set {0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1.0, 3.0}. For the RSGD baseline we show in orange the best learning rate setting, but we also show the previous lower (slower convergence, in blue) and the next higher (faster overfitting, in green) learning rates. For RADAM and RAMSGRAD we only show the best settings. We always use β_1 = 0.9 and β_2 = 0.999 for these methods, as these achieved the lowest training loss. RADAGRAD was consistently worse, so we do not report it. As can be seen, RADAM always achieves the lowest training loss. On the MAP metric for both reconstruction and link prediction settings, the same method also outperforms all the other methods in the fully Riemannian setting (i.e. Figure 2). Interestingly, in the "retraction" setting, RADAM reaches the lowest training loss value and is on par with RSGD on the MAP evaluation for both reconstruction and link prediction settings. However, RAMSGRAD is faster to converge in terms of MAP for the link prediction task, suggesting that this method has a better generalization capability.
RELATED WORK
After Riemannian SGD was introduced by Bonnabel (2013), a plethora of other first order Riemannian methods arose, such as Riemannian SVRG, Riemannian Stein variational gradient descent (Liu & Zhu, 2017), Riemannian accelerated gradient descent (Zhang & Sra, 2018) or averaged RSGD (Tripuraneni et al., 2018), along with new methods for their convergence analysis in the geodesically convex case. Stochastic gradient Langevin dynamics was generalized as well, to improve optimization on the probability simplex (Patterson & Teh, 2013).
Let us also mention that a first version of Riemannian ADAM for the Grassmann manifold G(1, n) was previously introduced by Cho & Lee (2017), proposing to transport the momentum term using parallel translation, which is an idea that we preserved. However, their algorithm completely removes the adaptive component, since the adaptivity term v t becomes a scalar. No adaptivity across manifolds is discussed, which is the main point of our discussion. Moreover, no convergence analysis is provided.
CONCLUSION
Driven by recent work in learning non-Euclidean embeddings for symbolic data, we propose to generalize popular adaptive optimization tools (e.g. ADAM, AMSGRAD, ADAGRAD) to Cartesian products of Riemannian manifolds in a principled and intrinsic manner. We derive convergence rates that are similar to the Euclidean corresponding models. Experimentally we show that our methods outperform popular non-adaptive methods such as RSGD on the realistic task of hyperbolic word taxonomy embedding.
Let's look at the first term. Using β 1t ≤ β 1 and with a change of indices, we have
$$\sum_{i=1}^n\sum_{t=1}^T \frac{\sqrt{\hat v^i_t}}{2\alpha_t(1 - \beta_{1t})}\left(d_i(x^i_t, x^i_*)^2 - d_i(x^i_{t+1}, x^i_*)^2\right) \tag{21}$$
$$\le \frac{1}{2(1 - \beta_1)}\sum_{i=1}^n\left[\sum_{t=2}^T\left(\frac{\sqrt{\hat v^i_t}}{\alpha_t} - \frac{\sqrt{\hat v^i_{t-1}}}{\alpha_{t-1}}\right)d_i(x^i_t, x^i_*)^2 + \frac{\sqrt{\hat v^i_1}}{\alpha_1}\, d_i(x^i_1, x^i_*)^2\right] \tag{22}$$
$$\le \frac{1}{2(1 - \beta_1)}\sum_{i=1}^n\left[\sum_{t=2}^T\left(\frac{\sqrt{\hat v^i_t}}{\alpha_t} - \frac{\sqrt{\hat v^i_{t-1}}}{\alpha_{t-1}}\right)D_\infty^2 + \frac{\sqrt{\hat v^i_1}}{\alpha_1}\, D_\infty^2\right] \tag{23}$$
$$= \frac{D_\infty^2}{2\alpha_T(1 - \beta_1)}\sum_{i=1}^n\sqrt{\hat v^i_T}, \tag{24}$$
where the last equality comes from a standard telescopic summation. We now need the following lemma.
Lemma 3.
$$\sum_{t=1}^T \frac{\alpha_t}{\sqrt{\hat v^i_t}}\,\|m^i_t\|^2_{x^i_t} \le \frac{\alpha\sqrt{1 + \log T}}{(1 - \beta_1)(1 - \gamma)\sqrt{1 - \beta_2}}\sqrt{\sum_{t=1}^T \|g^i_t\|^2_{x^i_t}} \tag{25}$$
Proof. Let's start by separating the last term, and removing the hat on v.
$$\sum_{t=1}^T \frac{\alpha_t}{\sqrt{\hat v^i_t}}\,\|m^i_t\|^2_{x^i_t} \le \sum_{t=1}^{T-1} \frac{\alpha_t}{\sqrt{\hat v^i_t}}\,\|m^i_t\|^2_{x^i_t} + \frac{\alpha_T}{\sqrt{\hat v^i_T}}\,\|m^i_T\|^2_{x^i_T} \tag{26}$$
$$\le \sum_{t=1}^{T-1} \frac{\alpha_t}{\sqrt{\hat v^i_t}}\,\|m^i_t\|^2_{x^i_t} + \frac{\alpha_T}{\sqrt{v^i_T}}\,\|m^i_T\|^2_{x^i_T} \tag{27}$$
Let's now have a closer look at the last term. We can reformulate m i T as:
$$m^i_T = \sum_{j=1}^T\left[(1 - \beta_{1j})\prod_{k=1}^{T-j}\beta_{1,(T-k+1)}\right]\varphi^i_{x^i_{T-1}\to x^i_T}\circ\cdots\circ\varphi^i_{x^i_j\to x^i_{j+1}}(g^i_j) \tag{28}$$
Applying lemma 7, we get
$$\|m^i_T\|^2_{x^i_T} \le \left(\sum_{j=1}^T(1 - \beta_{1j})\prod_{k=1}^{T-j}\beta_{1,(T-k+1)}\right)\left(\sum_{j=1}^T(1 - \beta_{1j})\prod_{k=1}^{T-j}\beta_{1,(T-k+1)}\left\|\varphi^i_{x^i_{T-1}\to x^i_T}\circ\cdots\circ\varphi^i_{x^i_j\to x^i_{j+1}}(g^i_j)\right\|^2_{x^i_T}\right). \tag{29}$$
Since $\varphi^i$ is an isometry, we always have $\|\varphi^i_{x\to y}(u)\|_y = \|u\|_x$, i.e.

$$\left\|\varphi^i_{x^i_{T-1}\to x^i_T}\circ\cdots\circ\varphi^i_{x^i_j\to x^i_{j+1}}(g^i_j)\right\|^2_{x^i_T} = \|g^i_j\|^2_{x^i_j}. \tag{30}$$
Using that $\beta_{1k}\le\beta_1$ for all $k\in[T]$,

$$\|m^i_T\|^2_{x^i_T} \le \left(\sum_{j=1}^T(1 - \beta_{1j})\,\beta_1^{T-j}\right)\left(\sum_{j=1}^T(1 - \beta_{1j})\,\beta_1^{T-j}\,\|g^i_j\|^2_{x^i_j}\right). \tag{31}$$
Finally, (1 − β 1j ) ≤ 1 and
where we used the facts that $d \mapsto \zeta(\kappa, d)$ is an increasing function, and that $\alpha_t/\sqrt{\hat v^i_t} \le \alpha_{t-1}/\sqrt{\hat v^i_{t-1}}$, which enables us to bound both the second and third terms of the right-hand side of Eq. (19) using lemma 3.
D USEFUL LEMMAS
The following lemma is a user-friendly inequality developed by in order to prove convergence of gradient-based optimization algorithms, for geodesically convex functions, in Alexandrov spaces.

Lemma 6 (Cosine inequality in Alexandrov spaces). If a, b, c are the sides (i.e., side lengths) of a geodesic triangle in an Alexandrov space with curvature lower bounded by κ, and A is the angle between sides b and c, then

$$a^2 \le \frac{\sqrt{|\kappa|}\, c}{\tanh\left(\sqrt{|\kappa|}\, c\right)}\, b^2 + c^2 - 2bc\cos(A).$$
Proof. See section 3.1, lemma 6 of .
Lemma 7 (An analogue of Cauchy-Schwarz). For all p, k ∈ N * , u 1 , ..., u k ∈ R p , a 1 , ..., a k ∈ R + , we have
$$\left\|\sum_i a_i u_i\right\|_2^2 \le \left(\sum_i a_i\right)\left(\sum_i a_i\,\|u_i\|_2^2\right). \tag{64}$$
Proof. The proof consists in applying Cauchy-Schwarz' inequality two times:
$$\left\|\sum_i a_i u_i\right\|_2^2 = \sum_{i,j} a_i a_j\, u_i^T u_j \tag{65}$$
$$= \sum_{i,j}\sqrt{a_i a_j}\,(\sqrt{a_i}\,u_i)^T(\sqrt{a_j}\,u_j) \tag{66}$$
$$\le \sum_{i,j}\sqrt{a_i a_j}\,\|\sqrt{a_i}\,u_i\|_2\,\|\sqrt{a_j}\,u_j\|_2 \tag{67}$$
$$= \left(\sum_i \sqrt{a_i}\,\|\sqrt{a_i}\,u_i\|_2\right)^2 \tag{68}$$
$$\le \left(\sum_i a_i\right)\left(\sum_i a_i\,\|u_i\|_2^2\right). \tag{69}$$
Finally, this last lemma is used by Reddi et al. (2018) in their convergence proof for ADAMNC. We need it too, in an analogue lemma.

Lemma 8 (Auer et al., 2002). For any non-negative real numbers $y_1, ..., y_t$, the following holds:

$$\sum_{i=1}^t \frac{y_i}{\sqrt{\sum_{j=1}^i y_j}} \le 2\sqrt{\sum_{i=1}^t y_i}. \tag{70}$$
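Lemma 8 is easy to sanity-check numerically on random non-negative sequences:

import numpy as np

rng = np.random.default_rng(1)
for _ in range(5):
    y = rng.random(100)                       # non-negative sequence
    lhs = np.sum(y / np.sqrt(np.cumsum(y)))   # left-hand side of Eq. (70)
    rhs = 2 * np.sqrt(np.sum(y))              # right-hand side of Eq. (70)
    assert lhs <= rhs + 1e-12
print("lemma 8 holds on all sampled sequences")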
Figure 1: Comparison of the Riemannian and Euclidean versions of AMSGRAD.

Figure 2: Results for methods doing updates with the exponential map. From left to right we report: training loss, MAP on the train set, MAP on the validation set.

Figure 3: Results for methods doing updates with the retraction. From left to right we report: training loss, MAP on the train set, MAP on the validation set.
1. To be interpreted as the objective with the same parameters, evaluated at the minibatch taken at time t.
2. A small ε = 10⁻⁸ is often added in the square-root for numerical stability, omitted here for simplicity.
3. The rotational component of parallel transport inherited from curvature is called the holonomy.
4. The idea of parallel-transporting $m_t$ from $T_{x_t}\mathcal M$ to $T_{x_{t+1}}\mathcal M$ previously appeared in Cho & Lee (2017).
5. In a pair (u, v), u denotes the parent, i.e. u entails v.
6. Note that since each $\mathcal X_i$ is geodesically convex, logarithms are well-defined.
ACKNOWLEDGMENTS

Gary Bécigneul is funded by the Max Planck ETH Center for Learning Systems. Octavian Ganea is funded by the Swiss National Science Foundation (SNSF) under grant agreement number 167176.

A PROOF OF THEOREM 1

Proof. Denote by $\bar x^i_{t+1} := \exp^i_{x^i_t}(-\alpha_t m^i_t/\sqrt{\hat v^i_t})$ and consider the geodesic triangle defined by $\bar x^i_{t+1}$, $x^i_t$ and $x^i_*$. We combine the following formula⁶ with the following inequality (given by lemma 6):

$$a^2 \le \zeta(\kappa, c)\, b^2 + c^2 - 2bc\cos(A), \qquad \text{with } \zeta(\kappa, c) := \frac{\sqrt{|\kappa|}\, c}{\tanh\left(\sqrt{|\kappa|}\, c\right)},$$

where we use the notation $\langle\cdot,\cdot\rangle_{x^i}$ for $\rho^i_{x^i}(\cdot,\cdot)$ when it is clear which metric is used. By definition of $\Pi_{\mathcal X_i}$, we can safely replace $\bar x^i_{t+1}$ by $x^i_{t+1}$ in the above inequality. Plugging this in and applying Cauchy-Schwarz' and Young's inequalities to the last term yields, from the geodesic convexity of $f_t$ for $1 \le t \le T$, a bound on the regret. Let's now look at $\hat v^i_T$. Combining Eq. (32) and Eq. (33) allows us to bound the last term of Eq. (26); with this inequality, we can then bound every term of Eq. (26). Putting together Eqs. (19), (20), (24) and lemma 3 lets us bound the regret.

Remark. Let us notice that, similarly as for AMSGRAD, RAMSGRAD also has a regret bounded by $O(G_\infty\sqrt T)$. This is easy to see from the proof of lemma 4. Hence the actual upper bound on the regret is a minimum between the one in $O(G_\infty\sqrt T)$ and the one of Theorem 1.

B PROOF OF THEOREM 2

Proof. Similarly as for the proof of Theorem 1 (and with the same notations), we obtain the corresponding inequality; with the same techniques as before, we obtain the same bound on the first term. However, for the other terms, we now need a new lemma.

Proof. Let's start by separating the last term, where the last inequality comes from lemma 8. Putting everything together, we finally obtain the bound, where we used the identity that holds for this choice of $\alpha_t$ and $\beta_{2t}$. This combined with Eq. (53) yields the final result.

Remark. Notice the appearance of a factor n/α in the last term of the last equation. This term is missing in corollaries 1 and 2 of Reddi et al. (2018), which is a mistake. However, this dependence in n is not too harmful here, since this term does not depend on T.

C AMSGRAD

Theorem 5 (Convergence of AMSGRAD). Let $(f_t)$ be a family of differentiable, convex functions from $\mathbb R^n$ to $\mathbb R$. Let $(x_t)$ and $(v_t)$ be the sequences obtained from Algorithm 1b, $\alpha_t = \alpha/\sqrt t$, $\beta_1 = \beta_{11}$, $\beta_{1t} \le \beta_1$ for all $t \in [T]$ and $\gamma = \beta_1/\sqrt{\beta_2} < 1$. Assume that each $\mathcal X_i \subset \mathbb R$ has a diameter bounded by $D_\infty$ and that for all $1 \le i \le n$, $t \in [T]$ and $x \in \mathcal X$, $\|\mathrm{grad} f_t(x)\|_\infty \le G_\infty$. For $(x_t)$ generated using AMSGRAD (Algorithm 1b), we have the following bound on the regret:
Peter Auer, Nicolo Cesa-Bianchi, and Claudio Gentile. Adaptive and self-confident on-line learning algorithms. Journal of Computer and System Sciences, 64(1):48-75, 2002.

Silvere Bonnabel. Stochastic gradient descent on Riemannian manifolds. IEEE Transactions on Automatic Control, 58(9):2217-2229, 2013.

Anoop Cherian and Suvrit Sra. Riemannian dictionary learning and sparse coding for positive definite matrices. IEEE Transactions on Neural Networks and Learning Systems, 28(12):2859-2871, 2017.

Minhyung Cho and Jaehyung Lee. Riemannian approach to batch normalization. In Advances in Neural Information Processing Systems, pp. 5225-5235, 2017.

Christopher De Sa, Albert Gu, Christopher Ré, and Frederic Sala. Representation tradeoffs for hyperbolic embeddings. 2018. URL https://www.cs.cornell.edu/~cdesa/papers/arxiv2018_hyperbolic.pdf.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159, 2011.

Octavian-Eugen Ganea, Gary Bécigneul, and Thomas Hofmann. Hyperbolic entailment cones for learning hierarchical embeddings. In International Conference on Machine Learning, 2018a.

Octavian-Eugen Ganea, Gary Bécigneul, and Thomas Hofmann. Hyperbolic neural networks. In Advances in Neural Information Processing Systems, 2018b.

Mikhael Gromov. Hyperbolic groups. In Essays in Group Theory, pp. 75-263. Springer, 1987.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.

Chang Liu and Jun Zhu. Riemannian Stein variational gradient descent for Bayesian inference. arXiv preprint arXiv:1711.11216, 2017.

Yuanyuan Liu, Fanhua Shang, James Cheng, Hong Cheng, and Licheng Jiao. Accelerated first-order methods for geodesically convex optimization on Riemannian manifolds. In Advances in Neural Information Processing Systems 30, pp. 4868-4877, 2017.

George A. Miller, Richard Beckwith, Christiane Fellbaum, Derek Gross, and Katherine J. Miller. Introduction to WordNet: An on-line lexical database. International Journal of Lexicography, 3(4):235-244, 1990.

Maximilian Nickel and Douwe Kiela. Learning continuous hierarchies in the Lorentz model of hyperbolic geometry. In International Conference on Machine Learning, 2018.

Maximillian Nickel and Douwe Kiela. Poincaré embeddings for learning hierarchical representations. In Advances in Neural Information Processing Systems, pp. 6341-6350, 2017.

Sam Patterson and Yee Whye Teh. Stochastic gradient Riemannian Langevin dynamics on the probability simplex. In Advances in Neural Information Processing Systems, pp. 3102-3110, 2013.

Jeffrey Pennington, Richard Socher, and Christopher D. Manning. GloVe: Global vectors for word representation. In EMNLP, volume 14, pp. 1532-1543, 2014.

Sashank J. Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of Adam and beyond. In ICLR, 2018.

Joel W. Robbin and Dietmar A. Salamon. Introduction to differential geometry. ETH, Lecture Notes, preliminary version, January 2011.

Michael Spivak. A Comprehensive Introduction to Differential Geometry, Volume Four. 1979.

Suvrit Sra and Reshad Hosseini. Conic geometric optimization on the manifold of positive definite matrices. SIAM Journal on Optimization, 25(1):713-739, 2015.

Mingkui Tan, Ivor W. Tsang, Li Wang, Bart Vandereycken, and Sinno Jialin Pan. Riemannian pursuit for big matrix recovery. In International Conference on Machine Learning, pp. 1539-1547, 2014.

Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5 - RMSProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4(2):26-31, 2012.

Nilesh Tripuraneni, Nicolas Flammarion, Francis Bach, and Michael I. Jordan. Averaging stochastic gradient descent on Riemannian manifolds. In Conference On Learning Theory (COLT), Stockholm, Sweden, July 6-9, 2018.

Abraham Albert Ungar. A gyrovector space approach to hyperbolic geometry. Synthesis Lectures on Mathematics and Statistics, 1(1):1-194, 2008.

Bart Vandereycken and Stefan Vandewalle. A Riemannian optimization approach for computing low-rank solutions of Lyapunov equations. SIAM Journal on Matrix Analysis and Applications, 31(5):2553-2579, 2010.

Tran Dang Quang Vinh, Yi Tay, Shuai Zhang, Gao Cong, and Xiao-Li Li. Hyperbolic recommender systems. arXiv preprint arXiv:1809.01703, 2018.

Matthew D. Zeiler. ADADELTA: An adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.

Hongyi Zhang and Suvrit Sra. First-order methods for geodesically convex optimization. In Conference on Learning Theory, pp. 1617-1638, 2016.

Hongyi Zhang and Suvrit Sra. Towards Riemannian accelerated gradient methods. arXiv preprint arXiv:1806.02812, 2018.

Hongyi Zhang, Sashank J. Reddi, and Suvrit Sra. Riemannian SVRG: Fast stochastic optimization on Riemannian manifolds. In Advances in Neural Information Processing Systems, pp. 4592-4600, 2016.
| []
|
[
"FULLY SUPERVISED SPEAKER DIARIZATION",
"FULLY SUPERVISED SPEAKER DIARIZATION"
]
| [
"Aonan Zhang \nGoogle Inc\nUSA\n\nColumbia University\nUSA\n",
"Quan Wang [email protected] \nGoogle Inc\nUSA\n",
"Zhenyao Zhu [email protected] \nGoogle Inc\nUSA\n",
"John Paisley [email protected] \nColumbia University\nUSA\n",
"Chong Wang [email protected] \nGoogle Inc\nUSA\n"
]
| [
"Google Inc\nUSA",
"Columbia University\nUSA",
"Google Inc\nUSA",
"Google Inc\nUSA",
"Columbia University\nUSA",
"Google Inc\nUSA"
]
| []
| In this paper, we propose a fully supervised speaker diarization approach, named unbounded interleaved-state recurrent neural networks (UIS-RNN). Given extracted speaker-discriminative embeddings (a.k.a. d-vectors) from input utterances, each individual speaker is modeled by a parameter-sharing RNN, while the RNN states for different speakers interleave in the time domain. This RNN is naturally integrated with a distance-dependent Chinese restaurant process (ddCRP) to accommodate an unknown number of speakers. Our system is fully supervised and is able to learn from examples where time-stamped speaker labels are annotated. We achieved a 7.6% diarization error rate on NIST SRE 2000 CALLHOME, which is better than the state-of-the-art method using spectral clustering. Moreover, our method decodes in an online fashion while most state-of-the-art systems rely on offline clustering. | 10.1109/icassp.2019.8683892 | [
"https://arxiv.org/pdf/1810.04719v4.pdf"
]
| 52,966,666 | 1810.04719 | e36c8a988c58950ae48d5b1f0c2043d221cbfb68 |
FULLY SUPERVISED SPEAKER DIARIZATION
Aonan Zhang
Google Inc
USA
Columbia University
USA
Quan Wang [email protected]
Google Inc
USA
Zhenyao Zhu [email protected]
Google Inc
USA
John Paisley [email protected]
Columbia University
USA
Chong Wang [email protected]
Google Inc
USA
FULLY SUPERVISED SPEAKER DIARIZATION
Index Terms: Speaker diarization, d-vector, clustering, recurrent neural networks, Chinese restaurant process
In this paper, we propose a fully supervised speaker diarization approach, named unbounded interleaved-state recurrent neural networks (UIS-RNN). Given extracted speaker-discriminative embeddings (a.k.a. d-vectors) from input utterances, each individual speaker is modeled by a parameter-sharing RNN, while the RNN states for different speakers interleave in the time domain. This RNN is naturally integrated with a distance-dependent Chinese restaurant process (ddCRP) to accommodate an unknown number of speakers. Our system is fully supervised and is able to learn from examples where time-stamped speaker labels are annotated. We achieved a 7.6% diarization error rate on NIST SRE 2000 CALLHOME, which is better than the state-of-the-art method using spectral clustering. Moreover, our method decodes in an online fashion while most state-of-the-art systems rely on offline clustering.
INTRODUCTION
Aiming to solve the problem of "who spoke when", most existing speaker diarization systems consist of multiple relatively independent components [1,2,3], including but not limited to: (1) A speech segmentation module, which removes the non-speech parts, and divides the input utterance into small segments; (2) An embedding extraction module, where speaker-discriminative embeddings such as speaker factors [4], i-vectors [5], or d-vectors [6] are extracted from the small segments; (3) A clustering module, which determines the number of speakers, and assigns speaker identities to each segment; (4) A resegmentation module, which further refines the diarization results by enforcing additional constraints [1].
For the embedding extraction module, recent work [2,3,7] has shown that the diarization performance can be significantly improved by replacing i-vectors [5] with neural network embeddings, a.k.a. d-vectors [6,8]. This is largely due to the fact that neural networks can be trained with big datasets, such that the model is sufficiently robust against varying speaker accents and acoustic conditions in different use scenarios.
However, there is still one component that is unsupervised in most modern speaker diarization systems -the clustering module. Examples of clustering algorithms that have been used in diarization systems include Gaussian mixture models [7,9], mean shift [10], agglomerative hierarchical clustering [2,11], k-means [3,12], Links [3,13], and spectral clustering [3,14].
The first author performed this work as an intern at Google. The implementation of the algorithms in this paper is available at: https://github.com/google/uis-rnn

Since both the number of speakers and the segment-wise speaker labels are determined by the clustering module, the quality of the clustering algorithm is critically important to the final diarization performance. However, the fact that most clustering algorithms are unsupervised means that we will not be able to improve this module by learning from examples when time-stamped speaker label ground truth is available. In fact, in many domain-specific applications, it is relatively easy to obtain such high quality annotated data.
In this paper, we replace the unsupervised clustering module by an online generative process that naturally incorporates labelled data for training. We call this method unbounded interleaved-state recurrent neural network (UIS-RNN), based on these facts: (1) Each speaker is modeled by an instance of RNN, and these instances share the same parameters; (2) An unbounded number of RNN instances can be generated; (3) The states of different RNN instances, corresponding to different speakers, are interleaved in the time domain. Within a fully supervised framework, our method in addition handles complexities in speaker diarization: it automatically learns the number of speakers within each utterance via a Bayesian non-parametric process, and it carries information through time via the RNN.
The contributions of our work are summarized as follows:
1. Unbounded interleaved-state RNN, a trainable model for the general problem of segmenting and clustering temporal data by learning from examples.
2. Framework for a fully supervised speaker diarization system.
3. New state-of-the-art performance on the NIST SRE 2000 CALLHOME benchmark.
4. Online diarization solution with offline quality.
BASELINE SYSTEM USING CLUSTERING
Our diarization system is built on top of the recent work by Wang et al. [3]. Specifically, we use exactly the same segmentation module and embedding extraction module as their system, while replacing their clustering module by an unbounded interleaved-state RNN.
As a brief review, in the baseline system [3], a text-independent speaker recognition network is used to extract embeddings from sliding windows of size 240ms and 50% overlap. A simple voice activity detector (VAD) with only two full-covariance Gaussians is used to remove non-speech parts, and partition the utterance into nonoverlapping segments with max length of 400ms. Then we average window-level embeddings to segment-level d-vectors, and feed them into the clustering algorithm to produce final diarization results. The workflow of this baseline system is shown in Fig. 1.
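As a sketch of the aggregation step just described (window-level embeddings averaged into segment-level d-vectors), one could write something like the following; the function name, array layout, and renormalization choice are assumptions for illustration, not the released pipeline.

```python
import numpy as np

def segment_dvectors(window_embs, window_times, segment_bounds):
    """Average window-level embeddings into segment-level d-vectors.

    window_embs:    (num_windows, dim) embeddings from the sliding windows
    window_times:   (num_windows,) center time of each window, in seconds
    segment_bounds: list of (start, end) times of the VAD segments
    """
    segments = []
    for start, end in segment_bounds:
        mask = (window_times >= start) & (window_times < end)
        d = window_embs[mask].mean(axis=0)
        segments.append(d / np.linalg.norm(d))  # renormalize after averaging
    return np.stack(segments)
```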
The text-independent speaker recognition network for computing embeddings has three LSTM layers and one linear layer. The network is trained with the state-of-the-art generalized end-to-end loss [6]. We have been retraining this model for better performance, which will be discussed later in Section 4.

Fig. 1. The baseline system architecture [3].
UNBOUNDED INTERLEAVED-STATE RNN
Overview of approach
Given an utterance, from the embedding extraction module, we get an observation sequence of embeddings X = (x1, x2, . . . , xT ), where each xt ∈ R d . Each entry in this sequence is a real-valued d-vector corresponding to a segment in the original utterance. In the supervised speaker diarization scenario, we also have the ground truth speaker labels for each segment Y = (y1, y2, . . . , yT ). Without loss of generality, let Y be a sequence of positive integers by the order of appearance. For example, Y = (1, 1, 2, 3, 2, 2) means this utterance has six segments, from three different speakers, where yt = k means segment t belongs to speaker k.
UIS-RNN is an online generative process of an entire utterance $(\mathbf{X}, \mathbf{Y})$, where¹

$p(\mathbf{X}, \mathbf{Y}) = p(x_1, y_1) \cdot \prod_{t=2}^{T} p(x_t, y_t \mid x_{[t-1]}, y_{[t-1]}). \quad (1)$
To model speaker changes, we use an augmented representation

$p(\mathbf{X}, \mathbf{Y}, \mathbf{Z}) = p(x_1, y_1) \cdot \prod_{t=2}^{T} p(x_t, y_t, z_t \mid x_{[t-1]}, y_{[t-1]}, z_{[t-1]}), \quad (2)$
where $\mathbf{Z} = (z_2, \ldots, z_T)$, and $z_t = \mathbb{1}(y_t \neq y_{t-1}) \in \{0, 1\}$ is a binary indicator for speaker changes. For example, if $\mathbf{Y} = (1, 1, 2, 3, 2, 2)$, then $\mathbf{Z} = (0, 1, 1, 1, 0)$. Note that $\mathbf{Z}$ is uniquely determined by $\mathbf{Y}$, but $\mathbf{Y}$ cannot be uniquely determined by a given $\mathbf{Z}$, since we don't know which speaker we are changing to. Here we leave $z_1$ undefined, and factorize each product term in Eq. (2) as three parts that separately model sequence generation, speaker assignment, and speaker change:
$p(x_t, y_t, z_t \mid x_{[t-1]}, y_{[t-1]}, z_{[t-1]}) = \underbrace{p(x_t \mid x_{[t-1]}, y_{[t]})}_{\text{sequence generation}} \cdot \underbrace{p(y_t \mid z_t, y_{[t-1]})}_{\text{speaker assignment}} \cdot \underbrace{p(z_t \mid z_{[t-1]})}_{\text{speaker change}}. \quad (3)$
For the first entry of the sequence, we let $y_1 = 1$ and there is no need to model speaker assignment and speaker change. In Section 3.2, we introduce these components separately.

¹ We denote an ordered set $(1, 2, \ldots, t)$ as $[t]$.
Details on model components
Speaker change
We assume the probability of $z_t \in \{0, 1\}$ follows:

$p(z_t = 1 \mid z_{[t-1]}, \boldsymbol{\lambda}) = g_{\boldsymbol{\lambda}}(z_{[t-1]}), \quad (4)$
where $g_{\boldsymbol{\lambda}}(\cdot)$ is a function parameterized by $\boldsymbol{\lambda}$. Since $z_t$ indicates a speaker change at time $t$, we have
$p(y_t = y_{t-1} \mid z_t, y_{[t-1]}) = 1 - z_t. \quad (5)$
In general, $g_{\boldsymbol{\lambda}}(\cdot)$ could be any function, such as an RNN. But for simplicity, in this work, we make it a constant value $g_{\boldsymbol{\lambda}}(z_{[t-1]}) = p_0 \in [0, 1]$. This means $\{z_t\}_{t \in [2,T]}$ are independent binary variables parameterized by $\boldsymbol{\lambda} = \{p_0\}$:
$z_t \stackrel{\text{iid}}{\sim} \text{Binary}(p_0). \quad (6)$
Speaker assignment process
One of the biggest challenges in speaker diarization is to determine the total number of speakers for each utterance. To model the speaker turn behavior in an utterance, we use a distance dependent Chinese restaurant process (ddCRP) [15], a Bayesian nonparametric model that can potentially model an unbounded number of speakers. Specifically, when $z_t = 0$, the speaker remains unchanged. When $z_t = 1$, we let
$p(y_t = k \mid z_t = 1, y_{[t-1]}) \propto N_{k,t-1},$
$p(y_t = K_{t-1} + 1 \mid z_t = 1, y_{[t-1]}) \propto \alpha. \quad (7)$
Here $K_{t-1} := \max y_{[t-1]}$ is the total number of unique speakers up to the $(t-1)$-th entry. Since $z_t = 1$ indicates a speaker change, we have $k \in [K_{t-1}] \setminus \{y_{t-1}\}$.
In addition, we let $N_{k,t-1}$ be the number of blocks for speaker $k$ in $y_{[t-1]}$. A block is defined as a maximum-length subsequence of continuous segments that belongs to a single speaker. For example, if $y_{[6]} = (1, 1, 2, 3, 2, 2)$, then there are four blocks $(1,1)|(2)|(3)|(2,2)$ separated by the vertical bar, with $N_{1,5} = 1$, $N_{2,5} = 2$, $N_{3,5} = 1$. The probability of switching back to a previously appeared speaker is proportional to the number of continuous speeches she/he has spoken. There is also a chance to switch to a new speaker, with a probability proportional to a constant $\alpha$. The joint distribution of $\mathbf{Y}$ given $\mathbf{Z}$ is
$p(\mathbf{Y} \mid \mathbf{Z}, \alpha) = \frac{\alpha^{K_T - 1} \prod_{k=1}^{K_T} \Gamma(N_{k,T})}{\prod_{t=2}^{T} \left( \sum_{k \in [K_{t-1}] \setminus \{y_{t-1}\}} N_{k,t-1} + \alpha \right)^{\mathbb{1}(z_t = 1)}}. \quad (8)$
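The speaker assignment distribution of Eq. (7) is easy to compute from block counts. Below is a small illustrative sketch (not the official implementation); the function name and the dict-based bookkeeping are assumptions.

```python
def speaker_assignment_probs(y_prev, alpha):
    """Next-speaker distribution given a speaker change (z_t = 1), as in Eq. (7).

    y_prev: past integer speaker labels y_[t-1], speakers numbered from 1
    alpha:  ddCRP concentration parameter for opening a new speaker
    The previous speaker y_{t-1} is excluded, since z_t = 1 forces a change.
    """
    K = max(y_prev)
    blocks = {}  # N_{k,t-1}: number of maximal same-speaker runs per speaker
    for i, y in enumerate(y_prev):
        if i == 0 or y != y_prev[i - 1]:
            blocks[y] = blocks.get(y, 0) + 1
    probs = {k: blocks.get(k, 0) for k in range(1, K + 1) if k != y_prev[-1]}
    probs[K + 1] = alpha  # probability mass for a brand-new speaker
    total = sum(probs.values())
    return {k: p / total for k, p in probs.items()}

# Example from the text: y_[6] = (1, 1, 2, 3, 2, 2) -> blocks (1,1)|(2)|(3)|(2,2)
print(speaker_assignment_probs([1, 1, 2, 3, 2, 2], alpha=1.0))
```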
Sequence generation
Our basic assumption is that the observation sequence of speaker embeddings $\mathbf{X}$ is generated by distributions that are parameterized by the output of an RNN. This RNN has multiple instantiations, corresponding to different speakers, and they share the same set of RNN parameters $\boldsymbol{\theta}$. In our work, we use a gated recurrent unit (GRU) [16] as our RNN model, to memorize long-term dependencies. At time $t$, we define $h_t$ as the state of the GRU corresponding to speaker $y_t$, and

$m_t = f(h_t \mid \boldsymbol{\theta}) \quad (9)$
as the output of the entire network, which may contain other layers. Let $t' := \max\{0, s < t : y_s = y_t\}$ be the last time we saw speaker $y_t$ before $t$; then:

$h_t = \text{GRU}(x_{t'}, h_{t'} \mid \boldsymbol{\theta}), \quad (10)$

where we can assume $x_0 = 0$ and $h_0 = 0$, meaning all GRU instances are initialized with the same zero state.

Fig. 2. Generative process of UIS-RNN. Colors indicate labels for speaker segments. There are four options for $y_7$ given $x_{[6]}$, $y_{[6]}$.

Based on the GRU outputs, we assume the speaker embeddings are modeled by:
$x_t \mid x_{[t-1]}, y_{[t]} \sim \mathcal{N}(\boldsymbol{\mu}_t, \sigma^2 I), \quad (11)$
where $\boldsymbol{\mu}_t = \left(\sum_{s=1}^{t} \mathbb{1}(y_s = y_t)\right)^{-1} \cdot \left(\sum_{s=1}^{t} \mathbb{1}(y_s = y_t)\, m_s\right)$ is the averaged GRU output for speaker $y_t$.
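To make Eqs. (9)-(11) concrete, here is a minimal PyTorch sketch of the shared-parameter, per-speaker GRU bookkeeping; the class name, layer sizes, and running-average bookkeeping are illustrative assumptions rather than the released code.

```python
import torch
import torch.nn as nn

class SharedSpeakerGRU(nn.Module):
    """One GRU parameter set theta, one hidden-state instance per speaker."""

    def __init__(self, dim, hidden=512):
        super().__init__()
        self.cell = nn.GRUCell(dim, hidden)  # parameters shared by all speakers
        self.out = nn.Linear(hidden, dim)    # m_t = f(h_t | theta), Eq. (9)

    def forward(self, x_seq, y_seq):
        """x_seq: (T, dim) embeddings; y_seq: length-T speaker labels from 1."""
        h0 = torch.zeros(1, self.cell.hidden_size)
        x0 = torch.zeros(1, self.cell.input_size)
        hidden, last_x, sums, counts, means = {}, {}, {}, {}, []
        for t, k in enumerate(y_seq):
            # Eq. (10): resume speaker k's GRU from its last occurrence t'.
            h = self.cell(last_x.get(k, x0), hidden.get(k, h0))
            m = self.out(h)
            sums[k] = sums.get(k, 0) + m
            counts[k] = counts.get(k, 0) + 1
            means.append(sums[k] / counts[k])  # mu_t: running per-speaker mean
            hidden[k], last_x[k] = h, x_seq[t:t + 1]
        return torch.cat(means)                # (T, dim) means of N(mu_t, s^2 I)
```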
Summary of the model
We briefly summarize UIS-RNN in Fig. 2, where $\mathbf{Z}$ and $\boldsymbol{\lambda}$ are omitted for a simple demonstration. At the current stage (shown in solid lines) $y_{[6]} = (1, 1, 2, 3, 2, 2)$. There are four options for $y_7$: 1, 2, 3 (existing speakers), and 4 (a new speaker). The probability for generating a new observation $x_7$ (shown in dashed lines) depends both on the previous label assignment sequence $y_{[6]}$ and the previous observation sequence $x_{[6]}$.
MLE Estimation
Given a training set $(\mathbf{X}_1, \mathbf{X}_2, \ldots, \mathbf{X}_N)$ containing $N$ utterances together with their labels $(\mathbf{Y}_1, \mathbf{Y}_2, \ldots, \mathbf{Y}_N)$, we maximize the following log joint likelihood:

$\max_{\boldsymbol{\theta}, \alpha, \sigma^2, \boldsymbol{\lambda}} \sum_{n=1}^{N} \ln p(\mathbf{X}_n, \mathbf{Y}_n, \mathbf{Z}_n \mid \boldsymbol{\theta}, \alpha, \sigma^2, \boldsymbol{\lambda}). \quad (12)$
Here we include all hyper-parameters, and each term in Eq. (12) can be factorized exactly as Eq. (2).
The estimation of $\boldsymbol{\lambda}$ depends on how $g_{\boldsymbol{\lambda}}(\cdot)$ is defined. When we simply have $g_{\boldsymbol{\lambda}}(z_{[t-1]}) = p_0$, we have a closed-form solution:
$p_0^* = \frac{\sum_{n=1}^{N} \sum_{t=2}^{T_n} \mathbb{1}(y_{n,t} \neq y_{n,t-1})}{\sum_{n=1}^{N} T_n - N}, \quad (13)$

where $T_n$ denotes the sequence length of the $n$th utterance.
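The closed-form MLE of Eq. (13) amounts to counting speaker changes over transitions; a tiny sketch (hypothetical function name):

```python
def estimate_p0(label_sequences):
    """Closed-form MLE of the speaker-change probability p0, as in Eq. (13).

    label_sequences: per-utterance speaker-label lists, e.g. [[1, 1, 2, 3, 2, 2], ...]
    """
    changes = sum(sum(1 for t in range(1, len(y)) if y[t] != y[t - 1])
                  for y in label_sequences)
    transitions = sum(len(y) - 1 for y in label_sequences)  # sum_n T_n - N
    return changes / transitions

print(estimate_p0([[1, 1, 2, 3, 2, 2]]))  # 3 changes over 5 transitions -> 0.6
```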
For $\boldsymbol{\theta}$ and $\sigma^2$, there is no closed-form update. We use stochastic gradient ascent by randomly selecting a subset $B^{(\tau)} \subset [N]$ of $|B^{(\tau)}| = b$ utterances. For $\boldsymbol{\theta}$, we update:

$\boldsymbol{\theta}^{(\tau)} = \boldsymbol{\theta}^{(\tau-1)} + \frac{N \rho^{(\tau)}}{b} \sum_{n \in B^{(\tau)}} \nabla_{\boldsymbol{\theta}} \ln p(\mathbf{X}_n \mid \mathbf{Y}_n, \mathbf{Z}_n, \boldsymbol{\theta}, -), \quad (14)$

since $\boldsymbol{\theta}$ is independent of $(\mathbf{Y}_n, \mathbf{Z}_n)$. Eq. (14) also applies to $\sigma^2$ by replacing $\boldsymbol{\theta}$ with $\sigma^2$. For $\alpha$, we update
$\alpha^{(\tau)} = \alpha^{(\tau-1)} + \frac{N \rho^{(\tau)}}{b} \sum_{n \in B^{(\tau)}} \nabla_{\alpha} \ln p(\mathbf{Y}_n \mid \mathbf{Z}_n, \alpha, -), \quad (15)$

where $p(\mathbf{Y}_n \mid \mathbf{Z}_n, \alpha, -)$ is given in Eq. (8). In our experiments, we run multiple iterations with a constant step size $\rho^{(\tau)} = \rho$ until convergence.

Algorithm 1: Online greedy MAP decoding for UIS-RNN.
Data: $\mathbf{X}^{\text{test}} = (x_1^{\text{test}}, x_2^{\text{test}}, \ldots, x_T^{\text{test}})$
Result: $\mathbf{Y}^* = (y_1^*, y_2^*, \ldots, y_T^*)$
initialize $x_0 = 0$, $h_0 = 0$;
for $t = 1, 2, \ldots, T$ do
    $(y_t^*, z_t^*) = \arg\max_{(y_t, z_t)} \big[ \ln p(z_t)$ (Eq. 6) $+ \ln p(y_t \mid z_t, y_{[t-1]}^*)$ (Eqs. 5, 7) $+ \ln p(x_t \mid x_{[t-1]}, y_{[t-1]}^*, y_t)$ (Eq. 11) $\big]$;
    update $N_{k,t-1}$ and GRU hidden states;
end
MAP Decoding
Since we can decode each testing utterance in parallel, here we assume we are given a testing utterance $\mathbf{X}^{\text{test}} = (x_1, x_2, \ldots, x_T)$ without labels. The ideal goal is to find

$\mathbf{Y}^* = \arg\max_{\mathbf{Y}} \ln p(\mathbf{X}^{\text{test}}, \mathbf{Y}). \quad (16)$
However, this requires an exhaustive search over the entire combinatorial label space with complexity $O(T!)$, which is impractical. Instead, we use an online decoding approach which sequentially performs a greedy search, as shown in Alg. 1. This significantly reduces the computational complexity to $O(T^2)$. We observe that in most cases the maximum number of speakers per utterance is bounded by a constant $C$. In that case, the complexity further reduces to $O(T)$. In practice, we apply a beam search [17] on the decoding algorithm, and adjust the number of look-ahead entries to achieve better decoding results.
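A plain-Python sketch of the greedy search in Alg. 1 (without the beam) might look as follows; the three scoring callables and their signatures are hypothetical stand-ins for the model terms.

```python
import math

def greedy_decode(x_test, log_p_change, log_p_assign, log_p_emit):
    """Online greedy MAP decoding in the spirit of Alg. 1 (beam width 1).

    The three scoring callables are hypothetical stand-ins:
    log_p_change(z):            ln p(z_t)                      (Eq. 6)
    log_p_assign(y, z, y_hist): ln p(y_t | z_t, y*_[t-1])      (Eqs. 5, 7)
    log_p_emit(t, y, y_hist):   ln p(x_t | x_[t-1], ..., y_t)  (Eq. 11)
    """
    y_star = []
    for t in range(len(x_test)):
        best, best_score = None, -math.inf
        K = max(y_star, default=0)
        for y in range(1, K + 2):            # existing speakers or a new one
            z = 0 if (y_star and y == y_star[-1]) else 1
            score = log_p_emit(t, y, y_star)
            if t > 0:                        # y_1 = 1: no change/assignment terms
                score += log_p_change(z) + log_p_assign(y, z, y_star)
            if score > best_score:
                best, best_score = y, score
        y_star.append(best)
    return y_star
```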
EXPERIMENTS
Speaker recognition model
We have been retraining the speaker recognition network with more data and minor tricks to improve its performance. We call the text-independent speaker recognition model in [3,6,18] "d-vector V1". This model is trained with 36M utterances from 18K US English speakers, which are all mobile phone data based on anonymized voice query logs.
To train a new version of the model, which we call "d-vector V2" [19], we added: (1) non-US English speakers; (2) data from far-field devices; (3) public datasets including LibriSpeech [20], VoxCeleb [21], and VoxCeleb2 [22]. The non-public part contains 34M utterances from 138K speakers, while the public part is added to the training process using the MultiReader approach [6].
Another minor but important trick is that the speaker recognition model used in [3] and [6] is trained on windows of size 1600ms, which causes performance degradation when we run inference on smaller windows. For example, in the diarization system, the window size is only 240ms. Thus we have retrained a new model "d-vector V3" by using variable-length windows, where the window size is drawn from a uniform distribution within [240ms, 1600ms] during training.
The speaker verification Equal Error Rates (EER) of the three models on two testing sets are shown in Table 1. On speaker verification tasks, adding more training data has significantly improved the performance, while using variable-length windows for training also slightly further improved EER.

Table 1. Speaker verification EER of the three speaker recognition models. en-ALL represents all English locales. The EER=3.55% for d-vector V1 on en-US phone data is the same as the number reported in Table 3 of [6].

Model        EER (%) on en-US phone data   EER (%) on en-ALL phone + farfield data
d-vector V1  3.55                          6.14
d-vector V2  3.06                          2.03
d-vector V3  3.03                          1.91
UIS-RNN setup
For the speaker change, as we have stated in Section 3.2.1, we assume $\{z_t\}_{t \in [2,T]}$ follow independent identical binary distributions for simplicity. Our sequence generation model is composed of one layer of 512 GRU cells with a tanh activation, followed by two fully-connected layers each with 512 nodes and a ReLU [23] activation. The two fully-connected layers correspond to Eq. (9).
For decoding, we use beam search of width 10.
Evaluation protocols
Our evaluation setup is exactly the same as [3], which is based on the pyannote.metrics library [24]. We follow these common conventions of other works:
• We evaluate on single channel audio.
• We exclude overlapped speech from evaluation.
• We tolerate errors less than 250ms in segment boundaries.
• We report the confusion error, which is usually directly referred to as Diarization Error Rate (DER) in the literature.
Datasets
For the evaluation, we use 2000 NIST Speaker Recognition Evaluation (LDC2001S97), Disk-8, which is usually directly referred to as "CALLHOME" in literature. It contains 500 utterances distributed across six languages: Arabic, English, German, Japanese, Mandarin, and Spanish. Each utterance contains 2 to 7 speakers. Since our approach is supervised, we perform a 5-fold cross validation on this dataset. We randomly partition the dataset into five subsets, and each time leave one subset for evaluation, and train UIS-RNN on the other four subsets. Then we combine the evaluation on five subsets and report the averaged DER.
Besides, we also tried to use two off-domain datasets for training UIS-RNN: (1) 2000 NIST Speaker Recognition Evaluation, Disk-6, which is often referred to as "Switchboard"; (2) the ICSI Meeting Corpus [25]. We first tried to train UIS-RNN purely on off-domain datasets and evaluate on CALLHOME; we then tried to add the off-domain datasets to the training partition of each of the 5 folds.
Results
We report the diarization performance results on 2000 NIST SRE Disk-8 in Table 2. For each version of the speaker recognition model, we compare UIS-RNN with two baseline approaches: k-means and spectral offline clustering. For k-means and spectral clustering, the number of speakers is adaptively determined as in [3]. From the table, we see that the biggest improvement in DER actually comes from upgrading the speaker recognition model from V2 to V3. This is because in V3, we have the window size consistent between training time and diarization inference time, which was a big issue in V1 and V2.
UIS-RNN performs noticeably better than spectral offline clustering, when using the same speaker recognition model. It is also important to note that UIS-RNN inference produces speaker labels in an online fashion. As discussed in [3], online unsupervised clustering algorithms usually perform significantly worse than offline clustering algorithms such as spectral clustering.
Also, adding more data to train UIS-RNN improved DER, which is consistent with our expectation: UIS-RNN benefits from learning from more examples. Specifically, while large scale off-domain training already produces great results in practice (Disk-6 + ICSI), the availability of in-domain data can further improve the performance (5-fold + Disk-6 + ICSI).
CONCLUSIONS
In this paper, we presented a speaker diarization system where the commonly used clustering module is replaced by a trainable unbounded interleaved-state RNN. Since all components of this system can be learned in a supervised manner, it is preferred over unsupervised systems in scenarios where training data with high quality time-stamped speaker labels are available. On the NIST SRE 2000 CALLHOME benchmark, using exactly the same speaker embeddings, this new approach, which is an online algorithm, outperforms the state-of-the-art spectral offline clustering algorithm.
Besides, the proposed UIS-RNN is a generic solution to the sequential clustering problem, with other potential applications such as face clustering in videos. One interesting future work direction is to directly use acoustic features instead of pre-trained embeddings as the observation sequence for UIS-RNN, such that the entire speaker diarization system becomes an end-to-end model.
Table 2. DER on NIST SRE 2000 CALLHOME, with comparison to other systems in literature. VB is short for Variational Bayesian resegmentation [1]. The DER=12.0% for d-vector V1 and spectral clustering is the same as the number reported in Table 2 of [3]. For UIS-RNN, we show results for three types of evaluation settings: (1) in-domain training (5-fold); (2) off-domain training (Disk-6 + ICSI); and (3) in-domain plus off-domain training.

d-vector  Method    Training data            DER (%)
V1        k-means   -                        17.4
V1        spectral  -                        12.0
V1        UIS-RNN   5-fold                   11.7
V1        UIS-RNN   Disk-6 + ICSI            11.7
V1        UIS-RNN   5-fold + Disk-6 + ICSI   10.6
V2        k-means   -                        19.1
V2        spectral  -                        11.6
V2        UIS-RNN   5-fold                   10.9
V2        UIS-RNN   Disk-6 + ICSI            10.8
V2        UIS-RNN   5-fold + Disk-6 + ICSI   9.6
V3        k-means   -                        12.3
V3        spectral  -                        8.8
V3        UIS-RNN   5-fold                   8.5
V3        UIS-RNN   Disk-6 + ICSI            8.2
V3        UIS-RNN   5-fold + Disk-6 + ICSI   7.6

Other systems:
Castaldo et al. [4]              13.7
Shum et al. [9]                  14.5
Senoussaoui et al. [10]          12.1
Sell et al. [1] (+VB)            13.7 (11.5)
Garcia-Romero et al. [2] (+VB)   12.8 (9.9)
[1] Gregory Sell and Daniel Garcia-Romero. Diarization resegmentation in the factor analysis subspace. In International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4794-4798. IEEE, 2015.
[2] Daniel Garcia-Romero, David Snyder, Gregory Sell, Daniel Povey, and Alan McCree. Speaker diarization using deep neural network embeddings. In International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4930-4934. IEEE, 2017.
[3] Quan Wang, Carlton Downey, Li Wan, Philip Andrew Mansfield, and Ignacio Lopez Moreno. Speaker diarization with LSTM. In International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5239-5243. IEEE, 2018.
[4] Fabio Castaldo, Daniele Colibro, Emanuele Dalmasso, Pietro Laface, and Claudio Vair. Stream-based speaker segmentation using speaker factors and eigenvoices. In International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4133-4136. IEEE, 2008.
[5] Najim Dehak, Patrick J. Kenny, Réda Dehak, Pierre Dumouchel, and Pierre Ouellet. Front-end factor analysis for speaker verification. IEEE Transactions on Audio, Speech, and Language Processing, 19(4):788-798, 2011.
[6] Li Wan, Quan Wang, Alan Papir, and Ignacio Lopez Moreno. Generalized end-to-end loss for speaker verification. In International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4879-4883. IEEE, 2018.
[7] Zbyněk Zajíc, Marek Hrúz, and Luděk Müller. Speaker diarization using convolutional neural network for statistics accumulation refinement. In INTERSPEECH, 2017.
[8] Georg Heigold, Ignacio Moreno, Samy Bengio, and Noam Shazeer. End-to-end text-dependent speaker verification. In International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5115-5119. IEEE, 2016.
[9] Stephen H. Shum, Najim Dehak, Réda Dehak, and James R. Glass. Unsupervised methods for speaker diarization: An integrated and iterative approach. IEEE Transactions on Audio, Speech, and Language Processing, 21(10):2015-2028, 2013.
[10] Mohammed Senoussaoui, Patrick Kenny, Themos Stafylakis, and Pierre Dumouchel. A study of the cosine distance-based mean shift for telephone speech diarization. IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP), 22(1):217-227, 2014.
[11] Gregory Sell and Daniel Garcia-Romero. Speaker diarization with PLDA i-vector scoring and unsupervised calibration. In Spoken Language Technology Workshop (SLT), pp. 413-417. IEEE, 2014.
[12] Dimitrios Dimitriadis and Petr Fousek. Developing on-line speaker diarization system. In INTERSPEECH, pp. 2739-2743, 2017.
[13] Philip Andrew Mansfield, Quan Wang, Carlton Downey, Li Wan, and Ignacio Lopez Moreno. Links: A high-dimensional online clustering method. arXiv preprint arXiv:1801.10123, 2018.
[14] Huazhong Ning, Ming Liu, Hao Tang, and Thomas S. Huang. A spectral clustering approach to speaker diarization. In INTERSPEECH, 2006.
[15] David M. Blei and Peter I. Frazier. Distance dependent Chinese restaurant processes. Journal of Machine Learning Research, 12(Aug):2461-2488, 2011.
[16] Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
[17] Mark F. Medress, Franklin S. Cooper, Jim W. Forgie, C. C. Green, Dennis H. Klatt, Michael H. O'Malley, Edward P. Neuburg, Allen Newell, D. R. Reddy, B. Ritea, et al. Speech understanding systems: Report of a steering committee. Artificial Intelligence, 9(3):307-316, 1977.
[18] Ye Jia, Yu Zhang, Ron J. Weiss, Quan Wang, Jonathan Shen, Fei Ren, Zhifeng Chen, Patrick Nguyen, Ruoming Pang, Ignacio Lopez Moreno, et al. Transfer learning from speaker verification to multispeaker text-to-speech synthesis. In Conference on Neural Information Processing Systems (NIPS), 2018.
[19] Quan Wang, Hannah Muckenhirn, Kevin Wilson, Prashant Sridhar, Zelin Wu, John Hershey, Rif A. Saurous, Ron J. Weiss, Ye Jia, and Ignacio Lopez Moreno. VoiceFilter: Targeted voice separation by speaker-conditioned spectrogram masking. arXiv preprint arXiv:1810.04826, 2018.
[20] Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. LibriSpeech: An ASR corpus based on public domain audio books. In International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5206-5210. IEEE, 2015.
[21] Arsha Nagrani, Joon Son Chung, and Andrew Zisserman. VoxCeleb: A large-scale speaker identification dataset. arXiv preprint arXiv:1706.08612, 2017.
[22] Joon Son Chung, Arsha Nagrani, and Andrew Zisserman. VoxCeleb2: Deep speaker recognition. arXiv preprint arXiv:1806.05622, 2018.
[23] Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML), pp. 807-814, 2010.
[24] Hervé Bredin. pyannote.metrics: A toolkit for reproducible evaluation, diagnostic, and error analysis of speaker diarization systems. In INTERSPEECH, 2017.
[25] Adam Janin, Don Baron, Jane Edwards, Dan Ellis, David Gelbart, Nelson Morgan, Barbara Peskin, Thilo Pfau, Elizabeth Shriberg, Andreas Stolcke, et al. The ICSI meeting corpus. In International Conference on Acoustics, Speech and Signal Processing (ICASSP), vol. 1. IEEE, 2003.
| [
"https://github.com/google/uis-rnn"
]
|
[
"Learning Where to Fixate on Foveated Images",
"Learning Where to Fixate on Foveated Images"
]
| [
"Hanxiao Wang \nBoston University\n\n",
"Venkatesh Saligrama \nBoston University\n\n",
"Stan Sclaroff [email protected] \nBoston University\n\n",
"Vitaly Ablavsky [email protected] \nBoston University\n\n"
]
| [
"Boston University\n",
"Boston University\n",
"Boston University\n",
"Boston University\n"
]
| []
| Foveation, the ability to sequentially acquire high-acuity regions of a scene viewed initially at low-acuity, is a key property of biological vision systems. In a computer vision system, foveation is also desired to increase data efficiency and derive task-relevant features. Yet, most existing deep learning models lack the ability to foveate. In this paper, we propose a deep reinforcement learning-based foveation model, DRIFT, and apply it to challenging fine-grained classification tasks. Training of DRIFT requires only image-level category labels and encourages fixations to contain discriminative information while maintaining data efficiency. Specifically, we formulate foveation as a sequential decision-making process and train a foveation actor network with a novel Deep Deterministic Policy Gradient by Conditioned Critic and Coaching (DDPGC3) algorithm. In addition, we propose to shape the reward to provide informative feedback after each fixation to better guide the RL training. We demonstrate the effectiveness of our method on five fine-grained classification benchmark datasets, and show that the proposed approach achieves state-of-the-art performance using an order-of-magnitude fewer pixels.
"https://arxiv.org/pdf/1811.06868v1.pdf"
]
| 53,670,300 | 1811.06868 | c50916b8af46b577081b0d8dc338b01378e4b690 |
Learning Where to Fixate on Foveated Images
Hanxiao Wang
Boston University
Venkatesh Saligrama
Boston University
Stan Sclaroff [email protected]
Boston University
Vitaly Ablavsky [email protected]
Boston University
Learning Where to Fixate on Foveated Images
Foveation, the ability to sequentially acquire high-acuity regions of a scene viewed initially at low-acuity, is a key property of biological vision systems. In a computer vision system, foveation is also desired to increase data efficiency and derive task-relevant features. Yet, most existing deep learning models lack the ability to foveate. In this paper, we propose a deep reinforcement learning-based foveation model, DRIFT, and apply it to challenging finegrained classification tasks. Training of DRIFT requires only image-level category labels and encourages fixations to contain discriminative information while maintaining data efficiency. Specifically, we formulate foveation as a sequential decision-making process and train a foveation actor network with a novel Deep Deterministic Policy Gradient by Conditioned Critic and Coaching (DDPGC3) algorithm. In addition, we propose to shape the reward to provide informative feedback after each fixation to better guide the RL training. We demonstrate the effectiveness of our method on five fine-grained classification benchmark datasets, and show that the proposed approach achieves state-of-the-art performance using an order-of-magnitude fewer pixels.
Introduction
When we view a novel scene we do not perceive its full complexity at once, but rather, we foveate [22]. In doing so, our brain processes information from high-acuity foveal region and the coarser-resolution periphery. The resulting process is additive: Starting with an initial view, we "fill in" further details via fixations (see Fig.1). While modeling fixations in biologically-plausible ways is a challenging problem, we are inspired by the top-down process that (1) infers fixation points from low-acuity images where most of the contents are blurred; and (2) sequentially refines the next fixation decision based on the newly received high-acuity image region at the current fixation point(s).
An approach to automatic recognition that achieves high accuracy while also performing foveated exploration has several advantages compared to traditional approaches (that process the high-acuity image all at once). For fine-grained discrimination tasks, foveating on the regions relevant to the decision could yield higher accuracy, assuming the system can discover on its own where to foveate. In the scenario of "internet of things," high-resolution images captured by low-compute-power devices may need to be sent to a server for processing; given limited bandwidth, it is desirable to transmit a coarse-resolution full-scene image together with a selected handful of high-acuity sub-images. In real-time surveillance, a foveation strategy developed for static images could prove beneficial for active scene exploration via pan-tilt-zoom [3]. By contrast, the majority of current approaches to object/scene recognition with deep convolutional neural networks (CNN) lack the ability to foveate. Instead they require the entire high-resolution image and hope that irrelevant information (e.g., background in the case of object recognition) will be ignored during the forward pass. Recently, various attention models [35,9,14,27,20] have been proposed to enable CNNs to attend to specific image regions for multiple vision tasks. Nonetheless, these methods are still intrinsically different and less efficient in data exploration than a true foveation system. In particular, they still operate on high-acuity images and fail to infer attentions in a sequential manner, i.e., all image information is revealed as input instead of being accumulated via a series of fixations along the decision path.
In this work, we propose a novel Deep ReInforcement FoveaTion (DRIFT) model (Fig.2). The model consists of three neural networks: (1) a backbone convolutional neural network to extract visual features; (2) a foveation actor network to generate a sequence of fixation actions, i.e. the location of each fixation point and the size of the high-acuity region; and (3) an image classification network (for the sake of demonstrating our approach we apply it to image classification tasks). To train the foveation actor network, we use the classification network to decide whether a foveated image contains sufficient information for the input to be correctly classified. To achieve this, a novel reward function is designed to guide the foveation actor network, so that the predicted foveation regions can lead to high classification accuracy; data efficiency is maintained by restricting the amount of high-acuity regions being explored. Given a low-acuity image, the proposed model is able to not only infer the location of foreground objects, but also discover the most discriminative visual cues by its fixation points. Note that although this work mainly considers foveation within a classification context, the proposed foveation model can be applied to other domains, with the classification network and reward function being replaced accordingly.
Since the action space for foveation (location and size) is large, discretizing/enumerating this space would be intractable in practice. We propose retaining its continuity and training a foveation policy with a novel Deep Deterministic Policy Gradient by Conditioned Critic with Coaching (DDPGC3) algorithm. Compared to the original DDPG algorithm [16], several modifications are made to adapt it. First, DDPG trains a critic to approximate an action-value function [33] to evaluate the learned policy, and uses the evaluation results to guide reinforcement learning. While this action-value function is globally shared among all state-action pairs in [16], we found this global function is difficult to approximate in our foveation problem. Instead, we propose training the critic to approximate a conditioned state-value function that is uniquely defined on every reinforcement learning episode and easier to approximate. Second, the actor network parameters in [16] are updated completely based on the critic's evaluation. However, at the early training stage a deficient critic can easily misguide the updates. Consequently, we propose updating the actor network by coaching [10], i.e. by both the critic's policy evaluation as well as the actions generated by a heuristic oracle. It is observed that our improvements on DDPG stabilize the training procedure.
Contributions: (1) For the first time, a deep reinforcement learning based foveation model, DRIFT, is proposed, which is able to sequentially infer fixation points from lowacuity images; (2) A novel reward function is introduced so that the proposed DRIFT model can be trained with weak image-level class labels instead of more fine-grained labels on locations and sizes; (3) To facilitate RL training in the foveation problem, we propose training a foveation actor network via (I) a conditioned critic that approximates a unique state-value function conditioned on every input image, and (II) a coaching mechanism that combines the critic's evaluation with a heuristic oracle; (4) Extensive experiments were conducted on five popular fine-grained classification datasets: CUB-200-2011 [32], Stanford Cars [13], Stanford Dogs [7], Aircrafts [19], and Food-101 [4]. Our experiments show that, the proposed DRIFT model can achieve competitive performances with substantially fewer pixels compared to standard deep CNN models thanks to its foveation mechanism. Furthermore, since DRIFT discovers discriminative visual features, it can also be used to generate hard attentions which boost the standard classification performance for any existing deep CNN models.
Related Work
Foveation has inspired various computer vision researchers. Deng et al. [8] found that humans were able to correctly recognize an object by only revealing few highacuity regions, termed as fixations in this paper, on a heavily blurred image. They thus propose crowd-sourcing to collect such location annotations and training detectors on these discriminative features to boost classification accuracy. The idea is extended by Matzen et al. [21], which proposes an automatic but brute-force approach by initializing hundreds of random fixations per image, and iteratively optimizing to adjust each fixation based on the classification scores of their corresponding foveated images.
Different from [8,21], which take blurred inputs, Almeida et al. [2] and Recasens et al. [26] proposed generating attention maps from standard input images, and using them to either down-sample backgrounds [2] or up-sample foregrounds [26]. The approach of generating attention maps falls into a broader family of attention models, which has been broadly applied to image classification [38,9,39], segmentation [14], visual question answering [27,15], detection [37], image captioning [35], and so forth.
Unlike existing attention models, this paper focuses on automatically inferring fixations from extremely low-acuity inputs (e.g. 30 × 30), where traditional attention models fail to produce meaningful attention maps (see Sec. 4). Instead, we take a sequential and additive approach: The proposed DRIFT model is able to accumulate knowledge, recursively refine its fixations, and finally produce fixation locations that are optimized for classification accuracy as well as data efficiency. Benefiting from a reinforcement learning formulation, DRIFT avoids exhaustive searching behavior and is thus superior to the brute-force approach in [21].
In our reinforcement learning formulation, fixations are modeled as a sequence of actions generated by a foveation actor model, which shares a similar spirit to a few RL-based object detection works, e.g. [20,25,12,5,6]. However, our work is significantly different in that: (1) the proposed DRIFT learns fixation actions without any supervision on object locations, therefore more scalable to large size data;
(2) our action space is infinite, whereas in [25,12,5,6] the actions can only be chosen within a restricted list, which limits the diversity of model outputs; and (3) our low-acuity input contains much less information compared to the full-resolution input images in these detection methods. Therefore, the foveation problem is more challenging in all dimensions of input, output and supervision.
Methodology
Foveation for Classification
First let us formally define the foveation problem. To narrow down the horizon of this study, we consider foveation within the context of image classification. Specifically, assume there are two types of representations for any scene or object: a low-acuity image $I_{low}$ with limited visual details, and a high-acuity image $I_{high}$ with abundant details. Directly operating on $I_{high}$ leads to high classification accuracy, but collecting or transmitting all the details of $I_{high}$ is expensive.

The foveation pipeline is then defined as: (1) A foveation model infers a fixation point with $I_{low}$ as input, with the fixation point referring to a small circular image region uniquely defined by its spatial location and size; (2) The high-acuity content at this fixation point on $I_{low}$ is revealed, meaning that it is replaced by the corresponding region from $I_{high}$. The resulting foveated image, $I_{fovea}$, has low-acuity content overall but high-acuity content only at the fixation point(s). $I_{fovea}$ can be used as input to repeat step (1) and generate the next fixation point, or directly used for classification after a few iterations. Intuitively, a good foveation model should reach a balance between accuracy and data efficiency, i.e. it should fixate on the most discriminative image regions so that good classification results can be achieved with $I_{fovea}$, while keeping the overall high-acuity content explored to a very low extent.
We chose reinforcement learning (RL) to train the foveation model because the optimal foveation policy should not be learned with any explicit supervision other than the single objective of optimizing classification accuracy. It is thus difficult to define such a training loss for traditional supervised learning, but in RL, this training objective can be easily reflected by a reward function. In the following sections we show that the aforementioned foveation pipeline can be cast into a Markov Decision Process (MDP) and further trained by RL.
Markov Decision Process Formulation
We consider foveation as a sequential decision making problem, where the foveation model interacts with a dynamic environment E at discrete timesteps. At each timestep t, the foveation model receives an observation state s_t, takes an action a_t, and receives a scalar reward r_t = r(s_t, a_t). This MDP is formally specified by: an action space A, a state space S, transition dynamics from s_t to s_{t+1} after taking a_t, and the reward function r(s_t, a_t). The foveation model implements a policy function $\pi : S \to P(A)$, which maps states to a distribution over actions. The return at timestep t is defined as the sum of discounted future rewards $R_t = \sum_{i=t}^{\infty} \gamma^{i-t} r(s_i, a_i)$ with a discount factor $\gamma \in [0, 1]$. Note that the return R_t depends on the actions taken, and thus on the policy π. The goal of reinforcement learning is to find the policy that maximizes the expected return $\mathbb{E}_{r_t, s_t \sim E,\, a_t \sim \pi}\left[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)\right]$. Next we explain our formulation of each component of the MDP in detail.

Figure 3. Illustration of our RL pipeline. The critic Q_θ is explained in Sec. 3.4.

Episode: Given an image classification dataset, we take the originally provided images as the high-acuity representation I_high, and generate the low-acuity representation I_low by first down-sampling I_high and then performing linear interpolation, so that I_low and I_high have identical width and height. The generated I_low is thus blurred and retains very limited visual detail (see Fig. 6). We then define one RL episode as follows: at t = 0, the foveation model observes the low-acuity image (I_t = I_low); at each time step t ∈ {0, 1, ..., T}, where T is a pre-defined episode length, the foveation model predicts a fixation action a_t based on I_t, where a_t specifies a small circular image region; after a_t is generated, the environment E renders I_{t+1} by replacing the low-acuity content of I_t within the region specified by a_t with the corresponding content of I_high.

Action Space: The fixation action a generated by the foveation model comprises the predicted spatial location and size of a small circular image region. Specifically,

$$a = (x, y, l), \qquad x, y, l \in [-1, 1], \qquad (1)$$
where (x, y) are the horizontal and vertical coordinates of the fixation center, and l is the radius. To facilitate training, the actions are normalized to [−1, 1] rather than expressed in real pixels. Suppose the original image size is (h, w), and the smallest and largest fixation radii are pre-defined as b_1 and b_2. With action a = (x, y, l), the real location and size of a fixation point are obtained by
$$\left( \frac{1+x}{2}\, w,\;\; \frac{1+y}{2}\, h,\;\; b_1 + \frac{1+l}{2}\,(b_2 - b_1) \right).$$

State Space: As illustrated by Fig. 2, we have a backbone network f and a classification network g, where f extracts visual features for any given input image and g maps the extracted features to classification predictions. At time step t, the state s_t of the observation I_t is given by:
$$s_t = \left[ f(I_t),\; f(I_{t-1}),\; f(I_t^{\mathrm{local}}),\; h_t \right], \qquad (2)$$
where f(I_t) and f(I_{t−1}) are the feature vectors of the current and previous observations; f(I_t^local) is the feature vector of the local image patch (resized to the input size) of I_t around the newest fixation point a_{t−1}; and $h_t \in \mathbb{R}^{\dim(a) \times T}$ is an action history vector, represented by the concatenation of the past actions [a_0, a_1, ..., a_{t−1}, O], with future actions padded by zeros.

Initial State: At t = 0, the state s_0 is initialized as [f(I_0), O], with f(I_{t−1}), f(I_t^local) and h_t padded by zeros.
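The state construction above is mostly bookkeeping; the following sketch shows how s_t could be assembled. The interface is our own, assuming 1-D feature vectors and three-dimensional actions.

```python
import numpy as np

def make_state(feat_t, feat_prev, feat_local, past_actions, T, action_dim=3):
    """Assemble the flat state vector s_t of Eq. (2).

    feat_t, feat_prev, feat_local: 1-D backbone features f(I_t),
    f(I_{t-1}) and f(I_t^local) (zero vectors at t = 0, per the text).
    past_actions: list of the actions a_0, ..., a_{t-1}, each of length
    action_dim; the history h_t is their concatenation, zero-padded to
    the fixed length action_dim * T.
    """
    h_t = np.zeros(action_dim * T)
    if past_actions:
        past = np.concatenate(past_actions)
        h_t[:past.size] = past
    return np.concatenate([feat_t, feat_prev, feat_local, h_t])
```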
Dense Reward by Relative Comparison
Our goal is to achieve high classification accuracy with the foveated image at t = T, with minimal high-acuity content explored by its fixation actions, i.e., both accuracy and data efficiency. Taking Fig. 1 as an example, to recognize the Chihuahua, good fixations should focus on its discriminative characteristics, such as its face and ears. A naive strategy is thus to check at the end of each episode whether I_T can be correctly classified by g. However, this type of reward only provides episode-level feedback, which is sparse and temporally delayed and thus difficult to attribute to each individual action, a difficulty known as the credit assignment problem [24].
To mitigate this problem, we propose a dense reward function defined at each time step t. Specifically, given action a_t, the observation changes from I_t to I_{t+1}, with the high-acuity region specified by a_t revealed. Given the ground truth label y for the current episode, we calculate two cross-entropy losses $\ell_t^1 = \mathrm{XE}(g(f(I_t)), y)$ and $\ell_t^2 = \mathrm{XE}(g(f(I_{t+1})), y)$. Intuitively, a good fixation a_t should increase the classification model's confidence and make $\ell_t^2 < \ell_t^1$. The accuracy reward is thus given by a relative comparison:
$$r_t^a = \ell_t^1 - \ell_t^2 \qquad (3)$$
In addition, we want to restrict the overall amount of high-acuity content and prevent brute-force fixations. Let p_t denote the overall amount of high-acuity pixels revealed at step t, thr a pre-defined threshold, and I(·) the indicator function; our data efficiency reward is:
$$r_t^e = -\mathbb{I}(t = T,\; p_t > \mathrm{thr}) \qquad (4)$$
The reward r_t is then defined as the sum $r_t = r_t^a + \lambda r_t^e$, where λ is a hyperparameter controlling the trade-off between accuracy and data efficiency.
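The dense reward is cheap to compute from two forward passes of f ∘ g. A sketch follows, under the assumption that g outputs softmax probabilities; all names are ours.

```python
import numpy as np

def cross_entropy(probs, label):
    # XE(g(f(I)), y) for one image; probs is the softmax output of g.
    return -np.log(probs[label] + 1e-12)

def step_reward(probs_t, probs_t1, label, pixels_revealed, t, T,
                thr, lam=5.0):
    """Dense reward r_t = r_t^a + lam * r_t^e of Eqs. (3)-(4)."""
    r_a = cross_entropy(probs_t, label) - cross_entropy(probs_t1, label)
    r_e = -float(t == T and pixels_revealed > thr)
    return r_a + lam * r_e
```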
DDPG by Conditioned Critic with Coaching
Deep Deterministic Policy Gradient: Recently proposed by Lillicrap et al. [16], the DDPG algorithm trains deep neural networks to learn policies in high-dimensional, continuous action spaces, and is thus suitable for our problem. The key insight of DDPG is the actor-critic setup [28]. Specifically, assume the policy π is modeled by an actor network parameterized by w, which outputs a continuous deterministic policy a = π_w(s). To optimize the policy, DDPG follows the typical policy evaluation and improvement scheme. Policy evaluation uses the action-value function $Q(s_t, a_t) = \mathbb{E}_{r_{i \ge t},\, s_{i > t} \sim E,\, a_{i > t} \sim \pi}(R_t \mid s_t, a_t)$ to evaluate the current policy's expected return. This function is approximated by a critic network parameterized by θ, denoted Q_θ, which only serves to train the actor network and is discarded during testing. Policy improvement uses the critic's estimate to improve the current policy so that higher Q(s_t, a_t) can be reached.
Formally, the critic network is trained to optimize the following temporal-difference (TD) term according to the Bellman equation:
$$J_\theta = \min_\theta\; \mathbb{E}_{s_t, a_t, r_t \sim \beta}\left[ \left( Q_\theta(s_t, a_t) - q_t \right)^2 \right], \qquad (5)$$
where $q_t = r_t + \gamma Q_{\theta'}(s_{t+1}, a_{t+1})$, β is the distribution of off-policy (s_t, a_t, r_t, s_{t+1}, a_{t+1}) samples stored in a replay buffer, and $Q_{\theta'}$ is a separate target network used to generate the TD targets q_t. The weights of the target network are updated by having them slowly track the learned network: $\theta' = \tau \theta + (1 - \tau)\theta'$ with $\tau \ll 1$. Both the replay buffer and the target network were originally introduced in [23] to decorrelate training samples and stabilize the training process.
In [16] the objective for training the actor network is simply to maximize the critic's estimation:
$$J_w = \max_w\; \mathbb{E}_{s_t, a_t, r_t \sim \beta}\left[ Q_\theta(s_t, \pi_w(s_t)) \right]. \qquad (6)$$
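Two of the standard DDPG ingredients above, the TD targets of Eq. (5) and the soft target-network update, can be expressed directly. The following numpy sketch uses our own interfaces and is not the authors' code.

```python
import numpy as np

def critic_td_targets(rewards, next_q_values, gamma=0.9):
    """TD targets q_t = r_t + gamma * Q_theta'(s_{t+1}, a_{t+1}) of Eq. (5).

    next_q_values come from the slowly updated target network.
    """
    return rewards + gamma * next_q_values

def soft_update(target_weights, online_weights, tau=1e-4):
    # theta' <- tau * theta + (1 - tau) * theta'
    return [tau * w + (1.0 - tau) * tw
            for w, tw in zip(online_weights, target_weights)]
```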
Conditioned Critic with Coaching: In our experiments, DDPG fails to train a good foveation actor. Our analysis is as follows. First, the global value function Q(s_t, a_t) is too difficult for the critic network Q_θ(s_t, a_t) to approximate. Intuitively, given (s_t, a_t), in the original formulation (Eq. 5) the critic network is expected to estimate R_t, which reflects the reward r_t. r_t depends on the ground-truth label y and g's prediction ŷ, while ŷ further depends on the high-acuity region specified by a_t. Since the critic has access to none of {I_high, g, y, r_t} by definition, estimating R_t is difficult. Observing this issue, we propose training a conditioned critic which approximates a unique value function defined for each episode k, Q(s_t, a_t | C_k), where C_k = [f(I_high^k), y^k] is the condition, with I_high^k and y^k referring to the high-acuity image and ground-truth label of the k-th episode. Substituting the conditioned critic into Eq. 5, the new objective for critic training is:
$$J_\theta = \min_\theta\; \mathbb{E}_{s_t, a_t, r_t \sim \beta,\, C_k \sim \psi}\left[ \left( Q_\theta(s_t, a_t \mid C_k) - q_t \right)^2 \right], \qquad (7)$$
where $q_t = r_t + \gamma Q_{\theta'}(s_{t+1}, a_{t+1} \mid C_k)$, and ψ is the distribution of episodes.
Second, in Eq. 6 the actor network's optimization is based solely on the critic's estimate. Since the critic network parameter θ is randomly initialized, the critic's estimate is essentially random guessing at the beginning. This significantly slows down the training process and impedes the convergence of actor training. To solve this problem, we adopt the idea of coaching from imitation learning [10]. Specifically, we introduce a heuristic oracle¹, which provides a policy better than random guessing and can be used to guide the actor training in its early stage. The actor training objective with coaching is defined as:
$$J_w = \max_w\; \mathbb{E}_{s_t, a_t, r_t, a'_t \sim \beta,\, C_k \sim \psi}\left[ (1 - \epsilon)\, Q_\theta(s_t, \pi_w(s_t) \mid C_k) - \epsilon\, |\pi_w(s_t) - a'_t|^2 \right], \qquad (8)$$
where a'_t is the action taken by the heuristic oracle given s_t, and ε is an exponentially decreasing factor with respect to training progress. We name the actor-critic RL training strategy with Eqs. 7-8 DDPG by Conditioned Critic with Coaching (DDPGC3).
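Reading the weighting in Eq. (8), which is partially garbled in this copy, as a (1 − ε)/ε blend, the per-sample actor loss could be sketched as follows. This is our own reading and interface, to be checked against the authors' code.

```python
import numpy as np

def actor_loss_with_coaching(q_value, actor_action, oracle_action, eps):
    """Per-sample negative of the coaching objective in Eq. (8).

    q_value: conditioned critic estimate Q_theta(s_t, pi_w(s_t) | C_k);
    oracle_action: the heuristic oracle's action a'_t; eps: the decaying
    mixing factor. Minimizing this loss maximizes the critic term while
    imitating the oracle early in training.
    """
    imitation = np.sum((actor_action - oracle_action) ** 2)
    return -(1.0 - eps) * q_value + eps * imitation
```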
To obtain a heuristic oracle at low cost, inspired by [40], we take the final feature map prior to spatial pooling in f and perform a 1 × 1 convolution with the ground-truth class's classifier weights to obtain a response map m. For example, for an Inception-V3 backbone, the feature map is shaped 8 × 8 × 2048, so m is 8 × 8. We then sample a location (x', y') based on the values of m, randomly sample a radius l' ∈ [−1, 1], and use (x', y', l') to construct a'_t. Even though our naive a'_t only provides a coarse clue about the classifier's response to the low-acuity observation I_t, it still helps to speed up and stabilize actor training by significantly reducing the effort spent on random exploration caused by the deficient critic during early training.
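Sampling the oracle fixation from the response map might look as follows; the rectification of m and the grid-to-coordinate mapping are our assumptions.

```python
import numpy as np

def heuristic_oracle_action(response_map, rng=None):
    """Sample a coarse oracle fixation a'_t from the response map m.

    response_map: 2-D class-response map (e.g. 8 x 8). We rectify it and
    sample a grid cell proportionally to its response, then map the cell
    centre to the normalized [-1, 1] action coordinates; the radius is
    sampled uniformly, as described in the text.
    """
    if rng is None:
        rng = np.random.default_rng()
    weights = np.clip(response_map, 0.0, None).ravel() + 1e-12
    idx = rng.choice(weights.size, p=weights / weights.sum())
    row, col = divmod(idx, response_map.shape[1])
    y = 2.0 * (row + 0.5) / response_map.shape[0] - 1.0
    x = 2.0 * (col + 0.5) / response_map.shape[1] - 1.0
    return np.array([x, y, rng.uniform(-1.0, 1.0)])
```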
Implementation, Training and Deployment
Implementation: We implemented our model using Tensorflow [1]. For the backbone f and classification network g, we adopted the Inception-V3 architecture [30], i.e., f outputs a 2048-d feature vector, and g is a fully-connected layer followed by a softmax. The architectures of our actor network π_w and critic network Q_θ are illustrated in Fig. 4. For a given backbone with a default input size, e.g., 299 × 299 for Inception-V3, we define I_high to be the standard input image, and generate I_low by first down-sampling I_high to 30 × 30 (retaining only 1% of the pixels) and interpolating it back to the input size. We set the smallest and largest fixation radii b_1, b_2 to 15 and 75, the episode length T to 5, the data-efficiency reward trade-off λ to 5.0, the threshold thr to 25% of the I_high pixels, the discount factor γ to 0.9, and the target network update rate τ to 1e−4. Similar to [16], we add Ornstein-Uhlenbeck noise [31] to our actor policy to tackle the problem of exploration in the action space.

Training: We first pre-train f ∘ g on I_high with a standard classification loss. Then we train π_w and Q_θ with the proposed DDPGC3 algorithm (Sec. 3.4) for 60 epochs, with an SGD optimizer, a batch size of 32, and a fixed learning rate of 1e−4. For the first 50 epochs we freeze f and g; in the remaining epochs, f and g are updated by a standard classification loss with the foveated images I_T as input. The size of the experience replay buffer β for RL was 50,000. The decreasing factor ε for coaching in Eq. 8 is set to 0.7 initially and decays by a factor of 0.96 every 1000 training updates. During training, (s_t, a_t, a'_t, r_t, s_{t+1}, a_{t+1}) samples are first pushed into the replay buffer and then randomly sampled to update the actor and critic.

Deployment: Once training finishes, the critic network is discarded. The remaining f ∘ π_w networks can be used to generate sequential fixation points, and f ∘ g to classify the resulting foveated images. Note that in this paper we focus on foveation with data efficiency while assuming sufficient computational power; thus, both π_w and g take features generated by a shared backbone f. In real-world deployments where computational efficiency is also a concern, nothing prevents π_w from using cheaper features while g uses expensive features, so that a balance between data and computation efficiency can be reached.
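For completeness, the Ornstein-Uhlenbeck exploration noise mentioned above can be simulated in a few lines; the θ and σ values below are illustrative defaults, not taken from the paper.

```python
import numpy as np

class OUNoise:
    """Ornstein-Uhlenbeck exploration noise added to the actor policy [31]."""

    def __init__(self, dim=3, theta=0.15, sigma=0.2, dt=1.0, seed=0):
        self.theta, self.sigma, self.dt = theta, sigma, dt
        self.state = np.zeros(dim)
        self.rng = np.random.default_rng(seed)

    def sample(self):
        # dx = -theta * x * dt + sigma * sqrt(dt) * N(0, I)
        self.state = self.state + (-self.theta * self.state * self.dt
                                   + self.sigma * np.sqrt(self.dt)
                                   * self.rng.normal(size=self.state.shape))
        return self.state
```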
Experiments
We validated our formulation on the challenging task of fine-grained classification. Experiments were conducted on five fine-grained classification datasets: CUB-200-2011 [32], Stanford Cars [13], Dogs [7], Aircrafts [19], and Food-101 [4]. We chose these datasets since the distinctions among categories are subtle and highly local, which requires a foveation model to fixate on the most discriminative regions to classify an image. The detailed statistics of these datasets are summarized in Table 3.
Classification with Foveated Images
Setting: We first compare the proposed DRIFT to other foveation methods. Specifically, with the low-acuity images as input, we generate fixation points under different foveation strategies and use the trained f ∘ g to classify the foveated images. We measure the classification accuracy and the percentage of high-acuity pixels explored. A good foveation strategy should achieve high classification accuracy while requiring fewer high-acuity pixels.
Baselines: Seven foveation strategies are compared: (1) Random: fixate at random locations; (2) Center: fixate at the image center; (3) Saliency: given an input image, we first obtain a class prediction ŷ, generate a class-response saliency map for ŷ following [40], and then sample a fixation location based on the map values; the procedure is repeated for T steps; (4) Attention: we trained a multi-attention model, MA-CNN [39], with T parts; given an input image I_low, it generates T attention maps, from which T fixation locations are sampled; (5) BubbleNet: BubbleNet [21] initializes 128 fixation locations per image, iteratively optimizes each fixation, and selects the best ones based on prediction entropy; we report its published results with the same Inception architecture; (6) DRIFT: use the proposed model to generate sequential fixations; (7) DRIFT_E: in this strategy, if the prediction entropy on our I_T is higher than a threshold, we explore all high-acuity pixels in I_high instead; the threshold is controlled so that only 25% of the test images are used at full I_high. For (1-2), we control the fixation radius so that 15% of the high-acuity pixels are explored, for easy comparison. For (3-4), we randomly sample the fixation radius.
Results: The results are shown in Table 1. For context, we also provide the direct classification results using I_low and I_high. First, observe that DRIFT consistently outperforms the other five foveation strategies. While exploring a similar number of or fewer high-acuity pixels, the classifier achieves much higher accuracy with our fixation approach. For example, on Aircrafts [19] we achieve 86.7% accuracy while exploring only 14.4% of the high-acuity pixels, just 0.5% lower than the result with full high-acuity images (87.2%). Moreover, with DRIFT_E we obtain an even higher accuracy (88.0%). This indicates that DRIFT's fixations indeed contain the most discriminative regions, which can be verified in Fig. 6: taking a low-acuity image with limited information as input, DRIFT successfully fixates on objects of interest (e.g., the black dog) or the discriminative visual parts of an object (e.g., the headlight and grille of the BMW). Second, from Table 1 we see that traditional approaches like Saliency [40] and Attention [39] fail to infer good fixation locations from the low-acuity input (Fig. 5), performing even worse than Center fixations. This is because they cannot sequentially accumulate knowledge and refine future fixations, which DRIFT addresses through the proposed state representation and RL training guided by the dense rewards. Third, observe that on all datasets except Food-101, Center fixation performs much better than Random fixation. This is because these datasets are artificially constructed by humans with a center bias. For images in real-world deployments, where the center prior no longer holds, we expect an even larger performance gap between DRIFT and Center fixations.
Classification with High-Acuity Images
Setting and Baselines: As shown in Fig. 6, we can fit a bounding box around DRIFT's fixations. The box is similar to a hard attention, with the only difference that it is generated via a sequential foveation procedure from a low-acuity input image. In this setting, we simply treat DRIFT as an attention model and verify whether it can boost classification results for any baseline classification model under the standard fine-grained classification setting. Specifically, we use DRIFT's hard attentions to zoom into the original images (Fig. 6). Given a baseline model, we simply fuse its predictions on the original image and on the image zoomed by DRIFT's attention. We tested two baseline models: Inception-V3 [30] and ResNet-50 [11]. For all five datasets, they are pre-trained on ImageNet [7] and further trained for 30 epochs with an RMSProp optimizer and a batch size of 32. The learning rate is initialized at 0.01 and decays by a factor of 0.96 every 4 epochs. The input sizes are 299 and 448 for Inception-V3 and ResNet-50, respectively. We use DRIFT_I and DRIFT_R to denote the corresponding classification results using our hard attentions together with the Inception-V3 and ResNet-50 baseline models.

Results: Table 2 shows our results. We observe a clear positive effect of DRIFT's attention selection on classification accuracy. In particular, on average DRIFT_I is 1.9% higher than Inception-V3 in absolute accuracy, while DRIFT_R is 2.1% higher than ResNet-50 and already achieves performance better than or comparable to existing state-of-the-art methods. This again demonstrates DRIFT's ability to fixate on discriminative regions and filter out background clutter. Importantly, compared to existing attention models for fine-grained classification, e.g., [9,18,29], DRIFT's attentions are generated by exploring only very limited (on average 12.3% of the pixels) high-acuity data via its sequential fixation actions, making it more suitable for applications whose bottleneck is data efficiency.
Discussion and Analysis
Where does DRIFT fixate? Two experiments were conducted. First, we measure the correlation between DRIFT's fixations and the locations of objects. Inspired by [36], we use hit rate to evaluate localization performance. Specifically, taking the boxes generated by DRIFT as in Sec. 4.2, we count a box as a hit when its intersection with the ground-truth box⁴ is greater than 90% of its own area, and as a miss otherwise, and then measure #hits / (#hits + #misses). The localization performance is shown in Table 4. We also show the results of a randomly generated box and a center-located box of 1/2 the image size. Observe that DRIFT's localization performance is consistently superior. It is evident that DRIFT's fixations are strongly correlated with object locations, even though it is trained without any location labels.
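The hit-rate metric above is straightforward to reproduce; a sketch follows, assuming boxes in (x0, y0, x1, y1) format (the format is our assumption).

```python
def hit_rate(pred_boxes, gt_boxes, min_overlap=0.9):
    """Fraction of predicted boxes whose intersection with the ground
    truth covers at least `min_overlap` of their own area [36]."""
    hits = 0
    for (px0, py0, px1, py1), (gx0, gy0, gx1, gy1) in zip(pred_boxes, gt_boxes):
        iw = max(0.0, min(px1, gx1) - max(px0, gx0))
        ih = max(0.0, min(py1, gy1) - max(py0, gy0))
        area = (px1 - px0) * (py1 - py0)
        hits += (iw * ih) >= min_overlap * area
    return hits / len(pred_boxes)
```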
Second, we aim to discover and visualize common patterns in DRIFT's fixations to better understand its learned foveation policy. Specifically, we collect the local image patches specified by all fixation actions and perform k-means clustering (k = 50) over their visual features. The most popular clusters are shown in Fig. 7. It is evident that DRIFT performs implicit part detection during fixation. This experiment also shows the potential applications of DRIFT in visual discovery.
How much gain does 'C3' provide? Keeping all other settings fixed, we re-trained the foveation actor network π_w with three different strategies: DDPG, DDPG + Conditioned Critic, and DDPG + Coaching. Table 5 shows the classification results on their foveated images (detailed test setting in Sec. 4.1). Observe that the original actor-critic training scheme of DDPG [16] fails on our foveation problem, for the reasons analyzed in Sec. 3.4, i.e., a global value function that is difficult for the critic to approximate, and uninformative guidance from a randomly initialized critic. Conditioning the critic on each training episode improves accuracy by 8.0% on average over the three datasets. Moreover, coaching the actor with the policy sampled from a heuristic oracle, which reduces exploration effort, yields a 19.0% average performance gain. Finally, the full DRIFT model, trained with the proposed DDPGC3 algorithm, brings a 25.2% average improvement in absolute accuracy. It is evident that the proposed DDPGC3 trains a better foveation policy.
Conclusion
We have presented DRIFT, a novel deep-reinforcement-learning approach to sequentially generating foveated images to accomplish a task. Our approach avoids discretizing the state-action space, which would be prohibitively expensive, and instead solves a continuous-control problem. As part of our solution, we introduce a novel use of a conditioned critic and a coaching strategy; we also provide an example of shaping the reward function to accelerate convergence. By demonstrating the high accuracy and data efficiency of our approach on challenging classification tasks, we have shown that adding foveation to a recognition formulation is both useful and feasible.
Figure 1. Illustration of human foveation.

Figure 2. The proposed DRIFT model. The backbone, classification, and foveation actor networks are referred to as f, g, and π_w in Sec. 3.

Figure 4. Our critic (left) and actor (right) network architectures. FC: fully-connected layer. BN: batch normalization layer. Numbers: the number of neurons. Brackets: activation functions.

Figure 5. Comparison of foveated images (the right three).

Figure 6. Qualitative results of the proposed DRIFT model. Each cell contains 4 images, from left to right: the low-acuity image I_low (input), the high-acuity image I_high, DRIFT's foveated image I_T, and the high-acuity image zoomed by the tightest bounding box (shown in green) around the fixation points. On I_T, the fixation actions are also shown as red circles.

Figure 7. Visual patterns in fixation actions. Each cell contains four example fixation patches belonging to the same cluster.
Only the image-level category labels were used for training, while extra annotations such as bounding boxes and parts were NOT used.

Table 3. The statistics of fine-grained datasets.

datasets     CUB     Cars    Dogs     Air     Food
# Category   200     196     120      100     101
# Train      5,994   8,144   12,000   6,667   75,750
# Test       5,794   8,041   8,580    3,333   25,250
Table 4. Localization results in hit rate.

datasets   CUB    Cars   Dogs   Air
Random     8.0    34.8   25.6   11.9
Center     36.2   89.1   61.2   32.9
DRIFT      44.3   91.5   63.2   50.9
⁴ CUB, Cars, Dogs and Aircrafts provide ground-truth bounding boxes.

Table 5. Ablative analysis with results on foveated images.

acc (%)   DDPG   + con. critic   + coaching   DRIFT
CUB       51.0   57.1            67.0         74.4
Cars      48.3   61.4            76.6         82.8
Dogs      53.8   58.5            66.6         71.6
¹ We use the term heuristic since a true oracle cannot be obtained without greedily searching over the large action space.
References

[1] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, et al. TensorFlow: a system for large-scale machine learning. In OSDI, volume 16, pages 265–283, 2016.
[2] A. F. Almeida, R. Figueiredo, A. Bernardino, and J. Santos-Victor. Deep networks for human visual attention: A hybrid model using foveal vision. In Iberian Robotics Conference, pages 117–128. Springer, 2017.
[3] A. D. Bagdanov, A. Del Bimbo, W. Nunziati, and F. Pernici. A reinforcement learning approach to active camera foveation. In Proceedings of the 4th ACM International Workshop on Video Surveillance and Sensor Networks, pages 179–186. ACM, 2006.
[4] L. Bossard, M. Guillaumin, and L. Van Gool. Food-101: Mining discriminative components with random forests. In European Conference on Computer Vision, 2014.
[5] M. B. Bueno, X. Giró-i Nieto, F. Marqués, and J. Torres. Hierarchical object detection with deep reinforcement learning. Deep Learning for Image Processing Applications, 31:164, 2017.
[6] J. C. Caicedo and S. Lazebnik. Active object localization with deep reinforcement learning. In Proceedings of the IEEE International Conference on Computer Vision, pages 2488–2496, 2015.
[7] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
[8] J. Deng, J. Krause, and L. Fei-Fei. Fine-grained crowdsourcing for fine-grained recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 580–587, 2013.
[9] J. Fu, H. Zheng, and T. Mei. Look closer to see better: Recurrent attention convolutional neural network for fine-grained image recognition. In CVPR, volume 2, page 3, 2017.
[10] H. He, J. Eisner, and H. Daume. Imitation learning by coaching. In Advances in Neural Information Processing Systems, pages 3149–3157, 2012.
[11] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[12] Z. Jie, X. Liang, J. Feng, X. Jin, W. Lu, and S. Yan. Tree-structured reinforcement learning for sequential object localization. In Advances in Neural Information Processing Systems, pages 127–135, 2016.
[13] J. Krause, M. Stark, J. Deng, and L. Fei-Fei. 3D object representations for fine-grained categorization. In 4th International IEEE Workshop on 3D Representation and Recognition (3dRR-13), Sydney, Australia, 2013.
[14] K. Li, Z. Wu, K.-C. Peng, J. Ernst, and Y. Fu. Tell me where to look: Guided attention inference network. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
[15] J. Liang, L. Jiang, L. Cao, L.-J. Li, and A. Hauptmann. Focal visual-text attention for visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6135–6143, 2018.
[16] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. M. O. Heess, T. Erez, Y. Tassa, D. Silver, and D. P. Wierstra. Continuous control with deep reinforcement learning, Jan. 26 2017. US Patent App. 15/217,758.
[17] T.-Y. Lin, A. RoyChowdhury, and S. Maji. Bilinear CNN models for fine-grained visual recognition. In Proceedings of the IEEE International Conference on Computer Vision, pages 1449–1457, 2015.
[18] X. Liu, T. Xia, J. Wang, Y. Yang, F. Zhou, and Y. Lin. Fully convolutional attention networks for fine-grained recognition. arXiv preprint arXiv:1603.06765, 2016.
[19] S. Maji, J. Kannala, E. Rahtu, M. Blaschko, and A. Vedaldi. Fine-grained visual classification of aircraft. Technical report, 2013.
[20] S. Mathe, A. Pirinen, and C. Sminchisescu. Reinforcement learning for visual object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2894–2902, 2016.
[21] K. Matzen and N. Snavely. BubbleNet: Foveated imaging for visual discovery. In Proceedings of the IEEE International Conference on Computer Vision, pages 1931–1939, 2015.
[22] G. W. McConkie and K. Rayner. The span of the effective stimulus during a fixation in reading. Perception & Psychophysics, 17(6):578–586, 1975.
[23] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015.
[24] D. Pathak, P. Agrawal, A. A. Efros, and T. Darrell. Curiosity-driven exploration by self-supervised prediction. In International Conference on Machine Learning (ICML), 2017.
[25] A. Pirinen and C. Sminchisescu. Deep reinforcement learning of region proposal networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6945–6954, 2018.
[26] A. Recasens, P. Kellnhofer, S. Stent, W. Matusik, and A. Torralba. Learning to zoom: A saliency-based sampling layer for neural networks. In Proceedings of the European Conference on Computer Vision (ECCV), pages 51–66, 2018.
[27] K. J. Shih, S. Singh, and D. Hoiem. Where to look: Focus regions for visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4613–4621, 2016.
[28] D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, and M. Riedmiller. Deterministic policy gradient algorithms. In ICML, 2014.
[29] M. Sun, Y. Yuan, F. Zhou, and E. Ding. Multi-attention multi-class constraint for fine-grained image recognition. In The European Conference on Computer Vision (ECCV), September 2018.
[30] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818–2826, 2016.
[31] G. E. Uhlenbeck and L. S. Ornstein. On the theory of the Brownian motion. Physical Review, 36(5):823, 1930.
[32] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 dataset. 2011.
[33] C. J. Watkins and P. Dayan. Q-learning. Machine Learning, 8(3-4):279–292, 1992.
[34] X. Wei, Y. Zhang, Y. Gong, J. Zhang, and N. Zheng. Grassmann pooling as compact homogeneous bilinear pooling for fine-grained visual classification. In Proceedings of the European Conference on Computer Vision (ECCV), pages 355–370, 2018.
[35] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhudinov, R. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning, pages 2048–2057, 2015.
[36] J. Zhang, S. A. Bargal, Z. Lin, J. Brandt, X. Shen, and S. Sclaroff. Top-down neural attention by excitation backprop. International Journal of Computer Vision, 126(10):1084–1102, 2018.
[37] X. Zhang, T. Wang, J. Qi, H. Lu, and G. Wang. Progressive attention guided recurrent network for salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 714–722, 2018.
[38] B. Zhao, X. Wu, J. Feng, Q. Peng, and S. Yan. Diversified visual attention networks for fine-grained object classification. IEEE Transactions on Multimedia, 19(6):1245–1256, 2017.
[39] H. Zheng, J. Fu, T. Mei, and J. Luo. Learning multi-attention convolutional neural network for fine-grained image recognition. In International Conference on Computer Vision, volume 6, 2017.
[40] Y. Zhou, Y. Zhu, Q. Ye, Q. Qiu, and J. Jiao. Weakly supervised instance segmentation using class peak response. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
| []
|
[
"Convergence rates of Gaussian ODE filters",
"Convergence rates of Gaussian ODE filters"
]
| [
"Hans Kersting ",
"· T J Sullivan ",
"Philipp Hennig "
]
| []
| [
"Statistics and Computing"
]
| A recently introduced class of probabilistic (uncertainty-aware) solvers for ordinary differential equations (ODEs) applies Gaussian (Kalman) filtering to initial value problems. These methods model the true solution x and its first q derivatives a priori as a Gauss-Markov process X, which is then iteratively conditioned on information aboutẋ. This article establishes worst-case local convergence rates of order q + 1 for a wide range of versions of this Gaussian ODE filter, as well as global convergence rates of order q in the case of q = 1 and an integrated Brownian motion prior, and analyses how inaccurate information onẋ coming from approximate evaluations of f affects these rates. Moreover, we show that, in the globally convergent case, the posterior credible intervals are well calibrated in the sense that they globally contract at the same rate as the truncation error. We illustrate these theoretical results by numerical experiments which might indicate their generalizability to q ∈ {2, 3, . . .}. | 10.1007/s11222-020-09972-4 | null | 51,718,742 | 1807.09737 | eea34461853828b84831a7bfad02fa22dd4e3b6d |
Convergence rates of Gaussian ODE filters
2020
Hans Kersting
· T J Sullivan
Philipp Hennig
Convergence rates of Gaussian ODE filters
Statistics and Computing
Volume 30 (2020). https://doi.org/10.1007/s11222-020-09972-4. Received: 20 March 2020 / Accepted: 27 August 2020 / Published online: 12 September 2020. Keywords: Probabilistic numerics · Ordinary differential equations · Initial value problems · Numerical analysis · Gaussian processes · Markov processes. Mathematics Subject Classification: 65L20 · 37H10 · 68W20 · 93E11.
A recently introduced class of probabilistic (uncertainty-aware) solvers for ordinary differential equations (ODEs) applies Gaussian (Kalman) filtering to initial value problems. These methods model the true solution x and its first q derivatives a priori as a Gauss-Markov process X, which is then iteratively conditioned on information aboutẋ. This article establishes worst-case local convergence rates of order q + 1 for a wide range of versions of this Gaussian ODE filter, as well as global convergence rates of order q in the case of q = 1 and an integrated Brownian motion prior, and analyses how inaccurate information onẋ coming from approximate evaluations of f affects these rates. Moreover, we show that, in the globally convergent case, the posterior credible intervals are well calibrated in the sense that they globally contract at the same rate as the truncation error. We illustrate these theoretical results by numerical experiments which might indicate their generalizability to q ∈ {2, 3, . . .}.
Introduction
A solver of an initial value problem (IVP) outputs an approximate solution x̂ : [0, T] → R^d of an ordinary differential equation (ODE) with initial condition:

$$x^{(1)}(t) := \frac{\mathrm{d}x}{\mathrm{d}t}(t) = f(x(t)), \quad \forall t \in [0, T], \qquad x(0) = x_0 \in \mathbb{R}^d. \qquad (1)$$

The solution x̂ is computed by iteratively collecting information on x^(1)(t) by evaluating f : R^d → R^d at a numerical estimate x̂(t) of x(t) and using these approximate evaluations of the time derivative to extrapolate along the time axis. In other words, the numerical solution (or estimator) x̂ of the exact solution (or estimand) x is calculated based on evaluations of the vector field f (or data). Accordingly, we treat x̂ itself as an estimator, i.e., a statistic that translates evaluations of f into a probability distribution over C^1([0, T]; R^d), the space of continuously differentiable functions from [0, T] to R^d. This probabilistic interpretation of numerical computation of tractable from intractable quantities as statistical inference of latent from observable quantities applies to all numerical problems and has been repeatedly recommended in the past (Poincaré 1896; Diaconis 1988; Skilling 1991; O'Hagan 1992; Ritter 2000). It employs the language of probability theory to account for the epistemic uncertainty (i.e., limited knowledge) about the accuracy of intermediate and final numerical computations, thereby yielding algorithms which can be more aware of, as well as more robust against, uncertainty over intermediate computational results. Such algorithms can output probability measures, instead of point estimates, over the final quantity of interest. This approach, now called probabilistic numerics (PN) (Hennig et al. 2015; Oates and Sullivan 2019), has in recent years been spelled out for a wide range of numerical tasks, including linear algebra, optimization, integration, and differential equations, thereby working towards the long-term goal of a coherent framework to propagate uncertainty through chained computations, as desirable, e.g., in statistical machine learning.
In this paper, we determine the convergence rates of a recent family of PN methods (Schober et al. 2014;Kersting and Hennig 2016;Magnani et al. 2017;Schober et al. 2019;Tronarp et al. 2019) which recast an IVP as a stochastic filtering problem (Øksendal 2003, Chapter 6), an approach that has been studied in other settings (Jazwinski 1970), but has not been applied to IVPs before. These methods assume a priori that the solution x and its first q ∈ N derivatives follow a Gauss-Markov process X that solves a stochastic differential equation (SDE).
The evaluations of f at numerical estimates of the true solution can then be regarded as imperfect evaluations of ẋ, which can then be used for a Bayesian update of X. Such recursive updates along the time axis yield an algorithm whose structure resembles that of Gaussian (Kalman) filtering (Särkkä 2013, Chapter 4). These methods add only slight computational overhead compared to classical methods (Schober et al. 2019) and have been shown to inherit local convergence rates from equivalent classical methods in specific cases (Schober et al. 2014; Schober et al. 2019). These equivalences (i.e., the equality of the filtering posterior mean and the classical method) are only known to hold in the case of the integrated Brownian motion (IBM) prior and noiseless evaluations of f (in terms of our later notation, the case R ≡ 0), as well as under the following restrictions:
Firstly, for q ∈ {1, 2, 3}, and if the first step is divided into sub-steps resembling those of Runge-Kutta methods, an equivalence of the posterior mean of the first step of the filter and the explicit Runge-Kutta method of order q was established in Schober et al. (2014) (but for q ∈ {2, 3} only in the limit as the initial time of the IBM tends to −∞). Secondly, it was shown by Schober et al. (2019) that, for q = 1, the posterior mean after each step coincides with the trapezoidal rule if it takes an additional evaluation of f at the end of each step, known as P(EC)1. The same paper shows that, for q = 2, the filter coincides with a third-order Nordsieck method (Nordsieck 1962) if the filter is in the steady state, i.e., after the sequence of error covariance matrices has converged. These results neither cover filters with the integrated Ornstein-Uhlenbeck process (IOUP) prior (Magnani et al. 2017) nor nonzero noise models on evaluations of f .
In this paper, we directly prove convergence rates without first fitting the filter to existing methods, and thereby lift many of the above restrictions on the convergence rates. While the more recent work by Tronarp et al. (2020) also provides convergence rates of estimators of x in the Bayesian ODE filtering/smoothing paradigm, those rates concern the maximum a posteriori estimator (as computed by the iterated extended Kalman ODE smoother) and therefore differ from our convergence rates of the filtering mean (as computed by the Kalman ODE filter).
Contribution
Our main results, Theorems 8 and 14, provide local and global convergence rates of the ODE filter as the step size h goes to zero. Theorem 8 shows local convergence rates of $h^{q+1}$ without the above-mentioned previous restrictions, i.e., for a generic Gaussian ODE filter for all q ∈ N, both the IBM and IOUP priors, flexible Gaussian initialization (see Assumptions 2 and 3), and arbitrary evaluation noise R ≥ 0. As a first global convergence result, Theorem 14 establishes global convergence rates of $h^q$ in the case of q = 1, the IBM prior, and all fixed measurement uncertainty models R of order p ∈ [1, ∞] (see Assumption 4). This global rate of the worst-case error is matched by the contraction rate of the posterior credible intervals, as we show in Theorem 15. Moreover, we also give closed-form expressions for the steady states in the global case and illustrate our results, as well as their possible generalizability to q ≥ 2, by experiments in Sect. 9.
Related work on probabilistic ODE solvers
The Gaussian ODE filter can be thought of as a self-consistent Bayesian decision agent that iteratively updates its prior belief X over x : [0, T] → R^d (and its first q derivatives) with information on ẋ from evaluating f.¹ For Gauss-Markov priors, it performs exact Bayesian inference and optimally (with respect to the L²-loss) extrapolates along the time axis. Accordingly, all of its computations are deterministic and, due to its restriction to Gaussian distributions, only slightly more expensive than classical solvers. Experiments demonstrating competitive performance with classical methods are provided in Schober et al. (2019, Section 5).
Another line of work (comprising the methods from Chkrebtii et al. (2016); Conrad et al. (2017); Teymur et al. (2016); Lie et al. (2019); Abdulle and Garegnani (2020); Teymur et al. (2018)) introduces probability measures to ODE solvers in a fundamentally different way: by representing the distribution of all numerically possible trajectories with a set of sample paths. To compute these sample paths, Chkrebtii et al. (2016) draw them from a (Bayesian) Gaussian process (GP) regression; Conrad et al. (2017); Teymur et al. (2016); Lie et al. (2019); Teymur et al. (2018) perturb classical estimates after an integration step with suitably scaled Gaussian noise; and Abdulle and Garegnani (2020) perturb the classical estimate by choosing a stochastic step size instead. While Conrad et al. (2017); Teymur et al. (2016); Lie et al. (2019); Abdulle and Garegnani (2020); Teymur et al. (2018) can be thought of as (non-Bayesian) 'stochastic wrappers' around classical solvers, which produce samples with the same convergence rate, Chkrebtii et al. (2016), like the filter, employ GP regression to represent the belief on x. However, while the Gaussian ODE filter can converge with polynomial order (see the results in this paper), Chkrebtii et al. (2016) only show first-order convergence rates; they also construct a sample representation of numerical errors, from which samples are drawn iteratively. A conceptual and experimental comparison between the filter and Chkrebtii et al. (2016) can be found in Schober et al. (2019). An additional numerical test against Conrad et al. (2017) was given by Kersting and Hennig (2016). Moreover, Tronarp et al. (2019) recently introduced a particle ODE filter, which combines a filtering-based solver with a sampling-based uncertainty quantification (UQ), and compared it numerically with Conrad et al. (2017) and Chkrebtii et al. (2016).
All of the above sampling-based methods can hence represent more expressive, non-Gaussian posteriors (as desirable, e.g., for bifurcations), but multiply the computational cost of the underlying method by the number of samples. ODE filters are, in contrast, not a perturbation of known methods, but novel methods designed for computational speed and for a robust treatment of intermediate uncertain values (such as the evaluations of f at estimated points). Unless parallelization of the samples in the sampling-based solvers is possible and inexpensive, one can spend the computational budget for generating additional samples on dividing the step size h by the number of samples, and thereby polynomially decrease the error. The filter's Gaussian UQ, however, should not be regarded as the true UQ (in particular for chaotic systems, whose uncertainty can be better represented by sampling-based solvers; see, e.g., Conrad et al. (2017, Figure 1) and Abdulle and Garegnani (2020, Figure 2)), but as a rough, inexpensive probabilistic treatment of intermediate values and final errors which is supposed to, on average, guide the posterior mean towards the true x. It is therefore in a way more similar to classical non-stochastic solvers than to sampling-based stochastic solvers and, unlike sampling-based solvers, puts emphasis on computational speed over statistical accuracy. Nevertheless, its Gaussian UQ is sufficient to make the forward models in ODE inverse problems more 'uncertainty-aware'; see Kersting et al. (2020, Section 3).
Accordingly, the convergence results in this paper concern the convergence rate of the posterior mean to the true solution, while the theoretical results from Teymur et al. (2016); Chkrebtii et al. (2016); Conrad et al. (2017); Lie et al. (2019); Abdulle and Garegnani (2020); Teymur et al. (2018) provide convergence rates of the variance of the non-Gaussian empirical measure of samples (and not for an individual sample).
Relation to filtering theory
While Gaussian (Kalman) filtering was first applied to the solution of ODEs by Kersting and Hennig (2016) and Schober et al. (2019), it has previously been analyzed in the filtering, data assimilation, and linear system theory communities. The convergence results in this paper are concerned with its asymptotics when the step size h (i.e., the time step between data points) goes to zero. In the classical filtering setting, where the data comes from an external sensor, this quantity is not treated as a variable, as it is considered a property of the data and not, like in our case, of the algorithm. Accordingly, the standard books lack such an analysis for h → 0; see Jazwinski (1970); Anderson and Moore (1979); Maybeck (1979) for filtering, Law et al. (2015); Reich and Cotter (2015) for data assimilation, and Callier and Desoer (1991) for linear system theory; and we believe that our convergence results are completely novel. It is conceivable that this paper may also be of interest to these communities in settings where the data collection mechanism can be actively chosen, e.g., when the frequency of the data can be varied or sensors of different frequencies can be used.
Outline
The paper begins with a brief introduction to Gaussian ODE filtering in Sect. 2. Next, Sects. 3 and 5 provide auxiliary bounds on the flow map of the ODE and on intermediate quantities of the filter, respectively. With the help of these bounds, Sects. 6 and 7 establish local and global convergence rates of the filtering mean, respectively. In light of these rates, Sect. 8 analyses for which measurement noise models the posterior credible intervals are well calibrated. These theoretical results are experimentally confirmed and discussed in Sect. 9. Section 10 concludes with a high-level discussion.
Notation
We will use the notation [n] := {0, ..., n − 1}. For vectors and matrices, we will use zero-based numbering, e.g., x = (x_0, ..., x_{d−1}) ∈ R^d. For a matrix P ∈ R^{n×m} and (i, j) ∈ [n] × [m], we will write P_{i,:} ∈ R^{1×m} for the ith row and P_{:,j} for the jth column of P. A fixed but arbitrary norm on R^d will be denoted by ‖·‖. The minimum and maximum of two real numbers a and b will be denoted by a ∧ b and a ∨ b, respectively. Vectors that span all q modeled derivatives will be denoted by bold symbols, such as x.
Gaussian ODE filtering
This section defines how a Gaussian filter can solve the IVP Eq. (1). In the various subsections, we first explain the choice of prior on x, then describe how the algorithm computes a posterior output from this prior (by defining a numerical integrator Ψ), and add explanations on the measurement noise of the derivative observations. To alternatively understand how this algorithm can be derived as an extension of generic Gaussian filtering in probabilistic state space models, see the concise presentation in Kersting et al. (2020, Supplement A).
Prior on x
In PN, it is common (Hennig et al. 2015, Section 3(a)) to put a prior measure on the unknown solution x. Often, for fast Bayesian inference by linear algebra (Rasmussen and Williams 2006, Chapter 2), this prior is Gaussian. To enable GP inference in linear time by Kalman filtering (Särkkä 2013, Chapter 4.3), we further restrict the prior to Markov processes. As discussed in Särkkä and Solin (2019, Chapter 12.4), a wide class of such Gauss-Markov processes can be captured by the law of the (strong) solution (Øksendal 2003, Chapter 5.3) of a linear SDE with Gaussian initial condition. Here, as we, by Eq. (1), have information on at least one derivative of x, the prior also includes the first q ∈ N derivatives. Therefore, for all j ∈ [d], we define the vector of time derivatives by $X_j = (X_j^{(0)}, \ldots, X_j^{(q)})$. We define X_j as a (q + 1)-dimensional stochastic process via the SDE

$$
\mathrm{d}X_j(t) = \left( \mathrm{d}X_j^{(0)}(t), \ldots, \mathrm{d}X_j^{(q-1)}(t), \mathrm{d}X_j^{(q)}(t) \right)^\top
= \begin{pmatrix}
0 & 1 & & 0 \\
 & \ddots & \ddots & \\
0 & & 0 & 1 \\
c_0 & \cdots & \cdots & c_q
\end{pmatrix}
\begin{pmatrix}
X_j^{(0)}(t) \\ \vdots \\ X_j^{(q-1)}(t) \\ X_j^{(q)}(t)
\end{pmatrix} \mathrm{d}t
+ \begin{pmatrix}
0 \\ \vdots \\ 0 \\ \sigma_j
\end{pmatrix} \mathrm{d}B_j(t), \qquad (2)
$$
driven by mutually independent one-dimensional Brownian motions {B_j; j ∈ [d]} (independent of X(0)) scaled by σ_j > 0, with initial condition X_j(0) ∼ N(m_j(0), P_j(0)). We assume that the X_j(0), j ∈ [d], are independent. In other words, we model the unknown ith derivative of the jth dimension of the solution x of the IVP Eq. (1), denoted by $x_j^{(i)}$, as a draw from a real-valued, one-dimensional GP $X_j^{(i)}$, for all i ∈ [q + 1] and j ∈ [d], such that $X_j^{(q)}$ is defined by (c_0, ..., c_q) as well as the Brownian motion scale σ_j, and $X_j^{(i-1)}$ is defined to be the integral of $X_j^{(i)}$. Note that, by the independence of the components of the d-dimensional Brownian motion, the components {X_j(t); 0 ≤ t ≤ T}, j ∈ [d], of {X(t); 0 ≤ t ≤ T} are independent². The (strong) solution of Eq. (2) is a Gauss-Markov process with mean m_j : [0, T] → R^{q+1} and covariance matrix P_j : [0, T] → R^{(q+1)×(q+1)} given by
$$m_j(t) = A(t)\, m_j(0), \qquad (3)$$
$$P_j(t) = A(t)\, P_j(0)\, A(t)^\top + Q(t), \qquad (4)$$
where the matrices A(t), Q(t) ∈ R^{(q+1)×(q+1)} yielded by the SDE Eq. (2) are known in closed form (Särkkä 2006, Theorem 2.9) (see Eq. (77)). The precise choice of the prior stochastic process X depends on the choice of (c_0, ..., c_q) ∈ R^{q+1} in Eq. (2). While the below algorithm works for all choices of c, we restrict our attention to the case of
$$(c_0, \ldots, c_q) := (0, \ldots, 0, -\theta), \quad \text{for some } \theta \ge 0, \qquad (5)$$
where the q-times integrated Brownian motion (IBM) and the q-times integrated Ornstein-Uhlenbeck process (IOUP) with drift parameter θ are the unique solutions of Eq. (2) in the cases θ = 0 and θ > 0, respectively (Karatzas and Shreve 1991, Chapter 5: Example 6.8). In this case, the matrices A and Q from Eqs. (3) and (4) are given by
$$
A(t)_{ij} =
\begin{cases}
\mathbb{I}_{i \le j}\, \dfrac{t^{j-i}}{(j-i)!}, & \text{if } j \ne q, \\[2mm]
\dfrac{t^{q-i}}{(q-i)!} - \theta \displaystyle\sum_{k=q+1-i}^{\infty} \frac{(-\theta)^{k+i-q-1}}{k!}\, t^k, & \text{if } j = q,
\end{cases} \qquad (6)
$$

$$
Q(t)_{ij} = \frac{\sigma^2\, t^{\,2q+1-i-j}}{(2q+1-i-j)\,(q-i)!\,(q-j)!} + \Theta\!\left(t^{\,2q+2-i-j}\right). \qquad (7)
$$
(Derivations of Eqs. (6) and (7), as well as the precise form of Q without the $\Theta(t^{2q+2-i-j})$ term, are presented in Appendix A.) Hence, for all i ∈ [q + 1], the prediction of step size h of the ith derivative from any state u ∈ R^{q+1} is given by
$$
[A(t)u]_i = \sum_{k=i}^{q} \frac{t^{k-i}}{(k-i)!}\, u_k \;-\; \theta \left[ \sum_{k=q+1-i}^{\infty} \frac{(-\theta)^{k+i-q-1}}{k!}\, t^k \right] u_q. \qquad (8)
$$
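For the IBM case (θ = 0), the Θ(·) correction in Eq. (7) vanishes and A(h), Q(h) reduce to simple closed forms. The following numpy sketch tabulates them; the function name is ours.

```python
import math
import numpy as np

def ibm_transition(q, h, sigma2=1.0):
    """A(h) and Q(h) of Eqs. (6)-(7) for the IBM prior (theta = 0)."""
    A = np.zeros((q + 1, q + 1))
    Q = np.zeros((q + 1, q + 1))
    for i in range(q + 1):
        for j in range(q + 1):
            if i <= j:
                A[i, j] = h ** (j - i) / math.factorial(j - i)
            Q[i, j] = sigma2 * h ** (2 * q + 1 - i - j) / (
                (2 * q + 1 - i - j)
                * math.factorial(q - i) * math.factorial(q - j))
    return A, Q
```

For example, `ibm_transition(1, 0.1)` returns the 2 × 2 matrices used by the q = 1 filter whose global convergence is analyzed later.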
The algorithm
To avoid the introduction of additional indices, we will define the algorithm Ψ for d = 1; for statements on the general case of d ∈ N, we will use the same symbols from Eqs. (10)-(15) as vectors over the whole dimension; see, e.g., Eq. (31) for a statement about a general r ∈ R^d. By the independence of the dimensions of X, due to Eq. (2), extension to d ∈ N amounts to applying Ψ to every dimension independently (recall Footnote 2). Accordingly, we may in many of the below proofs w.l.o.g. assume d = 1. Now, as previously spelled out in Kersting and Hennig (2016); Schober et al. (2019), Bayesian filtering of X, i.e., iteratively conditioning X on the information on X^(1) from evaluations of f at the mean of the current conditioned X^(0), yields the following numerical method Ψ. Let m(t) = (m^(0)(t), ..., m^(q)(t)) ∈ R^{q+1} be an arbitrary state at some point in time t ∈ [0, T] (i.e., m^(i)(t) is an estimate for x^(i)(t)), and let P(t) ∈ R^{(q+1)×(q+1)} be the corresponding covariance matrix. For t ∈ [0, T], let the current estimate of x(t) be a normal distribution N(m(t), P(t)), i.e., the mean m(t) ∈ R^{q+1} represents the best numerical estimate (given data {y(h), ..., y(t)}; see Eq. (12)) and the covariance matrix P(t) ∈ R^{(q+1)×(q+1)} its uncertainty. For the time step t → t + h of size h > 0, the ODE filter first computes the prediction step, consisting of the predictive mean
$$m^-(t+h) := A(h)\, m(t) \in \mathbb{R}^{q+1}, \tag{9}$$
and predictive covariance
$$P^-(t+h) := A(h)\, P(t)\, A(h)^\top + Q(h) \in \mathbb{R}^{(q+1)\times(q+1)}, \tag{10}$$
with A and Q generally defined by Eq. (77) and, in the considered particular case of Eq. (5), by Eqs. (6) and (7). In the subsequent step, the following quantities are computed first: the Kalman gain
$$\beta(t+h) = \big(\beta^{(0)}(t+h), \ldots, \beta^{(q)}(t+h)\big)^\top := \frac{P^-(t+h)_{:,1}}{\big(P^-(t+h)\big)_{11} + R(t+h)} \in \mathbb{R}^{(q+1)\times 1}, \tag{11}$$
the measurement/data on $\dot{x}$

$$y(t+h) := f\big(m^{-,(0)}(t+h)\big) \in \mathbb{R}, \tag{12}$$
and innovation/residual
$$r(t+h) := y(t+h) - m^{-,(1)}(t+h) \in \mathbb{R}. \tag{13}$$
Here, R denotes the variance of y (the 'measurement noise') and captures the squared difference between the data $y(t+h) = f(m^{-,(0)}(t+h))$ that the algorithm actually receives and the idealized data $\dot{x}(t+h) = f(x(t+h))$ that it 'should' receive (see Sect. 2.3). Finally, the mean and the covariance matrix are conditioned on this data, which yields the updated mean
$$\Psi_{P(t),h}(m(t)) := m(t+h) = m^-(t+h) + \beta(t+h)\, r(t+h), \tag{14}$$
and the updated covariance
$$P(t+h) := P^-(t+h) - \frac{P^-(t+h)_{:,1}\; P^-(t+h)_{1,:}}{P^-(t+h)_{11} + R(t+h)}. \tag{15}$$
This concludes the step $t \to t+h$, with the Gaussian distribution $\mathcal{N}(m(t+h), P(t+h))$ over $x(t+h)$. The algorithm is iterated by computing $m(t+2h) := \Psi_{P(t+h),h}(m(t+h))$ as well as repeating Eqs. (10) and (15), with $P(t+h)$ instead of $P(t)$, to obtain $P(t+2h)$. In the following, to avoid notational clutter, the dependence of the above quantities on t, h, and σ will be omitted if their values are unambiguous. Parameter adaptation reminiscent of classical methods (e.g., choosing σ such that the added variance per step coincides with standard error estimates) has been explored in Schober et al. (2019, Section 4). This filter is essentially an iterative application of Bayes' rule (see, e.g., Särkkä (2013, Chapter 4)) based on the prior X on x specified by Eq. (2) (entering the algorithm via A and Q) and the measurement model $y \sim \mathcal{N}(\dot{x}, R)$. Since the measurement model is a likelihood by another name and therefore forms a complete Bayesian model together with the prior X, it remains to detail the measurement model (recall Sect. 2.1 for the choice of prior). Concerning the data generation mechanism for y in Eq. (12), we only consider the maximum-a-posteriori point estimate of $\dot{x}(t)$ given $\mathcal{N}(m^{-,(0)}(t), P^-_{00}(t))$; a discussion of more involved statistical models for y, as well as an algorithm box for the Gaussian ODE filter, can be found in Schober et al. (2019, Subsection 2.2). Next, for lack of such a discussion for R, we will examine different choices of R, which have proved central to the UQ of the filter (Kersting and Hennig 2016) and will turn out to affect global convergence properties in Sect. 7.
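As a sketch of how Eqs. (9)-(15) compose into one step, the following Python function (a hedged illustration for d = 1 and any q, with 0-based indexing so that index 1 addresses the first derivative; the helper name `ode_filter_step` is ours) performs one predict-evaluate-update cycle:

```python
import numpy as np

def ode_filter_step(f, m, P, A, Q, R):
    """One predict-evaluate-update step of the Gaussian ODE filter,
    following Eqs. (9)-(15), for a one-dimensional ODE (d = 1)."""
    # Prediction step, Eqs. (9) and (10).
    m_pred = A @ m
    P_pred = A @ P @ A.T + Q
    # Kalman gain, Eq. (11): the column of P_pred belonging to x-dot.
    beta = P_pred[:, 1] / (P_pred[1, 1] + R)
    # Data on x-dot, Eq. (12), and residual, Eq. (13).
    y = f(m_pred[0])
    r = y - m_pred[1]
    # Update step, Eqs. (14) and (15).
    m_new = m_pred + beta * r
    P_new = P_pred - np.outer(P_pred[:, 1], P_pred[1, :]) / (P_pred[1, 1] + R)
    return m_new, P_new
```

Later sketches (Sect. 9.1 and Appendix C) reuse this helper.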
Measurement noise R
Two sources of uncertainty add to R(t): noise from imprecise knowledge of x(t) and of f. Given f, previous integration steps of the filter (as well as an imprecise initial value) inject uncertainty about how close $m^{-,(0)}(t)$ is to x(t), and hence how close $y = f(m^{-,(0)}(t))$ is to $\dot{x}(t) = f(x(t))$. This uncertainty stems from the discretization error $\|m^{-,(0)}(t) - x(t)\|$ and, hence, tends to increase with h. Additionally, there can be uncertainty from a misspecified f, e.g., when f has estimated parameters, or from numerically imprecise evaluations of f, which can be added to R, a functionality which classical solvers do not possess. In this paper, since R depends on h via the numerical uncertainty on x(t), we analyze the influence of noise R of order $p \in [1, \infty]$ (see Assumption 4) on the quality of the solution, to illuminate for which orders of noise we can trust the solution to which extent, and when we should, instead of decreasing h, rather spend computational budget on specifying or evaluating f more precisely. The explicit dependence of the noise on its order p in h resembles, despite the fundamentally different role of R compared to additive noise in Conrad et al. (2017) and Abdulle and Garegnani (2020), the variable p in Conrad et al. (2017, Assumption 1) and Abdulle and Garegnani (2020, Assumption 2.2), in the sense that the analysis highlights how uncertainty of this order can still be modeled without breaking the convergence rates. (Adaptive noise models are computationally feasible (Kersting and Hennig 2016) but lie outside the scope of our analysis.)
Regularity of flow
Before we proceed to the analysis of Ψ , we provide all regularity results necessary for arbitrary q, d ∈ N in this section.
Assumption 1
The vector field f ∈ C q (R d ; R d ) is globally Lipschitz, and all its derivatives of order up to q are uniformly bounded and globally Lipschitz, i.e., there exists some
$L > 0$ such that $\|D^\alpha f\|_\infty \le L$ for all multi-indices $\alpha \in \mathbb{N}_0^d$ with $1 \le \sum_i \alpha_i \le q$, and $\|D^\alpha f(a) - D^\alpha f(b)\| \le L\, \|a - b\|$ for all multi-indices $\alpha \in \mathbb{N}_0^d$ with $0 \le \sum_i \alpha_i \le q$.
Assumption 1 and the Picard–Lindelöf theorem imply that the solution x is a well-defined element of $C^{q+1}([0,T]; \mathbb{R}^d)$. For $i \in [q+1]$, we denote $\frac{d^i x}{dt^i}$ by $x^{(i)}$. Recall that, by a bold symbol, we denote the vector of these derivatives: $\mathbf{x} \equiv (x^{(0)}, \ldots, x^{(q)})^\top$. In particular, the solution x of Eq. (1) is denoted by $x^{(0)}$. Analogously, we denote the flow of the ODE Eq. (1) by $\Phi^{(0)}$, i.e., $\Phi^{(0)}_t(x_0) \equiv x^{(0)}(t)$, and, for all $i \in [q+1]$, its ith partial derivative with respect to t by $\Phi^{(i)}$, so that $\Phi^{(i)}_t(x_0) \equiv x^{(i)}(t)$.
Lemma 1 Under Assumption 1, for all $a \in \mathbb{R}^d$ and all $h > 0$,

$$\left\| \Phi^{(i)}_h(a) - \sum_{k=i}^{q} \frac{h^{k-i}}{(k-i)!}\, \Phi^{(k)}_0(a) \right\| \le K\, h^{q+1-i}. \tag{16}$$
Here, and in the sequel, K > 0 denotes a constant independent of h and θ which may change from line to line.
Proof By Assumption 1, $\Phi^{(q+1)}$ exists and is bounded by $\|\Phi^{(q+1)}\| \le L$, which can be seen by applying the chain rule q times to both sides of Eq. (1). Now, applying $\|\Phi^{(q+1)}\| \le L$ to the term $\Phi^{(q+1)}_\tau(a)$ (for some $\tau \in (0,h)$) in the Lagrange remainder of the $(q-i)$th-order Taylor expansion of $\Phi^{(i)}_h(a)$ yields Eq. (16).
Lemma 2 Under Assumption 1 and for all sufficiently small $h > 0$,

$$\sup_{a \ne b \in \mathbb{R}^d} \frac{\left\| \Phi^{(0)}_h(a) - \Phi^{(0)}_h(b) \right\|}{\|a - b\|} \le 1 + 2Lh. \tag{17}$$
Proof Immediate corollary of Teschl (2012, Theorem 2.8).
Global convergence (Sect. 7) will require the following generalization of Lemma 2.
Lemma 3 Let q = 1. Then, under Assumption 1 and for all sufficiently small $h > 0$,

$$\sup_{a \ne b \in \mathbb{R}^d} \frac{|||\Phi_h(a) - \Phi_h(b)|||_h}{\|a - b\|} \le 1 + Kh, \tag{18}$$

where, given the norm $\|\cdot\|$ on $\mathbb{R}^d$ and $h > 0$, the new norm $|||\cdot|||_h$ on $\mathbb{R}^{(q+1)\times d}$ is defined by

$$|||a|||_h := \sum_{i=0}^{q} h^i\, \|a_{i,:}\|. \tag{19}$$
Remark 1 The necessity of $|||\cdot|||_h$ stems from the fact that, unlike other ODE solvers, the ODE filter Ψ additionally estimates and uses the first q derivatives in its state $m \in \mathbb{R}^{(q+1)\times d}$, whose development cannot be bounded in $\|\cdot\|$, but can be in $|||\cdot|||_h$. The norm $|||\cdot|||_h$ is used to make rigorous the intuition that the estimates of the solution's time derivatives are 'one order of h worse per derivative.'
Proof By Eq. (19),

$$|||\Phi_h(a) - \Phi_h(b)|||_h = \underbrace{\left\| \Phi^{(0)}_h(a) - \Phi^{(0)}_h(b) \right\|}_{\le (1+2Lh)\|a-b\|,\ \text{by eq. (17)}} + h \left\| \Phi^{(1)}_h(a) - \Phi^{(1)}_h(b) \right\|. \tag{20}$$

We bound the second summand, using $\Phi^{(1)}_h = f \circ \Phi^{(0)}_h$, by

$$\left\| f\big(\Phi^{(0)}_h(a)\big) - f\big(\Phi^{(0)}_h(b)\big) \right\| \overset{\text{Ass. 1}}{\le} L \left\| \Phi^{(0)}_h(a) - \Phi^{(0)}_h(b) \right\| \overset{\text{eq. (17)}}{\le} L(1+2Lh)\, \|a-b\|. \tag{21}$$

Inserting Eq. (21) into Eq. (20) concludes the proof.
The role of the state misalignments δ
In Gaussian ODE filtering, the interconnection between the estimates of the ODE solution $x(t) = x^{(0)}(t)$ and its first q derivatives $\{x^{(1)}(t), \ldots, x^{(q)}(t)\}$ is intricate. From a purely analytical point of view, every possible estimate $m(t)$ of $x(t)$ comes with a fixed set of derivatives, which are implied by the ODE, for the following reason: Clearly, by Eq. (1), the estimate $m^{(1)}(t)$ of $x^{(1)}(t)$ ought to be $f(m^{(0)}(t))$. More generally (for $i \in [q+1]$), the estimate $m^{(i)}(t)$ of $x^{(i)}(t)$ is determined by the ODE as well. To see this, let us first recursively define $f^{(i)} : \mathbb{R}^d \to \mathbb{R}^d$ by $f^{(0)}(a) := a$, $f^{(1)}(a) := f(a)$, and $f^{(i)}(a) := [\nabla_x f^{(i-1)} \cdot f](a)$. Now, differentiating the ODE, Eq. (1), $(i-1)$ times by the chain rule yields

$$x^{(i)}(t) = f^{(i-1)}\big(x^{(0)}(t)\big), \tag{22}$$

which implies that $m^{(i)}(t)$ ought to be $f^{(i-1)}(m^{(0)}(t))$. Since

$$\Phi^{(i)}_0\big(m^{(0)}(t)\big) = f^{(i-1)}\big(m^{(0)}(t)\big) \tag{23}$$

(which we prove in Appendix E), this amounts to requiring that

$$m^{(i)}(t) \overset{!}{=} \Phi^{(i)}_0\big(m^{(0)}(t)\big). \tag{24}$$

Since $\Phi^{(i)}_0$ is (recall Sect. 3) the ith time derivative of the flow map $\Phi^{(0)}$ at $t = 0$, this simply means that $m^{(i)}(t)$ would be set to the 'true' derivatives in the case where the initial condition of the ODE, Eq. (1), is $x(0) = m^{(0)}(t)$ instead of $x(0) = x_0$; or, more loosely speaking, the derivative estimates $m^{(i)}(t)$ are forced to comply with $m^{(0)}(t)$, irrespective of our belief $x^{(i)}(t) \sim \mathcal{N}(m^{(i)}(t), P_{ii}(t))$
. The Gaussian ODE filter, however, does not use this (intractable) analytical approach. Instead, it jointly models and infers $x^{(0)}(t)$ and its first q derivatives $\{x^{(1)}(t), \ldots, x^{(q)}(t)\}$ in a state space X, as detailed in Sect. 2. The thus-computed filtering mean estimates $m^{(i)}(t)$ depend not only on the ODE but also on the statistical model, namely on the prior (SDE) and the measurement noise R; recall Sects. 2.1 and 2.3. In fact, the analytically desirable derivative estimate, Eq. (24), is, for i = 1, only satisfied if R = 0 (which can be seen from Eq. (14)), and generally does not hold for i ≥ 2 since both $f^{(i-1)}$ and $\Phi^{(i)}_0$ are inaccessible to the algorithm. The numerical example in Appendix C clarifies that $\delta^{(i)}$ is likely to be strictly positive, even after the first step $0 \to h$.
This inevitable mismatch between exact analysis and approximate statistics motivates the following definition of the ith state misalignment at time t:

$$\delta^{(i)}(t) := \left\| m^{(i)}(t) - \Phi^{(i)}_0\big(m^{(0)}(t)\big) \right\| \ge 0. \tag{25}$$
Intuitively speaking, $\delta^{(i)}(t)$ quantifies how large this mismatch is for the ith derivative at time t. Note that $\delta^{(i)}(t) = 0$ if and only if Eq. (24) holds, i.e., for i = 1 iff R = 0 (which can be seen from Eq. (14)), and only by coincidence for i ≥ 2 since both $f^{(i-1)}$ and $\Phi^{(i)}_0$ are inaccessible to the algorithm. (Since $\Phi^{(0)}_0 = \mathrm{Id}$, $\delta^{(0)}(t) = 0$ for all t.)
The possibility of $\delta^{(i)} > 0$, for i ≥ 1, is inconvenient for the below worst-case analysis since (if Eq. (24) held true and $\delta^{(i)} \equiv 0$) the prediction step with the drift-less IBM prior (θ = 0) would coincide with a Taylor expansion of the flow map $\Phi^{(i)}_0$; see Eq. (8). But, because $\delta^{(i)} \ne 0$ in general, we have to additionally bound the influence of $\delta^{(i)} \ge 0$, which complicates the below proofs further.
Fortunately, we can locally bound the import of δ (i) by the easy Lemma 7 and globally by the more complicated Lemma 11 (see Sect. 7.3). Intuitively, these bounds demonstrate that the order of the deviation from a Taylor expansion of the state m = [m (0) , . . . , m (q) ] due to δ is not smaller than the remainder of the Taylor expansion. This means, more loosely speaking, that the import of the δ (i) is swallowed by the Taylor remainder. This effect is locally captured by Lemma 4 and globally by Lemma 12. The global convergence rates of δ (i) (T ), as provided by Lemma 12, are experimentally demonstrated in Appendix D.
Auxiliary bounds on intermediate quantities
Recall from Eq. (5) that θ = 0 and θ > 0 denote the cases of the IBM and IOUP prior with drift coefficient θ, respectively. The ODE filter Ψ iteratively computes the filtering mean $m(nh) = (m^{(0)}(nh), \ldots, m^{(q)}(nh))^\top \in \mathbb{R}^{q+1}$ as well as error covariance matrices $P(nh) \in \mathbb{R}^{(q+1)\times(q+1)}$ on the mesh $\{nh\}_{n=0}^{T/h}$. (Here and in the following, we assume w.l.o.g. that $T/h \in \mathbb{N}$.) Ideally, the truncation error over all derivatives

$$\varepsilon(nh) := \big(\varepsilon^{(0)}(nh), \ldots, \varepsilon^{(q)}(nh)\big)^\top := m(nh) - \mathbf{x}(nh), \tag{26}$$
falls quickly as $h \to 0$ and is estimated by the standard deviation $\sqrt{P_{00}(nh)}$. Next, we present a classical worst-case convergence analysis over all f satisfying Assumption 1; see Sect. 10 for a discussion of the desirability and feasibility of an average-case analysis. To this end, we bound the added error of every step by intermediate values, defined in Eqs. (11) and (13),
$$\Delta^{(i)}((n+1)h) := \left\| \Psi^{(i)}_{P(nh),h}(m(nh)) - \Phi^{(i)}_h\big(m^{(0)}(nh)\big) \right\| \tag{27}$$

$$\overset{\text{eq. (14)}}{\le} \underbrace{\left\| (A(h)m(nh))_i - \Phi^{(i)}_h\big(m^{(0)}(nh)\big) \right\|}_{=:\, \Delta^{-(i)}((n+1)h)} + \left| \beta^{(i)}((n+1)h) \right| \left| r((n+1)h) \right|, \tag{28}$$
and bound these quantities in the order Δ −(i) , r , β (i) . These bounds will be needed for the local and global convergence analysis in Sects. 6 and 7, respectively. Note that, intuitively, Δ −(i) ((n + 1)h) and Δ (i) ((n + 1)h) denote the additional numerical error which is added in the (n + 1)th step to the ith derivative of the predictive mean m −,(i) (t + h) and the updated mean m (i) (t + h), respectively.
Lemma 4 Under Assumption 1, for all $i \in [q+1]$ and all $h > 0$,

$$\Delta^{-(i)}((n+1)h) \le K\big(1 + \theta\, \|m^{(q)}(nh)\|\big)\, h^{q+1-i} + \sum_{k=i}^{q} \frac{h^{k-i}}{(k-i)!}\, \delta^{(k)}(nh). \tag{29}$$
Proof We may assume, as explained in Sect. 2.2, without loss of generality that d = 1. We apply the triangle inequality to the definition of $\Delta^{-(i)}((n+1)h)$, as defined in Eq. (28), which, by Eq. (8), yields

$$\Delta^{-(i)}((n+1)h) \le \sum_{k=i}^{q} \frac{h^{k-i}}{(k-i)!}\, \delta^{(k)}(nh) + K\theta\, \|m^{(q)}(nh)\|\, h^{q+1-i} + \underbrace{\left\| \sum_{l=i}^{q} \frac{h^{l-i}}{(l-i)!}\, \Phi^{(l)}_0\big(m^{(0)}(nh)\big) - \Phi^{(i)}_h\big(m^{(0)}(nh)\big) \right\|}_{\le K h^{q+1-i},\ \text{by eq. (16)}}. \tag{30}$$
Lemma 5 Under Assumption 1 and for all sufficiently small $h > 0$,

$$|r((n+1)h)| \le K\big(1 + \theta\, \|m^{(q)}(nh)\|\big)\, h^q + K \sum_{k=1}^{q} \frac{h^{k-1}}{(k-1)!}\, \delta^{(k)}(nh). \tag{31}$$
Proof See Appendix F.
To bound the Kalman gains β(nh), we first need to assume that the orders of the initial covariance matrices are sufficiently high (matching the latter required orders of the initialization error; see Assumption 3).
Assumption 2
The entries of the initial covariance matrix P(0) satisfy, for all $k, l \in [q+1]$, $|P(0)_{k,l}| \le K_0\, h^{2q+1-k-l}$, where $K_0 > 0$ is a constant independent of h.
We make this assumption, as well as Assumption 3, explicit (instead of just making the stronger assumption of exact initializations with zero variance), because it highlights how statistical or numerical uncertainty on the initial value affects the accuracy of the output of the filter, a novel functionality of PN with the potential to facilitate a management of the computational budget across a computational chain with respect to the respective perturbations from different sources of uncertainty (Hennig et al. 2015, Section 3(d)).
Lemma 6 Under Assumption 2, for all $i \in [q+1]$ and for all $h > 0$, $|\beta^{(i)}(h)| \le K\, h^{1-i}$.

Proof Again, w.l.o.g. d = 1.
Application of the orders of A and Q from Eqs. (6) and (7), the triangle inequality, and Assumption 2 to the definition of $P^-$ in Eq. (10) yields

$$\left| P^-(h)_{k,l} \right| \overset{\text{eq. (10)}}{\le} \left| \big(A(h)P(0)A(h)^\top\big)_{k,l} \right| + \left| Q(h)_{k,l} \right| \overset{\text{eqs. (6),(7)}}{\le} K \left[ \sum_{a=k}^{q} \sum_{b=l}^{q} |P(0)_{a,b}|\, h^{a+b-k-l} + 2\theta \sum_{b=l}^{q-1} |P(0)_{q,b}| + \theta^2 |P(0)_{q,q}| + h^{2q+1-k-l} \right] \overset{\text{Ass. 2}}{\le} K\,\big[1 + \theta + \theta^2\big]\, h^{2q+1-k-l}. \tag{32}$$
Recall that P and Q are (positive semi-definite) covariance matrices; hence, $P^-(h)_{1,1} \ge K h^{2q-1}$. Inserting these orders into the definition of $\beta^{(i)}$ (Eq. (11)), recalling that R ≥ 0, and removing the dependence on θ by reducing the fraction conclude the proof.
Local convergence rates
With the above bounds on intermediate algorithmic quantities (involving state misalignments δ (i) ) in place, we only need an additional assumption to proceed-via a bound on δ (i) (0)-to our first main result on local convergence orders of Ψ .
Assumption 3
The initial errors on the initial estimates of the ith derivative $m^{(i)}(0)$ satisfy $\|\varepsilon^{(i)}(0)\| = \|m^{(i)}(0) - x^{(i)}(0)\| \le K_0\, h^{q+1-i}$.
(This assumption is, like Assumption 2, weaker than the standard assumption of exact initializations.)
Lemma 7 Under Assumptions 1 and 3, for all $i \in [q+1]$ and for all $h > 0$, $\delta^{(i)}(0) \le K\, h^{q+1-i}$.
Proof The claim follows, using Assumptions 1 and 3, from

$$\delta^{(i)}(0) \le \underbrace{\left\| m^{(i)}(0) - x^{(i)}(0) \right\|}_{=\, \|\varepsilon^{(i)}(0)\|\, \le\, K_0 h^{q+1-i}} + \underbrace{\left\| f^{(i-1)}\big(x^{(0)}(0)\big) - f^{(i-1)}\big(m^{(0)}(0)\big) \right\|}_{\le\, L\, \|\varepsilon^{(0)}(0)\|\, \le\, L K_0 h^{q+1}}. \tag{33}$$
Now, we can bound the local truncation error ε (0) (h) as defined in Eq. (26).
Theorem 8 (Local truncation error) Under Assumptions 1 to 3 and for all sufficiently small $h > 0$,

$$\|\varepsilon^{(0)}(h)\| \le |||\varepsilon(h)|||_h \le K\big(1 + \theta\, \|m^{(q)}(0)\|\big)\, h^{q+1}. \tag{34}$$
Proof By the triangle inequality for $|||\cdot|||_h$ and subsequent application of Lemma 3 and Assumption 3 to the second summand of the resulting inequality, we obtain

$$|||\varepsilon(h)|||_h \le \underbrace{\left|\left|\left| \Psi_{P(0),h}(m(0)) - \Phi_h\big(m^{(0)}(0)\big) \right|\right|\right|_h}_{=\, \sum_{i=0}^{q} h^i \Delta^{(i)}(h),\ \text{by eq. (27)}} + \underbrace{\left|\left|\left| \Phi_h\big(m^{(0)}(0)\big) - \Phi_h\big(x^{(0)}(0)\big) \right|\right|\right|_h}_{\le\, (1+Kh)\,\|\varepsilon^{(0)}(0)\|\, \le\, K h^{q+1}}. \tag{35}$$
The remaining bound on $\Delta^{(i)}(h)$, for all $i \in [q+1]$ and sufficiently small $h > 0$, is obtained by insertion of the bounds from Lemmas 4 to 6 (in the case n = 0) into Eq. (28):

$$\Delta^{(i)}(h) \le K\big(1 + \theta\,\|m^{(q)}(0)\|\big)\, h^{q+1-i} + K \sum_{k=1}^{q} \frac{h^{k-1}}{(k-1)!}\, \delta^{(k)}(0) \tag{36}$$

$$\overset{\text{Lemma 7}}{\le} K\big(1 + \theta\,\|m^{(q)}(0)\|\big)\, h^{q+1-i}. \tag{37}$$

Note that, by Eq. (19), the bound Eq. (34) in particular implies a local convergence rate of order $h^{q+1-i}$ for the error $\|\varepsilon^{(i)}(h)\|$ on the ith derivative $x^{(i)}(h)$, for all $i \in [q+1]$.
Such derivative bounds are (to the best of our knowledge) not available for classical numerical solvers, since they do not explicitly model the derivatives in the first place. These bounds could be useful for subsequent computations based on the ODE trajectory (Hennig et al. 2015).
Remark 2 Unsurprisingly, as the mean prediction (recall Eq. (8)) deviates from a pure qth-order Taylor expansion by $K\theta\,\|m^{(q)}(0)\|\,h^{q+1}$ for an IOUP prior (i.e., θ > 0 in Eq. (5)), the constant in front of the local $h^{q+1}$ convergence rate depends on both θ and $\|m^{(q)}(0)\|$ in the IOUP case. A global analysis for IOUP is therefore more complicated than for IBM: Recall from Eq. (8) that, for q = 1, the mean prediction for $x((n+1)h)$ is

$$\begin{pmatrix} m^{-,(0)}((n+1)h) \\ m^{-,(1)}((n+1)h) \end{pmatrix} \overset{\text{eq. (8)}}{=} \begin{pmatrix} m^{(0)}(nh) + h\, m^{(1)}(nh) - \theta\left[\frac{h^2}{2!} + O(h^3)\right] m^{(1)}(nh) \\ e^{-\theta h}\, m^{(1)}(nh) \end{pmatrix}, \tag{38}$$

which pulls both $m^{-,(0)}$ and $m^{-,(1)}$ towards zero (or some other prior mean) compared to the prediction given by its Taylor expansion for θ = 0. While this is useful for ODEs converging to zero, such as $\dot{x} = -x$, it is problematic for diverging ODEs, such as $\dot{x} = x$ (Magnani et al. 2017). As shown in Theorem 8, this effect is asymptotically negligible for local convergence, but it might matter globally and, therefore, might necessitate stronger assumptions on f than Assumption 1, such as a bound on $\|f\|_\infty$, which would globally bound $\{y(nh);\ n = 0, \ldots, T/h\}$ and thereby $\{m^{(1)}(nh);\ n = 0, \ldots, T/h\}$ in Eq. (38). It is furthermore conceivable that a global bound for IOUP would depend on the relation between θ and $\|f\|_\infty$ in a non-trivial way. The inclusion of IOUP (θ > 0) would hence complicate the below proofs further. Therefore, we restrict the following first global analysis to IBM (θ = 0).
Global analysis
As explained in Remark 2, we only consider the case of the IBM prior, i.e., θ = 0, in this section. Moreover, we restrict our analysis to q = 1 in this first global analysis. Although we only have definite knowledge for q = 1, we believe that the convergence rates might also hold for higher q ∈ N-which we experimentally test in Sect. 9.1. Moreover, we believe that proofs analogous to the below proofs might work out for higher q ∈ N and that deriving a generalized version of Proposition 10 for higher q is the bottleneck for such proofs (see Sect. 10 for a discussion of these restrictions).
While, for local convergence, all noise models R yield the same convergence rates in Theorem 8, it is unclear how the order of R in h (as described in Sect. 2.3) affects global convergence rates: e.g., for the limiting case $R \equiv K h^0$, the steady-state Kalman gains $\beta^\infty$ would converge to zero (see Eqs. (43) and (44) below) for $h \to 0$, and hence the evaluation of f would not be taken into account, yielding a filter Ψ which assumes that the evaluations of f are equally off, regardless of h > 0, and eventually just extrapolates along the prior without global convergence of the posterior mean m. For the opposite limiting case $R \equiv \lim_{p\to\infty} K h^p \equiv 0$, it has already been shown in Schober et al. (2019, Proposition 1 and Theorem 1) that, in the steady state and for q = 1, 2, the filter Ψ inherits global convergence rates from known multistep methods in Nordsieck form (Nordsieck 1962). To explore a more general noise model, we assume a fixed noise model $R \equiv K h^p$ with arbitrary order p.
In the following, we analyze how small p can be in order for Ψ to exhibit fast global convergence (cf. the similar role of the order p of perturbations in Conrad et al. (2017, Assumption 1) and Abdulle and Garegnani (2020, Assumption 2.2)). In light of Theorem 8, the highest possible global convergence rate is O(h), which will indeed be obtained for all $p \in [1, \infty]$ in Theorem 14. Since every extrapolation step of Ψ from t to t + h depends not only on the current state, but also on the covariance matrix P(t), which itself depends on all previous steps, Ψ is neither a single-step nor a multistep method. Contrary to Schober et al. (2019), we do not restrict our theoretical analysis to the steady-state case, but provide our results under the weaker Assumptions 2 and 3 that were already sufficient for local convergence in Theorem 8; this is made possible by the bounds Eqs. (48) and (49) in Proposition 10.
Outline of global convergence proof
The goal of the following sequence of proofs in Sect. 7 is Theorem 14. It is proved by a special version of the discrete Grönwall inequality (Clark 1987) whose prerequisite is provided in Lemma 13. This Lemma 13 follows from Lemma 3 (on the regularity of the flow map Φ t ) as well as Lemma 12 which provides a bound on the maximal increment of the numerical error stemming from local truncation errors. For the proof of Lemma 12, we first have to establish (i) global bounds on the Kalman gains β (0) and β (1) by the inequalities Eqs. (48) and (49) in Proposition 10, and (ii) a global bound on the state misalignment δ (1) in Lemma 11.
In Sects. 7.2-7.4, we will collect these inequalities in the order of their numbering to subsequently prove global convergence in Sect. 7.5.
Global bounds on Kalman gains
Since we will analyze the sequence of covariance matrices and Kalman gains using contractions in Proposition 10, we first introduce the following generalization of the Banach fixed-point theorem (BFT).
Lemma 9 Let $(\mathcal{X}, d)$ be a non-empty complete metric space and $T_n : \mathcal{X} \to \mathcal{X}$, $n \in \mathbb{N}$, a sequence of $L_n$-Lipschitz continuous contractions with $\sup_n L_n \le \bar{L} < 1$. Let $u_n$ be the fixed point of $T_n$, as given by the BFT, and let $\lim_{n\to\infty} u_n = u^* \in \mathcal{X}$. Then, for all $x_0 \in \mathcal{X}$, the recursive sequence $x_n := T_n(x_{n-1})$ converges to $u^*$ as $n \to \infty$.
Proof See Appendix G.
In the following, we will assume that T is a multiple of h.
Proposition 10 For constant $R \equiv K h^p$ with $p \in [0, \infty]$, the unique (attractive) steady states for the following quantities are

$$P^{-,\infty}_{11} := \lim_{n\to\infty} P^-_{11}(nh) = \frac{1}{2}\left(\sigma^2 h + \sqrt{4\sigma^2 R h + \sigma^4 h^2}\right), \tag{39}$$

$$P^{\infty}_{11} := \lim_{n\to\infty} P_{11}(nh) = \frac{\big(\sigma^2 h + \sqrt{4\sigma^2 Rh + \sigma^4 h^2}\big)\, R}{\sigma^2 h + \sqrt{4\sigma^2 Rh + \sigma^4 h^2} + 2R}, \tag{40}$$

$$P^{-,\infty}_{01} := \lim_{n\to\infty} P^-_{01}(nh) = \frac{\sigma^4 h^2 + (2R + \sigma^2 h)\sqrt{4\sigma^2 Rh + \sigma^4 h^2} + 4R\sigma^2 h}{2\big(\sigma^2 h + \sqrt{4\sigma^2 Rh + \sigma^4 h^2}\big)}\, h, \tag{41}$$

$$P^{\infty}_{01} := \lim_{n\to\infty} P_{01}(nh) = \frac{R\sqrt{4R\sigma^2 h + \sigma^4 h^2}}{\sigma^2 h + \sqrt{4\sigma^2 Rh + \sigma^4 h^2}}\, h, \tag{42}$$

$$\beta^{\infty,(0)} := \lim_{n\to\infty} \beta^{(0)}(nh) = \frac{\sqrt{4R\sigma^2 h + \sigma^4 h^2}}{\sigma^2 h + \sqrt{4\sigma^2 Rh + \sigma^4 h^2}}\, h, \quad \text{and} \tag{43}$$

$$\beta^{\infty,(1)} := \lim_{n\to\infty} \beta^{(1)}(nh) = \frac{\sigma^2 h + \sqrt{4\sigma^2 Rh + \sigma^4 h^2}}{\sigma^2 h + \sqrt{4\sigma^2 Rh + \sigma^4 h^2} + 2R}. \tag{44}$$

Moreover, for all sufficiently small $h > 0$,

$$\max_{n\in[T/h+1]} P^-_{11}(nh) \le K\, h^{1 \wedge \frac{p+1}{2}}, \tag{45}$$

$$\max_{n\in[T/h+1]} P_{11}(nh) \le K\, h^{p \vee \frac{p+1}{2}}, \tag{46}$$

$$\max_{n\in[T/h+1]} P_{01}(nh) \le K\, h^{p+1}, \tag{47}$$

$$\max_{n\in[T/h+1]} \beta^{(0)}(nh) \le K\, h, \quad \text{and} \tag{48}$$

$$\max_{n\in[T/h+1]} \left|1 - \beta^{(1)}(nh)\right| \le K\, h^{(p-1)\vee 0}. \tag{49}$$
All of these bounds are sharp in the sense that they fail for any higher order in the exponent of h.
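Eq. (39) can be sanity-checked numerically: the proof in Appendix H shows (Eq. (109)) that $P^-_{11}$ follows the scalar recursion $P^-_{11} \mapsto R\, P^-_{11}/(P^-_{11} + R) + \sigma^2 h$. The following minimal Python check is our own script, not part of the original analysis:

```python
import numpy as np

# Iterate the scalar Riccati recursion for P^-_11 (cf. Eq. (109)) and
# compare its fixed point against the closed form, Eq. (39).
sigma2, h = 1.0, 0.01
R = h ** 2                 # noise model R = K*h^p with K = 1, p = 2
P11 = sigma2 * h           # any positive start works (attractive fixed point)
for _ in range(1000):
    P11 = R * P11 / (P11 + R) + sigma2 * h
closed_form = 0.5 * (sigma2 * h + np.sqrt(4 * sigma2 * R * h + sigma2**2 * h**2))
print(abs(P11 - closed_form))  # ~0 up to floating-point error
```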
Remark 3
The recursions for P(nh) and $P^-(nh)$ given by Eqs. (10) and (15) follow a discrete algebraic Riccati equation (DARE), a topic studied in many related settings (Lancaster and Rodman 1995). While the asymptotic behavior Eq. (39) of the completely detectable state $X^{(1)}$ can also be obtained using classical filtering theory (Anderson and Moore 1979, Chapter 4.4), the remaining statements of Proposition 10 also concern the undetectable state $X^{(0)}$ and are, to the best of our knowledge, not directly obtainable from existing theory on DAREs or filtering (which makes the following proof necessary). Note that, in the special case of no measurement noise (R ≡ 0), Eqs. (43) and (44) reduce to $\beta^{\infty,(0)} = h/2$ and $\beta^{\infty,(1)} = 1$.
Global bounds on state misalignments
For the following estimates, we restrict the choice of p to be at least q = 1.
Assumption 4 The noise model is chosen to be $R \equiv K h^p$, for $p \in [q, \infty] = [1, \infty]$, where $K h^\infty := 0$.
Before bounding the added deviation of Ψ from the flow Φ per step, a global bound on the state misalignments defined in Eq. (25) is necessary. The result of the following lemma is discussed in Appendix D.
Lemma 11 Under Assumptions 1 to 4 and for all sufficiently small $h > 0$,

$$\max_{n\in[T/h+1]} \delta^{(1)}(nh) \le K\, h. \tag{50}$$
Proof See Appendix I.
(See Appendix D for an experimental demonstration of Eq. (50).)
Prerequisite for discrete Grönwall inequality
Equipped with the above bounds, we can now prove a bound on the maximal increment of the numerical error stemming from local truncation errors which is needed to prove Eq. (56), the prerequisite for the discrete Grönwall inequality.
Lemma 12 Under Assumptions 1 to 4 and for all sufficiently small $h > 0$,

$$\max_{n\in[T/h+1]} \left|\left|\left| \Psi_{P(nh),h}(m(nh)) - \Phi_h\big(m^{(0)}(nh)\big) \right|\right|\right|_h \le K\, h^2. \tag{51}$$
Proof By Eq. (19), we have

$$\left|\left|\left| \Psi_{P(nh),h}(m(nh)) - \Phi_h\big(m^{(0)}(nh)\big) \right|\right|\right|_h = S_1(h) + h\, S_2(h), \tag{52}$$
with $S_1(h)$ and $S_2(h)$ defined and bounded by

$$S_1(h) := \left\| \Psi^{(0)}_h(m(nh)) - \Phi^{(0)}_h\big(m^{(0)}(nh)\big) \right\| \overset{\text{eq. (28)}}{\le} \underbrace{\Delta^{-(0)}((n+1)h)}_{\overset{\text{eq. (29)}}{\le}\, K h^2 + \delta^{(0)}(nh) + h\,\delta^{(1)}(nh)} + \underbrace{\big|\beta^{(0)}((n+1)h)\big|}_{\overset{\text{eq. (48)}}{\le}\, Kh}\; \underbrace{\big| r((n+1)h) \big|}_{\overset{\text{eq. (31)}}{\le}\, Kh + (1+Kh)\,\delta^{(1)}(nh)}, \tag{53}$$

and, analogously,

$$S_2(h) := \left\| \Psi^{(1)}_h(m(nh)) - \Phi^{(1)}_h\big(m^{(0)}(nh)\big) \right\| \overset{\text{eq. (28)}}{\le} \underbrace{\Delta^{-(1)}((n+1)h)}_{\overset{\text{eq. (29)}}{\le}\, Kh + \delta^{(1)}(nh)} + \underbrace{\big|\beta^{(1)}((n+1)h)\big|}_{\overset{\text{eq. (11)}}{\le}\, 1}\; \underbrace{\big| r((n+1)h) \big|}_{\overset{\text{eq. (31)}}{\le}\, Kh + (1+Kh)\,\delta^{(1)}(nh)}. \tag{54}$$
Insertion of Eqs. (53) and (54) into Eq. (52) yields

$$\left|\left|\left| \Psi_{P(nh),h}(m(nh)) - \Phi_h\big(m^{(0)}(nh)\big) \right|\right|\right|_h \le K h^2 + \delta^{(0)}(nh) + K h\, \delta^{(1)}(nh), \tag{55}$$
which, after recalling $\delta^{(0)}(nh) = 0$ and applying Lemma 11 to $\delta^{(1)}(nh)$, implies Eq. (51).
The previous lemma now implies a suitable prerequisite for a discrete Grönwall inequality.
Lemma 13 Under Assumptions 1 to 4 and for all sufficiently small $h > 0$,

$$|||\varepsilon((n+1)h)|||_h \le K h^2 + (1 + Kh)\, \|\varepsilon^{(0)}(nh)\|. \tag{56}$$
Proof We observe, by the triangle inequality for the norm $|||\cdot|||_h$, that

$$|||\varepsilon((n+1)h)|||_h = \left|\left|\left| \Psi_{P(nh),h}(m(nh)) - \Phi_h\big(x^{(0)}(nh)\big) \right|\right|\right|_h \le \left|\left|\left| \Psi_{P(nh),h}(m(nh)) - \Phi_h\big(m^{(0)}(nh)\big) \right|\right|\right|_h + \left|\left|\left| \Phi_h\big(m^{(0)}(nh)\big) - \Phi_h\big(x^{(0)}(nh)\big) \right|\right|\right|_h. \tag{57}$$
The proof is concluded by applying Lemma 12 to the first and Lemma 3 to the second summand of this bound (as well as recalling from Eq. (26) that $\|\varepsilon^{(0)}(nh)\| = \|m^{(0)}(nh) - x^{(0)}(nh)\|$).
Global convergence rates
With the above bounds in place, we can now prove global convergence rates.
Theorem 14 (Global truncation error) Under Assumptions 1 to 4 and for all sufficiently small $h > 0$,

$$\max_{n\in[T/h+1]} \|\varepsilon^{(0)}(nh)\| \le \max_{n\in[T/h+1]} |||\varepsilon(nh)|||_h \le K(T)\, h, \tag{58}$$
where K (T ) > 0 is a constant that depends on T , but not on h.
Remark 4 Theorem 14 not only implies that the truncation error $\|\varepsilon^{(0)}(nh)\|$ on the solution of Eq. (1) has global order h, but also (by Eq. (19)) that the truncation error $\|\varepsilon^{(1)}(nh)\|$ on the derivative is uniformly bounded by a constant K independent of h. The convergence rate of this theorem is sharp in the sense that it cannot be improved over all f satisfying Assumption 1, since it is one order worse than the local convergence rate implied by Theorem 8.
Proof By Lemma 13 and Eq. (19), the increment of the error per step satisfies

$$|||\varepsilon((n+1)h)|||_h - |||\varepsilon(nh)|||_h \overset{\text{eq. (56)}}{\le} K h^2 + K h\, \|\varepsilon^{(0)}(nh)\| \overset{\text{eq. (19)}}{\le} K h^2 + K h\, |||\varepsilon(nh)|||_h$$

$$\overset{\text{(tel. sum)}}{=} K h^2 + K h \left( |||\varepsilon(0)|||_h + \sum_{l=0}^{n-1} \big( |||\varepsilon((l+1)h)|||_h - |||\varepsilon(lh)|||_h \big) \right)$$

$$\overset{(|||\varepsilon(0)|||_h \le K h^2)}{\le} K h^2 + K h \sum_{l=0}^{n-1} \big( |||\varepsilon((l+1)h)|||_h - |||\varepsilon(lh)|||_h \big). \tag{59}$$
Now, by a special version of the discrete Grönwall inequality (Clark 1987), if $z_n$ and $g_n$ are sequences of real numbers (with $g_n \ge 0$), $c \ge 0$ is a nonnegative constant, and if

$$z_n \le c + \sum_{l=0}^{n-1} g_l\, z_l, \quad \text{for all } n \in \mathbb{N}, \tag{60}$$

then $z_n \le c \exp\big(\sum_{l=0}^{n-1} g_l\big)$ for all $n \in \mathbb{N}$. Application of this inequality to Eq. (59) with $z_n := |||\varepsilon((n+1)h)|||_h - |||\varepsilon(nh)|||_h$, $g_n := K h$, and $c := K h^2$ yields

$$|||\varepsilon((n+1)h)|||_h - |||\varepsilon(nh)|||_h \le K(T)\, h^2 \exp(nKh) \tag{61}$$

$$\overset{n \le T/h}{\le} K(T)\, h^2. \tag{62}$$
By another telescoping sum argument and $|||\varepsilon(0)|||_h \le K h^2$, we obtain

$$|||\varepsilon(nh)|||_h \overset{\text{(tel. sum)}}{=} \sum_{l=0}^{n-1} \big( |||\varepsilon((l+1)h)|||_h - |||\varepsilon(lh)|||_h \big) + |||\varepsilon(0)|||_h \tag{63}$$

$$\overset{\text{eq. (62)}}{\le} n\, K(T)\, h^2 + K h^2 \tag{64}$$

$$\overset{n \le T/h}{\le} K(T)\, h + K h^2 \tag{65}$$

$$\le K(T)\, h, \tag{66}$$

for all sufficiently small $h > 0$ (absorbing $K h^2$ into $K(T)\, h$ in the last step). Recalling that $\|\varepsilon^{(0)}(nh)\| \le |||\varepsilon(nh)|||_h$, by Eq. (19), concludes the proof.
Calibration of credible intervals
In PN, one way to judge calibration of a Gaussian output $\mathcal{N}(m, V)$ is to check whether the implied 0.95 credible interval $[m - 2\sqrt{V},\ m + 2\sqrt{V}]$ contracts at the same rate as the convergence rate of the posterior mean to the true quantity of interest. For the filter, this would mean that $\max_n \sqrt{P_{00}(nh)}$ should contract at the same rate as $\max_{n\in[T/h+1]} \|\varepsilon^{(0)}(nh)\|$ (recall its rates from Theorem 14). Otherwise, for a higher or lower rate of the interval, it would eventually be under- or overconfident as $h \to 0$. The following theorem shows, in light of the sharp bound Eq. (58) on the global error, that the credible intervals are well calibrated in this sense if $p \in [1, \infty]$.
Theorem 15 Under Assumption 2 and for $R \equiv K h^p$, $p \in [0, \infty]$, as well as sufficiently small $h > 0$,

$$\max_{n\in[T/h+1]} P^-_{00}(nh) \le K(T)\, h^{(p+1)\wedge 2}, \quad \text{and} \tag{67}$$

$$\max_{n\in[T/h+1]} P_{00}(nh) \le K(T)\, h^{(p+1)\wedge 2}. \tag{68}$$
Proof See Appendix J.
Numerical experiments
In this section, we empirically assess the following hypotheses:
(i) the worst-case convergence rates from Theorem 14 hold not only for q = 1 but also for q ∈ {2, 3} (see Sect. 9.1),
(ii) the convergence rates of the credible intervals from Theorem 15 hold true (see Sect. 9.2), and
(iii) Assumption 4 is necessary to get these convergence rates (see Sect. 9.3).
The three hypotheses are all supported by the experiments. These experiments are subsequently discussed in Sect. 9.4. Appendix D contains an additional experiment illustrating the convergence rates for the state misalignment δ from Lemma 11.
Global convergence rates for q ∈ {1, 2, 3}
We consider the following three test IVPs. Firstly, the linear ODE

$$\dot{x}(t) = \Lambda x(t), \quad \forall t \in [0, 10], \quad \text{with } \Lambda = \begin{pmatrix} 0 & -\pi \\ \pi & 0 \end{pmatrix} \text{ and } x(0) = (0, 1)^\top, \tag{69}$$

which has the harmonic oscillator

$$x(t) = e^{t\Lambda} x(0) = \begin{pmatrix} -\sin(t\pi) \\ \cos(t\pi) \end{pmatrix} \tag{70}$$

as a solution. Secondly, the logistic equation

$$\dot{x}(t) = \lambda_0\, x(t)\, \big(1 - x(t)/\lambda_1\big), \quad \forall t \in [0, 1.5], \tag{71}$$

with $(\lambda_0, \lambda_1) = (3, 1)$ and $x(0) = 0.1$, which has the logistic curve

$$x(t) = \frac{\lambda_1 \exp(\lambda_0 t)\, x(0)}{\lambda_1 + x(0)\big(\exp(\lambda_0 t) - 1\big)}. \tag{72}$$
And, thirdly, the FitzHugh–Nagumo model

$$\begin{pmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \end{pmatrix} = \begin{pmatrix} x_1(t) - x_1(t)^3 - x_2(t) \\ \frac{1}{\tau}\big(x_1(t) + a - b\, x_2(t)\big) \end{pmatrix}, \quad \forall t \in [0, 10], \tag{73}$$

with $(a, b, \tau) = (0.08, 0.07, 1.25)$ and $x(0) = (1, 0)^\top$, which does not have a closed-form solution. Its solution, which we approximate by Euler's method with a step size of $h = 10^{-6}$ for the below experiments, is depicted in Fig. 1. We numerically solve these three IVPs with the Gaussian ODE filter for multiple step sizes $h > 0$ and with a q-times IBM prior (i.e., θ = 0 in Eq. (5)) for q ∈ {1, 2, 3} and scale σ = 20. As a measurement model, we employ the minimal measurement variance R ≡ 0 and the maximal measurement variance $R \equiv K_R h^q$ (for h ≤ 1) that are permissible under Assumption 4, whose constant K > 0 is denoted explicitly by $K_R$ in this section. The resulting convergence rates of the global errors $\|m(T) - \mathbf{x}(T)\|$ are depicted in a work-precision diagram in Fig. 2. Figure 2 shows that these convergence rates of qth order hold true in the considered examples for values of up to q = 3 if $R \equiv K_R h^q$. In the case of R ≡ 0, even (q+1)th-order convergence rates appear to hold true for all three ODEs and q ∈ {1, 2, 3}. Note that it is more difficult to validate these convergence rates for q = 4, for all three test problems and small h > 0, since numerical instability can contaminate the analytical rates.
Calibration of credible intervals
To demonstrate the convergence rates of the posterior credible intervals proved in Theorem 15, we now restrict our attention to the case of q = 1 that was considered therein. As in Sect. 9.1, we numerically solve the IVPs Eqs. (69) and (71) with the Gaussian ODE filter with a once-integrated Brownian motion prior with fixed scale σ = 1. We again employ the minimal measurement variance R ≡ 0 and the maximal measurement variance $R \equiv K_R h^q$ (for h ≤ 1) that are permissible under Assumption 4 as a measurement model. Figure 3 depicts the resulting convergence rates in work-precision diagrams. As the parallel standard deviation (std. dev.) and $h^1$ convergence curves show, the credible intervals asymptotically contract at the rate of $h^1$ guaranteed by Theorem 15. In all four diagrams of Fig. 3, the global error shrinks at a faster rate than the width of the credible intervals. This is unsurprising for R ≡ 0, as we have already observed convergence rates of $h^{q+1}$ in this case. While this effect is less pronounced for $R \equiv K_R h^q$, it still results in underconfidence as $h \to 0$. Remarkably, the shrinking of the standard deviations seems to be 'adaptive' to the numerical error, by which we mean that, as long as the numerical error hardly decreases (up to $10^{1.75}$ evaluations of f), the standard deviation also stays almost constant, before adopting its $h^1$ convergence asymptotic (from $\approx 10^{2.00}$).
Necessity of Assumption 4
Having explored the asymptotic properties under Assumption 4 in Sects. 9.1 and 9.2, we now turn our attention to the question of whether this assumption is necessary to guarantee the convergence rates from Theorems 14 and 15. This question is of significance, because Assumption 4 is weaker than the R ≡ 0 assumption of the previous theoretical results (i.e., Proposition 1 and Theorem 1 in Schober et al. (2019)) and it is not self-evident that it cannot be further relaxed.
To this end, we numerically solve the logistic ODE, Eq. (71), with the Gaussian ODE filter with a once-integrated Brownian motion prior with fixed scale σ = 1 and measurement variance $R \equiv K_R h^{1/2}$, which is impermissible under Assumption 4, for increasing choices of $K_R$ from $0.00 \times 10^0$ to $1.00 \times 10^7$. In the same way as in Fig. 3, the resulting work-precision diagrams are plotted in Fig. 4. In contrast to the lower left diagram in Fig. 3, which presents the same experiment for $R \equiv K_R h^q$ (the maximal measurement variance permissible under Assumption 4), the rate of $h^2$, that is again observed for $K_R = 0$ in the first diagram, is already missed for $K_R = 1.00 \times 10^0$ in the second diagram. With growing constants, the convergence rates of the actual errors as well as the expected errors (standard deviation) decrease from diagram to diagram. In the center diagram with $K_R = 3.73 \times 10^3$, the rates are already slightly worse than the $h^1$ convergence rates guaranteed by Theorems 14 and 15 under Assumption 4, whereas, for $K_R = 5.00 \times 10^3$, the convergence rates in the lower left plot of Fig. 3 were still significantly better than $h^1$. For the greater constants up to $K_R = 1.00 \times 10^7$, the rates even become significantly lower. Notably, as in the lower right diagram of Fig. 3, the slope of the standard deviation curve matches the slope of the global error curve, as can be seen best in the lower right subfigure, thereby asymptotically exhibiting neither over- nor underconfidence. These experiments suggest that the convergence rates from Theorems 14 and 15 do not hold in general for $R \equiv K_R h^{1/2}$. Hence, it seems likely that Assumption 4 is indeed necessary for our results and cannot be further relaxed without lowering the implied worst-case convergence rates.
Discussion of experiments
Before proceeding to our overall conclusions, we close this section with a comprehensive discussion of the above experiments. First and foremost, the experiments in Sect. 9.1 suggest that Theorem 14, the main result of this paper, might be generalizable to q ∈ {2, 3} and potentially even higher q ∈ N, although unresolved issues with numerical instability for small step sizes prevent us from confidently asserting that these theoretical results would hold in practice for q ≥ 4. Moreover, we demonstrated the contraction rates of the posterior credible intervals from Theorem 15 and evidence for the necessity of Assumption 4 in Sects. 9.2 and 9.3. The asymptotics revealed by these experiments can be divided by the employed measurement model into three cases: the zero-noise case R ≡ 0, the permissible nonzero case $R \le K_R h^q$ (under Assumption 4), and the non-permissible case $R \gg K_R h^q$. First, if R ≡ 0, the diagrams in the left column of Fig. 2 reaffirm the $h^{q+1}$ convergence reported for q ∈ {1, 2} in Schober et al. (2019, Figure 4) and extend them to q = 3 (see Sect. 10 for a discussion of why we expect the above global convergence proofs to be extensible to q ≥ 2).
The contraction rates of the credible intervals, for q = 1, appear to be asymptotically underconfident in this case, as they contract faster than the error. This underconfidence is not surprising insofar as the posterior standard deviation is a worst-case bound for systems modeled by the prior, while the convergence proofs require smoothness of the solution of one order higher than sample paths from the prior. This is a typical result that highlights an aspect known to, but on the margins of, classical analysis: the class of problems for which the algorithm converges is rougher than the class on which convergence order proofs operate. How to remedy such overly cautious UQ remains an open research question in PN as well as in classical numerical analysis.
Secondly, in the case of R > 0, as permissible under Assumption 4, the convergence rates are slightly reduced compared to the case R ≡ 0, exhibiting convergence between $h^q$ and $h^{q+1}$. The asymptotic underconfidence of the credible intervals, however, is either reduced or completely removed, as depicted in the right column of Fig. 3. Thirdly, in the final case of an impermissibly large R > 0, the $h^q$ convergence speed guaranteed by Theorem 14 indeed does not necessarily hold anymore, as depicted in Fig. 4. Note, however, that even then the convergence rate is only slightly worse than $h^q$. The asymptotic UQ matches the observed global error in this case, as the parallel standard deviation and $h^1$ curves in all but the upper left R ≡ 0 diagram show.
Overall, the experiments suggest that, in the absence of statistical noise on f, a zero-variance measurement model yields the best convergence rates of the posterior mean. Maybe this was to be expected, as, in this case, R only models the inaccuracy from the truncation error, which ideally should be treated adaptively (Kersting and Hennig 2016, Section 2.2). The convergence rates of adaptive noise models should be assessed in future work. As the observed convergence rates in practice sometimes outperform the proved worst-case convergence rates, we believe that an average-case analysis of the filter in the spirit of Ritter (2000) may shed more light upon the expected practical performance. Furthermore, it appears that the UQ becomes asymptotically accurate as well as adaptive to the true numerical error as soon as R > 0 is large enough. This reinforces our hope that these algorithms will prove useful for IVPs when f is estimated itself (Hennig et al. 2015, Section 3(d)), thereby introducing an R > 0.
Conclusions
We presented a worst-case convergence rate analysis of the Gaussian ODE filter, comprising both local and global convergence rates. While local convergence rates of $h^{q+1}$ were shown to hold for all q ∈ N, IBM and IOUP priors, as well as any noise model R ≥ 0, our global convergence result is restricted to the case of q = 1, the IBM prior, and the fixed noise model $R \equiv K h^p$ with $p \in [1, \infty]$.
While a restriction of the noise model seems inevitable, we believe that the other two restrictions can be lifted:
In light of Theorem 8, global convergence rates for the IOUP prior might only require an additional assumption that ensures that all possible data sequences $\{y(nh);\ n = 1, \ldots, T/h\}$ (and thereby all possible qth-state sequences $\{m^{(q)}(nh);\ n = 0, \ldots, T/h\}$) remain uniformly bounded (see the discussion in Remark 2). For the case of q ≥ 2, it seems plausible that a proof analogous to the presented one would already yield global convergence rates of order $h^q$ (Footnote 3), as suggested for q ∈ {2, 3} by the experiments in Sect. 9.1. The orders of the predictive credible intervals can also help to intuitively explain the threshold of p = 1 (or maybe more generally: p = q; see Fig. 2) below which the performance of the filter is not as good, due to Eqs. (45)-(49): According to Kersting and Hennig (2016, Equation (20)), the 'true' (push-forward) variance on y(t) given the predictive distribution $\mathcal{N}(m^-(t), P^-(t))$ is equal to the integral of $f f^\top$ with respect to $\mathcal{N}(m^-(t), P^-(t))$, whose maximum over all time steps, by Eq. (67), has order $O(h^{\frac{p+1}{2}\wedge 1})$ if $f f^\top$ is globally Lipschitz, since $P^-(t)$ enters the argument of the integrand $f f^\top$, after a change of variable, only under a square root. Hence, the added 'statistical' noise R on the evaluation of f is of lower order than the accumulated 'numerical' variance $P^-(t)$ (thereby preventing numerical convergence) if and only if p < 1. Maybe this, in the spirit of Hennig et al. (2015, Subsection 3(d)), can serve as a criterion for vector fields f that are too roughly approximated for a numerical solver to output a trustworthy result, even as $h \to 0$.
Furthermore, the competitive practical performance of the filter, as numerically demonstrated in Schober et al. (2019, Section 5), might only be completely captured by an average-case analysis in the sense of Ritter (2000), where the average error is computed with respect to some distribution p(f), i.e., over a distribution of ODEs. To comprehend this idea, recall that the posterior filtering mean is the Bayes estimator with minimum mean squared error in linear dynamical systems with Gauss–Markov prior (as defined by the SDE Eq. (2)), i.e., when the data is not evaluations of f but real i.i.d. measurements, as well as in the special case of $\dot{x}(t) = f(t)$, when the IVP simplifies to a quadrature problem; see Solak et al. (2003) and O'Hagan (1991, Section 2.2), respectively. In fact, the entire purpose of the update step is to correct the prediction in the (on average) correct direction, while a worst-case analysis must assume that it corrects in the worst possible direction in every step, which we execute by the application of the triangle inequality in Eq. (28), resulting in a worst-case upper bound that is the sum of the worst-case errors from the prediction and update step. An analysis of the probabilities of 'good' vs. 'bad' updates might therefore pave the way for such an average-case analysis in the setting of this paper. Since, in practice, truncation errors of ODE solvers tend to be significantly smaller than the worst case, as mirrored by the experiments in Sect. 9, such an analysis might be useful for applications.
Lastly, we hope that the presented convergence analysis can lay the foundations for similar results for the novel ODE filters (extended KF, unscented KF, particle filter) introduced in Tronarp et al. (2019), and can advance the research on uncertainty-aware likelihoods for inverse problems by ODE filtering (Kersting et al. 2020, Section 3).
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
A Derivation of A and Q
As derived in Särkkä (2006, Section 2.2.6), the solution of the SDE Eq. (2), i.e.,

$$dX(t) = \begin{pmatrix} dX^{(0)}(t) \\ \vdots \\ dX^{(q-1)}(t) \\ dX^{(q)}(t) \end{pmatrix} = \underbrace{\begin{pmatrix} 0 & 1 & & 0 \\ \vdots & \ddots & \ddots & \\ 0 & & & 1 \\ c_0 & \cdots & \cdots & c_q \end{pmatrix}}_{=:F} \underbrace{\begin{pmatrix} X^{(0)}(t) \\ \vdots \\ X^{(q-1)}(t) \\ X^{(q)}(t) \end{pmatrix}}_{=X(t)} dt + \underbrace{\begin{pmatrix} 0 \\ \vdots \\ 0 \\ \sigma \end{pmatrix}}_{=:L} dB(t), \tag{74}$$

where we omitted the index j for simplicity, is a Gauss–Markov process with mean m(t) and covariance matrix P(t) given by
$$m(t) = A(t)\, m(0), \qquad P(t) = A(t)\, P(0)\, A(t)^\top + Q(t), \tag{75}$$
where the matrices $A, Q \in \mathbb{R}^{(q+1)\times(q+1)}$ are explicitly defined by

$$A(t) = \exp(tF), \tag{76}$$

$$Q(t) := \int_0^t \exp\big(F(t-\tau)\big)\, L\, L^\top \exp\big(F(t-\tau)\big)^\top\, d\tau. \tag{77}$$
Parts of the following calculation can be found in Magnani et al. (2017). If we choose $c_0, \ldots, c_{q-1} = 0$ and $c_q = -\theta$ (for θ ≥ 0) in Eq. (74), the unique strong solution of the SDE is a q-times IOUP, if θ > 0, and a q-times IBM, if θ = 0; see, e.g., Karatzas and Shreve (1991, Chapter 5: Example 6.8). By Eq. (77) and

$$\big[(tF)^k\big]_{i,j} = t^k\, \mathbb{I}_{j-i=k} + (-\theta)^{k+i-q}\, t^k\, \mathbb{I}_{\{j=q,\ i+k\ge q\}}, \tag{78}$$
it follows that

$$A(t)_{ij} = \sum_{k=0}^{\infty} \frac{\big[(tF)^k\big]_{i,j}}{k!} = \begin{cases} \mathbb{I}_{i\le j}\, \frac{t^{j-i}}{(j-i)!}, & \text{if } j \ne q, \\[1ex] \frac{1}{(-\theta)^{q-i}} \displaystyle\sum_{k=q-i}^{\infty} \frac{(-\theta t)^k}{k!}, & \text{if } j = q, \end{cases} = \begin{cases} \mathbb{I}_{i\le j}\, \frac{t^{j-i}}{(j-i)!}, & \text{if } j \ne q, \\[1ex] \frac{t^{q-i}}{(q-i)!} - \theta \displaystyle\sum_{k=q+1-i}^{\infty} \frac{(-\theta)^{k+i-q-1}\, t^k}{k!}, & \text{if } j = q. \end{cases} \tag{79}$$
Analogously, it follows that

$$\big[\exp\big(F(t-\tau)\big)\big]_{ij} = \begin{cases} \mathbb{I}_{i\le j}\, \frac{(t-\tau)^{j-i}}{(j-i)!}, & \text{if } j \ne q, \\[1ex] \frac{(t-\tau)^{q-i}}{(q-i)!} - \theta \displaystyle\sum_{k=q+1-i}^{\infty} \frac{(-\theta)^{k+i-q-1}\, (t-\tau)^k}{k!}, & \text{if } j = q. \end{cases} \tag{80}$$
If we insert Eq. (80) into Eq. (77), then we obtain, by the sparsity of L, that

$$Q(t)_{ij} = \frac{\sigma^2}{(-\theta)^{2q-i-j}} \int_0^t \left( \sum_{k=q-i}^{\infty} \frac{(-\theta\tau)^k}{k!} \right) \left( \sum_{l=q-j}^{\infty} \frac{(-\theta\tau)^l}{l!} \right) d\tau, \tag{81}$$
and the dominated convergence theorem (with dominating function $\tau \mapsto e^{2\theta\tau}$) yields

$$Q(t)_{ij} = \frac{\sigma^2}{(-\theta)^{2q-i-j}} \sum_{k=q-i}^{\infty} \sum_{l=q-j}^{\infty} \int_0^t \frac{(-\theta\tau)^{k+l}}{k!\, l!}\, d\tau = \frac{\sigma^2}{(-\theta)^{2q-i-j}} \sum_{k=q-i}^{\infty} \sum_{l=q-j}^{\infty} \frac{(-\theta)^{k+l}\, t^{k+l+1}}{(k+l+1)\, k!\, l!}. \tag{82}$$
Now, by extracting the first term and noticing that the rest of the series is in $\Theta(t^{2q+2-i-j})$, it follows that

$$Q(t)_{ij} = \sigma^2\, \frac{t^{2q+1-i-j}}{(2q+1-i-j)\,(q-i)!\,(q-j)!} + \Theta\big(t^{2q+2-i-j}\big). \tag{83}$$
B Extension to x with dependent dimensions
The algorithm in Sect. 2.2 employs a prior X with independent dimensions $X_j = \big(X^{(0)}_j, \ldots, X^{(q)}_j\big)^\top$, $j \in [d]$, by Eq. (2). While this constitutes a loss of generality for our new theoretical results, which do not immediately carry over to the case of x with dependent dimensions, it is not a restriction on the class of models the algorithm can employ. To construct such a prior X, we first stack its dimensions into the random vector $X = (X_0, \ldots, X_{d-1})^\top$, choose symmetric positive semi-definite matrices $K_x, K_\varepsilon \in \mathbb{R}^{d\times d}$, and define, using the Kronecker product ⊗, its law according to the SDE

$$dX(t) = [K_x \otimes F]\, X(t)\, dt + [K_\varepsilon \otimes L]\, dB(t), \tag{84}$$
with initial condition $X(0) \sim \mathcal{N}(m(0), P(0))$, mean $m(0) \in \mathbb{R}^{d(q+1)}$ and covariance matrix $P(0) \in \mathbb{R}^{d(q+1)\times d(q+1)}$, as well as an underlying d-dimensional Brownian motion B (independent of X(0)). Now, insertion of $K_x \otimes F$ and $K_\varepsilon \otimes L$ for F and L into Eq. (77) yields new predictive matrices $\tilde{A}$ and $\tilde{Q}$. If we now choose $K_x = I_d$ and $K_\varepsilon = I_d$, substitute $\tilde{A}$ and $\tilde{Q}$ for A and Q in Eqs. (9) and (10), and use the $d(q+1)$-dimensional GP X from Eq. (84) with $m(0) \in \mathbb{R}^{d(q+1)}$ and $P(0) \in \mathbb{R}^{d(q+1)\times d(q+1)}$ as a prior, we have equivalently defined the version of Gaussian ODE filtering with independent dimensions from Sect. 2.2. If we, however, choose different symmetric positive semi-definite matrices for $K_x$ and $K_\varepsilon$, we introduce, via $\tilde{A}$ and $\tilde{Q}$, a correlation in the development of the solution dimensions $(x_0, \ldots, x_{d-1})^\top$ as well as the error dimensions $(\varepsilon_0, \ldots, \varepsilon_{d-1})^\top$, respectively. Note that, while $K_\varepsilon$ plays a similar role as $C_h$ in Conrad et al. (2017, Assumption 1) in correlating the numerical errors, the matrix $K_x$ additionally introduces a correlation of the numerical estimates, that is m, along the time axis. Even more flexible correlation models (over all modeled derivatives) can be employed by inserting arbitrary matrices (of the same dimensionality) for $K_x \otimes F$ and $K_\varepsilon \otimes L$ in Eq. (84), but such models seem hard to interpret. For future research, it would be interesting to examine whether such GP models with dependent dimensions are useful in practice. There are first publications (Xiaoyue et al. 2018; Gessner et al. 2019) on this topic for integrals, but not yet for ODEs.
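A minimal sketch of this construction (illustrative only; the matrix values for $K_x$ and $K_\varepsilon$ are made up):

```python
import numpy as np

# Correlated prior from Eq. (84): plug K_x (x) F and K_eps (x) L into
# Eq. (77) via Kronecker products (here d = 2, q = 1).
d, q, sigma = 2, 1, 1.0
F = np.diag(np.ones(q), k=1)
L = np.zeros((q + 1, 1)); L[q, 0] = sigma
K_x   = np.array([[1.0, 0.3], [0.3, 1.0]])  # correlates solution dimensions
K_eps = np.array([[1.0, 0.5], [0.5, 1.0]])  # correlates error dimensions
F_tilde = np.kron(K_x, F)                   # drift in R^{d(q+1) x d(q+1)}
L_tilde = np.kron(K_eps, L)                 # diffusion in R^{d(q+1) x d}
# With K_x = K_eps = I_d this recovers the independent-dimensions model.
print(F_tilde.shape, L_tilde.shape)         # (4, 4) (4, 2)
```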
C Illustrative example
To illustrate the algorithm defined in Sect. 2.2, we apply it to a special case of the Riccati equation (Davis 1962, p. 73),

$$\frac{dx}{dt}(t) = f(x(t)) = -\frac{x(t)^3}{2}, \qquad x(0) = 1, \tag{85}$$

with solution

$$x(t) = (t+1)^{-1/2}, \tag{86}$$
with step size h = 0.1, measurement noise R = 0.0 (for simplicity), as well as prior hyperparameters q = 1, $\sigma^2 = 10.0$, and $c_i = 0$ for all $i \in [q+1]$ (recall Eq. (2)), i.e., with a once-integrated Brownian motion prior whose drift and diffusion matrices are, by Eq. (8), given by

$$A(h) = \begin{pmatrix} 1 & h \\ 0 & 1 \end{pmatrix}, \qquad Q(h) = \begin{pmatrix} 1/300 & 1/20 \\ 1/20 & 1 \end{pmatrix}. \tag{87}$$
As the ODE Eq. (85) is one-dimensional (i.e., d = 1), the dimension index $j \in [d]$ is omitted in this section. Since the initial value and derivative are certain at $x(0) = 1$ and $\dot{x}(0) = f(x_0) = -1/2$, our prior GP is initialized with a Dirac distribution (i.e., $X(0) = (X^{(0)}(0), X^{(1)}(0))^\top \sim \delta_{(x_0, f(x_0))} = \delta_{(1, -1/2)}$). Therefore, $m(0) = (1, -1/2)^\top$ and $P(0) = 0 \in \mathbb{R}^{2\times 2}$ for the initial filtering mean and covariance matrix. Now, the Gaussian ODE filter computes the first integration step by executing the prediction step, Eqs. (9) and (10):

$$m^-(h) = A(h)\, m(0) = \begin{pmatrix} m^{(0)}(0) + h\, m^{(1)}(0) \\ m^{(1)}(0) \end{pmatrix} = \begin{pmatrix} 19/20 \\ -1/2 \end{pmatrix}, \quad \text{and} \tag{88}$$

$$P^-(h) = 0 + Q(h) = \begin{pmatrix} 1/300 & 1/20 \\ 1/20 & 1 \end{pmatrix}. \tag{89}$$
Note that, for all $i \in [q+1]$, $m^{-,(i)}(h)$ is obtained by a $(q-i)$th-order Taylor expansion of the state $m(0) = (x_0, f(x_0))^\top \in \mathbb{R}^{q+1}$.
Based on this prediction, the data is then generated by

$$y(h) = f\big(m^{-,(0)}(h)\big) \overset{\text{eq. (88)}}{=} f(19/20) \overset{\text{eq. (85)}}{=} -6859/16000, \tag{90}$$
with variance R = 0.0. In the subsequent update step, Eqs. (11) to (15), a Bayesian conditioning of the predictive distribution, Eqs. (88) and (89), on this data is executed:

$$\beta(h) = \big(\beta^{(0)}(h), \beta^{(1)}(h)\big)^\top = \left( \frac{P^-(h)_{01}}{(P^-(h))_{11} + R},\ \frac{P^-(h)_{11}}{(P^-(h))_{11} + R} \right)^\top \overset{\text{eq. (89)}}{=} \left( \frac{1}{20},\ 1 \right)^\top, \tag{91}$$

$$r(h) = y(h) - m^{-,(1)}(h) = -\frac{6859}{16000} + \frac{1}{2} = \frac{1141}{16000}, \tag{92}$$

$$m(h) = m^-(h) + \beta(h)\, r(h) = \left( \frac{305141}{320000},\ -\frac{6859}{16000} \right)^\top. \tag{93}$$

The first step $0 \to h$ hence ends with the state misalignment (recall Eq. (25))

$$\delta^{(1)}(h) \overset{\text{eq. (25)}}{=} \left| m^{(1)}(h) - f\big(m^{(0)}(h)\big) \right| \tag{94}$$

$$= \left| -\frac{6859}{16000} + \frac{1}{2} \left( \frac{305141}{320000} \right)^3 \right| \tag{95}$$

$$\approx 0.00485 > 0, \tag{96}$$

which confirms the exposition on the possibility of $\delta^{(i)} > 0$ from Sect. 4. Note that δ tends to increase with R; e.g., if R = 1.0 in the above example, then $\delta^{(1)}(h) \approx 0.03324$.
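The numbers of the example above can be reproduced with the hypothetical `ode_filter_step` helper from the sketch in Sect. 2.2:

```python
import numpy as np

# Reproduce the first step of the Appendix C example (R = 0).
f = lambda x: -x**3 / 2
A = np.array([[1.0, 0.1], [0.0, 1.0]])
Q = np.array([[1/300, 1/20], [1/20, 1.0]])
m, P = np.array([1.0, -0.5]), np.zeros((2, 2))
m, P = ode_filter_step(f, m, P, A, Q, R=0.0)
delta1 = abs(m[1] - f(m[0]))  # state misalignment, Eq. (25)
print(delta1)                 # ~0.00485, matching Eq. (96)
```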
D Experiment: global convergence of state misalignments δ

Figure 5 depicts the global convergence of the state misalignment $\delta^{(1)}(T)$ in the above example, Eq. (85), as detailed in Appendix C, for q ∈ {1, 2, 3}. The plotting is analogous to Fig. 2. The resulting convergence rates of $h^{q+1}$ confirm Lemma 11 and suggest that it may also be generalizable to q ∈ {2, 3, . . .}.
F Proof of Lemma 5

where $I_1$, $I_2$, and $I_3$ are defined and bounded as follows, using Assumption 1 and Lemma 1:

$$I_1(h) := \left\| f\left( \sum_{k=0}^{q} \frac{h^k}{k!}\, m^{(k)}(nh) + K\theta\, m^{(q)}(nh)\, h^{q+1} \right) - f\left( \sum_{k=0}^{q} \frac{h^k}{k!}\, \Phi^{(k)}_0\big(m^{(0)}(nh)\big) \right) \right\| \le L \sum_{k=0}^{q} \frac{h^k}{k!}\, \delta^{(k)}(nh) + L K\theta\, \|m^{(q)}(nh)\|\, h^{q+1}, \tag{101}$$

$$I_2(h) := \left\| f\left( \sum_{k=0}^{q} \frac{h^k}{k!}\, \Phi^{(k)}_0\big(m^{(0)}(nh)\big) \right) - f\left( \Phi^{(0)}_h\big(m^{(0)}(nh)\big) \right) \right\| \le L \left\| \sum_{k=0}^{q} \frac{h^k}{k!}\, \Phi^{(k)}_0\big(m^{(0)}(nh)\big) - \Phi^{(0)}_h\big(m^{(0)}(nh)\big) \right\| \overset{\text{eq. (16)}}{\le} K h^{q+1}, \tag{102}$$
and
$$I_3(h) := \left\| \Phi^{(1)}_h\big(m^{(0)}(nh)\big) - \sum_{k=1}^{q} \frac{h^{k-1}}{(k-1)!}\, \Phi^{(k)}_0\big(m^{(0)}(nh)\big) \right\| \overset{\text{eq. (16)}}{\le} K h^q. \tag{103}$$
Inserting Eqs. (101) to (103) into the above decomposition concludes the proof of Lemma 5.
G Proof of Lemma 9
Proof Let $\tilde{u}_0 = u^*$ and $\tilde{u}_n = T_n(\tilde{u}_{n-1})$, for $n \in \mathbb{N}$. Then,

$$d(u^*, x_n) \le \underbrace{d(u^*, u_n)}_{\to 0} + \underbrace{d(u_n, \tilde{u}_n)}_{=:\, a_n} + \underbrace{d(\tilde{u}_n, x_n)}_{\to 0}, \tag{104}$$

where the last summand goes to zero by

$$d(\tilde{u}_n, x_n) = d\big((T_n \circ \cdots \circ T_1)(u^*),\ (T_n \circ \cdots \circ T_1)(x_0)\big) \le \bar{L}^n\, d(u^*, x_0) \to 0, \quad \text{as } n \to \infty.$$

Hence, it remains to show that $\lim_{n\to\infty} a_n = 0$. The $\bar{L}$-Lipschitz continuity of $T_n$ and the triangle inequality yield

$$a_n = d\big(T_n(u_n), T_n(\tilde{u}_{n-1})\big) \le \bar{L}\, \big( d(u_n, u_{n-1}) + d(u_{n-1}, \tilde{u}_{n-1}) \big) = \bar{L}\, a_{n-1} + b_{n-1}, \tag{105}$$

with $b_{n-1} := \bar{L}\, d(u_n, u_{n-1})$. Since the convergent sequence $u_n$ is in particular a Cauchy sequence, $\lim_{m\to\infty} b_m = 0$ and, hence, $0 \le \lim_{n\to\infty} a_n = \limsup_{n\to\infty} a_n \le 0$. Hence, $\lim_{n\to\infty} a_n = 0$.
H Proof of Proposition 10
Proof Again, w.l.o.g. d = 1. We prove the claims in the following order: Eqs. (39), (45), (40), (46), (41), (43), (44), (42), (49), and (48). For Eq. (39), observe that $P^{-,\infty}_{11}$ is a fixed point of the recursion for $P^-_{11}$: if $P^-_{11}(nh)$ equals the right-hand side of Eq. (39), then, by Eqs. (15) and (10),

$$P_{11}(nh) = P^-_{11}(nh)\left(1 - \frac{P^-_{11}(nh)}{P^-_{11}(nh) + R}\right) = \frac{\big(\sigma^2 h + \sqrt{4\sigma^2 Rh + \sigma^4 h^2}\big)\, R}{\sigma^2 h + \sqrt{4\sigma^2 Rh + \sigma^4 h^2} + 2R}, \quad \text{and} \tag{107}$$

$$P^-_{11}((n+1)h) = P_{11}(nh) + \sigma^2 h \overset{\text{eq. (107)}}{=} \frac{1}{2}\left(\sigma^2 h + \sqrt{4\sigma^2 Rh + \sigma^4 h^2}\right) = P^-_{11}(nh). \tag{108}$$
After combining Eqs. (107) and (108), the recursion for $P^-_{11}$ is given by

$$P^-_{11}((n+1)h) = \underbrace{\frac{R}{P^-_{11}(nh) + R}}_{=:\, \alpha(nh)}\, P^-_{11}(nh) + \sigma^2 h \tag{109}$$

$$=: \bar{T}\big(P^-_{11}(nh)\big). \tag{110}$$
Since R and $P^-_{11}(nh)$ are positive variances, we know that $\inf_{n\in[T/h+1]} P^-_{11}(nh) \ge \sigma^2 h$, and hence $\max_{n\in[T/h+1]} \alpha(nh) \le R/(\sigma^2 h + R) < 1$. Hence, $\bar{T}$ is a contraction. By the BFT, $P^{-,\infty}_{11}$ is the unique (attractive) fixed point of $\bar{T}$, and the sequence $\{|P^-_{11}(nh) - P^{-,\infty}_{11}|\}_n$ is strictly decreasing. Since, by Eqs. (15) and (6) with θ = 0 and Assumption 2,
$$P^-_{11}(h) = P_{11}(0) + \sigma^2 h \le K h, \tag{111}$$
we can, using the reverse triangle inequality and the (by the BFT) strictly decreasing sequence $\{|P^-_{11}(nh) - P^{-,\infty}_{11}|\}_n$, derive Eq. (45):

$$P^-_{11}(nh) \le \left| P^-_{11}(nh) - P^{-,\infty}_{11} \right| + P^{-,\infty}_{11} \le \left| P^-_{11}(h) - P^{-,\infty}_{11} \right| + P^{-,\infty}_{11} \tag{112}$$

$$\le \underbrace{P^-_{11}(h)}_{\le Kh} + 2\underbrace{P^{-,\infty}_{11}}_{\le K h^{1\wedge\frac{p+1}{2}},\ \text{by eq. (39)}} \tag{113}$$

$$\le K h^{1\wedge\frac{p+1}{2}}, \tag{114}$$
which is sharp because it is estimated against the maximum of the initial $P^-_{11}$ and the steady state, both of which can be attained. Recall that, by Eq. (107), $P_{11}(nh)$ depends continuously on $P^-_{11}(nh)$; hence, inserting Eq. (39) into Eq. (107) yields Eq. (40) (the necessary computation was already performed in Eq. (107)). Since $P_{11}(nh)$ monotonically increases in $P^-_{11}(nh)$ (because the derivative of $P_{11}(nh)$ with respect to $P^-_{11}(nh)$ is non-negative for all $P^-_{11}(nh)$ due to R ≥ 0; see Eq. (107)), we obtain Eq. (46):

$$P_{11}(nh) \overset{\text{eq. (107)}}{\le} \frac{\big(\max_n P^-_{11}(nh)\big)\, R}{\max_n P^-_{11}(nh) + R} \tag{115}$$

$$\overset{R \sim h^p}{\le} \frac{K h^{1\wedge\frac{p+1}{2}}\, K h^p}{K h^{1\wedge\frac{p+1}{2}} + K h^p} \tag{116}$$

$$\le \frac{K h^{(p+1)\wedge\frac{3p+1}{2}}}{K h^{1\wedge p}} \tag{117}$$

$$\le \begin{cases} K h^{\frac{p+1}{2}}, & \text{if } p \le 1, \\ K h^{p}, & \text{if } p \ge 1, \end{cases} \tag{118}$$

$$\le K h^{p \vee \frac{p+1}{2}}, \tag{119}$$
which is sharp because the steady state, Eq. (40), has these rates. For Eq. (41), we again first construct the following recursion (from Eqs. (10), (15), and (6) with θ = 0):

$$P^-_{01}((n+1)h) = \underbrace{\frac{R}{P^-_{11}(nh) + R}}_{=\, \alpha(nh)}\, P^-_{01}(nh) + \underbrace{\left( P_{11}(nh) + \frac{\sigma^2 h}{2} \right) h}_{=:\, g(nh)} \tag{120}$$

$$=: T_n\big(P^-_{01}(nh)\big), \tag{121}$$
where the α(nh)-Lipschitz continuous contractions $T_n$ satisfy the prerequisites of Lemma 9, since $\sup_n \alpha(nh) \le R/(\sigma^2 h + R) < 1$ (due to $\inf_n P^-_{11}(nh) \ge \sigma^2 h$) and the sequence of fixed points $(1 - \alpha(nh))^{-1} g(nh)$ of $T_n$ (defined by the BFT) converges. Both α(nh) and g(nh) depend continuously on $P^-_{11}(nh)$. Hence, insertion of the limits Eqs. (39) and (40) yields

$$\lim_{n\to\infty} (1 - \alpha(nh))^{-1} = \frac{\sigma^2 h + \sqrt{4\sigma^2 Rh + \sigma^4 h^2} + 2R}{\sigma^2 h + \sqrt{4\sigma^2 Rh + \sigma^4 h^2}}, \tag{122}$$
and

$$\lim_{n\to\infty} g(nh) = \frac{\sigma^4 h^2 + (2R + \sigma^2 h)\sqrt{4\sigma^2 Rh + \sigma^4 h^2} + 4R\sigma^2 h}{2\big(\sigma^2 h + \sqrt{4\sigma^2 Rh + \sigma^4 h^2} + 2R\big)}\, h. \tag{123}$$
Now, application of Lemma 9 implies convergence of the recursion Eq. (121) to the product of these two limits, Eqs. (122) and (123), i.e., Eq. (41):

$$\lim_{n\to\infty} P^-_{01}(nh) = \lim_{n\to\infty} (1 - \alpha(nh))^{-1} \times \lim_{n\to\infty} g(nh) = \frac{\sigma^4 h^2 + (2R + \sigma^2 h)\sqrt{4\sigma^2 Rh + \sigma^4 h^2} + 4R\sigma^2 h}{2\big(\sigma^2 h + \sqrt{4\sigma^2 Rh + \sigma^4 h^2}\big)}\, h.$$
For Eqs. (43) and (44), we can simply insert Eqs. (39) and (41) for $P^-_{11}(nh)$ and $P^-_{01}(nh)$, respectively, into their definition, Eq. (11):

$$\beta^{\infty,(0)} \overset{\text{eq. (11)}}{=} \frac{P^{-,\infty}_{01}}{P^{-,\infty}_{11} + R} \qquad \text{and} \qquad \beta^{\infty,(1)} \overset{\text{eq. (11)}}{=} \frac{P^{-,\infty}_{11}}{P^{-,\infty}_{11} + R}. \tag{124}$$
These steady states simplify to the expressions stated in Eqs. (43) and (44). For Eq. (42), note that, by Eqs. (15) and (11),

$$P_{01}(nh) = P^-_{01}(nh) - \frac{P^-_{01}(nh)\, P^-_{11}(nh)}{P^-_{11}(nh) + R} = \frac{R\, P^-_{01}(nh)}{P^-_{11}(nh) + R} = R\, \beta^{(0)}(nh), \tag{127}$$

which, since $P_{01}(nh)$ depends continuously on $\beta^{(0)}(nh)$, implies the unique (attractive) fixed point $P^{\infty}_{01} = R\, \beta^{\infty,(0)}$, which yields Eq. (42). Now, exploiting Eq. (11) and $\inf_n P^-_{11}(nh) \ge \sigma^2 h$ yields Eq. (49):
$$1 - \beta^{(1)}(nh) = \frac{R}{P^-_{11}(nh) + R} \tag{129}$$

$$\le \frac{R}{\sigma^2 h + R} \tag{130}$$

$$\overset{R \sim h^p}{=} \frac{K h^p}{K h + K h^p} \tag{131}$$

$$\le K h^{(p-1)\vee 0}, \tag{132}$$
which is sharp because $\inf_n P^-_{11}(nh) \ge K h$ is sharp (due to Eqs. (10) and (6)). And since, for $\beta^{(0)}$, maximizing over both $P^-_{01}(nh)$ and $P^-_{11}(nh)$ at the same time does not yield a sharp bound (while above, in Eqs. (129) and (119), the maximization over just one quantity does), we prove Eq. (48) by inductively showing that

$$\beta^{(0)}(nh) \le \bar{\beta}\, h, \quad \forall n \in \mathbb{N}, \quad \text{with } \bar{\beta} := \left( \frac{2K_0}{\sigma^2} + \frac{1}{2} \right) \vee 1 > 0, \tag{133, 134}$$
where $K_0 > 0$ is the constant from Assumption 2. The constant $\bar{\beta}$ is independent of n and a possible choice for K in Eq. (48). The base case (n = 1) follows from

$$\beta^{(0)}(h) = \frac{P^-_{01}(h)}{P^-_{11}(h) + R} \le \frac{(2K_0 + \sigma^2/2)\, h^2}{\sigma^2 h} = \left( \frac{2K_0}{\sigma^2} + \frac{1}{2} \right) h \le \bar{\beta}\, h, \tag{135}$$

using $P^-_{01}(h) = P_{01}(0) + h\, P_{11}(0) + \sigma^2 h^2/2 \le (2K_0 + \sigma^2/2)\, h^2$ (by Assumption 2) and $P^-_{11}(h) \ge \sigma^2 h$.
In the following inductive step (n−1 → n) we, to avoid notational clutter, simply denote $P^-((n-1)h)_{ij}$ by $P^-_{ij}$, which leaves us, by Eqs. (11), (10), and (15), with the following term to bound:

$$\beta^{(0)}(nh) = \frac{P^-_{01}(nh)}{P^-_{11}(nh) + R} \tag{139}$$

$$\le \frac{P^-_{01}\,\alpha(nh) + h\, P^-_{11}\,\alpha(nh) + \frac{\sigma^2}{2} h^2}{P^-_{11}\,\alpha(nh) + \sigma^2 h + R}, \tag{140}$$
with $\alpha(nh) = 1 - \frac{P^-_{11}}{P^-_{11} + R} = \frac{R}{P^-_{11} + R}$. Application of the inductive hypothesis (i.e., $P^-_{01} \le \bar{\beta}\, h\, (P^-_{11} + R)$) yields, after some rearrangements, that

$$\beta^{(0)}(nh) \le \frac{\bar{\beta}\,(P^-_{11} + R)\, h\,\alpha(nh) + h\, P^-_{11}\,\alpha(nh) + \frac{\sigma^2}{2} h^2}{P^-_{11}\,\alpha(nh) + \sigma^2 h + R} = \frac{2(\bar{\beta} + 1)\Lambda_1 + \Lambda_2 + 2\bar{\beta}\Lambda_3}{4\Lambda_1 + 2\Lambda_2 + 2\Lambda_3}\, h,$$
with Λ 1 := 2P − 11 R, Λ 2 := σ 2 h P − 11 + R , and Λ 3 := R 2 . Now, application ofβ ≥ 1 yields |β (0) (nh)| ≤βh, which completes the inductive proof of Eq. (133). This implies Eq. (48), which is sharp because it is the order of β (0) in the steady state Eq. (43)
I Proof of Lemma 11
Proof For all n ∈ [T /h + 1], we can estimate
δ (1) (nh) = m (1) (nh) − f m (0) (nh) (142) = Ψ (1) h (m((n − 1)h) − f m (0) (nh)(143)
≤ L β (0) (nh) r (nh)
eq. (48)
≤ K h r (nh) .
Altogether, after inserting these bounds into Eq. (144),
+ K h (( p−1)∨0)∧1 + K h ( p∨1)∧2 δ (1) ((n − 1)h) =:T δ (1) ((n − 1)h) .(154)
As p ≥ 1 (by Assumption 4), BFT is applicable for all sufficiently small h > 0 such that K h (( p−1)∨0)∧1 + K h ( p∨1)∧2 < 1 and soT is a contraction with a unique fixed point δ ∞ of order
δ ∞ ≤ K h ( p∨1)∧2 1 − K h (( p−1)∨0)∧1 + K h ( p∨1)∧2 (155) ≤ K h ( p∨1)∧2 .(156)
We proceed with showing by induction that, for all n ∈ [T /h],
δ (1) (nh) ≤ δ (1) (0) ∨ 2δ ∞ .(157)
The base case n = 0 is trivial. For the inductive step, we distinguish two cases. If δ (1) ((n − 1)h) ≤ δ ∞ , then T (δ (1) ((n − 1)h)) < 2δ ∞ , sincē T (δ (1) ((n − 1)h)) − δ ∞ ≤ δ ∞ −T (δ (1) ((n − 1)h)) (158)
< δ ∞ − δ (1) ((n − 1)h) ≥0 (159) ≤ δ ∞ .(160)
In this case,
δ (1) (nh) eq. (154) ≤T δ (1) ((n − 1)h) (161) < 2δ ∞ (162) ≤ δ (1) (0) ∨ 2δ ∞ ,(163)
where the last inequality follows from the inductive hypothesis. In the other case, namely δ (1) ((n − 1)h) > δ ∞ , it follows that δ (1) (nh) − δ ∞ eq. (154) ≤T (δ (1) ((n − 1)h)) − δ ∞ (164)
≤ T (δ (1) ((n − 1)h)) − δ ∞ (165) ≤ δ (1) ((n − 1)h) − δ ∞ (166) = δ (1) ((n − 1)h) − δ ∞ ,(167)
which, after adding δ ∞ and applying the inductive hypothesis, completes the inductive step. Hence, Eq. (157) holds. Since this bound is uniform in n, inserting the orders of δ (1) (0) from Lemma 7 and of δ ∞ from Eq. (155) yields Eq. (50).
J Proof of Theorem 15
Proof Again, w.l.o.g. d = 1. We first show that the bounds Eqs. (67) and (68) hold and then argue that they are sharp. The recursion for P − 00 (nh) is given by P − 00 ((n + 1)h) eqs. (10), (6) = P 00 (nh) + 2h P 01 (nh)
+ h 2 P 11 (nh) + σ 2 3 h 3 (168) = P − 00 (nh) − β (0) (nh)P − 01 (nh) + σ 2 3 h 3 , + 2h Rβ (0) (nh) + h 2 Rβ (1) (nh)(169)
where we used P 00 (nh) = P − 00 (nh) − β (0) P − 01 (nh) and P 11 (nh) = Rβ (1) (nh) (both due to Eq. (15) and Eq. (11)), as well as P 01 (nh) = Rβ (0) (nh) (see Eq. (127)), for the last equality in Eq. (169). By P − 01 (nh) ≤ P 01 (nh) and |β (1) | ≤ 1 (due to Eq. (11)), application of the triangle inequality to Eq. (169) yields P − 00 ((n + 1)h) ≤ P − 00 (nh) + β (0) (nh) |P 01 (nh)|
+ 2h R β (0) (nh) + h 2 R + σ 2 3 h 3 ,(170)
which, by Eqs. (47) and (48), implies P − 00 ((n + 1)h) ≤ P − 00 (nh) + K h ( p+2)∧3 .
This, by N = T /h, implies Eq. (67). Since P 00 (nh) ≤ P − 00 (nh), this bound is also valid for P 00 , i.e., Eq. (68) holds. The bound Eq. (67) is sharp, since, e.g., when the covariance matrices are in the steady state, the covariance matrix keeps growing by a rate of K h ( p+2)∧3 for all sufficiently small h > 0, since the only negative summand in Eq. (169) is given by
β ∞,(0) P ∞ 01 = S 1 (h) × S 2 (h) × S 3 (h) ∈ Θ(h 5∧ 3 p+7 2 ),(172)
where the factors have, due to R ≡ K h p , the following orders:
S 1 (h) = 1 2 h 2 ∈ Θ(h 2 ),(173)S 2 (h) = (σ 2 h) 2 + 4(σ 2 h)R, ∈ Θ(h 1∧ p+1 2 ),(174)
S 3 (h) = ((σ 2 h) + 2R) (σ 2 h) 2 + 4(σ 2 h)R + (σ 2 h) 2 + 4(σ 2 h)R ∈ Θ(h 2∧( p+1) ).
The orders in Eqs. (173) to (175) imply the order in Eq. (172). Hence, the sole negative summand −β ∞,(0) P ∞ 01 of Eq. (169) is in Θ(h 5∧ 3 p+7
2 ) and thereby of higher order than the remaining positive summands of Eq. (169):
2h R ∈Θ(h p+1 ) β ∞,(0) (nh) ∈Θ(h) ∈ Θ(h p+2 ),(176)h 2 R ∈Θ(h p+2 ) β ∞,(1) (nh) ∈Θ(1), by eq. (44) ∈ Θ(h p+2 ),(177)σ 2 3 h 3 ∈ Θ h 3 .(178)
Hence, for all sufficiently small h > 0, it still holds in the steady state that P − 00 ((n + 1)h) − P − 00 (nh) ≥ K h ( p+2)∧3 , and therefore Eq. (67) is sharp. The sharpness of Eq. (67) is inherited by Eq. (68) since, in the steady state, by Eqs. (15) and (11), P 00 (nh) = P − 00 (nh) − β (0),∞ P −,∞ 01 and the subtracted quantity β (0),∞ P −,∞ 01 is-as shown above-only of order Θ(h 5∧ 3 p+7 2 ).
yield the equivalence of the filter in the steady state with the P(EC)1 implementation of the trapezoidal rule, which was previously shown in Schober et al. (2019, Proposition 1). For future research, it would be interesting to examine whether insertion of positive choices of R into Eqs. (43) and (44) can reproduce known methods as well. Proof See Appendix H.
, for all n ∈ N.
Fig. 1
1True solution of the FitzHugh-Nagumo model, Eq. (73); x 1 in blue and x 2 in orange
Fig. 2
2Work-precision diagrams for the Gaussian ODE filter with q-times IBM prior, for q ∈ {1, 2, 3}, applied to the linear Eq. (71), logistic ODE Eq. (69) and the FitzHugh-Nagumo model. The number of function evaluations (# Evals of f ), which is inversely proportional to the step size h, is plotted in color against the logarithmic global error at the final time T . The (dash-)dotted gray lines visualize idealized convergence rates of orders one to four. The left and right columns employ the minimal R ≡ 0 and maximal measurement variance R ≡ K R h q (K R = 1) which are permissible under Assumption 4 (69) and R ≡ 0 10 −0.0 10 −2.0 10 −4.0 10 −6.0 10 −8.0 10 −10.0 (71) and R ≡ 0 10 −2.0 10 −4.0 10 −6.0 10 −8.0 10 −10.0 10 −12.0 (71) and R ≡ 1 · h q 10 −1.0 10 −2.0 10 −3.0 10 −4.0 10 −5.0 error m(T ) − x(T ) #Evals of f FitzHugh-Nagumo (73) and R ≡ 0 10 −1.0 10 −2.0 10 −3.0 10 −4.0 error m(T ) − x(T ) FitzHugh-Nagumo eq. (73) and R ≡ 1 · h q q=1 q=2 q=3 h 1 conv. h 2 conv.h 3 conv.
Fig. 3 KR = 1 .00 × 10 7 Fig. 4
3174Work-precision diagrams for the Gaussian ODE filter with q-times IBM prior, for q = 1, applied to the linear Eq. (69) and logistic ODE Eq. (71) in the upper and lower row, respectively. The number of function evaluations (# Evals of f ), which is inversely proportional to the step size h, is plotted in color against the logarithmic global error at the final time T . The (dash-)dotted gray lines visualize idealized convergence rates of orders one and two. The dashed blue lines show the posterior standard deviations calculated by the filter. The left and right columns, respectively, employ the minimal R ≡ 0 and maximal measurement variance R ≡ K R h q (K R = 5.00 × 10 error m(T ) − x(T ) logistic ODE eq. (71) and R ≡ K R · h q with K R m(T ) − x(T ) Work-precision diagrams for the Gaussian ODE filter with qtimes IBM prior, for q = 1 and R ≡ K R h 1/2 , applied to the logistic ODE Eq. (71) for increasing values of K R . The number of function evaluations (# Evals of f ), which is inversely proportional to the step size h, is plotted in blue against the logarithmic global error at the final time T . The (dash-)dotted gray lines visualize idealized convergence rates of orders one and two. The dashed blue lines show the posterior standard deviations calculated by the filter
, (102), and (103) into Eq. (100) (and recalling δ (0) = 0) yields Eq. (31).
where b n :=Ld(u n+1 , u n ) → 0. Now, for all m ∈ N, let a(m) 0 := a 0 and a (m) n :=La (m) n−1 + b m . By BFT, lim n→∞ a (m) n = b m /(1−L). Since, for all m ∈ N,
, for all p ∈ [0, ∞]. Now, insertion of Eq. (48) into Eq. (127) immediately yields Eq. (47), which-by Eq. (127)-inherits the sharpness of Eq. (48).
h
(m((n − 1)h) − f m −,(0) (nh) =:J 1 (h) + f m −,(0) (nh) − f m (0) (nh) :=J 2 (h) ,(144)bound J 1 , using the definition Eq. (14) of Ψ(1) h (m((n − 1)h) as well as the definition Eq. (13) of r (nh), byJ 1 (h) = m −,(1) (nh) − f m −,(0) (nh)(145)+ β (1) (nh) f m −,(0) (nh) − m −,(1) (nh) ≤ 1 − β (1) (nh) r (nh)(146)eq. (49)≤ K h ( p−1)∨0 r (nh)(147)and bound J 2 , by exploiting L-Lipschitz continuity of f , inserting the definition Eq. (14) of Ψ (0) h (m((n − 1)h) and applying Eq. (48) to β (0) (nh) , J 2 (h) ≤ L m (0) (nh) − m −,(0) (nh)
Remark 2 Theorem 8 establishes a bound of order h q+1 on the local truncation error ε(0) (h) on x(h) after one step h.Moreover, by the definition Eq. (19) of ||| · ||| h , this theorem also implies additional bounds of order h q+1−i on the error εInsertion of Eq. (37) into Eq. (35) and ε (0) (h) ≤ |||ε(h)||| h
(by Eq. (19)) concludes the proof.
Fig. 5 Work-precision diagram plotting the number of function evaluations (# Evals of f ) against the final state misalignment δ (1) (T ) on the Riccati Eq. (85); cf. Fig. 2 which concludes the step from 0 to h. The next step h → 2h starts with computing m −,(i) (2h) by a (q − i)th-order Taylor expansion of the ith state m (i) (h), for all i ∈ [q+1]. Note that, now, there is a nonzero state misalignment (recall Eq.1) (h)
eqs. (88),(90)
=
−6859/16000 + 1/2
= 1141/16000,
(92)
m(h)
eq. (9)
=
m −,(0) (h) + β (0) (h)r (h)
m −,(1) (h) + β (1) (h)r (h)
eqs. (88),(91),(92)
=
305141/320000
−6859/16000
,
(93)
10 −3.0
10 −6.0
10 −9.0
10 −12.0
10 −15.0
10 1.50
10 2.00
10 2.50
10 3.00
state misalignment δ (1) (T )
q=1
q=2
q=3
h 1 conv.
h 2 conv.
h 3 conv.
h 4 conv.
Rh + σ 4 h 2 + 2R .eqs. (39) and (41)
=
√
4Rσ 2 h + σ 4 h 2
σ 2 h +
√
4Rσ 2 h + σ 4 h 2
h, (125)
and
β ∞,(1) eqs. (11) and (39)
=
σ 2 h +
√
4σ 2 Rh + σ 4 h 2
σ 2 h +
√
4σ 2
are again unique and attractive because β (0) (nh) and β (1) (nh) depend continuously on P − 11 (nh) and P − 01 (nh). Next, recall thatP 01 (nh)
eq. (15)
=
1 −
P −
11 (nh)
P −
11 (nh) + R
P −
01 (nh)
(127)
= R
P −
01 (nh)
P −
11 (nh) + R
eq. (11)
Here, the word 'Bayesian' describes the algorithm in the sense that it employs a prior over the quantity of interest and updates it by Bayes rule according to a prespecified measurement model (as also used inSkilling (1991);Chkrebtii et al. (2016);Kersting and Hennig (2016)). The ODE filter is not Bayesian in the stronger sense ofCockayne et al. (2019), and it remains an open problem to construct a Bayesian solver in this strong sense without restrictive assumptions, as discussed inWang et al. (2018).
More involved correlation models of X j (t); 0≤t≤T ; j ∈ [d] are straightforward to incorporate into the SDE Eq. (2), but seem complicated to analyze. Therefore, we restrict our attention to independent dimensions. See Appendix B for an explanation of this restriction. Note that one can also use a state space vector X(t) which models other features of x(t) than the derivatives, as demonstrated with Fourier summands inKersting and Mahsereci (2020).
According toLoscalzo and Talbot (1967), the filter might, however, suffer from numerical instability for high choices of q. (See Schober et al. (2019, Section 3.1) for an explanation of how such results on spline-based methods concern the ODE filter.)
AcknowledgementsThe authors are grateful to Han Cheng Lie for discussions and feedback to early versions of what is now Sect. 3 and 5 of this work, as well as Sect. 7.5. The authors also thank Michael Schober for valuable discussions and helpful comments on the manuscript. TJS's work has been partially supported by the Freie Universität Berlin within the Excellence Initiative of the German Research Foundation (DFG), by the DFG through grant CRC 1114 'Scaling Cascades in Complex Systems,' and by the National Science Foundation under grant DMS-1127914 to the Statistical and Applied Mathematical Sciences Institute (SAMSI) and SAMSI's QMC Working Group II 'Probabilistic Numerics.' HK and PH gratefully acknowledge financial support by theE Proof of Eq. (23)We prove the stronger statement Proof(of Eq. (97)) By induction over i ∈ {0, . . . , q}. The base case (i = 0) is obtained using the fundamental theorem of calculus andFor the inductive step (i − 1) → i, we conclude (using the inductive hypothesis (IH), the chain rule (CR), the base case (BC) andF Proof of Lemma 5Proof Again, w.l.o.g. d = 1. Recall that, by Eq. (13), r is implied by the values of m −,(0) and m −,(1) . By insertion of(due to Eqs.(8)and(14)) into the definition Eq. (13) of r ((n + 1)h), we obtain the following equality which we then bound by repeated application of the triangle inequality:≤ I 1 (h) + I 2 (h) + I 3 (h)
Random time step probabilistic methods for uncertainty quantification in chaotic and geometric numerical integration. A Abdulle, G Garegnani, Stat. Comput. 304Abdulle, A., Garegnani, G.: Random time step probabilistic methods for uncertainty quantification in chaotic and geometric numerical integration. Stat. Comput. 30(4), 907-932 (2020)
Optimal Filtering. B Anderson, J Moore, Prentice-HallEnglewood CliffsAnderson, B., Moore, J.: Optimal Filtering. Prentice-Hall, Englewood Cliffs (1979)
F M Callier, C A Desoer, Linear System Theory. BerlinSpringerCallier, F.M., Desoer, C.A.: Linear System Theory. Springer, Berlin (1991)
Bayesian solution uncertainty quantification for differential equations. O A Chkrebtii, D A Campbell, B Calderhead, M A Girolami, Bayesian Anal. 114Chkrebtii, O.A., Campbell, D.A., Calderhead, B., Girolami, M.A.: Bayesian solution uncertainty quantification for differential equa- tions. Bayesian Anal. 11(4), 1239-1267 (2016)
Short proof of a discrete Gronwall inequality. D S Clark, Discret. Appl. Math. 163Clark, D.S.: Short proof of a discrete Gronwall inequality. Discret. Appl. Math. 16(3), 279-281 (1987)
Bayesian probabilistic numerical methods. J Cockayne, C J Oates, T J Sullivan, M Girolami, SIAM Rev. 614Cockayne, J., Oates, C.J., Sullivan, T.J., Girolami, M.: Bayesian prob- abilistic numerical methods. SIAM Rev. 61(4), 756-789 (2019)
Statistical analysis of differential equations: introducing probability measures on numerical solutions. P R Conrad, M Girolami, S Särkkä, A Stuart, K Zygalakis, Stat. Comput. 274Conrad, P.R., Girolami, M., Särkkä, S., Stuart, A., Zygalakis, K.: Sta- tistical analysis of differential equations: introducing probability measures on numerical solutions. Stat. Comput. 27(4), 1065-1082 (2017)
Introduction to Nonlinear Differential and Integral Equations. H T Davis, Dover PublicationsNew YorkDavis, H.T.: Introduction to Nonlinear Differential and Integral Equa- tions. Dover Publications, New York (1962)
Bayesian numerical analysis. P Diaconis, Stat. Decis. Theory Rel. Top. IV. 1Diaconis, P.: Bayesian numerical analysis. Stat. Decis. Theory Rel. Top. IV(1), 163-175 (1988)
Active multi-information source Bayesian quadrature. A Gessner, J Gonzalez, M Mahsereci, Uncertainty in Artificial Intelligence (UAI. Gessner, A., Gonzalez, J., Mahsereci, M.: Active multi-information source Bayesian quadrature. In: Uncertainty in Artificial Intelli- gence (UAI) (2019)
Solving Ordinary Differential Equations I-Nonstiff Problems. E Hairer, S Nørsett, G Wanner, SpringerBerlinHairer, E., Nørsett, S., Wanner, G.: Solving Ordinary Differential Equa- tions I-Nonstiff Problems. Springer, Berlin (1987)
Probabilistic numerics and uncertainty in computations. P Hennig, M A Osborne, M Girolami, Proc. R. Soc. Lond. A. R. Soc. Lond. A47120150142Hennig, P., Osborne, M.A., Girolami, M.: Probabilistic numerics and uncertainty in computations. Proc. R. Soc. Lond. A 471(2179), 20150142 (2015)
A Jazwinski, Stochastic Processes and Filtering Theory. CambridgeAcademic PressJazwinski, A.: Stochastic Processes and Filtering Theory. Academic Press, Cambridge (1970)
Brownian Motion and Stochastic Calculus. I Karatzas, S Shreve, SpringerBerlinKaratzas, I., Shreve, S.: Brownian Motion and Stochastic Calculus. Springer, Berlin (1991)
Active uncertainty calibration in Bayesian ODE solvers. H Kersting, P Hennig, Uncertainty in Artificial Intelligence (UAI). Kersting, H., Hennig, P.: Active uncertainty calibration in Bayesian ODE solvers. In: Uncertainty in Artificial Intelligence (UAI) (2016)
A Fourier state space model for Bayesian ODE filters. H Kersting, M Mahsereci, Workshop on Invertible Neural Networks, Normalizing Flows, and Explicit Likelihood Models, ICML. Kersting, H., Mahsereci, M.: A Fourier state space model for Bayesian ODE filters. In: Workshop on Invertible Neural Networks, Nor- malizing Flows, and Explicit Likelihood Models, ICML (2020)
Differentiable likelihoods for fast inversion of 'likelihoodfree' dynamical systems. H Kersting, N Krämer, M Schiegg, C Daniel, M Tiemann, P Hennig, International Conference on Machine Learning (ICML). Kersting, H., Krämer, N., Schiegg, M., Daniel, C., Tiemann, M., Hennig, P.: Differentiable likelihoods for fast inversion of 'likelihood- free' dynamical systems. In: International Conference on Machine Learning (ICML) (2020)
Algebraic Riccati Equations. P Lancaster, L Rodman, Oxford Science PublicationsOxfordLancaster, P., Rodman, L.: Algebraic Riccati Equations. Oxford Science Publications, Oxford (1995)
Data Assimilation: A Mathematical Introduction. K Law, A Stuart, K Zygalakis, Texts in Applied Mathematics. 62SpringerLaw, K., Stuart, A., Zygalakis, K.: Data Assimilation: A Mathemati- cal Introduction, Texts in Applied Mathematics, vol. 62. Springer, Cham (2015)
Strong convergence rates of probabilistic integrators for ordinary differential equations. H C Lie, A M Stuart, T J Sullivan, Stat. Comput. 296Lie, H.C., Stuart, A.M., Sullivan, T.J.: Strong convergence rates of probabilistic integrators for ordinary differential equations. Stat. Comput. 29(6), 1265-1283 (2019)
Spline function approximations for solutions of ordinary differential equations. F R Loscalzo, T D Talbot, SIAM J. Numer. Anal. 4Loscalzo, F.R., Talbot, T.D.: Spline function approximations for solu- tions of ordinary differential equations. SIAM J. Numer. Anal. 4, 433-445 (1967)
E Magnani, H Kersting, M Schober, P Hennig, arXiv:1709.08471Bayesian Filtering for ODEs with Bounded Derivatives. Magnani, E., Kersting, H., Schober, M., Hennig, P.: Bayesian Filtering for ODEs with Bounded Derivatives. arXiv:1709.08471 [csNA] (2017)
P S Maybeck, Stochastic Models, Estimation, and Control. CambridgeAcademic PressMaybeck, P.S.: Stochastic Models, Estimation, and Control. Academic Press, Cambridge (1979)
On numerical integration of ordinary differential equations. A Nordsieck, Math. Comput. 16Nordsieck, A.: On numerical integration of ordinary differential equa- tions. Math. Comput. 16, 22-49 (1962)
A modern retrospective on probabilistic numerics. C J Oates, T J Sullivan, Stat. Comput. 296Oates, C.J., Sullivan, T.J.: A modern retrospective on probabilistic numerics. Stat. Comput. 29(6), 1335-1351 (2019)
Bayes-Hermite quadrature. A O'hagan, J. Stat. Plann Inference. 293O'Hagan, A.: Bayes-Hermite quadrature. J. Stat. Plann Inference 29(3), 245-260 (1991)
Some Bayesian Numerical Analysis. Bayesian statistics, 4 (Peñíscola, 1991). A O'hagan, Oxford Univ. PressNew YorkO'Hagan, A.: Some Bayesian Numerical Analysis. Bayesian statistics, 4 (Peñíscola, 1991), pp. 345-363. Oxford Univ. Press, New York (1992)
B Øksendal, Stochastic Differential Equations: An Introduction with Applications. BerlinSpringer5th ednØksendal, B.: Stochastic Differential Equations: An Introduction with Applications, 5th edn. Springer, Berlin (2003)
Calcul des probabilités. H Poincaré, Gauthier-VillarsParisPoincaré, H.: Calcul des probabilités. Gauthier-Villars, Paris (1896)
Gaussian Processes for Machine Learning. C Rasmussen, C Williams, MIT PressLondonRasmussen, C., Williams, C.: Gaussian Processes for Machine Learn- ing. MIT Press, London (2006)
Probabilistic Forecasting and Bayesian Data Assimilation. S Reich, C Cotter, Cambridge University PressNew YorkReich, S., Cotter, C.: Probabilistic Forecasting and Bayesian Data Assimilation. Cambridge University Press, New York (2015)
Average-Case Analysis of Numerical Problems. K Ritter, Lecture Notes in Mathematics. 1733Springer-VerlagRitter, K.: Average-Case Analysis of Numerical Problems. Lecture Notes in Mathematics, vol. 1733. Springer-Verlag, Berlin (2000)
S Särkkä, Recursive Bayesian Inference on Stochastic Differential Equations. Helsinki University of TechnologyPhD thesisSärkkä, S.: Recursive Bayesian Inference on Stochastic Differential Equations. PhD thesis, Helsinki University of Technology (2006)
Bayesian Filtering and Smoothing. S Särkkä, Cambridge University PressCambridgeSärkkä, S.: Bayesian Filtering and Smoothing. Cambridge University Press, Cambridge (2013)
Applied Stochastic Differential Equations. S Särkkä, A Solin, Cambridge University PressCambridgeSärkkä, S., Solin, A.: Applied Stochastic Differential Equations. Cam- bridge University Press, Cambridge (2019)
Probabilistic ODE solvers with Runge-Kutta means. M Schober, D Duvenaud, P Hennig, Advances in Neural Information Processing Systems (NeurIPS). Schober, M., Duvenaud, D., Hennig, P.: Probabilistic ODE solvers with Runge-Kutta means. In: Advances in Neural Information Process- ing Systems (NeurIPS) (2014)
A probabilistic model for the numerical solution of initial value problems. M Schober, S Särkkä, P Hennig, Stat. Comput. 291Schober, M., Särkkä, S., Hennig, P.: A probabilistic model for the numerical solution of initial value problems. Stat. Comput. 29(1), 99-122 (2019)
Bayesian solutions of ordinary differential equations. J Skilling, Maximum Entropy and Bayesian Methods. Skilling, J.: Bayesian solutions of ordinary differential equations. Max- imum Entropy and Bayesian Methods, Seattle (1991)
Derivative observations in Gaussian process models of dynamic systems. E Solak, R Murray-Smith, W E Leithead, D J Leith, C E Rasmussen, Advances in Neural Information Processing Systems (NeurIPS). Solak, E., Murray-Smith, R., Leithead, W.E., Leith, D.J., Rasmussen, C.E.: Derivative observations in Gaussian process models of dynamic systems. In: Advances in Neural Information Process- ing Systems (NeurIPS) (2003)
Ordinary Differential Equations and Dynamical Systems. G Teschl, American Mathematical SocietyProvidenceTeschl, G.: Ordinary Differential Equations and Dynamical Systems. American Mathematical Society, Providence (2012)
Probabilistic linear multistep methods. O Teymur, K Zygalakis, B Calderhead, M Sugiyama, U V Luxburg, I Guyon, R Garnett, Lee, Advances in Neural Information Processing Systems (NeurIPS). D.D.Curran Associates IncTeymur, O., Zygalakis, K., Calderhead, B.: Probabilistic linear multistep methods. In: Sugiyama, M., Luxburg, U.V., Guyon, I., Garnett, R., Lee, D.D. (eds.) Advances in Neural Information Processing Systems (NeurIPS), pp. 4314-4321. Curran Associates Inc (2016)
Implicit probabilistic integrators for ODEs. O Teymur, H C Lie, T J Sullivan, B Calderhead, Advances in Neural Information Processing Systems (NeurIPS). Teymur, O., Lie, H.C., Sullivan, T.J., Calderhead, B.: Implicit proba- bilistic integrators for ODEs. In: Advances in Neural Information Processing Systems (NeurIPS) (2018)
Probabilistic solutions to ordinary differential equations as nonlinear Bayesian filtering: a new perspective. F Tronarp, H Kersting, S Särkkä, P Hennig, Stat. Comput. 296Tronarp, F., Kersting, H., Särkkä, S., Hennig, P.: Probabilistic solutions to ordinary differential equations as nonlinear Bayesian filtering: a new perspective. Stat. Comput. 29(6), 1297-1315 (2019)
Bayesian ode solvers: The maximum a posteriori estimate. F Tronarp, H Kersting, S Särkkä, P Hennig, arXiv:2004.00623[mathNA](2020Tronarp, F., Kersting, H., Särkkä, S., Hennig, P.: Bayesian ode solvers: The maximum a posteriori estimate. arXiv:2004.00623 [mathNA] (2020)
On the Bayesian solution of differential equations. J Wang, J Cockayne, C Oates, Proceedings of the 38th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering. the 38th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and EngineeringWang J, Cockayne, J., Oates, C.: On the Bayesian solution of differential equations. In: Proceedings of the 38th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (2018)
Bayesian quadrature for multiple related integrals. X Xiaoyue, F X Briol, M Girolami, International Conference on Machine Learning (ICML). Xiaoyue, X., Briol, F.X., Girolami, M.: Bayesian quadrature for mul- tiple related integrals. In: International Conference on Machine Learning (ICML) (2018)
Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Publisher's Note Springer Nature remains neutral with regard to juris- dictional claims in published maps and institutional affiliations.
| []
|
[
"Constraining the Z Mass in 331 Models using Direct Dark Matter Detection",
"Constraining the Z Mass in 331 Models using Direct Dark Matter Detection"
]
| [
"Stefano Profumo [email protected] \nDepartment of Physics\nSanta Cruz Institute for Particle Physics\nUniversity of California\n95064Santa CruzCAUSA\n",
"Farinaldo S Queiroz \nDepartment of Physics\nSanta Cruz Institute for Particle Physics\nUniversity of California\n95064Santa CruzCAUSA\n"
]
| [
"Department of Physics\nSanta Cruz Institute for Particle Physics\nUniversity of California\n95064Santa CruzCAUSA",
"Department of Physics\nSanta Cruz Institute for Particle Physics\nUniversity of California\n95064Santa CruzCAUSA"
]
| []
| We investigate a 331 extension of the Standard Model gauge sector which accommodates neutrino masses and where the lightest of the heavy neutrinos is a viable WIMP dark matter candidate. In this model, processes mediated by the additional Z gauge boson set both the WIMP relic abundance and the scattering cross section of WIMPs off of nuclei. We calculate the WIMP relic abundance including the important effect of coannihilation across the heavy neutrino sector. We find that the recent XENON results put very stringent bounds on the mass of the extra gauge boson, M Z > 1.6 TeV, for WIMPs lighter than 1 TeV. Finally, we comment on how our bounds on the Z mass impact generic 331-like models and on implications for LHC phenomenology. arXiv:1307.7802v1 [hep-ph] | 10.1140/epjc/s10052-014-2960-x | [
"https://arxiv.org/pdf/1307.7802v2.pdf"
]
| 119,104,879 | 1307.7802 | 86d2a37ff6c7aff8935962019d09045dca5851e8 |
Constraining the Z Mass in 331 Models using Direct Dark Matter Detection
30 Jul 2013
Stefano Profumo [email protected]
Department of Physics
Santa Cruz Institute for Particle Physics
University of California
95064Santa CruzCAUSA
Farinaldo S Queiroz
Department of Physics
Santa Cruz Institute for Particle Physics
University of California
95064Santa CruzCAUSA
Constraining the Z Mass in 331 Models using Direct Dark Matter Detection
30 Jul 2013PREPARED FOR SUBMISSION TO JHEP
We investigate a 331 extension of the Standard Model gauge sector which accommodates neutrino masses and where the lightest of the heavy neutrinos is a viable WIMP dark matter candidate. In this model, processes mediated by the additional Z gauge boson set both the WIMP relic abundance and the scattering cross section of WIMPs off of nuclei. We calculate the WIMP relic abundance including the important effect of coannihilation across the heavy neutrino sector. We find that the recent XENON results put very stringent bounds on the mass of the extra gauge boson, M Z > 1.6 TeV, for WIMPs lighter than 1 TeV. Finally, we comment on how our bounds on the Z mass impact generic 331-like models and on implications for LHC phenomenology. arXiv:1307.7802v1 [hep-ph]
Introduction
The fundamental particle nature of the dark matter is one of the most pressing unanswered questions in science. The search for signals from dark matter that could shed light onto its particle nature is ongoing at a fast pace, and promises major breakthroughs on a very short time-scale. On the theory side, many dark matter candidates have been proposed and studied in detail, with a special role played by so-called WIMPs (an acronym for Weakly Interacting Massive Particles). WIMPs, defined by having a pair-annihilation cross section dictated by weak interactions and a mass at the electroweak scale, naturally yield a thermal relic density consistent with the observed cosmological dark matter density (a fact sometimes indicated as "WIMP miracle"). In addition, WIMPs are predicted to exist in many interesting particle physics models beyond the Standard Model (SM) such as the MSSM [1], Left-Right Models [2], Universal Extra Dimensions [3], Little Higgs Models [4], 331 models [5][6][7], and minimal extensions of the Standard Model (SM) [8].
In this paper, we focus on the dark matter phenomenology of a special class of theories, the so called 331 models, whose phenomenology has been studied in great detail from various particle physics standpoints, but not as far as dark matter searches are concerned. There exist many incarnations of 331 models in the literature, and many of them actually do not offer any viable dark matter candidate: these include the "minimal" 331 model [9], the "economical" 331 model [10], and the 331 with two triplets of scalars [11], among others [12]. Supersymmetric [13,14] or Technicolor [15] versions of these constructions might offer the prospect of having a viable dark matter candidate. However these supersymmetric and Techinicolor extensions have not yet addressed the issue of producing a suitable dark matter candidate in any detail.
Concerning the minimal 331 models, in order to account for the dark matter, models must generically invoke an extended scalar or gauge sector, as pointed out in Ref. [16]. It is important to note that it has been claimed that the economical 331 model does feature a dark matter candidate, but a very severe fine-tuning is required in order to make the dark matter candidate stable, in particular by needing a very large suppression in the coupling λ 3 ∼ 10 −24 in the scalar potential in Eq. (3.7) of Ref. [17]. Likewise, in Ref. [14], the self-interacting dark matter scenario has been investigated. However, only the relic over-abundance requirement has been implemented so far. It would be interesting to investigate if this model has dark matter candidates with viable direct and indirect detection rates, and whether or not these rates are within reach of current experiments.
Here, we will focus on the so-called 3-3-1LHN model, which extends the SM by offering both (i) an elegant explanation to the observed neutrino masses and (ii) a natural dark matter candidate, in marked difference from the other aforementioned 331 proposals. It has already been shown in Ref. [6,7] that this model may feature two possible dark matter candidates, but that they cannot co-exist. Here, we will consider the phenomenology of only one of these dark matter candidates, the lightest of the new, heavy neutrinos, with the purpose to determine the role of the Z gauge boson as far as the dark matter phenomenology is concerned.
In the present study we accurately calculate the dark matter thermal relic density, including new processes that have never been included in this context before (namely, coannihilation in the heavy neutrino sector) and we derive stringent bounds on the mass of the Z gauge boson by comparing the predicted scattering cross section off of nuclei with the most current limits from XENON100. Since the Z couples to quarks similarly in both minimal and right handed 331 models, the limits we obtain on the mass of the Z are not limited to the 3-3-1LHN model, and they effectively apply at some level to any 331 model where the dominant WIMP-nucleon scattering channel is via a Z gauge boson. We note that our limits are complementary to other limits on the Z mass coming from colliders [18,19], FCNC [20], oblique corrections to the STU parameters [21], and muon decay [22].
The paper is organized as follows: In section 2 we briefly introduce the 3-3-1LHN model. In section 3 we investigate the dark matter relic density in the model and we derive bounds on the mass of the Z boson. Finally, we summarize and draw our conclusions in section 4.
The 3-3-1LHN Model
We indicate with "3-3-1 models" extensions of the electroweak sector of the Standard Model where the electroweak sector SU (2) L ⊗ U (1) Y is enlarged to SU (3) L ⊗ U (1) N . This extension is motivated by various, important problems not addressed by the SM, including the observed pattern of neutrino masses and mixing, the number of generations, as well as the existence of a suitable particle candidate for the dark matter. This model also reproduces the SM phenomenology as far as the Higgs sector is concerned, especially in light of recent experimental results, as shown in Ref. [7]. For all these reasons, 3-3-1 models stand out as compelling extensions to the SM. The 3-3-1LHN we consider here has two noticeable distinct features compared to other incarnations of 3-3-1 models, namely:
(i) the presence of heavy neutrino-like particles, and (ii) the existence of two possible, distinct dark matter candidates.
Below we briefly discuss the particle content and key features of the 3-3-1LHN model.
Leptonic Sector
In the 3-3-1LHN model the leptons are arranged in triplet and singlet representations as follows:
f aL = ν a e a N a L ∼ (1 , 3 , −1/3) e aR ∼ (1, 1, −1) , N aR ∼ (1, 1, 0),(2.1)
where a = 1, 2, 3 runs over the three lepton families, and N a(L,R) are new, heavy neutrinos added to the SM particle content. We will be hereafter using the above shorthand notation to refer to the quantum numbers of the symmetry group SU (
3) c ⊗ SU (3) L ⊗ U (1) N .
For instance, as one can clearly see above, the leptons in the triplet are color singlets (1), triplets by SU (3) L (3) and have
hypercharge N = −1/3, i.e (1 , 3 , −1/3).
Hadronic Sector
The quarks in the theory, just like the leptons, come in triplets. In particular, the third generation lives in a triplet representation while the other two generations are in an anti-triplet representation of SU L (3), so that triangle anomalies cancel [9]. The corresponding quantum numbers are as follows:
Q iL = d i −u i q i L ∼ (3 ,3 , 0) , u iR ∼ (3, 1, 2/3), d iR ∼ (3, 1, −1/3) , q iR ∼ (3, 1, −1/3), Q 3L = u 3 d 3 q 3 L ∼ (3 , 3 , 1/3) , u 3R ∼ (3, 1, 2/3), d 3R ∼ (3, 1, −1/3) , q 3R ∼ (3, 1, 2/3) (2.2)
where the index i = 1, 2 runs through the first two generations. The primed quarks (q ) are new, heavy particles added to the SM particle content, with the usual fractional electric charges.
Scalar Content
The symmetry breaking pattern SU (
3) L ⊗U (1) N → SU (2) L ⊗U (1) Y → U (1) QED is reproduced with the introduction of three scalar triplets, namely η = η 0 η − η 0 , ρ = ρ + ρ 0 ρ + , χ = χ 0 χ − χ 0 . (2.
3)
The new scalars posses a general scalar potential of the form: (2.4) with η and χ both transforming as (1 , 3 , −1/3) and ρ transforming as (1 , 3 , 2/3). The scalar triplets above are introduced in order to generate masses for all fermions in the model after the neutral scalars η 0 , ρ 0 and χ 0 develop a vacuum expectation value different from zero.
V (η, ρ, χ) = µ 2 χ χ 2 + µ 2 η η 2 + µ 2 ρ ρ 2 + λ 1 χ 4 + λ 2 η 4 + λ 3 ρ 4 + λ 4 (χ † χ)(η † η) + λ 5 (χ † χ)(ρ † ρ) + λ 6 (η † η)(ρ † ρ) + λ 7 (χ † η)(η † χ) + λ 8 (χ † ρ)(ρ † χ) + λ 9 (η † ρ)(ρ † η) − f √ 2 ijk η i ρ j χ k + H.c.
Global Symmetry
We will invoke the global symmetry U (1) G , given by (2.5) to stabilize the lightest particle charged under this global symmetry (LGP) and, additionally, to simplify the mass spectrum of the model. This lightest particle will be indicated as LGP. Thanks to this global symmetry, Yukawa mass terms likeQ iL χ * d jR ,Q 3L χu 3R andQ iL η * q j among others, are forbidden in the Lagrangian, with significant simplifications in the resulting particle spectra. Such terms would induce mixing between the SM quarks and the new quarks q . Moreover, the presence of this global symmetry makes the LGP stable and, in principle, a viable dark matter candidate.
G(N ,q , V − µ , U 0 µ , χ 0 , χ − , η 0 * , ρ − ) = +1 .
In the context of the 3-3-1LHN model there are two possible LGP candidates: a complex scalar φ (the mass eigenstate resulting from the neutral scalar states in the theory) and a fermion N i (the lightest of the new heavy neutrinos). The most natural one, i.e. the LGP if all couplings in the theory are assumed to be of order one, is the fermion N 1 if one assumes a normal neutrino mass hierarchy, and N 3 if one assumes an inverted hierarchy. We will hereafter assume a normal neutrino mass hierarchy, but the inverted hierarchy scenario with N 3 as the LGP would not qualitatively be any different. In order to demonstrate that the N 1 is a good dark matter candidate we will compute in detail its thermal relic abundance and its scattering cross section off nuclei, and compare our findings with available current experimental bounds.
Yukawa Sector
As mentioned above, one of the benefits of introducing the G-symmetry of Eq. (2.5) is to simplify the mass spectrum. The most generic Yukawa sector of the Lagrangian invariant under the 3-3-1 gauge and the G-symmetry is found to be (2.6) where ρ, η and χ are the scalar triplets introduced above. One might notice that all fermions obtain Dirac masses, similarly to the Standard Model. The new fermions added to the SM, which will have Dirac mass terms as well, will have their masses proportional to the scale of symmetry breaking of the model. This model does not suffer from the problematic non-perturbative behavior at a few TeV that plagues minimal 331 models [23], and hence one can easily push the scale of symmetry breaking up to high energies. We will not consider this possibility here, however, since our goal here is only to derive bounds on the mass of the Z boson based on direct detection searches of dark matter candidates at the electroweak scale.
− L Y = α ijQiL χ * d jR + f 33Q3L χu 3R + g iaQiL η * d aR +h 3aQ3L ηu aR + g 3aQ3L ρd aR + h iaQiL ρ * u aR +G abfaL ρe bR + g abf aL χN bR + h.c.,
Gauge Bosons
Due to the enlarged electroweak gauge group (SU (2) L → SU (3) L ) extra gauge bosons will be present in the 3-3-1LHC model, which we will indicate as Z , V ± , and U 0 and U 0 † . These bosons have masses proportional to the scale of symmetry breaking of the model, which here is assumed to be in the few TeV range. The charged currents involving these gauge bosons can be written as
L N H = − g √ 2 ν a L γ µ e a L W + µ +N a L γ µ e a L V + µ +ν a L γ µ N a L U 0 µ + (ū 3L γ µ d 3L +ū iL γ µ d iL ) W + µ + q 3L γ µ d 3L +ū iL γ µ q iL V + µ + ū 3L γ µ q 3L −q iL γ µ d iL U 0 µ + h.c. ,(2.7)
while the neutral current has the general form
L N C = − g 2 cos θ W f f γ µ (g V + g A γ 5 )f Z µ , (2.8)
where f are leptons and quarks, the couplings g V and g A are indicated in Tables 1, g is the SU (3) L coupling, and θ W is the Weinberg angle. The phenomenological aspects associated with the five gauge bosons in the model have been thoroughly explored in Ref. [24], to which we refer the Reader. The most striking feature is the presence of charged gauge bosons. At LEP-II charged gauge bosons could have been produced in pairs via their photon and Z couplings. The production cross section depends only on the mass of the V ± mass and and is large enough to rule out M V ± < √ s/2 ∼ 105 GeV. At the LHC, W bosons can be detected through resonant pair production of fermions or electroweak bosons. The most commonly studied signal consists of a high energy electron or muon and large missing transverse energy, with a peak in the number of events at M W /2 as can be seen in Fig.1 of Ref. [25]. Assuming SM couplings with fermions, restrictive bounds were derived on the mass of the W , namely M W > 2.55 TeV at 95% C.L [26]. However, this limit does not directly apply to our model for three reasons:
(i) The boson V ± couples, here, differently to the SM fermions, as one can clearly notice in Eq. (2.7): some new particle from the 331 model is always present in the interactions involving the V ± due to the G symmetry;
(ii) V ± decays predominantly into WIMP plus electron (N 1 e) pairs; Table 1. Coupling of the Z with all fermions in the 3-3-1LHN model. Here θ W is the Weinberg angle.
Z Interactions in the 331LHN Interaction g V g A Z ūu,cc 3 − 8 sin 2 θW 6 3 − 4 sin 2 θW − 1 2 3 − 4 sin 2 θW Z t t 3 + 2 sin 2 θW 6 3 − 4 sin 2 θW − 1 − 2 sin 2 θW 2 3 − 4 sin 2 θW Z d d,ss 3 − 2 sin 2 θW 6 3 − 4 sin 2 θW − 3 − 6 sin 2 θW 6 3 − 4 sin 2 θW Z b b 3 − 4 sin 2 θW 6 3 − 4 sin 2 θW − 1 2 3 − 4 sin 2 θW Z ¯ −1 + 4 sin 2 θW 2 3 − 4 sin 2 θW 1 2 3 − 4 sin 2 θW Z N N 4 3 − 4 sin 2 θW 9 − 4 3 − 4 sin 2 θW 9 Z ν ν 4 3 − 4 sin 2 θW 9 − 4 3 − 4 sin 2 θW 9
It is worth pointing out that the interaction Z N N makes a crucial difference from previous 331 models proposals [9][10][11].
(iii) the production mechanism is not the same as for the W ± : in addition to Drell-Yan processes (photon and Z s-channel mediated processes), there is a t-channel diagram mediated by new quark q 1 , and three s-channel processes mediated by the Higgs, the scalar S 2 and the Z .
In conclusion, one cannot straightforwardly apply the bounds found from ATLAS on W mass to our model 1 . The LHC searches for the W represent at some level a constraint on the mass of our charged gauge boson V ± , and they are complementary to the ones derived in this work using direct dark matter detection. A detailed study should thus be performed in the future to translate this bound on the W mass into a limit on the mass of the V ± in our model.
Mass Eigenstates
Spontaneous symmetry breaking in the present model is based on the non-trivial vacuum expectation value (vev) developed by the neutral scalars (η 0 , ρ 0 , χ 0 ) which we indicate as
η 0 , ρ 0 , χ 0 → 1 √ 2 (v η,ρ,χ + R η,ρ,χ + iI η,ρ,χ ) . (2.9)
There exist other neutral scalars in the spectrum, namely η 0 and χ 0 , which are forced not to develop vevs in order to preserve the U (1) G global symmetry, and therefore guarantee the stability of our dark matter candidate. Once the pattern of symmetry breaking has been established one can straightforwardly obtain the mass eigenstates of the model. The SM fermion mass terms are unchanged, except for the neutrinos that acquire mass through dimension 5 effective operators [27]. We do not quote the resulting values for the neutrino masses, which can be made compatible with observation [27], and we only exhibit a summary of the masses of the additional particles added to the SM below.
• Fermions
The neutral fermions (N a ) shown in Eq. (2.1) are Dirac fermions with masses given by
M Na = g aa √ 2 v χ ,(2.10)
where g aa are the Yukawa couplings that appear in the last term of Eq. (2.6). We assume all Yukawa couplings to be diagonal throughout this work.
The three new quarks q a have their masses given by the first two terms of Eq. (2.6) with,
M q a = α aa √ 2 v χ . (2.11)
These new quarks do not play any role in the present analysis, and will be thus completely ignored from now on.
• Scalars
After spontaneous symmetry breaking the three CP-even neutral scalar mass eignestates (H, S 1 , S 2 ) have masses vev v which appears in Eq. 2.12 must be equal to 246/ √ 2 GeV, in order to reproduce the masses of the Z and W bosons. It has been shown in Ref. [7] that the 3-3-1 Higgs boson H reproduces the current results concerning the signal strength for the observation of the Higgs at the LHC.
M 2 S 1 = v 2 4 + 2v 2 χ λ 1 , M 2 S 2 = 1 2 (v 2 χ + 2v 2 (2λ 2 − λ 6 )) , M 2 H = v 2 (2λ 2 + λ 6 ) .
Besides the three CP-even scalars, a new CP-odd scalar state (P 1 ) appears, with the following mass:
M 2 P 1 = 1 2 (v 2 χ + v 2 2 ).(2.13)
An additional complex neutral scalar also emerges which we indicate with φ, with mass given by
M φ = (λ 7 + 1 2 ) 2 [v 2 + v 2 χ ].
(2.14)
Lastly, because of the presence of charged scalar fields in the triplet of scalars in Eq. (2.3), two massive charged scalars h 1 and h 2 arise, with masses
M 2 h − 1 = λ 8 + 1 2 2 (v 2 + v 2 χ ) , M 2 h − 2 = v 2 χ 2 + λ 9 v 2 . (2.15)
Despite the fact that the 3-3-1LHN model has a large scalar content, none of these scalars will actually play a significant role in the phenomenology under scrutiny in this work 2 . We discuss them here primarily for the purpose of showing the richness of the mass spectrum predicted by this model.
• Gauge Bosons
In the 3-3-1LHN model there is a total of 9 gauge bosons, arising because of the enlarged electroweak sector. Their masses are found to be, 16) and,
M 2 W ± = 1 4 g 2 v 2 , M 2 Z = m 2 W ± /c 2 W , M 2 V ± = m 2 U 0 = 1 4 g 2 (v 2 χ + v 2 ) ,(2.M 2 Z = g 2 4(3 − 4s 2 W ) [4c 2 W v 2 χ + v 2 c 2 W + v 2 (1 − 2s 2 W ) 2 c 2 W ].
( 2.17) It is important to emphasize that there are five gauge bosons in addition to the SM, which are within the reach of the LHC, since we assume that the corresponding masses, determined by the scale of symmetry breaking of the model, are in the few TeV range. Bounds on these particles' masses have been placed by the non-observation of certain classes of events [19,25,26]. In particular, a recent and restrictive limit was found on the mass of the Z boson for the 3-3-1 model with right handed neutrinos using CMS data [19], namely, M Z > 2.2 TeV. This bound however, does not apply to our model, because the Z here decays mostly into missing energy. For the regime where M Na < M Z /2, the Z decays mostly into fermion pairs (N a N a ). Since we are assuming a normal hierarchy and N 1 is the LGP, the Z will thus simply decay invisibly into dark matter particle pairs. Therefore, despite the production rate being the same, the branching ratio into charged leptons will be suppressed, and at some level the lower bound as well, as opposite to the 3-3-1 model with right handed neutrinos. Nevertheless, it is important to point out that in the mass regime where Z boson cannot decay into the fermion pair the results found in Ref. [19] do apply to our model. A variety limits have been placed on the mass of this boson and they come from different 2 Note that as stated above we do not consider the possibility that the mass eigenstate φ be the LGP and thus the model's dark matter candidate. sources [20][21][22] and from different models. In summary, the bounds derived here on the mass of this boson are complementary to those.
We show in Fig. 1 how the mass of the Z (in blue) and the total width (in red) vary with the scale of symmetry breaking of the model. Since the the mass of the Z depends on the scale of symmetry breaking only, a bound on the mass of this boson translates into a limit on the whole mass spectrum of the model, because the masses of the new particles are all proportional to the scale of symmetry breaking.
In summary, we have hereby briefly reviewed the key features of the 3-3-1LHN model. It will become clear from what follows that our results are complementary to other results, relevant for this class of models, obtained in the literature. We now turn to the phenomenology of the dark matter candidate, especially as a function of the Z mass.
Dark Matter
Thermal Relic Abundance
The calculation of the thermal relic abundance of the LGP (here assumed to be the heavy neutrino N 1 ) in the 3-3-1LHN model follows standard techniques. To achieve the best possible numerical accuracy, we use here a suitably modified version of the micrOMEGAs package [28] where we implemented the model of interest. In the present model, the thermal relic abundance is set by a wide variety of annihilation and co-annihilation processes, some of which are shown in Fig. 2 and 3, respectively. It is important to notice that the new version of micrOMEGAs we employ includes the computation of 3-and 4-body final state processes. This is of great relevance in the present context, because it opens up new diagrams which had not been considered before, e.g., in Ref. [6]- [7]. In addition to this, we include all relevant co-annihilation processes, such as those displayed in Fig. 3, and we investigate the role of the gauge boson Z in the overall abundance. In our calculations, we vary stochastically the mass splitting between the LGP N 1 and the heavier neutrinos N 2 and N 3 within 10%. We will see further that it is however the Z gauge boson that plays the most important role in determining the abundance of the dark matter candidate and the associated direct detection rates.
In Fig. 4 we show the abundance of the fermion N 1 as a function of mass, for four different values of the Z mass. We keep all parameters of the model fixed, with the exception of the masses of the heavier neutrinos N 2 and N 3 which, as stated above, are varied within 10% of the N 1 mass. Throughout the parameter space of our model, we employ couplings of order one, and we use the values v χ = 2, 3, 4, 5 TeV while changing the mass of the WIMP. Taking all parameters of order one guarantees that all new particles lie at the v χ scale, and enforces the LGP to be the N 1 (assuming fine-tuning in the λ 7 parameter, the scalar φ might become, in fact, lighter than N 1 ).
There are two important conclusions arising from the calculation of the N 1 thermal relic abundance. First, we notice that the co-annihilation processes shown in Fig. 3 only produce some scatter (at most of a factor 2) to appear in Fig. 4, which produces the "thickness" in the curves shown in the figure. Heavy neutrino coannihilation processes, therefore, do not play a crucial role in setting the thermal relic abundance of the N 1 LGP. Second, the change in v χ can be directly translated into a change in the Z mass through Eq. (2.17), in such way that the values for v χ = 2, 3, 4, 5 TeV effectively correspond to the choices M Z = 0. 8,1.2,1.6,2 TeV. It is convenient to cast our results as a function of the Z mass so we can clearly appreciate the effect of changing the Z mass. For instance, for v χ = 2 TeV ( M Z = 0.8 TeV), we observe that the thermal cross section has a resonance exactly at M Z /2 = 400 GeV, and for this reason the resulting abundance is suppressed. This effect similarly appears for M Z = 1.2, 1.6, 2 TeV. This tell us the Z mediated processes in Figs. 2 are the most relevant ones, at least near resonance. In other words, by requiring the abundance of our WIMP to match what measured by Planck, we can in principle constrain the mass of this gauge boson. However, one might notice from Fig. 4, that imposing the right abundance is not enough to obtain a bound on the Z mass: for each value of M Z there is always a region of the parameter space, as small as it can be, that provides the right abundance. On the order hand, as we shall see in the next section, direct detection limits coming from the XENON100 for instance, rule out a large portion of the Z mass range. The figure also shows that, for a given value of the symmetry breaking scale, or equivalently of M Z , the "correct" thermal relic density is always achieved at M N 1 < M Z /2, and at M N 1 > M Z /2 (on the "other side" of the resonance) only for massive enough m N 1 1 TeV. As M N 1 → M Z the relic density drops again due to many additional coannihilation partners arising in the particle spectrum of the theory, and generically one gets a second viable value of M N 1 (at fixed M Z ), but again only for massive enough N 1 's.
Direct Detection and bounds on the Z
In general, the WIMP scattering off of nuclei can be either spin-independent (SI) or spin-dependent (SD), depending or what sort of couplings are involved in the underlying theory. In our model, the dark matter candidate is a fermion that couples to quarks primarily through the Z boson. This coupling results in a WIMP-Nucleon cross section that has both a SI and SD component. The SD WIMP-nucleon cross section is numerically larger than the SI WIMP-nucleon one. However, due to the well known enhancement from coherent scattering, the SI bounds on the WIMP-Nucleon cross section turn out to be stronger than the the SD ones. Therefore, we will limit our discussion to SI processes only.
The differential event rate for elastic scattering of a WIMP with mass M wimp and a nucleus with mass M nuc is given by, 1) where N T is the number of target nuclei per kilogram of the detector, ρ DM = 0.3 GeV/cm 3 is the local dark matter density, dσ dEr (v, E r ) is the differential cross-section for the WIMP-Nucleus elastic scattering , v is the velocity of the WIMP relative to the Earth, v min is the minimum WIMP speed that can cause a recoil of energy E R , and f E ( v) is the the velocity distribution of the dark matter in the frame of the Earth (normalized to 1). This minimum velocity will depend on the energy threshold of the detector as well as on the masses of the WIMP and the nucleus.
dR dE r = N T ρ DM M wimp v min vf E ( v) dσ dE r (v, E r ) d 3 v ,(3.
In Eq. (3.1) dR/dE r is the only measured quantity by direct detection experiments. The standard procedure is to plug in the Eq. (3.1) the values of N T and ρ DM , which are know quantities, and adopt some velocity distribution (f E ( v)), usually Maxwell-Botzmann, and assume some particular interaction between the WIMP and the nucleons, and the form factor, in such a way to determine dσ dEr (v, E r ).
The WIMP-Nucleus cross section is typically separated into a spin-independent (scalar) and a spin-dependent contribution as,
dσ dE r = dσ dE r SI + dσ dE r SD ,(3.2)
but, as mentioned earlier, we will focus our attention to the SI only, since it provides stronger bounds. In this case the differential SI cross section might be written as,
dσ dE r = M nuc 2µ 2 v 2 σ SI 0 F 2 (q) ,(3.3)
where q = √ 2M nuc E r is the momentum transferred to the nucleus, σ SI 0 is the SI cross sections at zero momentum transfer (q = 0), F 2 (q) is the form factor that describes the dependence on the momentum transferred to the nucleus, in other words, it accounts for the coherence loss as the momentum transfer is increased.
Spin-independent contributions to the cross section may arise from scalar-scalar and vectorvector couplings in the Lagrangian:
L ⊃ α S qχ χqq + α V qχ γ µ χqγ µ q . (3.4)
The presence of these couplings depends on the particular particle physics model chosen for the dark matter candidate. In general one can write
dσ dE r SI = M nuc σ 0 F 2 (E r ) 2µ 2 v 2 ,(3.5)
where the nuclear form factor, F 2 (E r ), is the Fourier transform of the nuclear charge density and has the effect of suppressing the signal at large recoil energies, and σ 0 is the total WIMP-nucleon cross section, which has a scalar and vector component. Scalar couplings lead to the following expression for the WIMP-nucleon cross section,
σ 0 = 4µ 2 π [Zf p + (A − Z)f n ] 2 ,(3.6) with f p m p = q=u,d,s α S q m q f p T q + 2 27 f p T G q=c,b,t α S q m q ,(3.7)
where the quantities f p T q represent the contributions of the light quarks to the mass of the proton, and are defined as m p f p T q ≡ p|m qq q|p . The second term is due to the 1-loop interaction WIMPgluons through a colored loop diagram, with f p T G = 1 − q=u,d,s f p T q . These quantities are related to the strange quark content in the nucleon and are determined from pion-nucleon scattering amplitude [29] and from baryon mass differences [30]. The vector coupling is only present in the case of a Dirac fermion, such as our WIMP N 1 . The sea quarks and gluons do not contribute to the vector current. This means that only valence quarks contribute, leading to the following expression
σ 0 = µ 2 B 2 N 64π , (3.8) with B N ≡ α V u (A + Z) + α V d (2A − Z) . (3.9)
For a general WIMP particle with both scalar and vector interactions, the spin-independent contribution to the scattering cross section can be written as,
dσ dE r SI = 2 m N πv 2 [Zf p + (A − Z)f n ] 2 + B 2 N 256 F 2 (E r ) . (3.10)
Most direct detection experiments choose to parametrize their results in terms of the scalar SI WIMP-nucleon cross section (σ n or σ p ), by rewriting the differential cross section as follows,
dσ dE r SI = M nuc σ i 2v 2 µ 2 n [Zf p + (A − Z)f n ] 2 f 2 i F 2 (E r ) ,(3.11)
where σ n,p = 4µ 2 n,p π f 2 n,p , (3.12) where µ n,p is the WIMP-nucleon reduced mass. In many cases the WIMP couples to neutrons and protons similarly, and in this situation f p f n , and therefore the scalar contribution can be approximated by dσ
dE r SI = M nuc σ n A 2 2v 2 µ 2 n F 2 (E r )
. (3.13) Notice that for the vector coupling, the WIMP-Nucleus cross section would also scale with A 2 for α V u = α V d , and a similar definition for the WIMP-nucleon cross section would apply. Anyway, this A 2 enhancement typical for SI scatterings has lead many direct detection experiments to employ heavy targets such as Xenon and Iodine to boost the signal.
We have thus far reviewed the procedure to calculate the SI WIMP-nucleon cross section determined by only one channel, shown in Fig. 5. In theory there would exist two additional diagrams that could contribute to the WIMP-nucleon scattering. The second one is the 1-loop process with quarks running in the loop. However this process is not to relevant here because the fermion N 1 does not couple to the Higgs. The third, is a t-channel diagram mediated by the SI -Cross Section (pb) 10 heavy pseudoscalar P 1 with mass given in Eq. 2.13. Since the couplings involve a γ 5 matrix only, and the WIMP-nucleon scattering happens at the non-relativistic limit, this process is completely negligible. In any case, all three processes are taken into account in the realization of the 3-3-1LHN model we implemented in the micrOMEGAs package [28].
We summarize our numerical results for the N 1 -nucleon scattering cross section as a function of the N 1 mass in Fig. 6, for two values of the Z mass. We set the symmetry breaking (vev) scale at v χ = 4 TeV (green) and v χ = 5 TeV (blue). These values translate into M Z = 1.6 TeV and M Z = 2 TeV respectively, through Eq. (2.17). Thicker lines indicate the N 1 mass range where a thermal relic density compatible with the observed dark matter abundance is achieved. The thick grey line indicates the XENON100 (2012) bound [31]: the region above the curve is excluded. The black dashed line indicates the anticipated 2017 XENON1T performance [32].
The figure shows that if one assumes that the N 1 is not heavier than 1 TeV a lower bound M Z > 1.6 TeV can be inferred. However, if one assumes the N 1 to be much heavier than 1 TeV this limit does not apply. For a N 1 lighter than ∼ 600 GeV one will need a Z much heavier than 2 TeV in order to evade the XENON100 limits. Also, it is apparent that there is only a very weak dependence on the N 1 mass. Since only gauge couplings are involved, the scattering cross section is determined by the mass of the WIMP and the Z only. Consequently, the bound on the scattering cross section off nuclei can be converted into a limit on the mass of the Z for a given WIMP mass, as we discuss below.
The lower bound on the Z thus depends on the N 1 mass regime we are considering. In Fig. 7 we show the region of the parameter space (M Z , M N 1 ) which is allowed by direct detection searches of dark matter. The red region is excluded by XENON100 limits [31]. The grey region is M Z' (GeV) 1 Figure 7. The M Z , M N1 parameter space. The red region is excluded by XENON100 bounds [31]. In the grey region the N 1 is not the LGP, and is thus unstable, allowing it to decay into U 0 ν e , where U 0 is a neutral gauge boson and ν e is the SM electron-neutrino according to Eq. (2.7). The black, blue and green points indicate parameter space points where the thermal relic density Ω N1 h 2 > 0.11, = 0.11 and < 0.11, respectively. excluded by requiring that N 1 be the lightest particle charged under the global symmetry G defined in Eq. (2.5). For instance, when M Z = 1.2 TeV, i.e. for v χ = 3076 GeV, the bosons V ± and U 0 have masses close to ∼ 1000 GeV, and because of the trilinear coupling involving theses boson and the fermion N 1 , as one can see in Eq. (2.7), the fermion N 1 , which is assumed to be the dark matter candidate, cannot be heavier than about 1000 GeV. For this reason the grey region reflects a N 1 stability requirement: if the N 1 is not the LGP it would not be stable.
The figure also shows the regions where the N 1 thermal relic density is overabundant (black), under-abundant (green) and in accord (blue) with the universal dark matter density. The structure of the relic density on the plane reflects what shown in Fig. 4: the central funnel corresponds to the resonant annihilation mode via Z exchange in an s-channel, while the right region, close to the instability region reflects the coannihilation with other particles in the theory (i.e. the right-most end of the curves in Fig. 4).
As mentioned above, the bounds we discuss here apply, at some level, to other extensions of the so called minimal 331 models where fermions arise similar to the N 1 , because the couplings of the Z boson on those models are not so different from the model we investigate here. Moreover, these limits are complementary to other limits coming from colliders [19], Flavor Changing Neutral Current processes [20], electroweak corrections to the S,T,U parameters [21], and from muon decay [22]. More importantly, the limits on the mass of the Z found here imply a bound on the scale of symmetry breaking that forces all particle masses to lie at a few TeV, if one considers all couplings to be of order one. As a final note, we warn the Reader that the limits we have derived here only apply under two assumptions:
• The symmetry G is not broken.
• N 1 is the lightest particle charged under the global symmetry G.
Conclusions
In this paper we studied the phenomenology of the so-called 3-3-1LHN model. The model extends the electroweak symmetry group from SU(2) to SU (3), it adds a variety of particles that fit in the new representations quarks and leptons belong to, and it adds a richer scalar sector, needed to obtain an appropriate pattern of symmetry breaking. In particular, the 3-3-1LHN model naturally encompasses heavy neutrinos and provides a viable dark matter candidate after imposing a suitable global symmetry. While 3-3-1LHN models have been studied from a variety of particle physics standpoints, here we focused on the dark matter phenomenology. We implemented 3-3-1LHN models in a numerical code (micrOMEGAs) for the accurate calculation of the dark matter thermal relic abundance as well as the direct detection scattering rate. We then studied how direct detection results constrain the dark matter candidate mass and the mass of the Z , the latter in turn related to the scale of symmetry breaking of the model and to the mass of several other new particles in the theory.
The thermal relic density of the dark matter candidate is set either by resonant annihilation through Z exchange, or via coannihilation. We found that experimental direct detection results force the Z mass to very large values if the WIMP mass is in the sub-TeV domain. In particular, we have outlined a lower bound, namely M Z ≥ 1.6 TeV, for a WIMP lighter than 1 TeV, and M Z ≥ 2 TeV for a WIMP lighter than 700 GeV. These mass values are in principle within reach of future LHC searches. Hence, in the next few years we expect either discovery or complementary bounds on the Z boson of the 3-3-1LHN model. Either way, LHC will shed light on the dark sector of this model.
and S 2 are new scalars particles added to the SM and have masses proportional to the scale of symmetry breaking of the model v χ , while H is identified with the SM Higgs boson. The
Figure 1 .
1Mass (blue) and total width (read) of the Z as a function of the scale of symmetry breaking.
Figure 2 .
2Selected annihilation channels which contribute to the thermal relic density of our dark matter candidate N 1 .
Figure 3 .
3Example co-annihilation channels which contribute to the abundance of our dark matter candidate N 1 .
Figure 4 .
4Abudance of N 1 including co-annihilation as a function of its mass for four different value of the Z mass. We can clearly notice a resonance at M Z /2, indicating the major role played by the Z in computing the abundance.
Figure 5 .
5WIMP-nucleon scattering process.
Figure 6 .
6SI scattering cross section off nuclei of the fermion N 1 . See the text for details.
It is beyond the scope of this work to derive the impact of these limits from LHC on this model. However, as aforementioned, these bounds are complementary to the ones we derive here.
AcknowledgmentsThe authors would like to thank H.N. Long, Phung Van Dong, Yara Coutinho and Carlos Pires for their comments, and a special acknowledgement to Chris Kelso for his help on this work. We also thank Center for Theoretical Underground Physics and Related Areas (CETUP 2013) for their support and hospitality during the completion of this work. This work is partly supported by the Department of Energy under contract DE-FG02-04ER41286 (SP), and by the Brazilian National Counsel for Technological and Scientific Development (CNPq) (FQ).
. T Han, Z Liu, A Natarajan, arXiv:1303.3040T. Han, Z. Liu, A. Natarajan, [arXiv:1303.3040];
. Andrew Fowlie, Kamila Kowalska, Leszek Roszkowski, Enrico Maria Sessolo, Yue-Lin Sming Tsai, arXiv:1306.1567Andrew Fowlie, Kamila Kowalska, Leszek Roszkowski, Enrico Maria Sessolo, Yue-Lin Sming Tsai,[arXiv:1306.1567].
. Ernest Ma, arXiv:1202.5828Phys.Rev. 8591701Ernest Ma, Phys.Rev. D85 (2012) 091701, [arXiv:1202.5828];
. Wan-Lei Guo, Yue-Liang Wu, Yu-Feng Zhou, Int.J.Mod.Phys. 20Wan-Lei Guo, Yue-Liang Wu, Yu-Feng Zhou, Int.J.Mod.Phys. D20 (2011) 1389-1397.
. G Belanger (annecy, M Lapth), A Kakizaki, Pukhov, arXiv:1012.2577JCAP. 1102G. Belanger (Annecy, LAPTH), M. Kakizaki, A. Pukhov,JCAP 1102 (2011) 009, [arXiv:1012.2577];
Salah Nasri (Maryland U.). Ken Hsieh, R N Mohapatra, hep-ph/0604154Phys.Rev. 7466004Ken Hsieh, R.N. Mohapatra, Salah Nasri (Maryland U.),Phys.Rev. D74 (2006) 066004, [hep-ph/0604154];
. D Hooper, S Profumo, hep-ph/0701197Phys. Rept. 453D. Hooper and S. Profumo, Phys. Rept. 453, 29 (2007) [hep-ph/0701197].
. A W Travis, Alejandro Martin, De La Puente, arXiv:1304.7835hep-ph/0610357Maxim Perelstein. 7583519Phys.Rev.Travis A. W. Martin, Alejandro de la Puente, arXiv:1304.7835; Maxim Perelstein, Andrew Spray, Phys.Rev. D75 (2007) 083519,[hep-ph/0610357].
. C A Pires, P S Rodrigues Da, Silva , JCAP. 071212C. A. de S. Pires, P. S. Rodrigues da Silva, JCAP 0712:012 (2007).
. J K Mizukoshi, C A Pires, F S Queiroz, P S Rodrigues Da, Silva , arXiv:1010.4097Phys.Rev. 8365024J.K. Mizukoshi, C.A. de S.Pires, F.S. Queiroz, P.S. Rodrigues da Silva, Phys.Rev. D83 (2011) 065024,[arXiv:1010.4097].
. J D Ruiz-Alvarez, C A , S Pires, Farinaldo S Queiroz, D Restrepo, P S Rodrigues Da, Silva , arXiv:1206.5779Phys.Rev. 8675011J.D. Ruiz-Alvarez, C.A. de S.Pires, Farinaldo S. Queiroz, D. Restrepo, P.S.Rodrigues da Silva ,Phys.Rev. D86 (2012) 075011,[arXiv:1206.5779].
. C A Pires, F S Queiroz, P S Rodrigues Da, Silva , arXiv:1002.4601Phys.Rev. 82105014C.A. de S.Pires, F.S. Queiroz, P.S. Rodrigues da Silva (Paraiba U.),Phys.Rev. D82 (2010) 105014,[arXiv:1002.4601].
. F Pisano, V Pleitez, hep-ph/9206242Phys. Rev. 46410F. Pisano and V. Pleitez, Phys. Rev. D46,(1992) 410,[hep-ph/9206242];
. P H Frampton, Phys. Rev. Lett. 692889P. H. Frampton, Phys. Rev. Lett. 69,(1992) 2889;
. F Queiroz, C A Pires, P S Rodrigues Da, Silva , arXiv:1003.1270Phys.Rev. 8265018F. Queiroz, C.A. de S.Pires, P.S.Rodrigues da Silva, Phys.Rev. D82 (2010) 065018 , [arXiv:1003.1270].
. Alex G Dias, hep-ph/0309058Phys.Rev. C. A. de S. Pires68Alex G. Dias (Sao Paulo U.), C. A. de S. Pires (Paraiba U.), P. S. Rodrigues da Silva, Phys.Rev. D68 (2003) 115009, [hep-ph/0309058];
Hoang Ngoc Long (Hanoi, Inst. Phys.), Dang Van Soa. Phung Van Dong, hep-ph/0603108Phys.Rev. 7375005Phung Van Dong, Hoang Ngoc Long (Hanoi, Inst. Phys.), Dang Van Soa, Phys.Rev. D73 (2006) 075005, [hep-ph/0603108];
. J G Ferreira, P R D Jr, C A Pinheiro, P S Pires, Silva Rodrigues Da, arXiv:1109.0031Phys.Rev. 8495019J.G. Ferreira, Jr, P.R.D. Pinheiro, C.A.de S. Pires, P.S.Rodrigues da Silva, Phys.Rev. D84 (2011) 095019, [arXiv:1109.0031];
. W Caetano, C A Pires, P S Rodrigues Da Silva, D Cogollo, Farinaldo S Queiroz, arXiv:1305.7246W. Caetano, C. A. de S. Pires, P. S. Rodrigues da Silva, D. Cogollo, Farinaldo S. Queiroz, [arXiv:1305.7246].
. J C Montero, C A De, S Pires, V Pleitez (sao, Paulo , hep-ph/0112246Phys.Rev. 795001Published inJ.C. Montero, C.A. De S. Pires, V. Pleitez (Sao Paulo, IFT). Dec 2001. 7 pp. Published in Phys.Rev. D65 (2002) 095001,[hep-ph/0112246];
. J C Montero, C A Pires, V Pleitez, hep-ph/0011296Phys.Lett. 502J.C. Montero, C.A. de S.Pires, V. Pleitez, Phys.Lett. B502 (2001) 167-170,[hep-ph/0011296];
. D Cogollo, H Diniz, C A De, S Pires, arXiv:1002.1944Phys.Lett. 687D. Cogollo, H. Diniz, C.A. de S.Pires, Phys.Lett. B687 (2010) 400-404,[arXiv:1002.1944];
. F Queiroz, C A Pires, P S Rodrigues Da, Silva , arXiv:1003.1270Phys.Rev. 1065018Published inF. Queiroz, C.A. de S.Pires, P.S.Rodrigues da Silva (Paraiba U.). Mar 2010. 10 pp. Published in Phys.Rev. D82 (2010) 065018,[arXiv:1003.1270];
. P V Dong, D T Huong, M C Rodriguez, Ngoc Hoang, Long, hep-ph/0701137Nucl.Phys. 772P.V. Dong, D.T. Huong, M.C. Rodriguez, Hoang Ngoc Long, Nucl.Phys. B772 (2007) 150-174, [hep-ph/0701137].
. Ngoc Hoang, Long, arXiv:0710.5833Adv.Stud.Theor.Phys. 4Hoang Ngoc Long, Adv.Stud.Theor.Phys. 4 (2010) 173-196, [arXiv:0710.5833].
. A Doff, A A Natale, arXiv:1303.3974A. Doff, A.A. Natale, [arXiv:1303.3974];
. A Doff, A A Natale, arXiv:1210.3390Int.J.Mod.Phys. 27A. Doff, A.A. Natale, Int.J.Mod.Phys. A27 (2012) 1250156,[arXiv:1210.3390].
. P V Dong, H T Hung, T D Tham, arXiv:1305.0369Phys. Rev. D. 87115003P. V. Dong, H. T. Hung, T. D. Tham, Phys. Rev. D 87, 115003 (2013), [arXiv:1305.0369].
. D T Huong, C S Kim, H N Long, N T Thuy, arxiv:1110.1482D. T. Huong, C. S. Kim,H. N. Long, N. T. Thuy, [arxiv:1110.1482].
. E Ramirez Barreto, Y A Coutinho, J Sa Borges, arXiv:1004.3269Phys.Lett. 689E. Ramirez Barreto, Y.A. Coutinho, J. Sa Borges, Phys.Lett. B689 (2010) 36-41,[arXiv:1004.3269];
. E Ramirez Barreto, Yara Do Amaral Coutinho, J Sa Borges, hep-ph/0703099Eur.Phys.J. 50E. Ramirez Barreto, Yara Do Amaral Coutinho, J. Sa Borges, Eur.Phys.J. C50 (2007) 909-917, [hep-ph/0703099];
Yara Do Amaral Coutinho. E Ramirez Barreto, hep-ph/0605098J. Sa Borges. E. Ramirez Barreto, Yara Do Amaral Coutinho, J. Sa Borges, [hep-ph/0605098].
. Y A Coutinho, V Guimares, A A Nepomuceno, arXiv:1304.7907Y.A. Coutinho, V. Salustino Guimares, A.A. Nepomuceno, [arXiv:1304.7907];
. Ngoc Hoang, Long, Thanh Vo, Van, hep-ph/9909302J.Phys. 25Hoang Ngoc Long, Vo Thanh Van, J.Phys. G25 (1999) 2319-2324, [hep-ph/9909302];
. D Gomez Dumm, F Pisano, V Pleitez, Mod.Phys.Lett. 9D. Gomez Dumm, F. Pisano, V. Pleitez, Mod.Phys.Lett. A9 (1994) 1609-1615;
. James T Liu, hep-ph/9312312Phys.Rev. 50James T. Liu, Phys.Rev. D50 (1994) 542-547, [ hep-ph/9312312];
. Alexis Jairo, Marc Rodriguez, Sher, hep-ph/0407248Phys.Rev. 70117702Jairo Alexis Rodriguez, Marc Sher, Phys.Rev. D70 (2004) 117702, [hep-ph/0407248];
. H Richard, Yithsbey Benavides, William A Giraldo, Ponce, arXiv:0911.3568Phys.Rev. 80Richard H. Benavides, Yithsbey Giraldo, William A. Ponce, Phys.Rev. D80 (2009) 113009, [arXiv:0911.3568];
. J M Cabarcas, D Gomez Dumm, R Martinez, arXiv:0910.5700J.Phys. 3745001J.M. Cabarcas, D. Gomez Dumm, R. Martinez, J.Phys. G37 (2010) 045001, [arXiv:0910.5700];
. J M Cabarcas, J Duarte, J-Alexis Rodriguez, arXiv:1111.0315Adv.High Energy Phys. 2012J.M. Cabarcas, J. Duarte, J-Alexis Rodriguez, Adv.High Energy Phys. 2012 (2012) 657582 ,[arXiv:1111.0315];
. D Cogollo, A De Andrade, arXiv:1201.1268Eur.Phys.J. C72. Campina Grande Federal U.), F.S. Queiroz, P. Rebello Teles2029D. Cogollo, A.Vital de Andrade (Campina Grande Federal U.), F.S. Queiroz, P. Rebello Teles, Eur.Phys.J. C72 (2012) 2029,[arXiv:1201.1268];
. A C B Machado, J C Montero, V Pleitez, arXiv:1305.1921A.C.B. Machado, J.C. Montero, V. Pleitez, [arXiv:1305.1921].
. J T Liu, D Ng, hep-ph/9302271Z.Phys. 62J. T. Liu and D. Ng, Z.Phys. C62 (1994) 693-700, [hep-ph/9302271];
. K , ; R Martinez, F Ochoa, arXiv:0909.1121Phys. Lett. B. 308Phys.Rev. DK. sasaki, Phys. Lett. B 308, (1993), 297; R. Martinez and F. Ochoa, Phys.Rev. D 80 (2009) 075020, [arXiv:0909.1121].
. D Ng, hep-ph/9212284Phys. Rev. D. 494805D. Ng, Phys. Rev. D. 49 (1994) 4805 [hep-ph/9212284];
. David L Anderson, Marc Sher, hep-ph/0509200Phys.Rev. 7295014David L. Anderson, Marc Sher, Phys.Rev. D72 (2005) 095014, [hep-ph/0509200].
. Alex G Dias, R Martinez, V Pleitez, hep-ph/0407141Eur.Phys.J. 39Alex G. Dias, R. Martinez, V. Pleitez, Eur.Phys.J. C39 (2005) 101-107, [hep-ph/0407141].
J Borges, Y A Coutinho, E R Barreto, AIP Conf.Proc. 1520. J. Sa Borges, Y.A. Coutinho, E.R. Barreto, AIP Conf.Proc. 1520 (2012) 440-442;
. E Ramirez Barreto, Y A Coutinho, J Sa Borges, arXiv:1103.1267Phys.Rev. 83E. Ramirez Barreto, Y.A. Coutinho, J. Sa Borges,Phys.Rev. D83 (2011) 075001, [arXiv:1103.1267];
. E Ramirez Barreto ; Abc Federal, U ) , Y A Coutinho, J Sa Borges, arXiv:1004.3269Phys.Lett. 689E. Ramirez Barreto (ABC Federal U.), Y.A. Coutinho, J. Sa Borges, Phys.Lett. B689 (2010) 36-41, [arXiv:1004.3269];
. E , E.
. Ramirez Barreto, Y A Coutinho, J Sa Borges, Braz.J.Phys. 38Ramirez Barreto, Y.A. Coutinho, J. Sa Borges, Braz.J.Phys. 38 (2008) 495-498;
Rio de Janeiro Federal U.), J. Sa Borges. E Ramirez Barreto, Y A Coutinho, arXiv:0811.0846Nucl.Phys. 810E. Ramirez Barreto, Y.A. Coutinho (Rio de Janeiro Federal U.), J. Sa Borges, Nucl.Phys. B810 (2009) 210-225 , [arXiv:0811.0846];
. E Ramirez Barreto, Yara Do Amaral Coutinho, J Sa Borges, hep-ph/0703099Eur.Phys.J. 50E. Ramirez Barreto, Yara Do Amaral Coutinho, J. Sa Borges, Eur.Phys.J. C50 (2007) 909-917, [hep-ph/0703099].
. arXiv:1108.1316Phys.Lett.B. 705ATLAS Collaboration, Phys.Lett.B 705 (2011) 28-46, [arXiv:1108.1316].
. arXiv:1209.4446Eur. Phys. J. C. 722241ATLAS Collaboration, Eur. Phys. J. C (2012) 72: 2241, [arXiv:1209.4446].
. Alex G Dias, C A Pires, P S Rodrigues Da, Silva , arXiv:hep-ph/0508186Phys.Lett. 628Alex G. Dias, C. A. de S. Pires, P. S. Rodrigues da Silva, Phys.Lett. B628 (2005) 85-92, [arXiv:hep-ph/0508186].
. G Belanger, F Boudjema, A Pukhov, A Semenov, arXiv:1305.0237G. Belanger, F. Boudjema, A. Pukhov, A. Semenov, [arXiv:1305.0237];
. G Belanger, F Boudjema, P Brun, A Pukhov, S Rosier-Lees, P Salati, A Semenov, arXiv:1004.1092Comput.Phys.Commun. 182G. Belanger, F. Boudjema, P. Brun, A. Pukhov, S. Rosier-Lees, P. Salati, A. Semenov, Comput.Phys.Commun.182:842-856 (2011), [arXiv:1004.1092];
. G Belanger, F Boudjema, A Pukhov, A Semenov, arXiv:0803.2360Comput.Phys.Commun. 180G. Belanger, F. Boudjema, A. Pukhov, A. Semenov, Comput.Phys.Commun.180:747-767 (2009), [arXiv:0803.2360].
. A Bottino, F Donato, N Fornengo, S Scopel, hep-ph/0111229Astropart.Phys. 18A. Bottino, F. Donato, N. Fornengo, S. Scopel, Astropart.Phys.18:205-211,2002, [hep-ph/0111229].
. T P Cheng, Phys. Rev. 382869T. P. Cheng, Phys. Rev.D38 (1988) 2869.
. arXiv:1207.5988Phys. Rev. Lett. 109181301XENON Collaboration, Phys. Rev. Lett. 109, 181301 (2012), [arXiv:1207.5988].
Rafael Lang, KITP Conference: Identifying and Characterizing Dark Matter via Multiple Probes. Rafael Lang, KITP Conference: Identifying and Characterizing Dark Matter via Multiple Probes, http : //online.kitp.ucsb.edu/online/dmatter c13/.
| []
|
[
"CONVERGENCE OF NARASIMHAN-SIMHA MEASURES ON DEGENERATING FAMILIES OF RIEMANN SURFACES",
"CONVERGENCE OF NARASIMHAN-SIMHA MEASURES ON DEGENERATING FAMILIES OF RIEMANN SURFACES"
]
| [
"Sanal Shivaprasad "
]
| []
| []
| Given a compact Riemann surface Y and a positive integer m, Narasimhan and Simha defined a measure on Y associated to the m-th tensor power of the canonical line bundle. We study the limit of this measure on holomorphic families of Riemann surfaces with semistable reduction. The convergence takes place on a hybrid space whose central fiber is the associated metrized curve complex in the sense of Amini and Baker. We also study the limit of the measure induced by the Hermitian pairing defined by Narasimhan-Simha measure. For m = 1, both these measures coincide with the Bergman measure on Y . We also extend the definition of the Narasimhan-Simha measure to the singular curves on the boundary of Mg in such a way that these measures form a continuous family of measures on the universal curve over Mg. | 10.4310/ajm.2022.v26.n5.a3 | [
"https://arxiv.org/pdf/2011.14471v1.pdf"
]
| 227,227,873 | 2011.14471 | 27f8fd7ad5a504505e7e9844e51625f2d1e00f85 |
CONVERGENCE OF NARASIMHAN-SIMHA MEASURES ON DEGENERATING FAMILIES OF RIEMANN SURFACES
Sanal Shivaprasad
CONVERGENCE OF NARASIMHAN-SIMHA MEASURES ON DEGENERATING FAMILIES OF RIEMANN SURFACES
Given a compact Riemann surface Y and a positive integer m, Narasimhan and Simha defined a measure on Y associated to the m-th tensor power of the canonical line bundle. We study the limit of this measure on holomorphic families of Riemann surfaces with semistable reduction. The convergence takes place on a hybrid space whose central fiber is the associated metrized curve complex in the sense of Amini and Baker. We also study the limit of the measure induced by the Hermitian pairing defined by Narasimhan-Simha measure. For m = 1, both these measures coincide with the Bergman measure on Y . We also extend the definition of the Narasimhan-Simha measure to the singular curves on the boundary of Mg in such a way that these measures form a continuous family of measures on the universal curve over Mg.
Introduction
Let Y be a compact Riemann surface of genus g ≥ 1 and m a fixed positive integer. Let Ω ⊗m Y denote the m-th tensor power of the canonical line bundle on Y . Using the global sections of Ω ⊗m Y , Narasimhan and Simha defined a volume form τ (m) on Y as follows [NS68]. Given θ ∈ H 0 (Y, Ω ⊗m Y ), let |θ| 2/m denote the associated volume form i.e. if locally θ(z) = f (z)dz ⊗m , then |θ| 2/m (z) = |f (z)| 2/m ( i 2 dz ∧ dz). Then, τ (m) (z) := max
{θ| Y |θ| 2/m =1} |θ| 2/m (z)
is a continuous positive volume form on Y that we call the Narasimhan-Simha volume form associated to the line bundle Ω ⊗m Y . The induced Radon measure on Y is called the Narasimhan-Simha measure and is also denoted by τ (m) .
More generally, given points P 1 , . . . , P r ∈ Y and integers 0 < b 1 , . . . , b r < m, a similar construction yields the Narasimhan-Simha measure τ (m,b1P1+···+brPr) on Y associated to the line bundle L = Ω ⊗m Y (b 1 P 1 + · · · + b r P r ). For details, see Section 2.3.
When m = 1, the Narasimhan-Simha measure associated to Ω Y , τ (1) , coincides with the Bergman measure [Ber10, Section 4] on Y (See Section 2.1 for details). Thus, τ (m) is a possible generalization of the Bergman measure using pluricanonical forms. The volume form τ (m) was introduced by Narasimhan and Simha to construct the moduli space of projective complex structures on a given compact connected real analytic manifold [NS68]. Tsuji, and Berndtsson-Păun studied the semipositivity of the curvature current of the Narasimhan-Simha metric in families [Tsu07] [BP08]. The asymptotics of the Bergman measure in degenerating families is studied in [HJ96], [Don15], [dJ19], [Shi20] [AN20]. We are interested in computing the asymptotics of the Narasimhan-Simha measure in degenerating families.
Let X → D * be a holomorphic family of curves of genus g and m ≥ 2 be a fixed integer. Let B = b 1 B 1 + · · · + b r B r be a horizontal divisor on X for integers 0 < b 1 , . . . , b r < m. Denote L = Ω ⊗m X/D * (B). Assume that if g = 1, then r ≥ 1 and if g = 0, r ≥ 3 and deg(B| Xt ) ≥ 2m. Note that when g = 0, the assumption r ≥ 3 and deg(B| Xt ) ≥ 2m is equivalent to requiring that H 0 (X t , L Xt ) = 0. Therefore, effectively, the only case we are excluding is when g = 1 and r = 0. But in this case, Theorem A, stated below, is already known [Shi20, Theorem B] (see also [CLT10,Corollary 4.8], [BJ17,Theorem C] and [dJ19,Remark 16.4] for related results).
We will also assume that (X, B red ) has semistable reduction i.e. there exists a regular model X of X such that X 0 is reduced and X 0 + B red is an snc divisor on X , where B denotes the component-wise closure of B in X . A theorem of Deligne and Mumford guarantees that such a model always exists after a base change D * → D * given by t → t N for some positive integer N [DM69].
Let τ (m,B) t := τ (m,B|t) be the Narasimhan-Simha measure on X t associated to L| Xt . If X is an snc model of X, then it is natural to ask what the limit of τ (m,B) t on X is. However, instead of computing the limit of τ (m,B) t on X , we instead compute the limit of τ (m,B) t on the metrized curve complex hybrid space, X hyb CC , defined in [Shi20]. This has the advantage of simultaneously computing the limit of τ (m,B) t in X as well as in the non-Archimedean hybrid space in the sense of Boucksom and Jonsson [Ber09] [BJ17].
The space X hyb CC is a partial compactification of X with central fiber being the metrized curve complex ∆ CC (X ) (in the sense of Amini and Baker [AB15]) associated to X 0 . The latter is obtained by replacing all nodal points in X 0 with line segments i.e. by taking the normalization X 0 , of X 0 and adding a line segment connecting P , P ∈ X 0 if P , P lie over the same nodal point in X 0 . See Figure 1 for an example. We refer to the irreducible components of X 0 as curves in ∆ CC (X ) and the line segments as edges in ∆ CC (X ).
We have the following theorem regarding the convergence of τ is a sum of Narasimhan-Simha measures on the curves and Lebesgue measures on the edges. For details, see Theorem 5.1.1. The reason for working with the more general Narasimhan-Simha measure τ (m,B) instead of just τ (m) is that even if we start with τ (m) t , the restriction of the limiting measure to a curve E in ∆ CC (X ) could still be of the form τ (m,a1P1+···+asPs) . The mass of an edge e under τ (m,B) 0 is 1 N , where N is the length of the maximal inessential chain containing e (see Section 2.2 for details).
As a corollary of Theorem A, we get that the limit of τ (m,B) t on X is a sum of Narasimhan-Simha measures on certain irreducible components of X 0 and Dirac masses on nodal points of X 0 . The Dirac mass at a nodal point is equal to the mass of the associated edge in ∆ CC (X ) with respect to the limiting measure τ (m,B) 0 . The Berkovich hybrid space X hyb = X ∪ X an C((t)) is a partial compactification of X with central fiber being the Berkovich analytification of X viewed as a variety over C((t)). This partial compactification has the advantage that it does not depend on the choice of a model X and is, therefore, canonical. A number of degeneration problems [BJ17], [Oda18], [Sus18], [LS19], [PS19], [Shi19], [Shi20] [Li20] and dynamical problems [Fav18], [DF19], [DKY20] have been studied in this setting. Here, we compute the limit of τ Corollary B. The measures τ (m,B) t admit a weak limit as t → 0 on X hyb . The support of this limiting measure coincides with the essential skeleton of the pair (X C((t)) , 1 m B C((t)) ) and is given by a sum of Lebesgue measures on edges and Dirac masses on vertices.
The essential skeleton of the pair (X C((t)) , 1 m B C((t)) ) (see Section 4.4 for details) is a piecewise linear subset of X hyb which encodes information about the pair (X C((t)) , 1 m B C((t)) ) [KS06] [MN15] [BM19]. Moreover, in the case when B = 0, the measure τ (m) 0 is independent of the one parameter family X. More precisely, if X → D and Y → D are two families of genus g curves degenerating to the same semistable curve C = X 0 = Y 0 , then the limit of the Narasimhan-Simha measure with respect to Ω ⊗m Xt and with respect to Ω ⊗m Yt coincide on ∆ CC (C). Since the data of the one-parameter family keeps track of the 'direction of approach' towards a stable curve in M g , we could ask whether the Narasimhan-Simha measures with respect to Ω ⊗m form a continuous family of measures on M g .
To make this precise, we first extend the notion of Narasimhan-Simha measure to all stable curves by considering the limiting measure described in Theorem A and collapse all edges to a node. Note that this measure will place a unit Dirac mass at all nodal points of a stable curve. Now consider the universal curve C g → M g . Recall that, topologically, the fiber of this map over the isomorphism class of a stable curve C is C/Aut(C). Let τ m C denote the pushforward of τ (m) from C to C/Aut(C). Let (C 0 (C g )) ∨ denote the space of Radon measures on C g equipped with the weak * topology.
Theorem C. The map M g → (C 0 (C g )) ∨ given by [C] → τ m C is continuous. Note that the above result is in a stark contrast with the case for the Bergman measures (i.e. when m = 1), where the mass of limiting measure on an edge depends strongly on the lengths of all the edges in the dual graph. Thus, an analog of Theorem C would be false in the case of Bergman measures2. In fact, to extend the Bergman measures continuously, Amini and Nicolussi construct a large hybrid space which keeps track of the relative orders of the logarithmic rates of approach to each node on a stable curve [AN20].
We also compute the asymptotics of a measure closely related to the Narasimhan-Simha measure. Let Y be a compact Riemann surface of genus g, P 1 , . . . , P r points on Y and 0 < b 1 , . . . , b r < m integers. We get a Hermitian pairing on H 0 (Y, Ω ⊗m Y (b 1 P 1 + · · · + b r P r )) given by
θ, ϑ = i 2 m Y θ ∧ ϑ (τ (m,b1P1+···+brPr) ) m−1 .
Let e 1 , . . . , e M be an orthonormal basis of H 0 (Y, Ω ⊗m Y (b 1 P 1 + · · · + b r P r )) with respect to the above pairing. Then, the positive volume form
µ (m,b1P1+···+brPr) = i 2 m M i=1 e i ∧ e i (τ (m,b1P1+···+brPr) ) m−1
does not depend on the choice of the orthonormal basis and we call it as the pluri-Bergman measure on Y associated to Ω ⊗m Y (b 1 P 1 + · · · + b r P r ). Note that when m = 1 and r = 0, µ (1) is just the Bergman measure. Thus, this measure is yet another generalization of the Bergman measure.
Consider the family X → D * along with the horizontal divisor B. Using the same notation as before, let µ (m,B) t denote the measure µ (m,Bt) on X t associated to L t . We are also able to compute the limit of µ . For details, see Section 6 and Theorem 6.0.1.
As before, we also get the limit of measures µ (m,B) t on the hybrid space X hyb .
Corollary E. The measures µ (m,B) t
converge to a measure on the hybrid space X hyb whose support is the essential skeleton of the pair (X C((t)) , 1 m B C((t)) ). The limiting measure is a sum of Dirac masses on the vertices and Lebesgue measure on the edges.
Finally, we would also like to understand what happens to the limiting pluri-Bergman measure as m → ∞. We compute this limit on X hyb instead of X hyb CC . The reason for doing so is that it is not clear to us what the limit of µ (m) as m → ∞ is for a fixed Riemann surface. However, the total mass of µ (m) is easy to figure out.
There are two ways to think of the limit. In the first case, we fix B and suppose that g ≥ 2. Let µ (m,B) t denote the pluri-Bergman measure on X t induced by Ω ⊗m Xt (B| Xt ). By abuse of notation, let µ , normalized to volume 2g − 2, converges to an analogue of the hyperbolic measure on X an C((t)) and this limit does not depend on the choice of the divisor B (see Section 6.2). This limit measure lives on the dual graph of the stable reduction of X (which coincides with the essential skeleton in this case). It places no mass on the edges and places a mass of 2g(v) − 2 + val(v) on each vertex, where g(v) is the genus of the irreducible component associated to v and val(v) is the valency of the vertex v in the dual graph. It follows from [PS19] that this measure is the limit of hyperbolic measures on X t . It seems to be unknown whether the measures µ (m) t , normalized to volume 2g − 2, themselves converge weakly to the hyperbolic measure on X t .
Another way to think of the limit is to fix the Q-divisor 1 m B instead of fixing B i.e. we consider µ (km,kB) 0 associated to ω ⊗km Xt (kB| Xt ) and take the limit as k → ∞. Method of proof. A general observation (Lemma 3.4.1) tells that us in order to compute the limit on all snc models of X, it is enough to compute it on any one snc model. We work with the model X of X obtained from the minimal snc model of (X, B) by repeatedly blowing down the (−1)-curves E in the central fiber for which deg(B| E ) < m. The advantage of working with this model is that h 0 (ω X0 (B| X0 )) = h 0 (ω Xt (B| Xt )); now we can apply Grauert's lemma [Har77,Corollary III.12.9] to find sections of ω X /D (B) that restrict to a basis of H 0 (ω X0 (B| X0 )) and H 0 (ω Xt (B| Xt )). We make a clever choice of such a basis that renders the computations simple. By analyzing these sections, we find expressions for τ (m) t and µ (m) t . Now, understanding the asymptotics of these sections allows us to understand the asymptotics of τ Theorems C and D also follow from similar calculations. We prove Corollary 3.4.2, a general result on how to transfer convergence from X hyb CC to X hyb . As a consequence, Corollaries B and E follow from Theorems A and D, respectively, using Corollary 3.4.2.
Assume that 2g − 2 + deg(B| X t ) m > 0.
Further questions. On a fixed Riemann surface, we can consider a sequence of measures constructed iteratively using the recipe for constructing the pluri-Bergman measure, starting with the Bergman measure. Tsuji has shown that this sequence of measures converges to the hyperbolic measure [Tsu10]. It would be interesting to know what the limit of these measures on ∆ CC (X ) is and whether the sequence of limiting measures could be given a dynamical interpretation.
We could also ask whether the measures µ (m) converge to the hyperbolic measure as m → ∞, which is the case for their limits on X hyb .
It would be interesting to know if there is a higher dimensional analog of Theorem A and Corollary B.
Organization of the paper. We discuss some preliminaries in Section 2. In Section 3, we discuss the metrized curve complex hybrid space. In Section 4, we discuss the global sections of ω ⊗m X0 (B| X0 ). We prove Theorem A in Section 5 and in Section 6, we prove Theorem D. In Section 7, we prove Theorem C 2. Preliminaries 2.1. Families of curves and models. Let D denote the complex unit disk and let D * denote the complex unit disk punctured at the origin. A family of curves X → D * of genus g is a complex manifold X along with a projective holomorphic submersion X → D * such that the fibers are smooth compact connected complex curves of genus g. We also assume that our family of curves is meromorphic at the origin i.e. X ⊂ P N × D * is cut out by polynomials whose coefficients are holomorphic functions on D * and meromorphic at the origin.
A model of X is a normal complex analytic space X along with a projective flat holomorphic map X → D such that X | D * = X as complex analytic spaces over D * . Let X 0 denote the central fiber of X . Note that X 0 will always be connected [Liu02,Corollary 8.3.6]. A model X is said to be an snc model of X if X is regular and X 0,red is an snc divisor on X .
Given two snc models X and X of X, we say that X dominates X if there is a proper map q: X → X such that q| X is the identity map. Note that q is a bimeromorphic map between X and X .
Let B = b 1 B 1 + · · · + b r B r be a horizontal divisor on X. After shrinking the base disk, we may assume that B i ∩ B j = ∅ for i = j. Let B denote the component-wise closure of B in X . We say that X is an snc model of (X, B) if X is regular and (X 0 + B) red is an snc divisor on X .
Let m ≥ 2 be a positive integer. Suppose that b i < m for all i. Further assume that deg(B| Xt ) ≥ 1 if g = 1 and deg(B| Xt ) ≥ 2m if g = 0. Throughout this paper, we will only be working with such pairs. Note that in this case (X, B red ) is stable in the sense of [DM69] i.e. 2g − 2 + deg(B red ) > 0.
A theorem of Deligne and Mumford guarantees that if (X, B red ) is a stable pair, then after a base change D * → D * given by u → u N , there exists an snc model X of (X, B) such that X 0 is reduced. Such an X is called a semistable model and we will assume that all our families have a semistable model. If (X, B) has a semistable model, there exists a unique minimal one. Here, minimality means that any other semistable model is obtained by applying a sequence of blowups to the minimal semistable model. We can get the minimal semistable model by considering the stable reduction of (X, B red ) and then resolving the singularities by blowing up (see [ACG11, Chapter X.4]).
Let X denote the minimal semistable model of (X, B). We will mostly work with the model X that is obtained from X by repeatedly contracting the (−1)curves E in the central fiber for which deg(B| E ) < m. We will call X as the minimal snc model of (X, 1 m B). The choice of notation is due to the fact that the model only depends on the Q-divisor 1 m B, in the sense that the minimal snc model of (X, 1 m B) is the same as that of (X, 1 km kB) for any positive integer k. Note that
E 1 E 2 E 3 B 1 B 2 B 3 B 3 B 2 B 1 E 3 X 0 X 0= E 1 + E 2 + E 3 .
The model X is obtained by first contracting E 1 and then E 2 from the minimal semistable model, X , of (X, B).
X 0 + B red is no longer an snc divisor; however this is not a major problem as X 0 is still a (reduced) snc divisor. Moreover, B does not pass through any nodal points of X 0 -this is due to the fact that the image of a (−1)-curve E in X under the map X → X , obtained by contracting E, is a smooth point in X 0 . For an example, see Figure 2.
Let X be the minimal snc model of (X, 1 m B). For any irreducible component E ⊂ X 0 , let val(E) denote the number of intersection points of E with the rest of X 0 i.e. val(E) = #(E ∩ (X 0 \ E)). Any irreducible component E ⊂ X 0 is of one of the following forms: Note that all irreducible components E ⊂ X 0 satisfy 2g(E) − 2 + val(E) + deg(B| E ) m ≥ 0. If g(E) = 0, val(E) = 2 and deg(B| E ) = 0, we call E as inessential, otherwise we call it as essential. If B = 0, the essential irreducible components of X 0 are exactly those that show up in the central fiber of the stable reduction of X in the sense of [DM69] i.e. when B = 0, the inessential components correspond to (−2)-curves which can be contracted to obtain the stable reduction of X.
2.2.
Dual graph and the stable dual graph of a model. Let (X, B) be a pair as from the previous section. Given an snc model X of X, the dual graph Γ X is a graph whose vertices correspond to irreducible components on X 0 and edges correspond to the nodes in X 0 . Note that Γ X is allowed to have multiple edges between a pair of vertices, but no loops are allowed. Note that Γ X is connected because X 0 is connected. Associated to each vertex v E ∈ V (Γ X ), we keep track of two numbers: the genus g(v E ) = g(E) and the valency val(v E ) = val(E). Figure 3. The above figure shows the stable dual graph in the case when central fiber of the minimal snc model X of (X, 0) is given by X 0 = E 0 + · · · + E 5 such that g(E 0 ) = g(E 5 ) = 2 and g(E 1 ) = g(E 2 ) = g(E 3 ) = g(E 4 ) = 0.
E 0 E 1 E 2 E 4 E 3 E 5 X 0 = 5 i=0 E i v E0 v E1 v E2 v E3 v E4 v E5 v E0 v E5 1 1 1 1 1 Γ X Γ 5
We define the length of an edge e between v E1 and v E2 by
l e = 1 mult X 0 (E 1 ) · mult X 0 (E 2 )
.
In particular, when X 0 is reduced, all edges in Γ X have length 1. Now let X be the minimal snc model of (X, 1 m B). We call a vertex v E ∈ V (Γ X ) inessential (respectively essential) if E is inessential (respectively essential).
Let v 0 , v N ∈ V (Γ X ) for some N ≥ 1. We define an inessential chain between v 0 , v N to be a sequence of vertices v 0 , . . . , v N ∈ V (Γ X ) such that there is an edge between v i−1 and v i for 1 ≤ i ≤ n and v 1 , . . . , v N −1 are inessential. Such a chain is said to be maximal if v 0 and v N are essential. Note that we do allow v 0 = v N .
The stable dual graph of (X, 1 m B), denoted Γ, is the graph obtained from Γ X by forgetting the inessential vertices. The lengths of the edges of Γ are such that Γ and Γ X are isometric as metric graphs. For an example, see Figure 3.
Thus, V ( Γ) is exactly the set of essential vertices of Γ X and an edge in Γ corresponds to a maximal inessential chains in Γ X .
Even though, it is suppressed in the notation, Γ depends on the choice of the integer m and the divisor B. Note that if B = 0, then Γ is just the dual graph of the stable reduction of X in the sense of [DM69].
2.3. The Narasimhan-Simha measure. Let Y be a compact Riemann surface of genus g, and let P 1 , . . . , P r be distinct points on Y for some r ≥ 0. We allow g = 0, provided that r ≥ 3.
Pick an integer m ≥ 1 and let 0 < a 1 , . . . , a r < m be integers. If g = 0, we also require that a 1 +· · ·+a r ≥ 2m. This ensures that h 0 (Y, mK Y +a 1 P 1 +· · ·+a r P r ) > 0.
Given θ, ϑ ∈ H 0 (Y, mK Y + a 1 P 1 + · · · + a r P r ), we define a volume form |θ ∧ ϑ| 1/m on Y as follows. Locally if θ(z) = f (z)dz ⊗m and ϑ(z) = g(z)dz ⊗m , where f and g are local meromorphic functions on Y with poles of order at worst a i at P i , then, |θ ∧ ϑ| 1/m = |f (z)g(z)| 1/m ( i 2 dz ∧ dz). Also denote |θ| 2/m := |θ ∧ θ| 1/m . Now, we define a continuous function · Y : H 0 (Y, mK Y + a 1 P 1 + · · · + a r P r ) → R ≥0 as follows:
θ Y := Y |θ| 2/m m/2 .
The assumption a i < m ensures that the integral converges. Note that · Y satisfies the following properties (see [NS68] for details):
• θ Y = 0 ⇐⇒ θ = 0; • λθ Y = |λ| θ Y for λ ∈ C; and • {θ | θ Y = 1} is a compact subset of H 0 (mK Y + a 1 P 1 + · · · + a r P r ). • · Y is not a norm if m > 1.
The Narasimhan-Simha volume form associated to the line bundle mK Y +a 1 P 1 + · · · + a r P r is the continuous positive volume form on Y defined by
τ (m,a1P1+···+arPr) (z) = sup θ Y =1 |θ| 2/m (z), where the supremum is over {θ ∈ H 0 (mK Y + a 1 P 1 + · · · + a r P r ) | θ Y = 1}.
Since |θ| 2/m (z) is a real cotangent vector at z, it lies in ordered set R ≥0 ∪ {∞} and the supremum makes sense. Since the supremum is over the compact set {θ | θ Y = 1}, the supremum is indeed a maximum. If no confusion arises, we skip the superscript and just use denote τ to denote τ (m,a1P1+···+arPr) . We could think of τ as a continuous section of the real line bundle
|K Y | 2 ⊗|O Y (P 1 )| 2a1/m ⊗ . . . ⊗ |O Y (P r )| 2ar/m on Y .
If m ≥ 2, then note that mK Y + a 1 P 1 + · · · + a r P r is base point free and thus, τ does not vanish anywhere on Y . Also note that τ (z) < ∞ if z ∈ Y \ {P 1 , . . . , P r } and that the total mass of τ is finite, but does not seem to be easy to calculate. Since the line bundle mK Y + a 1 P 1 + · · · + a r P r is base point free, given a P i , there exists a global section of mK Y + a 1 P 1 + · · · + a r P r that looks locally like z −ai dz ⊗m near P i . Thus, locally near P i , τ is given by
φ · |z| −2ai/m ( i 2 dz ∧ dz), where φ is a continuous function in a neighborhood of P i .
Note that if f : Y → Y is a biholomorphism fixing P 1 , . . . , P r , then the pushforward measure f * τ is equal to τ i.e. τ is invariant under the action of an automorphism of the marked curve (Y ; P 1 , . . . , P r ).
2.4. The pluri-Bergman measure. Let the notation be as in the previous subsection. We define a Hermitian pairing on H 0 (mK Y + a 1 P 1 + · · · + a r P r ) as follows [NS68]:
(2.1) θ, ϑ = i 2 m Y θ ∧ ϑ τ m−1 .
In the above ( i 2 ) m θ∧ϑ τ m−1 is the (1,1)-form given as follows. If θ = f (z)dz ⊗m , ϑ = g(z)dz ⊗m and τ = h(z)( i 2 dz ∧ dz) locally for some holomorphic functions f, g and positive real-valued function h,
then ( i 2 ) m θ∧ϑ τ m−1 = f (z)g(z) h(z) m−1 ( i 2 dz ∧ dz)
locally. We also use the notation |θ∧ϑ|
τ m−1 to denote |f (z)g(z)| h(z) m−1 ( i 2 dz ∧ dz) locally.
The continuous (1, 1)-form τ does not vanish anywhere on Y and thus the integral is well defined and finite. Note that if m = 1, then τ does not play any role and we recover the Hermitian pairing induced by the Bergman metric.
Let e 1 , . . . , e M be an orthonormal basis of H 0 (mK Y + a 1 P 1 + · · · + a r P r ) with respect to the above pairing. Using elementary linear algebra, we see that the (1, 1)-form
µ (m,a1P1+···+arPr) = M i=1 |e i ∧ e i | τ m−1
does not depend on the choice of the orthonormal basis. We call the corresponding Radon measure on Y as the pluri-Bergman measure on Y induced by line bundle mK Y +a 1 P 1 +· · ·+a r P r . Whenever there is no confusion regarding the line bundle, we skip the superscript and denote µ = µ (m,a1P1+···+arPr) . It is also given by the formula
µ = sup |θ ∧ θ| τ m−1 θ ∈ H 0 (mK Y + a 1 P 1 + · · · + a r P r ), Y |θ ∧ θ| τ m−1 = 1 ,
which is proved using the same arguments as in the proof of Propositions 1.1 and 1.2 of [Ber10]. It also follows from this description that in the case when m = 1 and B = 0, µ (1) is the same as τ (1) and is the Bergman measure on Y .
Since τ does not vanish on Y , µ(z) is finite for all points z ∈ Y \ {P 1 , . . . , P r }. From the second description of µ, it follows that µ is nowhere vanishing. By using the fact that there is a global section of mK Y + a 1 P 1 + · · · + a r P r that looks like z −ai dz ⊗m near P i and that τ ∼
C 1 |z| −2ai/m ( i 2 dz ∧ dz) near P i , we conclude that µ ∼ C 2 |z| −2ai/m ( i 2 dz ∧ dz) near P i .
Note that the total mass of µ is just h 0 (mK Y + a 1 P 1 + · · · + a r P r ).
2.5. The dualizing sheaf and its tensor powers. Let X denote an snc model of X such that X 0 is reduced. Then, the exists a dualizing sheaf
ω X 0 = ω X (X 0 )| X 0 on X 0 .
Let E 1 and E 2 be irreducible components of X 0 . A local section of ω X 0 near a node P = E 1 ∩ E 2 is the data of meromorphic one-forms f 1 , f 2 on E 1 and E 2 , respectively, with at worst simple-pole along P , such that the residues of f 1 and f 2 at P sum to 0 [DM69, Section I].
Using this local description of sections of ω X 0 , we also get the following description of local sections of ω ⊗k X 0 , where k is an integer (possibly negative). If k ≥ 1, let us denote dz ⊗k = dz ⊗ . . . ⊗ dz; if k is negative, we can think of dz ⊗k as a formal symbol satisfying the appropriate change of coordinates. Then, a local section θ of ω ⊗k X 0 near P = E 1 ∩ E 2 is just given by the data of two meromorphic k-canonical forms f (z)dz ⊗k and g(w)dw ⊗k locally on E 2 and E 1 near P , respectively, such that • f and g can have at worst poles of order k at the origin. (When k is negative, this means that f and g vanish to order at least −k at the origin) • If we write f (z) = n≥−k a n z n and g(w) = n≥−k b n w n locally around P , then
a −k + (−1) k+1 b −k = 0.
We call a −k and b −k the residues of θ at P along E 2 and E 1 respectively.
Let E 1 , . . . , E s denote the irreducible components of X 0 . Let P (i) 1 , . . . , P (i) ri denote the nodal points of X 0 that lie in E i . Now pick an orientation on Γ X i.e. for every edge e ∈ E(Γ X ), we pick a direction. Let e − and e + denote the initial and final vertex of e with respect to the chosen orientation. Summarizing the above discussion, we have a short exact sequence of sheaves on X 0 .
(2.2) 0 → ω ⊗k X0 → i O Ei (kK Ei + kP (i) 1 + · · · + kP (i) ri ) → P ∈X0 node C(P ) → 0,
where the first map is given by the restrictions and the second map is given by taking the sum (respectively difference) of residues if k is odd (respectively even) i.e. this map is given by
(ψ i ) i → (res P (ψ e + P ) + (−1) k+1 res P (ψ e − P )
) P . The following short exact sequence will also be useful.
(2.3) 0 → i O Ei (kK Ei + (k − 1)P (i) 1 + · · · + (k − 1)P (i) ri ) → ω ⊗k X0 → P ∈X0 node C(P ) → 0,
where the first map exists because all the residues of sections of the left term are zero, so there is no compatibility of residues to be checked. The second map is taking the residue at each node P along the irreducible component associated to e − P .
Metrized curve complex hybrid space
We describe the metrized curve complex and the associated hybrid space in this section. See [AB15] and [Shi20, Section 7] for more details. In this section X will denote an arbitrary snc model of X. We do not assume that X is semistable and we will not keep track of the divisor B.
3.1. Metrized curve complex. Let X 0 denote the normalization of X 0 i.e. the disjoint union of all irreducible components of X 0 . The metrized curve complex ∆ CC (X ) associated to X is a topological space defined as follows.
∆ CC (X ) = X 0 e∈E(Γ X ) [0, l e ] ∼,
where ∼ is the identification of the end points [0, l e ] with the corresponding points that lie over the nodal point associated to e. Recall that l e is the length of the edge e ∈ E(Γ X ). See Figure 1 for an example.
3.2. Metrized curve complex hybrid space. As a set, the metrized curve complex hybrid space, X hyb CC , is given by X hyb CC = X ∆ CC (X ). We also have a map X hyb CC → D given by extending the map X → D * and sending ∆ CC (X ) to the origin. This map will turn out to be continuous in the topology on X hyb CC . Before we describe the topology on X hyb CC , we make a few definitions.
First, consider a point Q ∈ E, where E ⊂ X 0 is an irreducible component of multiplicity a such that Q is not a nodal point. We can find an open set U 1 ⊂ X containing Q with coordinates z, w on U 1 such that U 1 ∩ X 0 = U 1 ∩ E, |z|, |w|< 1 and the map to D is given by (z, w) → z a . We say that (U 1 , z, w) is a coordinate chart adapted to the irreducible component E and centered at Q. Now, consider a nodal point P = E 1 ∩ E 2 in X 0 , where E 1 , E 2 ⊂ X 0 are irreducible components of multiplicity a, b respectively. Then, we can find an open chart U 2 and coordinates z, w on U 2 such that U 2 ∩ X 0 = U 2 ∩ (E 1 ∪ E 2 ), |z|, |w|< 1 and the map to D is given by
(z, w) → z a w b . We say that (U 2 , z, w) is adapted to the node P = E 1 ∩ E 2 .
To describe the topology on X hyb CC , it is enough to describe the neighborhood basis of each point.
• Firstly, we require that X → X hyb CC is an open immersion. This describes the neighborhood basis of points in X.
• Pick a point Q ∈ E, where E ⊂ X 0 is an irreducible component and Q is not a nodal point on X 0 . Let (U 1 , z, w) be a coordinate chart adapted to E and centered at Q. Viewing U 1 as a subset of X hyb CC , we get a neighborhood of Q. By shrinking such adapted coordinate charts, we get a neighborhood basis of Q.
• Pick a point Q ∈ e P , where P = E 1 ∩ E 2 is a nodal point in X 0 , where E 1 , E 2 have multiplicities a, b respectively such that Q does not lie in X 0 . Identify e P with [0, 1 ab ], where 0 gets identified with v E1 and 1 ab with v E2 . Let (U 2 , z, w) be a coordinate chart adapted to the node P = E 1 ∩ E 2 . Pick α, β so that 0 < α < Q < β < 1 ab . Then, (z, w) ∈ U 2 | α < log|w| a log|z a w b | < β ∪ (α, β)
is a neighborhood of Q. Shrinking U 2 and letting α, β → Q, we get a neighborhood basis of Q.
• Pick a point Q = e P ∩ E 1 , where P = E 1 ∩ E 2 is a nodal point in X 0 , where E 1 , E 2 have multiplicities a, b respectively. Identify e P with [0, 1 ab ], where 0 gets identified with v E1 and 1 ab with v E2 . Let (U 2 , z, w) be a coordinate chart adapted to the node P = E 1 ∩ E 2 . Pick 0 < 1 ab . Then, (z, w) ∈ U 2 | log|w| a log|z a w b | < ∪ (U 2 ∩ E 1 ) ∪ [0, )
is a neighborhood of Q. Shrinking U 2 and letting → 0, we get a neighborhood basis of Q.
Alternatively, we can define the topology on X hyb CC to be the coarsest topology for which the maps X hyb CC → X and X hyb CC → X hyb are continuous, where X hyb = X ∪ Γ X denotes the Boucksom-Jonsson hybrid space [Shi20, Section 7].
Lemma 3.2.1. If X , X are models of X such that X dominates X i.e. a proper map X → X which restricts to identity on X, then there exists a unique continuous surjective map (X ) hyb CC → X hyb CC that restricts to identity on X. Proof. Such a map X → X is given by a composition of blowups along closed points in the central fiber [Lic68, Theorem 1.15], we may reduce to the case when X → X is obtained by a single blowup. If X → X is obtained by blowing up a smooth point in X 0 , then we get a map ∆ CC (X ) → ∆ CC (X ) obtained by collapsing the extra edge and curve in ∆ CC (X ) to the center of the blowup.
If X → X is obtained by blowing up a nodal point in X 0 , then ∆ CC (X ) → ∆ CC (X ) is obtained by collapsing the extra curve in ∆ CC (X ) to the corresponding point in Γ X . More precisely, suppose X is obtained by blowing up P = E 1 ∩ E 2 , where E 1 and E 2 are irreducible components of multiplicity a and b respectively and let E be the exceptional divisor of the blowup. We can identify e P [0, 1 ab ], where v E1 is identified with 0 and v E2 is identified with 1 ab . The exceptional curve E is collapsed to the point 1 a(a+b) ∈ e P and the edges e E1∩E and e E2∩E are identified with [0, 1 a(a+b) ] and [ 1 a(a+b) , 1 ab ]. In both cases, we get a continuous surjective map ∆ CC (X ) → ∆ CC (X ), which gives rise to a surjective map (X ) hyb CC → X hyb CC To show that this map is continuous, it is enough to note that the compositions (X ) hyb CC → X → X and (X ) hyb CC → (X ) hyb → X hyb are continuous and that these compositions are the same as the compositions (X ) hyb
CC → X hyb CC → X and (X ) hyb CC → X hyb CC → X hyb . 3.3.
Convergence of measures on the metrized curve complex hybrid space. We outline some general techniques that will be used to prove Theorem A and D.
Lemma 3.3.1. Let X be an snc model of X. Let (ν t ) t∈D * be a family of Radon measures on X with the support of ν t contained in X t and such that lim sup t→0 ν t (X t ) < ∞. Let ν 0 be a Radon measure on ∆ CC (X ). To show that ν t → ν 0 weakly as measures on X hyb CC , it is enough to prove the following.
(1) Let (U 1 , z, w) be a coordinate chart adapted to an irreducible component E ⊂ X 0 and let f be a continuous function on U 1 with compact support.
Then,
U1∩Xt f ν t → U1∩E f ν 0 as t → 0.
(2) Let (U 2 , z, w) be a coordinate chart adapted to a node P = E 1 ∩ E 2 , where E 1 , E 2 have multiplicities a, b in X 0 respectively. Let 0 < α < β ≤ 1 2ab and let f be a continuous function on [0, 1 ab ] e P . Then,
{w∈U2|α≤ log|w| a log|t| ≤β} f log|w| a log|t| ν t → [α,β] f ν 0 as t → 0. (3) Let (U 2 , z, w) be a coordinate chart adapted to a node P = E 1 ∩ E 2 , where E 1 , E 2 have multiplicities a, b in X 0 respectively. Let 0 < < 1 2ab and identify e P [0, 1 ab ]. Let D = {(w, u) ∈ D × [0, ) | Either w = 0 or u = 0}
and let r : D × [0, ) → D be a strong deformation retract. We can identify D with (U 2 ∩E 1 )∪[0, ). Let f be a compactly supported continuous function on D . Then,
{w∈U2| log|w| a log|t| < } f r w, log|w| a log|t| ν t → D f ν 0 as t → 0.
Proof. Let h be a continuous function in a neighborhood of ∆(X ). We need to show that Xt hν t → ∆CC(X ) hν 0 . The sets listed in the lemma form the neighborhood basis of points in ∆ CC (X ) in X hyb CC . So, we can cover a neighborhood of ∆ CC (X ) using finitely many open sets of these forms. Now consider a partition of unity {χ i } i adapted to such a cover.
Writing h = i χ i h, it is enough to show that χ i hν t → χ i hν 0 as t → 0. So,
we are reduced to the case where h is supported in a set of one of the forms listed above.
If h is supported in the set listed in (1), then take f = h and there is nothing to show.
If h is supported in the set listed in (2), let f = h| e P . Then, h − f ( log|w| a log|t| ) is a compactly supported continuous function which vanishes along ∆ CC (X 0 ). Thus,
given > 0, we can find t 0 > 0 such that |h − f ( log|w| a log|t| )|< on X t for all |t|< t 0 . Thus, we get that |h − f ( log|w| a log|t| )|ν t < ν t (X t ).
Letting → 0 and using the fact
that lim sup t→0 ν t (X t ) < ∞, we get that {w∈U2|α≤ log|w| a log|t| ≤β} h − f log|w| a log|t| ν t → 0, and thus lim t→0 hν t = lim t→0 {w∈U2|α≤ log|w| a log|t| ≤β} f log|w| a log|t| ν t = [α,β] f ν 0 = hν 0 .
A similar argument also shows that if h is supported in the set listed in (3), then hν t → hν 0 .
3.4. Extending convergence to higher models. Let X and X be models of X such that X dominates X . Recall that this gives rise to a unique continuous surjective map X hyb CC → X hyb CC . Lemma 3.4.1. Let X , X be snc models of X such that X dominates X . Let (ν t ) t∈D * be a family of Radon measures on X t and ν 0 a Radon measure on ∆ CC (X ) such that ν t converges weakly to ν 0 on X hyb CC . Suppose that ν 0 ({Q}) = 0 for all points Q ∈ ∆ CC (X ). Then there exists a unique measure ν 0 on ∆ CC (X ) such that ν t → ν 0 weakly on (X ) hyb CC . Proof. Since the map q: X → X is a composition of blowups [Lic68, Theorem 1.15], we see that the map X 0 → X 0 is obtained by contracting some irreducible components of X 0 and that the map ∆ CC (X ) → ∆ CC (X ) is obtained by collapsing some curves and edges to points. Let q CC : ∆ CC (X ) → ∆ CC (X ), q hyb CC : (X ) hyb CC → X hyb CC denote the induced maps. We claim that there is a unique measure ν 0 on ∆ CC (X ) such that the pushforward measure (q CC ) * (ν 0 ) is ν 0 . This is easy to see as there exist finitely many points Q 1 , . . . Q s ∈ ∆ CC (X ) such that q CC is an homeomorphism over ∆ CC (X ) \ {Q 1 , . . . , Q s }. This determines ν 0 | q −1 CC (∆CC(X )\{Q1,...,Qs}) and since ν 0 ({Q i }) = 0, we also get that ν 0 | q −1 CC (Qi) = 0 for all i = 1, . . . , s. Pick 0 < r < 1. Let (X ) hyb CC,r denote the preimage of rD under the map π: (X ) hyb CC,r → D. Then, (X ) hyb CC,r is a compact topological space and ν t for |t|≤ r is a collection of Radon measures on (X ) hyb CC,r . Since ν t (X t ) → ν 0 (∆ CC (X )), we may decrease r to further assume that ν t (X t ) ≤ ν 0 (∆ CC (X )) + 1 for all t ∈ rD * .
Let t 1 , t 2 , . . . be a sequence in rD that converges to 0. Applying the Banach-Alaoglu theorem to the dual space of continuous functions on (X ) hyb CC,r , we get that, after passing to a subsequence, the measures ν ti k has a weak limit ν 0 . Then, we get that ν ti k → ν 0 on (X ) hyb CC,r . But since pushforward of Radon measures under a continuous map commutes with taking weak limits, we get that ν ti k → (q CC ) * ( ν 0 ). But this means that (q CC ) * ( ν 0 ) = ν 0 . By the uniqueness of such a measure we get that ν 0 = ν 0 i.e. all convergent subsequnces have the same weak limit. Thus, we get that ν t → ν 0 on (X ) hyb CC,r and hence on (X ) hyb CC . Corollary 3.4.2. Let X, X , (ν t ) t∈D * , ν 0 be as in Lemma 3.4.1. Then, there exists a Radon measure ν 0 on X an C((t)) such that ν t → ν 0 weakly as measures on X hyb . Moreover, the support of ν 0 lies in contained in the skeletal subset Γ X ⊂ X an C((t)) . Proof. Recall that the Berkovich hybrid space X hyb = X ∪ X an C((t)) can also be obtained as an inverse limit of the Boucksom-Jonsson hybrid spaces (X ) hyb = X ∪ Γ X , where X runs through all snc models of X; these form a directed system. Therefore, to prove convergence on X hyb , it is enough to prove a compatible convergence of ν t on (X ) hyb for all snc models X of X.
Since the collection of models that dominate X form a cofinal system, it is enough to prove this for models X that dominate X . Consider such a model X . From Lemma 3.4.1, we get that the limit of ν t on (X ) hyb CC exists. From the continuous map (X ) hyb CC → (X ) hyb obtained by collapsing the curves in the central fiber, we see that the limit of ν t on (X ) hyb is just the pushforward of the limit of ν t on (X ) hyb CC . Using the techniques from the proof of Lemma 3.4.1, it is also easy to check that the limits are compatible i.e. if X and X are snc models of X such that X dominates X , then limit on Γ X is just the pushforward under the retraction map Γ X → Γ X . This proves that the limit of ν t on X hyb exists.
The statement about the support follows from the fact that the limit of ν t on (X ) hyb CC does contain any of the edges collapsed under the map ∆ CC (X ) → ∆ CC (X hyb ).
4. The sheaf ω ⊗m X0 (B| X0 ) Throughout this section, let m ≥ 2 denote a positive integer. Let (X, B) satisfy the assumptions listed in Section 2.1. Let X denote the minimal snc model of (X, 1 m B). Recall that X 0 is reduced and that X 0 and B do not intersect at nodal points in X 0 . In this section, we give a description of the global sections of ω ⊗m X0 (B| X0 ). 4.1. Local description. To get a local description of sections of ω ⊗m X0 (B| X0 ), we just need to tensor the short exact sequence (2.2) for X 0 = X 0 and k = m with O X0 (B| X0 ) to get
(4.1) 0 → ω ⊗m X0 (B| X0 ) → i O Ei (mK Ei + mP (i) 1 + · · · + mP (i) ri + B| Ei ) → P ∈X0 node C(P ) → 0,
where the first map is given by the restrictions and the second map is given by taking the sum/difference of residues (See Section 2.5 for details).
Dimension of global sections.
To understand the convergence of the Narasimhan-Simha, we would like to use Grauert's Lemma [Har77, Corollary III.12.9] to be able to conclude that there exists an open neighborhood U ⊂ X of X 0 such that we can find θ 1 , . . . , θ M ∈ H 0 (U, ω ⊗m X (B)) such that θ i | Xt is a basis of H 0 (X t , ω ⊗m Xt (B| Xt )) for |t| 1 and θ i | X0 is a basis for H 0 (X 0 , ω ⊗m X0 (B| X0 )). To do this, it is enough to show that h 0 (X 0 , ω ⊗m
X0 (B| X0 )) = h 0 (X t , ω ⊗m Xt (B| Xt )) = (2m − 1)(g − 1) + deg(B| Xt ).
Remark 4.2.1. The reason for working with the minimal snc model of (X, 1 m B) is precisely because h 0 (X 0 , ω ⊗m X0 (B| X0 )) = h 0 (X t , ω ⊗m Xt (B| Xt )) is satisfied for the minimal snc model X , while it not necessarily satisfied by a general snc model. The minimality assumption plays a role in the proof of Lemma 4.2.3, where it helps us control the H 0 (
E i , (1 − m)(K Ei + P (i) 1 + · · · + P (i) ri ) − B| Ei ) term that shows up. Lemma 4.2.2. h 0 (X 0 , ω ⊗m X0 (B| X0 )) = (2m − 1)(g − 1) + deg(B| X0 ). Proof.
Using the short exact sequence (4.1), we get
χ(ω ⊗m X0 (B| X0 )) = i χ(O Ei (mK Ei + mP (i) 1 + · · · + mP (i) ri + B| Ei ) − #E(Γ X ),
where χ(F) = h 0 (F) − h 1 (F) is the Euler characteristic of a sheaf F. By Riemann-Roch, the right hand side is
i χ(O Ei (mK Ei + mP (i) 1 + · · · + mP (i) ri + B| Ei ) − #E(Γ X ) = i ((2g(E i ) − 2 + r i )m + deg(B| Ei ) − g(E i ) + 1) − #E(Γ) = (2m − 1) i g(E i ) + (−2m + 1)#V (Γ) + (2m − 1)#E(Γ) + deg(B| X0 ) = (2m − 1) i g(E i ) + g(Γ X ) − 1 + deg(B| X0 ) = (2m − 1)(g − 1) + deg(B| X0 ).
Therefore the result follows if we can show that h 1 (ω ⊗m X0 (B| X0 )) = 0. This is proved in the following lemma.
Lemma 4.2.3. Let X be the minimal snc model of (X, 1 m B). Then, h 1 (X 0 , ω ⊗m X0 (B| X0 )) = 0.
Proof. Using Serre duality, we have h 1 (ω ⊗m X0 (B| X0 )) = h 0 (ω ⊗1−m
X0
(−B| X0 )) and it is enough to show that h 0 (X 0 , ω ⊗1−m X0 (−B| X0 )) = 0. Consider the short exact sequence obtained by tensoring (2.2) for X 0 = X 0 and
k = 1 − m with O X0 (−B| X0 ). 0 → ω ⊗(1−m) X0 (−B| X0 ) → i O Ei ((1 − m)(K Ei + P (i) 1 + · · · + P (i) ri ) − B| Ei ) → P ∈X0 node C(P ) → 0,
By considering the long exact sequence induced in cohomology, we get
0 → H 0 (ω ⊗1−m X0 (−B| X0 )) → i H 0 (E i , (1−m)(K Ei +P (i) 1 +· · ·+P (i) ri )−B| Ei ) → P ⊂X0 node C(P ). Since m ≥ 2, H 0 (E i , (1 − m)(K Ei + P (i) 1 + · · · + P (i)
ri ) − B| Ei ) = 0 in any one of the following cases.
•
g(E i ) ≥ 2, • g(E i ) = 1 and val(E i ) ≥ 1 • g(E i ) = 1 and deg(B| Ei ) ≥ 1 • g(E i ) = 0 and val(E i ) ≥ 3 • g(E i ) = 0, val(E i ) = 2 and deg(B| Ei ) ≥ 1, • g(E i ) = 0, val(E i ) = 1 and deg(B| Ei ) ≥ m, or • g(E i ) = 0, val(E i ) = 0 and deg(B| Ei ) ≥ 2m − 1.
Comparing this with all the constraints on the irreducible components of X 0 mentioned in Section 2.1, we see that the only contribution in the middle term comes from the inessential components i.e. the components for which g(E i ) = deg(B| Ei ) = 0 and val(E i ) = 2. Note that here we crucially use that X is the minimal snc model of (X, 1 m B). In this case,
h 0 (E i , (1 − m)(K Ei + P (i) 1 + P (i)
2 )) = h 0 (P 1 , O P 1 ) = 1, and any section of H 0 (E i , (1 − m)(K Ei + P (i) 1 + P (i) 2 ) is determined by its residue at P (i) 1 . Note that not all irreducible components E i ⊂ X 0 are inessential. Indeed, this would mean that X 0 is a cycle of rational curves with no marked points, which means that g = 1 and deg(B| Xt ) = 0, contradicting our assumption that (X, B) is not a family of genus 1 curves with no marked points.
So, without loss of generality, let E 1 ⊂ X 0 be an essential component. Suppose
E 2 is an inessential component of X 0 such that P = E 1 ∩ E 2 is a nodal point in X 0 . Let ψ ∈ H 0 (ω ⊗1−m X0
(−B| X0 )). Then, ψ| E1 must be zero. So by compatibility of residues, the residue of ψ| E2 at P must be 0. Thus, ψ| E2 = 0. More generally, for any inessential component E i of X 0 , we pick a path joining v E1 and v Ei in Γ X and apply induction along this path to conclude that ψ| Ei = 0. This can be done as Γ X is connected. Thus, H 0 (ω ⊗1−m X0 (−B| X0 )) = 0
Applying Grauert's Lemma [Har77, Corollary III.12.9] to L = ω ⊗m X (B), we conclude Lemma 4.2.4. There exists an open neighborhood U ⊂ X of X 0 such that we can find θ 1 , . . . , θ M ∈ H 0 (U, ω ⊗m X (B)) such that θ i | Xt is a basis of H 0 (X t , ω ⊗m Xt (B)) for |t| 1 and θ i | X0 is a basis for H 0 (X 0 , ω ⊗m X0 (B)).
A description of global sections.
The following lemma tells us that we can recover the residues of any section ψ ∈ H 0 (X 0 , ω ⊗m X0 (B| X0 )) along an inessential chain by just knowing it on one of the edges in the inessential chain.
Lemma 4.3.1. Let v 0 , . . . , v N be an inessential chain in Γ X . Let Q i for 0 ≤ i ≤ N − 1 be the nodal point in X 0 corresponding to the edge v i v i+1 in the inessential chain.
Let θ ∈ H 0 (X 0 , ω ⊗m X0 (B)). Let C denote the residue of θ at Q 0 along E v0 . Then the residue of θ at Q i along E vi is C and the residue at
Q i along E vi+1 is (−1) m C for all 0 ≤ i ≤ N − 1.
Proof. If the residue of θ at Q 0 along E v0 is C, then its residue at Q 0 along E v1 must be (−1) m C by the compatibility of the residues. Note that θ| Ev 1 ∈ H 0 (E v1 , mK Ev 1 + mQ 0 + mQ 1 ).
We also have that H 0 (E v1 , mK Ev 1 + mQ 0 + mQ 1 ) H 0 (P 1 , O P 1 ) is a onedimensional complex vector space and the map H 0 (E v1 , mK Ev 1 + mQ 0 + mQ 1 ) → C given by taking the residue at Q 1 is an isomorphism. The residue of θ at Q 0 and Q 1 differ by a factor of (−1) m . Thus, the residue of θ at Q 1 is C. Now the proof follows by induction.
Lemma 4.3.2. Let X be the minimal snc model of (X, 1 m B). We have the following short exact sequence of vector spaces.
0 → i H 0 (mK Ei + (m − 1)P (i) 1 + · · · + (m − 1)P (i) ri + B| Ei ) φ − → H 0 (ω ⊗m X0 (B| X0 )) φ − → C E( Γ) → 0.
Proof. We first describe the maps. The map φ exists because all the residues of sections in H 0 (mK Ei +(m−1)P (i) 1 +· · ·+(m−1)P (i) ri +B| Ei ) are zero and there is no compatibility of residues that needs to be checked. It is clearly injective since any element of H 0 (ω ⊗m X0 (B| X0 )) can be recovered from the restrictions to all irreducible components of X 0 .
The second map φ is defined as follows. Assign an arbitrary orientation to edges in Γ. Pick an edge e ∈ E( Γ), let v 0 , . . . , v N be the maximal inessential chain associated to the edge e, where v 0 is the initial vertex. Then φ sends an element ψ ∈ H 0 (ω ⊗n X 0,red ) to the residue of ψ| Ev 0 at the point corresponding to the edge v 0 v 1 .
It is clear that the composition φ • φ is 0 and the exactness at the middle place follows from Lemma 4.3.1. It remains to show that φ is surjective, which will follow if we show that all the vector spaces in the above short exact sequence have the right dimensions.
Consider the following long exact sequence induced by the short exact sequence (2.3) tensored with O X0 (B), where we get the last map is surjective by Lemma 4.2.3.
(4.2) 0 → i H 0 (mK Ei + (m − 1)P (i) 1 + · · · + (m − 1)P (i) ri + B| Ei ) φ − → H 0 (ω ⊗m X0 (B| X0 )) → C E(Γ X ) → i H 1 (mK Ei +(m−1)P (i) 1 +· · ·+(m−1)P (i) ri +B| Ei ) → 0.
By Serre duality,
h 1 (mK Ei + (m − 1)P (i) 1 + · · · + (m − 1)P (i) ri + B| Ei ) = h 0 ((1 − m)K Ei + (1 − m)P (i) 1 + · · · + (1 − m)P (i) ri − B| Ei )
. Following the discussion in the proof of Lemma 4.2.3, the above is 0 unless E i is inessential, in which case it is 1. Thus, the dimension of the last term in the above long exact sequence is equal to the number of inessential vertices in Γ X . Using
#E( Γ) = #E(Γ X ) − #{v ∈ V (Γ X ) | v is inessential},
it follows that all the vectors spaces in the short exact sequence in the lemma have the required dimensions.
Remark 4.3.3. In the case when B = 0, the analog of the above lemma for the case m = 1 is [Shi20, Equation (4.2)], which states that we have the following short exact sequence
0 → i H 0 (E i , K Ei ) → H 0 (X 0 , ω X0 ) → Ω(Γ X ) → 0.
Here Ω(Γ X ) is the collection complex-valued functions on E(Γ X ) which satisfies a balancing condition at all vertices. The reason for the difference in the two cases is that the global sections of ω X0 must satisfy the residue theorem at all irreducible components while global sections of ω ⊗m X0 , for some m ≥ 2, only need to satisfy the residue theorem at irreducible components with genus zero and valency 2.
4.4. The essential skeleton. We show that the dual graph Γ X of X , the minimal snc model of (X, 1 m B) is precisely the essential skeleton of the pair (X, 1 m B). We first recall what the essential skeleton is.
Given a pair (Y, D) where Y is a smooth variety over a C((t)) and D is a Qdivisor on Y , we can obtain a subset of the Y an defined by the minimality locus of certain weight functions [MN15] [BM19]. If all the coefficients of the irreducible components appearing in D are all strictly less than 1, then the essential skeleton of (Y, D) is contained in the dual complex of any snc model of (Y, D red ). Therefore, to compute the essential skeleton of (Y, D), it is enough to work with any one snc model of (Y, D red ). We describe the weight function in our context in the proof of the following lemma.
Lemma 4.4.1. Let X be the minimal snc model of (X, 1 m B). Then, Γ X is precisely the essential skeleton of the pair (X C((t)) , 1 m B C((t)) ). Proof. Note that X is not an snc model of (X, B red ). Therefore, we work with the minimal semistable model of (X, B red ), which we denote as X . Recall that X is obtained from X is obtained by repeatedly blowing down those (−1) curves E in the central fiber such that deg(B| E ) < m. Let p : X → X denote this map.
Let θ ∈ H 0 (ω ⊗km X/D * (kB)). Then, we can think of θ as also being a rational section of ω ⊗km X /D (kX 0 +kB 1 +· · ·+kB r ). Let div(θ) denote the associated divisor associated to θ on X , when viewed as a rational section of ω ⊗km X /D (kX 0 + kB 1 + · · · + kB r ). We define a function wt θ : Γ X → R as
wt θ (x) = ν x (div(θ)),
where ν x is the valuation associated to the point x ∈ Γ X ⊂ X an C((t)) . Here ν x (div(θ)) denotes the valuation ν x applied to the equation defining div(θ) at the center of the valuation ν x .
For example, If x ∈ Γ X is a vertex associated to an irreducible component E ⊂ X 0 , then wt θ (x) is the multiplicity of E in div(θ) i.e. the order of vanishing of θ along E.
Recall that Sk(X C((t)) , 1 m B C((t)) , θ) ⊂ Γ X is the minimalily locus of wt θ i.e.
Sk(X C((t)) ,
1 m B C((t)) , θ) = {x ∈ Γ X |wt θ (x) = min y∈Γ X wt θ (y)}.
The essential skeleton is given by Sk(X, 1 m B) = ∪ θ Sk(X, 1 m B, θ), where θ runs over all non-zero elements H 0 (X, ω ⊗km X/D * (kB)) for all k ≥ 1. Let θ 1 , . . . , θ M be the elements of H 0 (X , ω ⊗km X /D (kB)) obtained from Lemma 4.2.4. It is enough to consider those θ that lie in the linear span of θ 1 | X , . . . , θ M | X as any section of H 0 (X, ω ⊗km X/D * (kB)) would differ from an element in the linear span by a factor of a non-zero element in C((t)) (in which case, the weight function would differ by a constant). In this case, the minimum value of wt θ is 0. Let S denote the set of non-zero elements in the linear span of θ 1 , . . . , θ M for all choices of k.
If e is an edge in Γ X , then wt θ | e = 0 iff p * (θ) has a pole of order km along the node associated to P . If v is a vertex in Γ X , then wt θ (v) = 0 iff p * (θ) does not vanish along the irreducible component associated to v.
Thus, it follows from Lemma 4.3.2 given an edge e P ∈ E(Γ X ), there exists a θ such that θ has a pole of order m along P . Similarly, given a vertex v E ∈ V (Γ X ) there exists a θ such that θ does not vanish along E. Thus, Γ X is contained in the essential skeleton.
To show that Γ X contains the essential skeleton, recall that X is obtained from X is obtained by repeatedly blowing down those (−1) curves E in the central fiber such that deg(B| E ) < m.
Consider a (−1)-curve E ⊂ X 0 . Then, there is only one nodal point of X 0 that is contained in E. Let P denote this nodal point of X 0 contained in E.
Since X 0 is a principal divisor, we have that ω X 0 ω X 0 (X 0 ). By the adjunction formula, we also have that have that
ω X 0 | E ω X 0 (X 0 )| E ω X0 (E)| E ⊗O E ((X 0 \ E) ∩ E) ω E ⊗ O E (P ) Therefore, ω ⊗km X 0 (kB)| E ω ⊗km E (kmP + kB| E ) O P 1 (−km + k deg(B| E )).
Since deg(B| E ) < m, O P 1 (−km + k deg(B| E )) has no global sections. Thus, θ| E = 0 and θ| X 0 does not have a pole of order m at P for all θ ∈ S. Thus, e P and v E do not lie in the essential skeleton.
More generally, given an edge e P not lying the essential skeleton, we can factor X q − → X q −→ X such that p = q • q and q , q are a series of blow downs such that
P = E ∩ E 1 in X , where E is a (−1)-curve in X with deg(B| E ) < m in X .
Repeating the previous argument, we get that (q ) * θ, and thus p * θ, vanish on E and do not have a pole of order m along P . Thus, e P and v E do not lie in the essential skeleton.
Convergence of the Narasimhan Simha measure
We study the convergence of the Narasimhan-Simha measure in this section. 5.1. Setup and notation. Let X → D * be a holomorphic family of genus g curves. Let B = b 1 B 1 + · · · + b r B r be a horizontal divisor in X. Let m be an integer such that b i < m for all i = 1, . . . , r.
Let X be the minimal snc model of (X, 1 m B) and let m ≥ 2 be a fixed integer. Let M = (2m − 1)(g − 1) + deg(B| Xt ) and let s = #E( Γ). Let τ t denote the Narasimhan-Simha volume form on X t with respect to the line bundle Ω ⊗m Xt (B| Xt ) and let µ t denote the pluri-Bergman measure on X t with respect to the line bundle Ω ⊗m Xt (B| Xt ).
Enumerate the edges of E( Γ) as e 1 , . . . , e s . Since all edges of Γ X have length 1, if e i ∈ E( Γ) corresponds to the maximal inessential chain v 0 , v 1 , . . . , v N in Γ X , then l ei = N . (See Section 2.2 for details.)
Using Lemma 4.3.2, we can pick a basis ψ 1 , . . . , ψ M of H 0 (X 0 , ω ⊗m X0 (B| X0 )) such that ψ 1 , . . . , ψ s map to the standard basis of C E( Γ) = C s and ψ s+1 , . . . , ψ M give rise to an orthonormal basis of
m i=1 H 0 (E i , mK Ei +(m−1)P (i) 1 +· · ·+(m−1)P (i) ri +B| Ei )
with respect to the Hermitian pairing (2.1) on each summand. In particular, for 1 ≤ i ≤ s, ψ i has residues of ±1 at those nodal points of X 0 that lie on the maximal inessential chain associated to e i and has zero residues at all other points. For s + 1 ≤ i ≤ M , ψ i has zero residues at all the nodal points of X 0 .
We say that E 1 is a Type I component if h 0 (E i , mK Ei + (m − 1)P (i) 1 + · · · + (m − 1)P (i) ri + B| Ei ) > 0, otherwise it is called a Type II component. It is easy to check that there are only the following possible choices for a Type II component E. The Type II components will precisely be the curves in ∆ CC (X ) on which the limiting measure τ 0 and µ 0 place no mass.
Let τ 0 denote the Narasimhan-Simha volume form on X 0 with respect to
m i=1 H 0 (E i , mK Ei + (m − 1)P (i) 1 + · · · + (m − 1)P (i) ri + B| Ei ) i.e.
if E i is a Type I component, then τ 0 | Ei is the Narasimhan-Simha volume form on E i with respect to mK Ei + (m − 1)P
(i) 1 + · · · + (m − 1)P (i) ri + B| Ei . If E i is a Type II component, then h 0 (E i , mK Ei + (m − 1)P (i) 1 + · · · + (m − 1)P (i)
ri + B| Ei ) = 0 and we just set
τ 0 | Ei = 0.
Note that if E is of Type II, then ψ s+1 | E , . . . , ψ M | E = 0 as ψ s+1 , . . . , ψ M form a basis of m i=1 H 0 (E i , mK Ei +(m−1)P (i) 1 +· · ·+(m−1)P (i) ri +B| Ei ) and h 0 (E, mK E + (m − 1)P 1 + · · · + (m − 1)P ri + B| E ) = 0. Similarly, if E is of Type I, ψ s+1 , . . . , ψ M do not simultaneously vanish at any point of E.
The following Theorem is a more precise version of Theorem A for the minimal snc model, X , of (X, 1 m B). Theorem 5.1.1. Let τ 0 denote the measure on ∆ CC (X ) given by τ 0 on curves and by taking the Lebesgue measure on an edge e ∈ E(Γ X ) of length 1 le , where l e is the length of the maximal inessential chain in Γ X containing the edge e.
The measures τ t converge to the measure τ 0 when viewed as measures on X hyb CC . Proof. Theorem 5.1.1 follows directly from Lemma 3.3.1 and Corollaries 5.3.3 -5.3.5.
Using Lemma 4.2.4, we pick θ 1 , . . . , θ M ∈ H 0 (U, ω ⊗m X (B)) for a neighborhood U ⊂ X of X 0 such that θ 1 | Xt , . . . , θ M | Xt form a basis of H 0 (X t , ω ⊗m Xt (B| Xt )) and θ 1 | X0 , . . . , θ M | X0 form a basis of H 0 (X 0 , ω ⊗m X0 (B| X0 )). After applying a C-linear transformation to θ 1 , . . . , θ M , we may assume that
θ j | X0 = ψ i for 1 ≤ i ≤ M . Denote θ j,t = θ j | Xt .
Let (U 1 , t, w) be a coordinate chart in X adapted to an irreducible component E ⊂ X 0 . Recall that the coordinates in U 1 are t, w with |t|, |w|< 1 and the projection U 1 → D is given by (t, w) → t. We may shrink U 1 to suppose that either U 1 ∩E ∩B = ∅ or U 1 ∩E ∩B = {(0, 0)} and that θ j 's admit a power series expansion:
θ j (t, w) = α,β≥0 c (j) α,β t α w β φ j (t, w) (dw ∧ dt) ⊗m ,
where φ j (t, w) = 0 is a local equation of B in U . Then,
θ j,t (w) = α,β≥0 c (j) α,β t α w β φ j (t, w) dw ⊗m and ψ j (w) = β≥0 c (j) 0,β w β φ j (0, w) dw ⊗m . Note that since U ∩ E ∩ B = ∅ or U ∩ E ∩ B = {(0, 0)},
we may pick φ j so that φ j (0, w) = w k for some integer 0 ≤ k < m. Thus, we get that
(5.1) θ j,t (w) = ψ j (w) + O(|w| 1−m |t|)
as t → 0 for fixed w ∈ D and for all 1 ≤ j ≤ M . Moreover, |θ j,t (w)| 2/m are bounded by an integrable function (for example, C |w| 2k/m ) for t small enough and we are in the setting to apply the dominated convergence theorem. Throughout this paper, most pointwise convergences that show up will be in the setting to apply the dominated convergence theorem and in most cases, we do not mention explicitly mention a dominating integrable function as it would be easy to find one. Now let (U 2 , z, w) be a coordinate chart in X adapted to a node P = E 1 ∩ E 2 ∈ X 0 such that U 2 ∩ B = ∅. Recall that the coordinates in U 2 are z, w such that |z|, |w|< 1, E 1 = {z = 0}, E 2 = {w = 0}, and the projection U 2 → D is given by (z, w) → zw. We may shrink U 2 to suppose that θ j 's admit a power series expansion.
θ j (t, w) = α,β≥0 c (j) α,β z α w β (dw ∧ dz) ⊗m .
Then, for |t|< |w|< 1,
(5.2) θ j,t (w) = α,β≥0 c (j) α,β t α w β−α−m dw ⊗m .
We also have that
ψ j (w) = α≥0 c (i) 0,β w β−m dw ⊗m .
on U 2 ∩ E 1 , where we think of w as a coordinate on E 1 ∩ U 2 . Thus, we see that c (j) 0,0 is the residue of ψ j at P . Thus, c
0,0 is ±1 if ψ j has a pole of order m at P , otherwise it is 0.
Consider the regions
R 1 = (z, w) ∈ U 2 |t| 1/2 < |w|< 1 (log|t| −1 ) m R 2 = (z, w) ∈ U 2 1 (log|t| −1 ) m < |w|< 1 .
Let us figure out the dominating terms of θ j,t in each of these regions. Without loss of generality, suppose that ψ 1 develops a pole of order m at P with residue 1. Then, the ψ 2 , . . . , ψ M can have poles of order at worst m − 1 at P . From equation, (5.2), we get that
θ 1,t (w) = w −m 1 + (α,β) =(0,0) c (j) α,β w β−α t α dw ⊗m
After shrinking U 2 and rescaling z, w, t, we may assume that α,β |c α,β |< ∞. In the region R 1 , we have that |t| 1/2 < |w|< 1 (log|t| −1 ) m . Thus,
|w β−α t α | ≤ |t| β−α 2 |t| α = |t| β+α 2 if α ≥ β |w β−α t α | ≤ 1 log|t| −1 m(β−α) |t| α if β ≥ α,
and we get that
α,β =(0,0) c α,β w β−α t α = O 1 (log|t| −1 ) m and (5.3) θ 1,t ≈ w −m dw ⊗m in R 1 .
Similarly, for 2 ≤ j ≤ M , we get that
(5.4) θ j,t ≈ c (j)
0,m−mj w −mj dw ⊗m in R 1 , where m j < m is order of the pole of ψ j at P .
In the region R 2 , we can write θ j,t = ψ j + α≥1,β c (j) α,β w β−α−m t α and we see that
(5.5) θ j,t (w) → ψ j (w)
as t → 0 for a fixed w ∈ D * for all 1 ≤ j ≤ M .
5.2.
Asymptotics of θ j,t Xt . Recall from Section 2.3 that for a Riemann surface Y and a meromorphic m-canonical form ϑ on Y , we denote
ϑ Y := Y |ϑ| 2/m m/2 .
Note that the above also makes sense if Y is a disconnected Riemann surface. One of the key things in the definition of τ t is the condition that · Xt = 1. Therefore to understand the asymptotics of τ t , we first need to understand · Xt . We begin by looking at θ j,t Xt .
Lemma 5.2.1. For 1 ≤ j ≤ s, θ j,t Xt ≈ (2πl ej log|t| −1 ) m/2 and for s + 1 ≤ j ≤ M , θ j,t Xt → ψ j X0
as t → 0.
Proof. By using a partition of unity argument, we may reduce the problem to finding the asymptotics on adapted coordinate chats. If (U 1 , t, w) is a coordinate chart adapted to an irreducible component E ⊂ X 0 , then using Equation (5.5), we get that θ j,t → ψ j as t → 0. Using the dominated convergence theorem, we get that
Xt∩U1 |θ j,t | 2/m → E∩U1 |ψ j | 2/m for all 1 ≤ j ≤ M .
If U 2 is a coordinate chart adapted to a node P = E 1 ∩ E 2 . First consider the case when 1 ≤ j ≤ s, then it follows from Equation (5.2) that on the set {|t| 1/2 < |w|< 1},
θ j,t = C j w m + O(|w| 1−m ) dw ⊗m ,
where the O(|w| 1−m ) is with respect to w as w → 0 and is uniform in t, and C j = ±1 if ψ j has a pole of order m at P , otherwise C j = 0. Thus in the region {|t| 1/2 < |w|< 1},
|θ j,t | 2/m = |C j | |w| 2 + O 1 |w| 2(m−1) m |dw ∧ dw|.
Thus, we get that |t| 1/2 <|w|<1 |θ j,t | 2/m = |C j |π log |t| −1 +O(1) as t → 0.
Similarly, we get that |t| 1/2 <|z|<1 |θ j,t | 2/m = |C j |π log |t| −1 +O(1) and thus U2∩Xt |θ j,t | 2/m = 2|C j |π log|t| −1 +O(1).
Since, ψ j has a pole of order m at l ej many points, we get that Xt |θ j,t | 2/m = 2πl ej log|t| −1 +O(1), and thus we get the required estimate for 1 ≤ j ≤ s.
In the case when s + 1 ≤ j ≤ M , then on the set {|t| 1/2 < |w|< 1}, we have that |θ j,t | 2/m → |ψ j | 2/m as t → 0. Furthermore, we have using Equation (5.2) that
θ j,t (w) − ψ j (w) = α≥1,β≥0 c (j) α,β t α w β−α−m dw ⊗m .
On the region {|t| 1/2 < |w|< 1}, we get |t α w β−α−m |< |w| β+α−m and we get that the right hand side in above equation is uniformly bounded by C|w| 1−m dw ⊗m . Since |ψ j (w)| 2/m and |w| 2(1−m)/m are integrable on D for s + 1 ≤ j ≤ M , by the dominated convergence theorem, we get that
|t| 1/2 <|w|<1 |θ j,t | 2/m → U2∩E1 |ψ j | 2/m , and we get Xt |θ j,t | 2/m → X0 |ψ j | 2/m .
To understand the asymptotics of · Xt , let us denote
θ i,t := θ i,t (2πl ei log|t| −1 ) m/2 for 1 ≤ i ≤ s, θ i,t := θ i,t ψ i X0 and ψ i := ψ i ψ i X0 for s + 1 ≤ i ≤ M.
The previous lemma tells us that θ i,t Xt → 1 as t → 0. Moreover, the following lemma tells us that θ i,t are a 'nice' basis of C M with respect to · Xt as t → 0.
Lemma 5.2.2. Let c 1 , . . . , c M ∈ C. Then,
c 1 θ 1,t + . . . c M θ M,t Xt ≈ s i=1 |c i | 2/m +( c s+1 ψ s+1 + . . . c M ψ M X0 ) 2/m m/2 as t → 0.
Proof. We use a partition of unity argument to reduce the problem to adapted coordinate charts. If (U 1 , w) is an coordinate chart adapted to an irreducible component of X 0 , then for 1 ≤ j ≤ s,
(5.6) |c 1 θ 1,t + · · · + c M θ M,t | 2/m (w) → |c s+1 ψ s+1 + · · · + c M ψ M | 2/m (w)
pointwise for a fixed w. Moreover, there exists an integrable function on D which dominates |c 1 θ 1,t + · · · + c M θ M,t | 2/m for all t small enough. To see this, just note that there exists a constant C such that |c 1 θ 1,t + · · · + c M θ M,t | 2/m ≤ C max j,k | θ j,t ∧ θ k,t | 1/m and each of the | θ j,t ∧ θ k,t | 1/m are themselves bounded by an integrable function on D. Thus, by the dominated convergence theorem, we get that
(5.7) U1∩Xt |c 1 θ 1,t +· · ·+c M θ M,t | 2/m (w) → U1∩E |c s+1 ψ s+1 +· · ·+c M ψ M | 2/m (w)
Now consider a coordinate chart (U 2 , z, w) adapted to a node P = E 1 ∩ E 2 . Without loss of generality, suppose that ψ 1 develops a pole of order m at P . To analyze the integral |t| 1/2 <|w|<1 |c 1 θ 1,t + · · · + c M θ M,t | 2/m , we break up {|t| 1/2 < |w|< 1} into two regions:
R 1 = |t| 1/2 < |w|< 1 (log|t| −1 ) m and R 2 = 1 (log|t| −1 ) m < |w|< 1 .
On the region R 1 , using Equations (5.3) and (5.4),
θ j,t θ 1,t ≤ C(log|t| −1 ) m/2 |w|= O 1 (log|t| −1 ) m/2 ,
for 2 ≤ j ≤ M , where the second equality follows from the fact that |w|< 1 log(|t| −1 ) m in this region.
Thus, in this region, |c 1 θ 1,t + · · · + c M θ M,t | 2/m ≈ |c 1 | 2/m | θ 1,t | 2/m ≈ |c 1 | 2/m dw ∧ dw 2πl e1 |w| 2 log|t| −1 , (5.8)
where the second equality follows from Equation (5.3).
It is easy to verify that
|t| 1/2 <|w|<1 dw ∧ dw 2πl e1 |w| 2 log|t| −1 → 1 2l e1 .
Thus, we get that (5.9) R1∩Xt |c 1 θ 1,t + · · · + c M θ M,t | 2/m → |c 1 | 2/m 2l e1
as t → 0.
To analyze R2∩Xt |c 1 θ 1,t +· · ·+c M θ M,t | 2/m , note that |c 1 θ 1,t +· · ·+c M θ M,t | 2/m (w) → |c s+1 ψ s+1 + · · · + c M ψ M | 2/m (w) as t → 0 for a fixed w ∈ D * and thus,
(5.10) R2∩U |c 1 θ 1,t + · · · + c M θ M,t | 2/m → E1∩U |c s+1 ψ s+1 + · · · + c M ψ M | 2/m as t → 0.
Combining Equations (5.7), (5.9), and (5.10), we get that
Xt |c 1 θ 1,t + · · · + c M θ M,t | 2/m → s i=1 |c i | 2/m + X0 |c s+1 ψ s+1 + · · · + c M ψ M | 2/m
as t → 0 and the result follows.
5.3. Asymptotics of τ t .
Corollary 5.3.1. Let (U 1 , t, w) be a coordinate chart adapted to a an irreducible component E ⊂ X 0 . Then τ t → τ 0 as t → 0. Let (U 2 , z, w) be a coordinate chart adapted to a node P = E 1 ∩ E 2 . Then, in the region {|t| 1/2 < |w|<
1 (log|t| −1 ) m }, τ t ≈ |dw ∧ dw| 2πl e P |w| 2 log|t| −1 ,
where l is the length of the edge in Γ containing e P and in the region
{ 1 (log|t| −1 ) m < |w|< 1}, τ t → τ 0 .
Proof. It follows from Lemma 5.2.2 that
(5.11) τ t ≈ max |c1| 2/m +···+|cs| 2/m + cs+1 ψs+1+...c M ψ M 2/m X 0 =1 |c 1 θ 1,t + · · · + c M θ M,t | 2/m .
Consider the coordinate chart (U 1 , t, w) adapted to E ⊂ X 0 . It follows from Equation (5.6) that for a fixed w ∈ D * ,
|c 1 θ 1,t + · · · + c M θ M,t | 2/m (w) → |c s+1 ψ s+1 + · · · + c M ψ M | 2/m (w).
Therefore to maximize, |c 1 θ 1,t + · · · + c M θ M,t | 2/m (w), we need to pick c 1 = · · · = c s = 0 and we get
τ t (w) ≈ max cs+1 ψs+1+...c M ψ M 2/m X 0 =1 |c s+1 ψ s+1 + · · · + c M ψ M | 2/m (w) = τ 0 (w).
Thus, τ t (w) → τ 0 (w) pointwise for w ∈ D * . Similarly, the other assertion follows from Equations (5.8).
Corollary 5.3.2. There exists a constant C such that for |t| small enough such that
C −1 max j,k |θ j,t ∧ θ k,t | 1/m (log|t| −1 ) ηj +η k ≤ τ t ≤ C max j,k |θ j,t ∧ θ k,t | 1/m (log|t| −1 ) ηj +η k where η j = 1 2 if 1 ≤ j ≤ s and η j = 0 if s + 1 ≤ j ≤ M . Proof.
It is enough to show that there a constant C such that for |t| small enough
C −1 max j,k | θ j,t ∧ θ k,t | 1/m ≤ τ t ≤ C max j,k | θ j,t ∧ θ k,t | 1/m
It follows from Lemma 5.2.2 that
(5.12) τ t ≈ max |c1| 2/m +···+|cs| 2/m + cs+1 ψs+1+...c M ψ M 2/m X 0 =1 |c 1 θ 1,t + · · · + c M θ M,t | 2/m .
Thus,
τ t < 2 max |c1| 2/m +···+|cs| 2/m + cs+1 ψs+1+...c M ψ M 2/m X 0 =1 |c 1 θ 1,t + · · · + c M θ M,t | 2/m
for t small enough. The constraint
|c 1 | 2/m + · · · + |c s | 2/m + c s+1 ψ s+1 + . . . c M ψ M 2/m X0 = 1
ensures that there exist a constant C 1 such that |c 1 |, . . . , |c M |≤ C 1 . Thus, we get a constant C 2 such that
max |c1| 2/m +···+|cs| 2/m + cs+1 ψs+1+...c M ψ M 2/m X 0 =1 |c 1 θ 1,t +· · ·+c M θ M,t | 2/m ≤ C 2 max j,k | θ j,t ∧ θ k,t | 1/m ,
which gives us one of the inequalities. For the other inequality, note that we also have
τ t > 1 2 max |c1| 2/m +···+|cs| 2/m + cs+1 ψs+1+...c M ψ M 2/m X 0 =1 |c 1 θ 1,t + · · · + c M θ M,t | 2/m
for t small enough. Setting c j = 1 and the rest 0, we get that τ t ≥ 1 2 | θ j,t | 2/m for all 1 ≤ j, k ≤ M . Since | θ j,t ∧ θ k,t | 1/m is the geometric mean of | θ j,t | 2/m and | θ k,t | 2/m , we also get that
τ t ≥ 1 2 | θ j,t ∧ θ k,t | 1/m
for all 1 ≤ j, k ≤ M which gives us the second inequality.
The following three corollaries along with Lemma 3.3.1 prove Theorem 5.1.1.
Corollary 5.3.3. Let (U 1 , t, w) be a coordinate chart adapted to an irreducible component E of X 0 . Let f be a compactly supported continuous function on U 1 . Then, Xt∩U1 χτ t → E∩U1 χ τ 0 .
Proof. Using Corollary 5.3.1, we have that τ t → τ 0 in U 1 . Thus, we get that
Xt∩U1 f τ t → E∩U1 f τ 0 .
Corollary 5.3.4. Let (U 2 , z, w) be a coordinate chart adapted to a node P = E 1 ∩E 2 of X 0 . Suppose that e 1 is the edge in Γ that contains e P . Let f be a continuous function on [0, 1] and let 0 < α < β < 1 2 . Then,
|t| β <|w|<|t| α f log|w| log|t| τ t → 1 l e1 β α f (u)du as t → 0.
Proof. Since e 1 is the edge in Γ containing e P , ψ 1 develops a pole of order m along P . Then, using Corollary 5.3.1, we have that τ t ≈ |dw∧dw| 2π|w| 2 le 1 log|t| −1 . Thus, we are interested in computing the limit
|t| β <|w|<|t| α f log|w| log|t| |dw ∧ dw| 2π|w| 2 l e1 log|t| −1 .
Using a change of variables u = log|w| log|t| and ϑ = arg(w), we get that the above limit of the above expression as t → 0 is 1
le 1 β α f (u)du.
Corollary 5.3.5. Let (U 2 , z, w) be a coordinate chart adapted to a node P = E 1 ∩E 2 of X 0 . Suppose that e 1 is the edge in E( Γ) that contains e P . Let f be a continuous
f r w, log|w| log|t| τ t → 1 l e1 0 f (u)du + E1∩U f (w) τ 0 as t → 0.
Proof. To analyze the limit of the integral, we break up the region {|t| < |w|< 1} into two parts:
{|t| < |w|< 1 (log|t| −1 ) m }, and { 1 (log|t| −1 ) m < |w|< 1}. In the region, {|t| < |w|< 1 (log|t| −1 ) m }, note that τ t ≈ |dw∧dw|
2π|w| 2 log|t| −1 and thus the contribution of the region {|t| < |w|<
1 (log|t| −1 ) m } to the integral is |t| <|w|< 1 (log|t| −1 ) m f r w, log|w| log|t| |dw ∧ dw| 2πl e1 |w| 2 log|t| −1 .
Using the change of variables u = log|w| log|t| −1 and ϑ = arg(w), we get 1 2l e1 π m log(log|t| −1 ) log|t| −1 2π 0 f (r(|t| u e iϑ , u))dϑdu.
As t → 0, f (r(|t| u e iϑ , u)) → f (r(0, u)) = f (u) pointwise almost everywhere and thus we get |t| <|w|<
1 (log|t| −1 ) m f r w, log|w| log|t| τ t → 1 l e1 0 f (u)du.
In the region { 1 (log|t| −1 ) m < |w|< 1}, note that τ t → τ 0 . Thus, it is enough to evaluate the limit 1 (log|t| −1 ) m <|w|<1 f r w, log|w| log|t| τ 0 .
As t → 0, f r w, log|w| log|t| −1 → f (r(w, 0)) = f (w). Thus, we get
1 (log|t| −1 ) m <|w|<1 f r w, log|w| log|t| τ t → E1∩U f τ 0 as t → 0.
6. Convergence of µ t Let the notation be as in the previous section. We prove the following precise version of Theorem D.
Theorem 6.0.1. Let X be the minimal snc model of (X, 1 m B). Let µ 0 denote the measure on ∆ CC (X ) which is given by
• On a Type I component E i , the pluri-Bergman measure on E i with respect to mK Ei + (m − 1)P
(i) 1 + · · · + (m − 1)P (i) ri + B| Ei .
• If E i is a Type II component, we pick the zero measure • On each edge e ∈ E(Γ X ), we pick the Lebesgue measure of length 1 le , where l e is the length of the edge in Γ containing e.
On X hyb CC , the measures µ t → µ 0 weakly. Proof. Theorem 6.0.1 follows from Lemma 3.3.1 and Corollaries 6.1.4 -6.1.6. 6.1. Asymptotics of θ j,t , θ k,t . Recall from Section 2.1 the Hermitian pairing used to define the pluri-Bergman measure.
θ, ϑ = i 2 m θ ∧ ϑ τ m−1 .
We first understand the asymptotics of this pairing. Let A(t) denote the M × M matrix with the (j, k)-th coefficient
(A(t)) j,k = θ j,t , θ k,t = i 2 m Xt θ j,t ∧ θ k,t τ m−1 t ,
Then using elementary linear algebra, we see that
µ t = i 2 m M j,k=1 (A(t)) −1 j,k θ j,t ∧ θ k,t τ m−1 t .
We first state a lemma that we will use to estimate the integral Xt θj,t∧θ k,t τ m−1 t . Lemma 6.1.1. There exists a constant C such that
θ j,t ∧ θ k,t τ m−1 t ≤ C(log|t| −1 ) (ηj +η k )(m−1) |θ j,t ∧ θ k,t | 1/m for all 1 ≤ j, k ≤ M , where η j = 0 if 1 ≤ j, k ≤ s and η j,k = 1 if s + 1 ≤ j, k ≤ M
and η j,k = 1 2 in all other cases. Proof. From Corollary 5.3.2, we have that there exists a constant C 1 > 0 such that τ t ≥ C 1 |θj,t∧θ k,t | 1/m log|t| η j +η k . The result now follows immediately. Lemma 6.1.2. The matrix A(t) has the following form.
A = B D * D F
where B is an s × s matrix with entries B j,j ≈ (2πl ei log|t| −1 ) m and B j,
k = O((log|t| −1 ) m−1 ) for j = k, D = O((log|t| −1 ) m−1 2 ) and F is a (M − s) × (M − s) matrix such that F → I M −s as t → 0.
Proof. We use a partition of unity argument to reduce the problem of computing the integral Xt θj,t∧θ k,t τ m−1 t to computing it on adapted coordinate charts. The result follows by computing the integrals using Equations (5.3)-(5.5), Corollary 5.3.1 and Lemma 6.1.1. Since the leading term of τ is differs in different region of coordinate charts adapted to a node, we will have break up such a chart into different regions while analyzing the integral. We only show how to get the entries of B and F . The estimates for entries of C are obtained using similar techniques.
To get the asymptotics for F , note that on a coordinate chart (U 1 , t, w) adapted to a Type I irreducible component E, we have that τ t → τ 0 and that τ 0 | E is nowhere vanishing.
Furthermore, there exists an integrable function on D that dominates θj,t∧θ k,t τ m−1 t for all t small enough. One way to see this is by using Lemma 6.1.1; we get
θj,t(w)∧θ k,t (w) τ m−1 t (w)
≤ C|θ j,t (w) ∧ θ k,t (w)| 1/m ≤ C |w| −2+1/m . Thus by the dominated convergence theorem, we have that
U1∩Xt θ j,t ∧ θ k,t τ m−1 t → U1∩E ψ j ∧ ψ k τ m−1 0 as t → 0 for all s + 1 ≤ j, k ≤ M .
If E is of Type II, then using Lemma 6.1.1, we have that
θj,t∧θ k,t τ m−1 t ≤ C|θ j,t ∧ θ k,t | 1/m . But since θ j,t → ψ j = 0 on U 1 for all s + 1 ≤ j ≤ M , we get that θ j,t ∧ θ k,t τ m−1 t → 0 and U1∩Xt θ j,t ∧ θ k,t τ m−1 t → 0 for all s + 1 ≤ j ≤ M if E is of Type II.
For a coordinate chart (U 2 , z, w) adapted to a node P = E 1 ∩ E 2 , to estimate the integral |t| 1/2 <|w|<1 θj,t∧θ k,t τ n−1 t , observe that its pointwise limit is
ψj ∧ψ k τ m−1 0
if E 1 is of Type I and is 0 if E 1 is of Type II. Since ψ s+1 , . . . , ψ M was chosen to be an orthonormal basis for the Hermitian pairing (2.1), we get the asymptotics for F .
To analyze the diagonal entries of B, without loss of generality consider B 11 . The estimate using Lemma 6.1.1 and Equations (5.4) and (5.5) shows that U ∩Xt
θ1,t∧θ1,t τ m−1 t = O((log|t| −1 ) m−1 )
if U is either a coordinate chart adapted to an irreducible component, or a coordinate chart adapted to a node at which ψ does not develop a pole of order m.
To get the leading term for B 11 , consider a coordinate chart (U 2 , z, w) adapted to a node P such that ψ 1 has a pole of order m at P . Consider the region {|t| 1/2 < |w|< 1 (log|t| −1 ) m }. Using Equation (5.3) and Corollary 5.3.1, we get that
|θ 1,t ∧ θ 1,t | τ m−1 t ≈ (2πl e1 log|t| −1 ) m−1 |dw ∧ dw| |w| 2
in this region. Then,
|t| 1/2 <|w|< 1 (log|t| −1 ) m |θ 1,t ∧ θ 1,t | τ m−1 t ≈ |t| 1/2 <|w|< 1 (log|t| −1 ) m (2πl e1 log|t| −1 ) m−1 |dw ∧ dw| |w| 2 = 2π (2πl e1 log|t| −1 ) (m−1) (log|t| −1 ) −m |t| 1/2 dr r ≈ π(2πl e1 ) m−1 (log|t| −1 ) m .
Using Lemma 6.1.1, we get that
1 (log|t| −1 ) m <|w|<1 |θ 1,t ∧ θ 1,t | τ m−1 t ≤ C(log|t| −1 ) m−1 1 (log|t| −1 ) m <|w|<1 |θ 1,t | 2/m .
An easy computation shows that the integral on the right-hand side is of the order of (log|t| −1 ) m−1 log(log|t| −1 ), which is a subdominant term. Thus, we see that
|t| 1/2 <|w|<1 |θ 1,t ∧ θ 1,t | τ m−1 t ≈ π(2πl e1 ) m−1 (log|t| −1 ) m and U2∩Xt |θ 1,t ∧ θ 1,t | τ m−1 t ≈ 2π(2πl e1 ) m−1 (log|t| −1 ) m .
Since ψ 1 has a pole of order m along l e1 many nodes, we get that
B 11 = Xt |θ 1,t ∧ θ 1,t | τ m−1 t ≈ (2πl e1 log|t| −1 ) m .
To estimate B j,k for j = k, using Lemma 6.1.1, we only need to estimate (log|t| −1 ) m−1 |θ j,t ∧ θ k,t | 1/m . It is easy to verify using Equations (5.3)-(5.5) that |θ j,t ∧ θ k,t | 1/m = O(1) if 1 ≤ j, k ≤ s and j = k, which gives us the estimate for B.
The asymptotics of A(t) −1 now follows by using Gauss-Jordan elimination.
Corollary 6.1.3. The matrix A(t) −1 has the following form.
A −1 = B (D ) * D F
where B is an s × s diagonal matrix with diagonal entries
B i,i ≈ 1 (2πl ei log|t| −1 ) m , B j,k = O 1 (log|t| −1 ) m+1 for i = j, D = O 1 (log|t| −1 ) m+1 2 and F is a (M − s) × (M − s) matrix such that F → I M −s as t → 0.
The following corollaries prove Theorem 6.0.1. These are an easy consequence of Equations (5.3)-(5.5) and Corollaries 5.3.1 and 6.1.3.
Corollary 6.1.4. Let (U 1 , t, w) be a coordinated chart adapted to an irreducible component E ⊂ X 0 . Let f be a compactly supported function on U 1 . If E is of Type I, then
Let v be a vertex in Γ X and let E ⊂ X 0 be the associated irreducible component. It follows from Theorem 6.0.1 that the mass of µ
(m) 0 on v E is µ (m,B) 0 ({v E }) = h 0 (mK E + (m − 1) E =E (E ∩ E ) + B| E )
if E is a Type I component, otherwise it is 0. Note that for a fixed B and m large enough, being Type I is equivalent to being essential.
Thus,
µ (m,B) 0 ({v E }) = (2g(E) − 2)m + (m − 1)val(E) + deg(B| E ) if E is essential, otherwise µ (m,B) 0 (v) = 0. We get that µ (∞) 0 ({v E }) = lim m→∞ (2g − 2)((2g(E) − 2)m + (m − 1)val(E) + deg(B| E )) (2m − 1)(g − 1) + deg(B| Xt ) = 2g(E) − 2 + val(E) if E is essential, otherwise µ ∞ 0 (v) = 0. Thus, we get that µ (∞) 0 = v∈V (Γ X ) (2g(v) − 2 + val(v))δ v ,
which is also the limit of the hyperbolic measures on X t [PS19, Theorem A].
6.3. Limit of µ 0 as m → ∞ for fixed 1 m B. Another way to think of the limit of µ 0 is to vary B, while fixing the Q-divisor 1 m B. Let µ (km,kB) t denote the pluri-Bergman measure on X t associated to Ω ⊗km Xt (kB| Xt ). Let X denote the minimal snc model of (X, 1 m B). Note that X is also the minimal snc model of (X, 1 km kB) for all k ≥ 1. Let µ (km,kB) 0 be the weak limit µ
µ [∞] 0 ({v E }) = lim k→∞ (2g − 2 + deg(B| X t ) m
)((2g(E) − 2)km + (km − 1)val(E) + k deg(B| E )) (2km − 1)(g − 1) + k deg(B| Xt ) = 2g(E) − 2 + val(E) + deg(B| E ) m if E is a component of Type I, otherwise the limit is 0. Note that for k 0, the only Types I components that can show up are the inessential components and those components E for which g(E) = 0, val(E) = 1 and deg(B| E ) = m. Since 2g(E) − 2 + val(E) + deg(B| E ) m is 0 in both these cases, we get that
µ [∞] 0 = v E ∈V (Γ X ) 2g(v E ) − 2 + val(v E ) + deg(B| E ) m δ v E .
Convergence on M g
In this section, we show that in the case of g ≥ 2 and B = 0, τ t extends to a continuous family of measures on the Deligne-Mumford compactification, M g , of the moduli space of genus g curves. Let g ≥ 2 and m ≥ 2 be fixed integers.
Let C g denote the universal curve over M g . Let S 0 be a stable curve of genus g. Recall that this means that S 0 is a reduced curve of arithmetic genus g with only nodal singularities and every rational irreducible component of S 0 intersects the rest of the curve in at least three points. We define the Narasimhan-Simha measure τ on S 0 associated to ω ⊗m S0 to be a Radon measure on S 0 which will be a sum of Narasimhan-Simha measures on the irreducible components and Dirac masses at the nodal points. More precisely,
• Let E i ⊂ S 0 is an irreducible component and let P • If P is a nodal point in S 0 , then τ has a unit Dirac mass at P i.e. τ ({P }) = 1.
7.1. The local picture. The first step in proving Theorem C is to reduce it to a local computation. To do this, we need to understand the local charts on M g and C g . The key tool used here is a Kuranishi family [ACG11, Chapter XI].
Let S 0 be a stable curve. Roughly speaking, a Kuranishi family parametrizes the local deformations of S 0 . More precisely, a family of stable curves π : S → D is said to be a Kuranishi family for π −1 (0) = S 0 if for any other family of stable curves π : S → D with φ 0 : (π ) −1 (x 0 ) S 0 , there exists a neighborhood of U ⊂ D of x 0 and unique maps φ, ψ which make the diagram commute. From this universal property, it follows that, up to shrinking the base, a Kuranishi family is unique up to an isomorphism. A Kuranishi family always exists for a stable curve. The total space S is regular and the base D is smooth over C and has dimension 3g − 3. We can choose coordinates t = (t 1 , . . . , t 3g−3 ) on D so that D D 3g−3 and the point 0 can be identified with the origin. Moreover, the coordinates t can be chosen in such a way a that a fiber over a point t has a node corresponding to i iff t i = 0, where i = 1, . . . , s is an enumeration of nodes in S 0 . See [ACG11, Chapter XI] for details.
Thus, we can think of the coordinates t i as a coordinate that smoothens out the node i for i = 1, . . . , s and the coordinates t i as varying the complex structure on the irreducible components of S 0 for i = s + 1, . . . , 3g − 3.
For t ∈ D, let S t denote the fiber over the point t. By the adjunction formula, for a point t 0 ∈ D, we have
ω S 3g−3 i=1 div(t i − t 0,i ) St 0 ω St 0 .
Since div(t i − t 0,i ) are principal divisors, we get that ω S | St ω St for all t ∈ D, where the isomorphism is given by 'unwedging' dt 1 ∧ . . . ∧ dt 3g−3 .
We also get that ω ⊗m S | St ω ⊗m St for all t ∈ D. Since h 0 (S t , ω St ) = (2m−1)(g −1) is independent of t, using Grauert's lemma [Har77, Corollary III.12.9], we get that π * (ω ⊗m S ) is a locally free sheaf on B of rank M = (2m − 1)(g − 1). Thus, we get an analog of Lemma 4.2.4 i.e. possibly after shrinking D, there exists θ 1 , . . . , θ M ∈ H 0 (S, ω S ) such that θ i,t = θ i | St for i = 1, . . . , M form a basis of H 0 (S t , ω St ) for all t ∈ D. Recall that by θ i | St , we mean that we 'unwedge' (dt 1 ∧ . . . ∧ dt 3g−3 ) ⊗m from θ i and then restrict it to S t .
After performing a change of basis, we can assume that θ i | S0 only has a pole of order m with residue 1 along the i-th node in S 0 for all i = 1, . . . , s and θ i does not have a pole of order m for all i = s + 1, . . . , M . Now all we need to do is to repeat the analysis in Section 5.
The i-th nodal point P i ∈ S 0 has a neighborhood U in S with coordinates (t 1 , . . . , t i−1 , z i , w i , t i+1 , . . . , t 3g−3 )
such that the projection to D is given by t i = z i w i . Similar to the computation in 5, we get that where t i = (t 1 , . . . , t i , . . . , t 3g−3 ) and |t i | 1/2 < |w i |< 1. Here, the residue of θ j,0 along P i up to a sign is c (j) 0,0,0 . Thus, |c (j) 0,0,0 |= δ ij for j = 1, . . . , M . Similarly, we can also get such an expression in terms of the z i coordinate and we can also find an expression of θ j in a neighborhood of smooth points of S 0 .
We get an analog of Lemma 5.2.1 by following the same techniques. as t → 0.
Let τ t denote the Narasimhan-Simha measure on S t . We also get an analog of Corollary 5.3.1 Lemma 7.1.2. In the chart U around P i described above, in the region {|t i | 1/2 < |w i |< 1 (log|ti| −1 ) m } we have that
τ t ≈ |dw i ∧ dw i | 2π|w i |log|t i | −1 .
action G is Aut(S t ). Let t, t ∈ D be two points in the orbit of the G-action on D. Then, the action of some element in G provides a biholomorphism S t ∼ − → S t which induces a canonical biholomorphic map S t /Aut(S t ) ∼ − → S t /Aut(S t ) and is independent of the choice of the aforementioned element of G.
Thus, topologically, the fiber of the map C g → M g over the isomorphism class of a stable curve C is C/Aut(C).
Recall from the construction of τ on a smooth genus g curve Y (see Section 2.3) that τ is invariant under the action of Aut(Y ). It is not hard to check that τ is also invariant on a stable curve under the action of its automorphism group.
Let τ t denote the pushforward of the Narasimhan-Simha measure under the map S t → S t /Aut(S t ).
The following corollary is equivalent to Theorem C Corollary 7.2.1. The map D/G → (C c (S/G)) ∨ given by sending [t] → τ t is well defined and continuous, where (C c (S/G)) ∨ is the space of Radon measures on S/G equipped with the weak * topology.
Proof. From Lemma 7.1.3, it follows that the map D → (C c (S)) ∨ given by t → τ t is continuous. It is enough to show that the composition D → (C c (S)) ∨ → (C c (S/G)) ∨ is invariant under the action of G on D i.e. we need to show that if g · t = t for g ∈ G and t, t ∈ D, then the pushforward of τ t is the same as τ t under the canonical identification S t /Aut(S t )
∼ − → S t /Aut(S t ). Consider the diagram S t S t S t /Aut(S t ) S t /Aut(S t ) g ∼
Since g induces a biholomorphism between S t and S t , it follows that the pushforward of τ t under g is the same as τ t and thus we get that the pushforward of τ t to S t /Aut(S t ) is the same the same as τ t
Date: December 1, 2020.
Figure 1 .
1A family of genus 4 curves and the associated curve complex hybrid space
.
Theorem A. There exists a measure τ (m,B) 0 on ∆ CC (X ) such that τ(m,B) t → τ (m,B) 0 weakly as measures on X hyb CC .
X hyb as a corollary of Theorem A.
a sum of pluri-Bergman measures on the curves in ∆ CC (X ) and Lebesgue measures on the edges. The mass of each edge with respect to µ (m,B) 0 is the same as the mass of the edge with respect to τ (m,B) 0
denote the weak limit of µ
on X hyb as t → 0. Then, the measures µ (m,B) 0
Figure 2 .
2The above figure shows the minimal snc model, X , for (X, 1 4 B), where B = B 1 + B 2 + B 3 and X 0
• g(E) ≥ 2; • g(E) = 1 and either val(E) ≥ 1 or deg(B| E ) ≥ 1; • g(E) = 0 and val(E) ≥ 2; • g(E) = 0, val(E) = 1 and deg(B| E ) ≥ m; or • g(E) = 0, val(E) = 0 and deg(B| E ) ≥ 2m.
• m = 2, g(E) = 0, val(E) = 3 and deg(B| E ) = 0 • g(E) = 0, val(E) = 2 and deg(B| E ) = 1. • E is inessential i.e. g(E) = 0, val(E) = 2 and deg(B| E ) = 0. • g(E) = 0, val(E) = 1 and deg(B| E ) = m
let f be a compactly supported function on the halfdumbbell D = {(w, u) ∈ D×[0, ) | either w = 0 or u = 0} and let r : D×[0, ) → D be a strong deformation retract. Then, |t| <|w|<1
− 1)(g − 1) + k deg(B| Xt ) be the limit of µ (km,kB) 0 normalized to volume 2g − 2 + deg(B| X t ) m . A similar computation shows that the µ [∞] 0 places no mass on the edges in Γ X . The mass of µ [∞] 0 on a vertex v E associated to an irreducible component E is
be the nodal points that lie on E i . Then, τ | Ei\{P (i) 1 ,...,P (i) r i } is the Narasimhan-Simha measure on E i associated to the line bundle O Ei (mK Ei + (m − 1
Lemma 7.1. 1 .
1For i = 1, . . . , s, we have θ i,t St = (2π log|t i | −1 ) m/2 + O(1)and for i = s + 1, . . . , M , we haveθ i,t St → θ i,0 S0
In this case, there exists a model X such that is as m → ∞. In the case of B = 0, Tsuji has shown that the supremum of τ (m) τ (m) as m → ∞ exists as a bounded volume form [Tsu07, Theorem 4.1]. However, it is not clear whether this supremum is a limit or not.µ
(km,kB)
0
, normalized to volume 2g−2+
deg(B| X t )
m
, converges to a measure that places
no mass on the edges and places a Dirac mass of 2g(v E ) − 2 + val(v E ) + deg(B| E )
m
on each vertex v E of the dual graph of X 0 .
As for the limit of τ
(m,B)
0
, it is not even clear to us what the asymptotics of
Xt τ m,B
t
Acknowledgments. I would like to thank my advisor, Mattias Jonsson, for his suggestions and comments. This work was supported by the NSF grants DMS-1600011 and DMS-1900025.as t → 0.Corollary 6.1.5. Let (U 2 , z, w) be a coordinate chart adapted to a node P = E 1 ∩ E 2 . Let f be a continuous function on [0, 1] and let 0 < α < β < 1 2 . Without loss of generality, suppose that ψ 1 develops a pole of order m at P . Then,Corollary 6.1.6. Let (U 2 , z, w) be a coordinate chart adapted to a node P = E 1 ∩ E 2 . Let 0 < 1 2 . Let f be a continuous compactly-supported function on the half-dumbbell D ⊂ D × [0, ) and let r : D × [0, ) → D be a strong deformation retract. Without loss of generality, suppose that ψ 1 develops a pole of order m at P . If E is of Type I, then,Proof. We break up the region {|t| < |w|< 1} into two regions:We analyze the integral separately on each region.Let us first analyze the integral on the region { 1 (log|t| −1 ) m < |w|< 1}. It follows from the asymptotics of A −1 j,k , θ j,t and τ t that the pointwise limit ofas t → 0 if E is of Type I, otherwise the limit is 0. Using the dominated convergence theorem, we get thatif E is of Type I, otherwise the limit is 0.To estimate the integral in the region {|t| < |w|<Now we use a change of variables u = log|w| log|t| and ϑ = arg(w) to getWe would like to understand the limit of µ 0 as m → ∞. Firstly, note that if m is large enough then the minimal snc model of (X, 1 m B) is just the minimal semistable model of X. So, let X denote the minimal semistable model of X in this section.We only compute the limit on Γ X and not on ∆ CC (X ) as it is not clear to us what the limit behavior would look like near a smooth point in X 0 .Let µ Away from such a region and near smooth points of S 0 , we have that τ t → τ 0 , where τ 0 is the part of τ 0 without the Dirac masses.We immediately get the following result, which is a local version of Theorem C.Corollary 7.1.3. Let f be a continuous compactly supported function on S. Then,Proof. By using partitions of unity, we can assume that support of f is small enough. If f is supported in the chart U described above, then using the fact thatwe getWe also get a similar convergence near the smooth points of S 0 , which proves the result.Remark 7.1.4 (Limit of pluri-Bergman measures). If we try to figure out the limit of the pluri-Bergman measures on S using the techniques in Section 6, then we run into an issue. As an analog of Lemma 6.1.2, we will get that the matrix A will have the first s diagonal entries being (2π log|t i | −1 ) m , but when we try to figure out the asymptotics of A −1 , we see that it will depend on the relative orders of magnitude of log|t i | −1 . Thus, it is unlikely that the limit of µ t will exist on S. The limit of µ t might exist on the hybrid space constructed by Amini and Nicolussi[AN20]which keeps track of the order of magnitude of log|t i | −1 .7.2. The global picture. Let π : S → D denote a Kuranishi family for a stable curve S 0 . Let G = Aut(S 0 ) denote the (finite) group of self biholomorphisms of S 0 . From the universal property of the Kuranishi family, we see that G acts on S as well as D, after possibly shrinking D.Since G is finite, the quotients S/G and D/G exist as normal complex analytic spaces. For our purposes, the underlying topological space is sufficient. The spaces D/G and S/G forms a neighborhood of the isomorphism class of S 0 in M g and its preimage in C g . 
Locally, the map C g → M g is given by S/G → D/G.Note that S → D is also a Kuranishi family for S t for all t ∈ D [ACG11, Corollary XI.4.9]. Thus, it follows that the stabilizer of a point t ∈ D under the
Linear series on metrized complexes of algebraic curves. Omid Amini, Matthew Baker, 10.1007/s00208-014-1093-8Math. Ann. 3621-2Omid Amini and Matthew Baker. Linear series on metrized complexes of algebraic curves. Math. Ann., 362(1-2):55-106, 2015. doi:10.1007/s00208-014-1093-8.
Geometry of algebraic curves. Enrico Arbarello, Maurizio Cornalba, Phillip A Griffiths, IIof Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical SciencesEnrico Arbarello, Maurizio Cornalba, and Phillip A. Griffiths. Geometry of al- gebraic curves. Volume II, volume 268 of Grundlehren der Mathematischen Wis- senschaften [Fundamental Principles of Mathematical Sciences].
. Springer, 10.1007/978-3-540-69392-5Joseph Daniel HarrisHeidelbergSpringer, Heidelberg, 2011. doi:10.1007/978-3-540-69392-5. With a contribution by Joseph Daniel Harris.
Moduli of hybrid curves and variations of canonical measures. Omid Amini, Noema Nicolussi, arXiv:2007.07130Omid Amini and Noema Nicolussi. Moduli of hybrid curves and variations of canonical measures, 2020, arXiv:2007.07130.
A non-Archimedean interpretation of the weight zero subspaces of limit mixed Hodge structures. In Algebra, arithmetic, and geometry: in honor of Yu. Vladimir G Berkovich, 10.1007/978-0-8176-4745-2_2Progr. Math. IBirkhäuser Boston, IncI. Manin.Vladimir G. Berkovich. A non-Archimedean interpretation of the weight zero subspaces of limit mixed Hodge structures. In Algebra, arithmetic, and geometry: in honor of Yu. I. Manin. Vol. I, volume 269 of Progr. Math., pages 49-67. Birkhäuser Boston, Inc., Boston, MA, 2009. doi:10.1007/978-0-8176-4745-2_2.
| []
|
[
"Observing stellar bow shocks",
"Observing stellar bow shocks"
]
| [
"A C Sparavigna \nDepartment of Physics\nPolitecnico di Torino\nTorinoItaly\n",
"R Marazzato \nDepartment of Control and Computer Engineering\nPolitecnico di Torino\nTorinoItaly\n"
]
| [
"Department of Physics\nPolitecnico di Torino\nTorinoItaly",
"Department of Control and Computer Engineering\nPolitecnico di Torino\nTorinoItaly"
]
| []
| For stars, the bow shock is typically the boundary between their stellar wind and the interstellar medium. Named for the wave made by a ship as it moves through water, a bow shock wave can be created in space when two streams of gas collide. Space is actually filled with the interstellar medium, consisting of tenuous gas and dust. Stars emit a flow called the stellar wind, which eventually bumps into the interstellar medium, creating an interface where physical conditions such as density and pressure change dramatically, possibly giving rise to a shock wave. Here we discuss some literature on stellar bow shocks and show observations of some of them, enhanced by image processing techniques, in particular by the recently proposed AstroFracTool software. | null | [
"https://export.arxiv.org/pdf/1005.1527v1.pdf"
]
| 119,257,158 | 1005.1527 | 59736f34b7380fe060a2419eed6092ee2b8613a0 |
Observing stellar bow shocks
A C Sparavigna
Department of Physics
Politecnico di Torino
TorinoItaly
R Marazzato
Department of Control and Computer Engineering
Politecnico di Torino
TorinoItaly
Observing stellar bow shocks
Shock Waves, Astronomy, Image Processing
For stars, the bow shock is typically the boundary between their stellar wind and the interstellar medium. Named for the wave made by a ship as it moves through water, a bow shock wave can be created in space when two streams of gas collide. Space is actually filled with the interstellar medium, consisting of tenuous gas and dust. Stars emit a flow called the stellar wind, which eventually bumps into the interstellar medium, creating an interface where physical conditions such as density and pressure change dramatically, possibly giving rise to a shock wave. Here we discuss some literature on stellar bow shocks and show observations of some of them, enhanced by image processing techniques, in particular by the recently proposed AstroFracTool software.
Introduction
Thanks to space telescopes and large ground-based telescopes equipped with adaptive optics devices, a continuously increasing amount of high-quality images is published on the web, where they can be freely viewed by scientists as well as by astrophiles. Among the many recent successes of telescope investigations, let us just recall the discovery, with near-infrared imaging, of many extrasolar planets, such as those named HR 8799b, c, d [1][2][3]. However, the detection of faint structures remains a challenge for large instruments. After high-resolution images have been obtained, further image processing is often necessary, for instance to remove the point-spread function of the instrument. This is an essential processing step, which helped determine the presence of planets revolving about stars. Besides the well-known enhancement techniques, such as the Richardson-Lucy deconvolution [4,5], many other processing methods are involved in astrophysical research. Among others, those based on the MCS deconvolution algorithm revealed spectacular gravitational lensing effects, such as that at cluster Abell 2218, where distant blue galaxies are squeezed into a thin circular ring around the middle of the cluster [6][7][8][9][10]. Other interesting objects are the nurseries of stars: dense nebulae, billowing clouds of gas and dust. The filamentary structures of these clouds can be enhanced by image processing, to appreciate the detailed directions of the streaming flows. A beautiful example of cometary structures can be observed in NGC 5189 (see Fig. 1). In the nurseries, young stars emit strong stellar winds. In the space near these stars, the blowing stellar wind can create regions where the conditions of density and pressure change dramatically, giving rise to shock waves. These shock waves are also known as bow shocks, like the waves made by a ship as it moves through water. The literature on stellar bow shocks is now rapidly increasing, due to many research projects based on infrared observations, which are able to reveal the emission of energy from the bow shocks, distinguished from the stars themselves. Let us consider for instance the MIRIAD project, based on the use of the Spitzer Space Telescope: this project has as its primary aim to probe the material distribution in the extended circumstellar envelopes of evolved stars. Ref. [11] shows a bow shock at the interface of the interstellar medium and the wind of the moving star R Hydrae. According to [11], this discovery exemplifies the potential of Spitzer as a tool to examine the detailed structure of extended far-IR nebulae around bright central sources.
As we shall see, besides Spitzer, other space telescopes, such as Akari, and ground-based IR telescopes are able to reveal bow shocks created in the interstellar medium. Here, we discuss some observations of stellar bow shocks, chosen from the scientific literature. In particular, we discuss two observations that we, and the readers, can easily repeat using images freely published on the web. In the enhanced images, obtained with our recently proposed processing technique, we have a better view of the bow shocks, because the software is able to enhance many details in images, such as edges and faint objects (an example is shown in Fig. 1) [12,13]. Before reporting the bow shock observations, let us briefly discuss their physics.
Shock waves
The basic theory of shock waves can be considered a part of fluid mechanics modelling. We find shock waves in bullet motions, in explosions, as well as in astrophysics. To have a shock wave, we need a motion comparable to or exceeding the sound speed. Shocks start to occur in the limit where the pressure variation can no longer be considered small. As a simple example of how these waves arise, consider the one-dimensional flow of gas in a small-diameter cylindrical tube, which is fitted with a piston at one end and closed at the other end. As the piston is moved into the fluid, the fluid starts to compress. Information about this rise in pressure propagates away from the piston at the sound speed c_s of the fluid. If the piston speed v_p is greater than the sound speed, the pressure continues to build in front of the piston, with the gradient in pressure becoming steeper and steeper. The edge of the pressure hump (the shock) moves down the tube at speed v_s. We can define the Mach number as M = v_s/c_s. Since sonic disturbances travel in the gas at the local sound speed, and since the gas immediately ahead of the piston is moving with the piston speed, the sound waves emanating from each point along the piston path travel at a speed of v_p + c_s. The waves converge to form a shock front, travelling along the tube at supersonic speed. Landau and Lifshitz discussed shock waves in their book on fluid mechanics [14]. Their discussion of shocks uses a frame of reference where the shock is at rest. At the stationary shock, a set of equations, the Rankine-Hugoniot equations, relates the density, pressure and flux conditions on either side of it. These equations express the conservation of three quantities: the mass, the flux of momentum and the specific total energy (for more details, see [15]). These conditions can be manipulated to obtain the relative conditions before and after the shock. As in Ref. 15, we can distinguish isothermal from adiabatic shocks: in fact, real shocks have conditions between those of adiabatic and isothermal shocks.
Shocks in space
The equations of hydrodynamics are valid if the particle mean-free path is small in comparison with the characteristic lengths involved in the problem. In space the medium is so rarefied that the conditions for shocks would seem impossible. Nevertheless, shock waves exist. In fact, the interstellar medium and the solar wind contain plasma. For a plasma in a magnetic field, the hydrodynamic approximation can be used even in those cases where this criterion is not fulfilled, because in a magnetic field there is a second characteristic length, the Larmor radius [16]. If this length is much less than the characteristic size of the system, the equations of magnetohydrodynamics (MHD) will be valid. The fluid theory can then be replaced by magnetohydrodynamic fluid theory, which supports the so-called Alfvén and magnetosonic waves. MHD shocks are similar to adiabatic shocks, but with the addition of a magnetic field term and one more conserved quantity, the magnetic flux. Numerous examples of cosmic shocks exist. The Earth and other planets have magnetic fields. The solar wind, mostly composed of protons and electrons, encounters an MHD shock at the Earth's bow shock, after which particles move in the Earth's magnetic field [15]. Another shock is caused by the solar wind at the so-called heliopause, where the Sun's particles flow out into the interstellar medium. Larger stars, having dense winds, produce great shocks. An example of a cosmic shock is a Herbig-Haro object [15]. It is observed as small patches of nebulosity associated with a newly born star, formed when the gas ejected by the star collides with clouds of gas and dust. Herbig-Haro objects are ubiquitous in star-forming regions and can evolve visibly over short timescales as they move rapidly away from their parent star into the gas clouds of interstellar space (see for instance [17]). Spectacular shock waves are also visible in the form of supernova remnants.
Stellar bow shock
Stellar wind bow shocks are structures due to the supersonic passage of wind-blowing stars [18]. They condense the interstellar matter into thin shells, which may be revealed by their post-shock emission or by scattered light. By means of these structures, stellar winds that might otherwise go undetected can be studied. The theory of momentum-supported bow shocks is discussed in Ref. 18: this theory was first developed by Baranov, Krasnobaev and Kulikovskii (BKK) [19], who were motivated by the problem of the interaction between the solar wind and the local interstellar medium. They considered the collision of an isotropic stellar wind with a uniform ambient medium, including the supersonic motion of the star with respect to that medium, and solved the equations numerically to obtain the shape of the bow shock. In [18], the author presents a formulation of the problem that allows one to obtain simple, exact solutions for all quantities in the numerical model of BKK, such as the shell's shape and the mass and velocity distributions within it. Where the stellar wind and the ambient medium collide head-on, it is possible to define the radius of the starting point of the shell, found by balancing the ram pressure of the wind and the ambient medium [18].
This radius is given by $R_0 = \sqrt{\dot{m}_w V_w / (4\pi \rho_a V_S^2)}$, where $\dot{m}_w$ and $V_w$ are the mass-loss rate and the constant speed of the stellar wind, $V_S$ is the speed of the star, and $\rho_a$ is the density of the uniform ambient medium. This standoff distance sets the length scale of the shell, whose shape is a universal function. Because the bow shock depends upon both the stellar wind's and the ambient medium's properties, stellar wind bow shocks may be very useful probes of both. According to [18], it is then possible to recover the mass and momentum flux functions and thus the stellar mass-loss rate. The local ambient density can be derived from the equations too. In the following, we give some examples of observed stellar bow shocks.
Betelgeuse
Akari Space Telescope observations of Betelgeuse, the bright red supergiant star located in the constellation Orion, 640 light-years from Earth, show the star making a bow shock as it crosses the interstellar medium. At [20,21], three-colour composite images of Betelgeuse and its surroundings, taken at 65, 90 and 140 micrometres, are presented: Betelgeuse travels through the interstellar medium, creating a bow shock. As previously discussed, it is not the star itself that creates the bow shock, but rather the interaction of its stellar wind with the gas in the interstellar medium. This wind warms up the surrounding gas, which releases light in the infrared. The shape of a bow shock can be calculated exactly using a shock theory based on the conditions of the shock [18]. By analyzing the shape of the bow shock around Betelgeuse, researchers determined that there is a strong flow of the interstellar medium around Betelgeuse: the interstellar medium, originating from star-forming regions in Orion's Belt, is flowing at 11 km/s, and Betelgeuse is crossing this flow at 30 km/s while spewing out its wind at 17 km/s. The combined motion of Betelgeuse and its wind behaves like a ship crossing a river [21]. At [22], an image shows further evidence of bow shocks from dense gases and plasma in the Orion Nebula. In this case, it is the Hubble Space Telescope that reveals the structures residing within the intense star-forming region which is the Great Nebula in Orion. One such structure is the bow shock around the star LL Orion. Again, this star emits a stellar wind. The material in the fast wind from LL Orion collides with slow-moving gas evaporating away from the center of the Orion Nebula. The surface where the two winds collide has a clear crescent bow shape. The presence of a cometary tail behind the star creates an image looking like a bow with its arrow.
In the Orion Nebula again
Let us note that, unlike the bow shock made by a ship, an interstellar bow shock is a three-dimensional structure. The structure has a clear boundary on the side facing away from LL Orion and is diffuse on the side closest to the star: this is a characteristic common to many bow shocks [22]. A second bow shock can be seen around a star near LL Orion. The image in Ref. 22 is a composite obtained with specific filters to represent oxygen, nitrogen, and hydrogen emissions.
RCW 49 nebula
One of the most prolific birthing places in our galaxy, the nebula RCW 49, was observed in high detail for the first time by the Spitzer Space Telescope. Located 13,700 light-years away in the Centaurus constellation, this is a dusty stellar nursery with more than 2,200 stars. Many of the stars cannot be seen at visible wavelengths but can be viewed with Spitzer's infrared camera. Ref. 23 describes the structure of the nebula. The interstellar medium structures are dominated by two large cavities. The first, blown out to the west, contains the massive young cluster Westerlund 2, and the second is an enclosed bubble around the Wolf-Rayet star WR 20b. In Ref. [23] the researchers show three bow shocks associated with RCW 49. These bow shocks are interesting: none of them points directly back toward the central cluster. This could be a consequence of the expanding bubbles driven by Westerlund 2 and the Wolf-Rayet stars interacting with each other, thereby giving non-radial components to the flows. It is also possible that the bow-shock-driving stars have large orbital motions relative to the dynamic interstellar medium. In Fig. 2, we show a detail of an original high-quality image, obtained from a web page of the University of Wisconsin-Madison [24], containing two of the bow shocks described in Ref. 23. On the right of the same figure, we show the image obtained by enhancing the edges of the interstellar clouds by means of AstroFracTool. Bow shocks from Ref. 23 are marked with square boxes. The enhanced image reveals another bow structure (encircled) which is not considered in Ref. 23. In our opinion, an investigation of images with higher resolution could be rather interesting to understand whether a bow shock is present or not.
The Galactic Center
We can observe bow shocks at the center of our Galaxy too. Because of the high dust content along the line of sight to the Galactic Center, it is impossible to observe this region at ultraviolet and optical wavelengths, but infrared array detectors combined with adaptive optics give astronomers a new way to explore this region. In the past years, many investigations with high-resolution near-infrared imaging have been carried out of the central few light-years of the Milky Way, in particular in the vicinity of the compact radio source Sgr A*. This is the most likely counterpart of the supposed black hole occupying the center of the Galaxy. The accretion of gas onto the black hole would release the energy to power the radio source [25,26]. The Galactic Center also contains a number of young clusters of recently formed hot and massive stars. Many stars are old red main-sequence stars. The existence of relatively young stars raises problems because it was expected that the tidal forces from the central black hole would prevent their formation [25]. This paradox of youth is even more remarkable for stars that are on very tight orbits around Sagittarius A* [26]. Proposed explanations are that the stars were formed in a massive star cluster offset from the Galactic Center that migrated to its current location once formed, or that star formation happens within a rather compact accretion disk around the central black hole. In these dense regions, scientists of the Gemini Observatory, analyzing images obtained with Hawaii's adaptive optics on Gemini North, found that an object, IRS-8, known to be a strong infrared source for almost two decades, has infrared emission largely in the form of a spectacular bow shock surrounding a central star [27]. It is the heated dust in the bow shock which is radiating, because it is compressed by the shock and because it is absorbing radiation from the hot neighbouring stars, including IRS-8's central star itself. This is an interesting example, showing how new observations can discriminate the true nature of an infrared source.
Other observations of the Galactic Center have been obtained with the AO imager/spectrometer NAOS/CONICA, mounted on the Yepun Telescope, Chile. As claimed in Ref. 28, this is the ideal instrument for tackling all the basic questions concerning the Galactic Center. In Ref. 29, using this device, researchers obtained images of a bow shock in the region near IRS7. In Figure 3, we show this region, as seen in the image from the Gemini South Telescope [30]. Let us enhance the image with AstroFracTool: the image on the right is obtained. Note that the structure of the Galactic Center is strongly enhanced. The bow shock reported in Ref. 29 is now clearly visible, marked by a square box.
Conclusions
We devoted this paper to the discussion of bow shocks created in the interstellar medium by the winds of moving stars. We discussed some observations of stellar bow shocks, chosen from the scientific literature. The two examples proposed for RCW 49 and the Galactic Center are important in our opinion, because the reader can easily repeat the investigation using images which can be freely downloaded from the web. In the enhanced images, obtained with a recently proposed processing method, we have a better view of the bow shocks.
Fig. 3: The Galactic Center (credits: Gemini Observatory-Abu Team-NOAO-AURA-NSF). This infrared image reveals the core of our Galaxy. On the right, the same image after AstroFracTool processing. Note the enhancement of the structures. The bow shock reported in Ref. 29 is clearly visible, marked by the square box.
C. Marois, B. Macintosh, T. Barman, B. Zuckerman, I. Song, J. Patience, D. Lafreniere, R. Doyon, Direct imaging of multiple planets orbiting the star HR 8799, Science, Vol. 322, pp. 1348-1352, 2008.
G. Schneider, I. Song, B. Zuckerman, E. Becklin, P. Lowrance, B. Macintosh, M. Bessell, C. Dumas, G. Chauvin, NICMOS imaging of 2MASSWJ 1207334-393254 - A planetary-mass companion candidate, AAS 205th Meeting, 9-13 January 2005.
I. Song, G. Schneider, B. Zuckerman, J. Farihi, E.E. Becklin, M.S. Bessell, P. Lowrance, B.A. Macintosh, HST NICMOS imaging of the planetary-mass companion to the young brown dwarf 2MASSW J1207334-393254, The Astrophysical Journal, Vol. 652, pp. 724-729, 2006.
W.H. Richardson, Bayesian-based iterative method of image restoration, JOSA, Vol. 62(1), pp. 55-59, 1972, doi:10.1364/JOSA.62.00005.
Christian Buil, IRIS software, http://www.astrosurf.com/~buil/
F. Courbin, P. Magain, M. Kirkove, S. Sohy, A method for spatial deconvolution of spectra, The Astrophysical Journal, Vol. 529, pp. 1136-1144, 2000.
V. Chantry, P. Magain, Deconvolution of HST images of the Cloverleaf gravitational lens: Detection of the lensing galaxy and a partial Einstein ring, A&A, Vol. 470, pp. 467-473, 2007, doi:10.1051/0004-6361:20066839.
M. Gillon, P. Magain, V. Chantry, G. Letawe, S. Sohy, F. Courbin, F. Pont, C. Moutou, DECPHOT: an optimal deconvolution-based photometric reduction method, arXiv:astro-ph/0701607v1.
The reader can find many examples of the use of a deconvolution method at http://wela.astro.ulg.ac.be/themes/dataproc/deconv/dec/deconv_e.html
Images of Abell 2218 at http://en.wikipedia.org/wiki/Gravitational_lens
T. Ueta, A.K. Speck, R.E. Stencel, F. Herwig, R.D. Gehrz, R. Szczerba, H. Izumiura, A.A. Zijlstra, W.B. Latter, M. Matsuura, M. Meixner, M. Steffen, M. Elitzur, Detection of a far-infrared bow shock nebula around R HYA: the first MIRIAD results, The Astrophysical Journal, Vol. 648, pp. L39-L42, 2006.
R. Marazzato, A.C. Sparavigna, Astronomical image processing based on fractional calculus: the AstroFracTool, arXiv:0910.4637v2 [astro-ph.IM], 2009.
A.C. Sparavigna, P. Milligan, Using fractional differentiation in astronomy, arXiv:0910.4243v2 [astro-ph.IM], 2009.
L. Landau, E.M. Lifshitz, Fluid Mechanics, Pergamon Press, Oxford, 1987.
Charles Danforth, Cosmic Gardening: the physics of shocks, September 30, 1997.
A.I. Akhiezer, G.J.A. Lubarski, R.V. Polovin, Simple Waves and Shock Waves in Magnetohydrodynamics, http://www-naweb.iaea.org/napc/physics/2ndgenconf/sets/108.html
F.P. Wilkin, Exact analytic solutions for stellar wind bow shocks, The Astrophysical Journal, Vol. 459, pp. L31-L34, 1996.
V.B. Baranov, K.V. Krasnobaev, A.G. Kulikovskii, Soviet Phys.-Dokl., Vol. 15, p. 791, 1971.
M.S. Povich, R.A. Benjamin, B.A. Whitney, B.L. Babler, R. Indebetouw, M.R. Meade, E. Churchwell, Interstellar weather vanes: GLIMPSE mid-infrared stellar wind bow shocks in M17 and RCW 49, The Astrophysical Journal, Vol. 689, pp. 242-248, 2008.
A.M. Ghez, S. Salim, S.D. Hornstein, A. Tanner, J.R. Lu, M. Morris, E.E. Becklin, G. Duchêne, Stellar orbits around the Galactic Center black hole, arXiv:astro-ph/0306130v2, 2004.
R. Schödel, T. Ott, R. Genzel, R. Hofmann, M. Lehnert, A. Eckart, N. Mouawad, T. Alexander, M.J. Reid, R. Lenzen, M. Hartung, F. Lacombe, D. Rouan, E. Gendron, G. Rousset, A.-M. Lagrange, W. Brandner, N. Ageorges, C. Lidman, A.F.M. Moorwood, J. Spyromilio, N. Hubin, K.M. Menten, A star in a 15.2-year orbit around the supermassive black hole at the centre of the Milky Way, Nature, Vol. 419, pp. 694-696, 2002.
T.R. Geballe, F. Rigaut, J.-R. Roy, B.T. Draine, A bow shock of heated dust surrounding Galactic Center source IRS8, The Astrophysical Journal, Vol. 602, pp. 770-775, 2004.
T. Ott, R. Schödel, R. Genzel, A. Eckart, F. Lacombe, D. Rouan, R. Hofmann, M. Lehnert, T. Alexander, A. Sternberg, M. Reid, W. Brandner, R. Lenzen, M. Hartung, E. Gendron, Y. Clénet, P. Léna, G. Rousset, A.-M. Lagrange, N. Ageorges, N. Hubin, C. Lidman, A.F.M. Moorwood, A. Renzini, J. Spyromilio, L.E. Tacconi-Garman, K.M. Menten, N. Mouawad, Inward bound: studying the Galactic Center with NAOS/CONICA, arXiv:astro-ph/0303408v1, 2003.
Y. Clénet, D. Rouan, F. Lacombe, E. Gendron, D. Gratadour, Near-infrared adaptive optics observations of the Galactic Center with NAOS/CONICA (ESO) and GriF (CFHT), Astron. Nachr., Vol. 324, pp. 327-331, 2003, doi:10.1002/asna.200385039.
Fig. 1: Cometary structures in NGC 5189: on the left, the original image from the Gemini web site; on the right, the image obtained after applying AstroFracTool and GIMP.
Fig. 2: Bow shocks in RCW 49: on the left, the original image (credits: NASA/JPL-Caltech, University of Wisconsin-Madison); on the right, the same image enhanced after applying AstroFracTool. Bow shocks from Ref. 23 are marked with square boxes. The enhanced image reveals a bow structure (encircled) which is not considered in Ref. 23.
| []
|
[
"Latent Structured Ranking",
"Latent Structured Ranking"
]
| [
"Jason Weston [email protected] \nMountain ViewGoogle, GoogleNew YorkUSA., USA\n",
"John Blitzer [email protected] \nMountain ViewGoogle, GoogleNew YorkUSA., USA\n"
]
| [
"Mountain ViewGoogle, GoogleNew YorkUSA., USA",
"Mountain ViewGoogle, GoogleNew YorkUSA., USA"
]
| []
| Many latent (factorized) models have been proposed for recommendation tasks like collaborative filtering and for ranking tasks like document or image retrieval and annotation. Common to all those methods is that during inference the items are scored independently by their similarity to the query in the latent embedding space. The structure of the ranked list (i.e. considering the set of items returned as a whole) is not taken into account. This can be a problem because the set of top predictions can be either too diverse (contain results that contradict each other) or are not diverse enough. In this paper we introduce a method for learning latent structured rankings that improves over existing methods by providing the right blend of predictions at the top of the ranked list. Particular emphasis is put on making this method scalable. Empirical results on large scale image annotation and music recommendation tasks show improvements over existing approaches. | null | [
"https://export.arxiv.org/pdf/1210.4914v1.pdf"
]
| 17,474,697 | 1210.4914 | 618d9a1a219087329f59ca4a581d608b84174d4f |
Latent Structured Ranking
Jason Weston [email protected]
Mountain ViewGoogle, GoogleNew YorkUSA., USA
John Blitzer [email protected]
Mountain ViewGoogle, GoogleNew YorkUSA., USA
Latent Structured Ranking
Many latent (factorized) models have been proposed for recommendation tasks like collaborative filtering and for ranking tasks like document or image retrieval and annotation. Common to all those methods is that during inference the items are scored independently by their similarity to the query in the latent embedding space. The structure of the ranked list (i.e. considering the set of items returned as a whole) is not taken into account. This can be a problem because the set of top predictions can be either too diverse (contain results that contradict each other) or are not diverse enough. In this paper we introduce a method for learning latent structured rankings that improves over existing methods by providing the right blend of predictions at the top of the ranked list. Particular emphasis is put on making this method scalable. Empirical results on large scale image annotation and music recommendation tasks show improvements over existing approaches.
INTRODUCTION
Traditional latent ranking models score the $i$th item $d_i \in \mathbb{R}^D$ given a query $q \in \mathbb{R}^D$ using the following scoring function:
$$f(q, d_i) = q^\top W d_i = q^\top U^\top V d_i, \qquad (1)$$
where $W = U^\top V$ has a low-rank parameterization, and hence $Uq$ can be thought of as the latent representation of the query and $V d_i$ is equivalently the latent representation of the item. The latent space is $n$-dimensional, where $n \ll D$, hence $U$ and $V$ are $n \times D$ matrices. This formulation covers a battery of different algorithms and applications.
For example, in the task of collaborative filtering, one is required to rank items according to their similarity to the user, and methods which learn latent representations of both users and items have proven very effective. In particular, Singular Value Decomposition (SVD) (Billsus and Pazzani, 1998; Bell et al., 2009) and Non-negative Matrix Factorization (NMF) (Lee and Seung, 2001) are two standard methods that at inference time use equation (1), although the methods to learn the actual parameters U and V themselves are different. In the task of document retrieval, on the other hand, one is required to rank text documents given a text query. The classical method Latent Semantic Indexing (LSI) (Deerwester et al., 1990) is an unsupervised approach that learns from documents only, but still has the form of equation (1) at test time. More recently, supervised methods have been proposed that learn the latent representation from (query, document) relevance pairs, e.g. the method Polynomial Semantic Indexing (PSI) (Bai et al., 2009). Finally, for multiclass classification tasks, particularly when involving thousands of possible labels, latent models have also proven to be very useful, e.g. the Wsabie model achieves state-of-the-art results on large-scale image (Weston et al., 2011) and music (Weston et al., 2012) annotation tasks. Moreover, all these models not only perform well but are also efficient in terms of computation time and memory usage.
Scoring a single item as in equation (1) is not the end goal of the tasks described above. Typically for recommendation and retrieval tasks we are interested in ranking the items. This is achieved by, after scoring each individual item using f(q, d_i), sorting the scores, largest first, to produce a ranked list. Further, typically only the top few results are presented to the user; it is thus critical that the method used performs well for those items. However, one potential flaw in the models described above is that scoring items individually as in eq. (1) does not fully take into account the joint set of items at the top of the list (even when optimizing top-of-the-ranked-list type loss functions).
The central hypothesis of this paper is that latent ranking methods could be improved if one were to take into account the structure of the ranked list during inference. In particular this would allow the model to make sure there is the right amount of consistency and diversity in the predictions.
Let us suppose for a given query that some of the predictions at the top of the ranked list are accurate and some are inaccurate. A model that improves the consistency of the predictions might improve overall accuracy. A structured ranking model that predicts items dependent on both the query and other items at the top of the ranked list can achieve such a goal. To give a concrete example, in a music recommendation task you might not want to recommend both "heavy metal" and "60s folk" in the same top k list. In that case, a structured model which encodes item-item similarities as well as query-item similarities could learn this by representing those two items with very different latent embedding vectors such that their pairwise item-item contribution is a large negative value, penalizing both items appearing in the top k. Note that a structured ranking model can do this despite the possibility that both items are a good match to the query, so an unstructured model would find this difficult to achieve.
Conversely, if improved results are gained from encouraging the top ranked items to be a rather diverse set for a particular query, then a structured model can learn to predict that instead. For example in the task of document retrieval, for ambiguous queries like "jaguar", which may refer either to a Panthera or to the car manufacturer, diversity should be encouraged. The goal of a structured ranker is to learn the optimal tradeoff between consistency and diversity on a case-by-case (per query) basis. As latent parameters are being learnt for each query type this is indeed possible.
In this work we propose a latent modeling algorithm that attempts to do exactly what we describe above. Our model learns to predict a ranked list that takes into account the structure of the top ranked items by learning query-item and item-item components. Inference then tries to find the maximally scoring set of documents. It should be noted that while there has been strong interest in building structured ranking models recently (Bakir et al., 2007), to our knowledge this is the first approach of this type to do so for latent models. Further, the design of our algorithm is also particularly tuned to work on large scale datasets which are the common case for latent models, e.g. in collaborative filtering and large scale annotation and ranking tasks. We provide empirical results on two such large scale datasets, on a music recommendation task, and an image annotation task, that show our structured method brings accuracy improvements over the same method without structure as well as other standard baselines. We also provide some analysis of why we think this is happening.
The rest of the paper is as follows. Section 2 describes our method, Latent Structured Ranking (LaSR). Section 3 discusses previous work and connects them to our method. Section 4 describes our empirical results and finally Section 5 concludes.
METHOD
Given a query q ∈ Q our task is to rank a set of documents or items D. That is, we are interested in outputting (and scoring) a permutationd of the set D, whered j is the jth item in the predicted ranked list. Our ultimate goal will be to design models which take into account not just individual document scores but the (learned) relative similarities of documents in different positions as well.
SCORING PERMUTATIONS BY SCORING INDIVIDUAL ITEMS
Let us begin by proposing methods for using the standard latent model of eq. (1) to score permutations. We need a method for transforming the scores for single documents into scores for permutations. Such transformations have been studied in several previous works, notably (Le and Smola, 2007). They show that finding maximally scoring permutations from single documents can be cast as a linear assignment problem, solvable in polynomial time with the Hungarian algorithm.
For the vanilla model we propose here, however, we can use a simple parameterization which allows for inference by sorting. For any given permutation we assign a score as follows:
$$f_{\text{vanilla}}(q,\bar d) = \sum_{i=1}^{|\bar d|} w_i\,(q^\top U^\top V \bar d_i), \qquad (2)$$
where for each position $i$ in the permutation we associate a weight $w_i$, where the $w_i$ can be any weights such that $w_1 > w_2 > \cdots > w_{|\bar d|} \geq 0$. For example, one can just set $w_i = \frac{1}{i}$. Inference using this model is then performed by calculating:
$$F_{\text{vanilla}}(q) = \operatorname{argmax}_{\bar d'}\, f_{\text{vanilla}}(q, \bar d').$$
In this case, computing the best-scoring assignment is simply a matter of sorting documents by their scores from eq. (1). To see this, note that the score of any unsorted pair can be increased by sorting, since the positional weights $w_i$ are fixed and decreasing.
LATENT STRUCTURED RANKING
The fundamental hypothesis of this paper is that including knowledge about the structure of the rankings at inference time will improve the overall set of ranked items. That is, we want to define a model where the score of a document $\bar d_i$ does not only depend on the query $q$ but also on the other items and their respective positions as well. What is more, we would prefer a model that places more weight on the top items in a permutation (indeed, this is reflected by common ranking losses like MAP and precision@k).
This leads us to propose the following class of Latent Structured Ranking (LaSR) models:
$$f_{\text{lsr}}(q,\bar d) = \sum_{i=1}^{|\bar d|} w_i\,(q^\top U^\top V \bar d_i) + \sum_{i,j=1}^{|\bar d|} w_i w_j\,(\bar d_i^\top S^\top S\, \bar d_j). \qquad (3)$$
In addition to the parameters of eq. (2), we now introduce the additional parameter $S$. $S$ takes into account the structure of the predicted ranked list. $S^\top S$ is a low-rank matrix of item-item similarities, where $S$ is an $n \times D$ matrix, just like $U$ and $V$, and must also be learnt by the model using training data.
CHOICE OF THE w i PARAMETERS
The weights $w$ are crucial to the usefulness of the matrix in the second term of eq. (3). If $w_i = 1$ for all $i$ then the entire second term would always be the same no matter what ranking $\bar d$ one chooses. If the position weights $w_i$ are decreasing, however, then the structural term involving $S$ is particularly meaningful at the top of the list.
As suggested before in Section 2.1, we could choose $w_i = \frac{1}{i}$. In that case the items at the top of the predicted ranked list dominate the overall score from the second term. In particular, the pairwise item-item similarities between items in the top-ranked positions play a role in the overall choice of the entire ranked list $\bar d$. Our model can hence learn the consistency vs. diversity tradeoff within the top k we are interested in.
However, if one knows in advance the number of items one wishes to show to the user (i.e. the top k) then one could choose directly to only take into account those predictions:
$$w_i = 1/i \ \text{ if } i \leq k, \text{ and } 0 \text{ otherwise.} \qquad (4)$$
As we will see this also has some computational advantages due to its sparsity, and will in fact be our method of choice in the algorithm we propose.
MODEL INFERENCE
At test time for a given query we need to compute:
$$F_{\text{lsr}}(q) = \operatorname{argmax}_{\bar d'}\, f_{\text{lsr}}(q, \bar d'). \qquad (5)$$
Just as inference in the vanilla model can be cast as a linear assignment problem, inference in the LaSR model can be cast as a quadratic assignment problem (Lacoste-Julien et al., 2006). This is known to be NP hard, so we must approximate it. In this section, we briefly discuss several alternatives.
• Linear programming relaxation: Since we know we can cast our problem as quadratic assignment, we could consider directly using the linear programming relaxation suggested by (Lacoste-Julien et al., 2006). In our experiments, however, we have tens of thousands of labels. Solving even this relaxed LP per query is computationally infeasible for problems of this size. We note that Wsabie's (Weston et al., 2011) sampling-based technique is an attempt to overcome even linear time inference for this problem.
• Greedy structured search: we could also consider a greedy approach to approximately optimizing eq. (5) as follows: (i) pick the document $\bar d_1 \in D$ that maximizes
$$f_{\text{greedy}}(q, \bar d_1) = w_1\,(q^\top U^\top V \bar d_1) + (w_1)^2\,(\bar d_1^\top S^\top S\, \bar d_1) \qquad (6)$$
and then fix that document as the top-ranked prediction. (ii) Find the second-best document, dependent on the first, by maximizing (for $N = 2$):
$$f_{\text{greedy}}(q, \bar d_N) = w_N\,(q^\top U^\top V \bar d_N) + \sum_{i=1}^{N} w_i w_N\,(\bar d_i^\top S^\top S\, \bar d_N).$$
Finally, (iii) repeat the above, greedily adding one more document each iteration by considering the above equation for $N = 3, \dots, k$, up to the number of desired items to be presented to the user. This method has complexity $O(k^2|D|)$. Its biggest drawback is that the highest-scoring document is chosen using the vanilla model. Even if we could improve our score by choosing a different document, taking into account the pairwise scores with other permutation elements, this algorithm will not take advantage of it. Another way to look at this is that precision@1 would be no better than with the vanilla model of eq. (1).
The greedy procedure also permits beam search variants. Using a beam of $M$ candidates this gives a complexity of $O(Mk^2|D|)$. This is tractable at test time, but the problem is that during (online) learning one would have to run this algorithm per query, which we believe is still too slow for the cases we consider here.
• Iterative search: Motivated by the defects in greedy search and LP relaxation, we propose one last, iterative method (a code sketch is given after this list). This method is analogous to inference by iterated conditional modes in graphical models (Besag, 1986). (i) On iteration $t = 0$, predict with an unstructured model (i.e. do not use the second term involving $S$):
$$f_{\text{iter}:t=0}(q,\bar d) = \sum_{i=1}^{|\bar d|} w_i\,(q^\top U^\top V \bar d_i). \qquad (7)$$
As mentioned before, computing the best ranking $\bar d$ just involves sorting the scores $q^\top U^\top V d_i$ and ordering the documents, largest first. Utilizing the sparse choice of $w_i = 1/i$ if $i \leq k$ and $0$ otherwise described in Section 2.3, we do not have to sort the entire set, but are only required to find the top $k$, which can be done in $O(|D| \log k)$ time using a heap. Let us denote the predicted ranked list as $\bar d^{\,0}$, and in general on each iteration $t$ we make predictions $\bar d^{\,t}$. (ii) On subsequent iterations, we maximize the following scoring function:
$$f_{\text{iter}:t>0}(q,\bar d) = \sum_{i=1}^{|\bar d|} w_i\,(q^\top U^\top V \bar d_i) + \sum_{i,j=1}^{|\bar d|} w_i w_j\,(\bar d_i^\top S^\top S\, \bar d_j^{\,t-1}). \qquad (8)$$
As $\bar d^{\,t-1}$ is now fixed on iteration $t$, the per-document scores
$$(q^\top U^\top V \bar d_i) + \sum_{j=1}^{|\bar d|} w_j\,(\bar d_i^\top S^\top S\, \bar d_j^{\,t-1}) \qquad (9)$$
are now independent of each other. Hence, they can be calculated individually and, as before, can be sorted, or the top $k$ can be found, depending on the choice of $w$. If we use the sparse $w$ of eq. (4) (which we recommend) then the per-document scores are also faster to compute, as we only require:
$$(q^\top U^\top V \bar d_i) + \sum_{j=1}^{k} w_j\,(\bar d_i^\top S^\top S\, \bar d_j^{\,t-1}).$$
Overall this procedure has complexity $O(Tk|D|)$ when run for $T$ steps. While at first glance this does not look any faster than the greedy or beam search methods at test time, it has important advantages at training time, as we will see in the next section.
LEARNING
We are interested in learning a ranking function where the top k retrieved items are of particular interest as they will be presented to the user. We wish to optimize all the parameters of our model jointly for that goal.
As the datasets we intend to target are large scale, stochastic gradient descent (SGD) training seems a viable option. However, during training we cannot afford to perform full inference during each update step as otherwise training will be too slow. A standard loss function that already addresses that issue for the unstructured case which is often used for retrieval is the margin ranking criterion (Herbrich et al., 2000;Joachims, 2002). In particular, it was also used for learning factorized document retrieval models in Bai et al. (2009). The loss can be written as:
$$\mathrm{err}_{AUC} = \sum_{i=1}^{m} \sum_{d^- \neq d_i} \max(0,\; 1 - f(q_i, d_i) + f(q_i, d^-)). \qquad (10)$$
For each training example $i = 1, \dots, m$, the positive item $d_i$ is compared to all possible negative items $d^- \neq d_i$, and one assigns to each pair a cost if the negative item is larger or within a "margin" of 1 from the positive item. These costs are called pairwise violations. Note that all pairwise violations are considered equally if they have the same margin violation, independent of their position in the list. For this reason the margin ranking loss might not optimize the top k very accurately, as it cares about the average rank.
For the standard (unstructured) latent model case, the problem of optimizing the top of the ranked list has also recently been addressed using sampling techniques (Weston et al., 2011) in the so-called WARP (Weighted Approximately Ranked Pairwise) loss. Let us first write the predictions of our model for all items in the database as a vector $\bar f(q)$, whose $i$th element is $\bar f_i(q) = f(q, d_i)$. One then considers a class of ranking error functions:
$$\mathrm{err}_{WARP} = \sum_{i=1}^{m} L\big(\mathrm{rank}_{d_i}(\bar f(q_i))\big) \qquad (11)$$
where $\mathrm{rank}_{d_i}(\bar f(q_i))$ is the margin-based rank of the labeled item given in the $i$th training example:
$$\mathrm{rank}_{d_i}(\bar f(q)) = \sum_{j \neq i} \theta\big(1 + \bar f_j(q) \geq \bar f_i(q)\big) \qquad (12)$$
where θ is the indicator function, and L(·) transforms this rank into a loss:
$$L(r) = \sum_{i=1}^{r} \alpha_i, \quad \text{with } \alpha_1 \geq \alpha_2 \geq \cdots \geq 0. \qquad (13)$$
The main idea here is to weight the pairwise violations depending on their position in the ranked list. Different choices of $\alpha$ define different weights (importance) for the relative position of the positive examples in the ranked list. In particular it was shown that choosing $\alpha_i = 1/i$ gives a smooth weighting over positions, where most weight is given to the top position, with rapidly decaying weight for lower positions. This is useful when one wants to optimize precision at $k$ for a variety of different values of $k$ at once (Usunier et al., 2009). (Note that by choosing $\alpha_i = 1$ for all $i$ we have the same AUC optimization as equation (10).)
We can optimize this function by SGD following the authors of Weston et al. (2011); that is, samples are drawn at random, and a gradient step is made for each draw. Due to the cost of computing the exact rank in (11), it is approximated by sampling. That is, for a given positive label, one draws negative labels until a violating pair is found, and then approximates the rank with
$$\mathrm{rank}_d(\bar f(q)) \approx \left\lfloor \frac{|D| - 1}{N} \right\rfloor$$
where $\lfloor \cdot \rfloor$ is the floor function, $|D|$ is the number of items in the database and $N$ is the number of trials in the sampling step. Intuitively, if we need to sample more negative items before we find a violator, then the rank of the true item is likely to be small (it is likely to be at the top of the list, as few negatives are above it).
This procedure for optimizing the top of the ranked list is very efficient, but it has a disadvantage with respect to structured learning: we cannot simply sample and score items any longer, as we need to somehow score entire permutations. In particular, it is not directly applicable to several of the structured prediction approaches like LP, greedy or beam search. That is because we cannot compute the scores $\bar f_i$ independently, since they depend on the ranking of all documents, which would make the sampling scheme invalid. However, for (a variant of) the iterative algorithm which we described in the previous section, the WARP (or AUC) technique can still be used.
The method is as follows. In the first iteration the model scores in eq. (7) are independent, and so we can train using the WARP (or AUC) loss. We then have to compute $\bar d^{\,0}$ (the ranking of items) for each training example for use in the next iteration. Note that using the sparse $w$ of eq. (4) this takes $O(|D| \log k)$ time to compute, and storage is also only a $|D| \times k$ matrix of top items. After computing $\bar d^{\,0}$, in the second iteration we are again left with independent scoring functions $f_i$, as long as we make one final modification.
Algorithm 1 LaSR training algorithm
Input: training pairs $\{(q_i, d_i)\}_{i=1,\dots,l}$.
Initialize the model parameters $U_t$, $V_t$ and $S_t$ for each $t$ (we use mean 0, standard deviation $1/\sqrt{d}$).
for $t = 0, \dots, T$ do
  repeat
    if $t = 0$ then
      $f(q, d) = q^\top U_0^\top V_0 d$
    else
      $f(q, d) = q^\top U_t^\top V_t d + \sum_{j=1}^{k} w_j\, d^\top S_t^\top S_t\, \bar d_j^{\,t-1}$
    end if
    Pick a random training pair $(q, d^+)$ and compute $f(q, d^+)$. Set $N = 0$.
    repeat
      Pick a random document $d^- \in D$, $d^- \neq d^+$, and compute $f(q, d^-)$. Set $N = N + 1$.
    until $f(q, d^+) < f(q, d^-) + 1$ or $N \geq |D| - 1$
    if $f(q, d^+) < f(q, d^-) + 1$ then
      Make a gradient step to minimize $L\big(\lfloor\frac{|D|-1}{N}\rfloor\big)\,\max(1 - f(q, d^+) + f(q, d^-),\, 0)$.
      Project the weights to enforce the constraints, i.e. if $\|U_{ti}\| > C$ then $U_{ti} \leftarrow C U_{ti}/\|U_{ti}\|$, $i = 1, \dots, D$ (and likewise for $V_t$ and $S_t$).
    end if
  until validation error does not improve.
  For each training example, compute the top $k$ ranking documents $\bar d_i^{\,t}$, $i = 1, \dots, k$, for iteration $t$ using $f(q, d)$ defined above.
end for
Instead of using eq. (8), on iteration $t$ we use
$$f_{\text{iter}:t>0}(q,\bar d) = \sum_{i=1}^{|\bar d|} w_i\,(q^\top U_t^\top V_t \bar d_i) + \sum_{i,j=1}^{|\bar d|} w_i w_j\,(\bar d_i^\top S_t^\top S_t\, \bar d_j^{\,t-1}), \qquad (14)$$
where $U_t$, $V_t$ and $S_t$ are separate matrices for each iteration. This decouples the learning at each iteration. Essentially, we are using a cascade-like architecture of $t$ models trained one after the other. Note that if a global optimum is reached for each $t$ then the solution should always be the same as or improve over step $t-1$, as one could pick the weights that give exactly the same solution as for step $t-1$.
So far, the one thing we have failed to mention is regularization during learning. One can regularize the parameters by preferring smaller weights. We constrain them using $\|S_{ti}\| \leq C$, $\|U_{ti}\| \leq C$, $\|V_{ti}\| \leq C$, $i = 1, \dots, |D|$. During SGD one projects the parameters back onto the constraints at each step, following the same procedure used in several other works, e.g. Weston et al. (2011); Bai et al. (2009).
Overall, our preferred version of Latent Structured
Ranking that combines all these design decisions is given in Algorithm 1.
PRIOR WORK
In the introduction we already mentioned several latent ranking methods: SVD (Billsus and Pazzani, 1998; Bell et al., 2009), NMF (Lee and Seung, 2001), LSI (Deerwester et al., 1990), PSI (Bai et al., 2009) and Wsabie (Weston et al., 2011). We should mention that many other methods exist as well, in particular probabilistic methods like pLSA (Hofmann, 1999) and LDA (Blei et al., 2003). None of those methods, whether they are supervised or unsupervised, take into account the structure of the ranked list as we do in this work, and we will use several of them as baselines in our experiments.
There has been a great deal of recent work on structured output learning (Bakir et al., 2007), particularly for linear or kernel SVMs (which are not latent embedding methods). In methods like Conditional Random Fields (Lafferty et al., 2001), SVM-struct (Tsochantaridis et al., 2004), LaSO (Daumé III and Marcu, 2005) and SEARN (Daumé et al., 2009), one learns to predict an output which has structure, e.g. for sequence labeling, parse tree prediction and so on. Predicting ranked lists can also be seen in this framework. In particular, LaSO (Daumé III and Marcu, 2005) is a general approach that considers approximate inference using methods like greedy approximation or beam search, which we mentioned in Section 2.4. As we said before, due to the large number of items we are ranking, many of those approaches are infeasible. In our method, scalability is achieved using a cascade-like training setup, and in this regard it is related to (Weiss et al., 2010). However, unlike that work, we do not use it to prune the set of items considered for ranking; we use it to consider pairwise item similarities.
The problem of scoring entire permutations for ranking is well-known and has been investigated by many authors (Yue et al., 2007a,b; Le and Smola, 2007; Xia et al., 2008). These works have primarily focused on using knowledge of the structure (in this case the predicted positions in the ranked list) in order to optimize the right metric, e.g. MAP or precision@k. In that sense, methods like Wsabie which use the WARP loss already use structure in the same way. In our work we also optimize top-of-the-ranked-list metrics by using WARP, but in addition we use the ranking structure to make predictions dependent on the query and the other predicted items during inference, by encoding this in the model itself. That is, in our work we explicitly seek to use (and learn) inter-document similarity measures.
There has been work on taking into account inter-document similarities during ranking. The most famous and prominent idea is pseudo-relevance feedback via query expansion (Rocchio, 1971). Pseudo-relevance feedback works by doing normal retrieval (e.g. using cosine similarity in a vector space model) to find an initial set of most relevant documents, then assuming that the top k ranked documents are relevant, and performing retrieval again by adjusting the cosine similarity based on previously retrieved documents. In a sense, LaSR is also a pseudo-relevance feedback technique, but one where inter-document similarities are learned to minimize ranking loss.
More recently, some authors have investigated incorporating inter-document similarity during ranking. Qin et al. (2008) incorporated a fixed document-document similarity feature in ranking. In their work, however, they did not score permutations; instead, each document was associated with a relevance score and the authors treated learning as a structured regression problem. For a situation with implicit feedback, Raman et al. (2012) investigate an inference technique similar to our greedy algorithm. Volkovs and Zemel (2009) also explored listwise ranking using pairwise document interactions in a probabilistic setup. To the best of our knowledge, however, none of these methods investigate a learned inter-document similarity (i.e. latent parameters for that goal), which is the most powerful feature of LaSR.
EXPERIMENTS
We considered two large scale tasks to test our proposed method. The first is a music recommendation task with over 170,000 artists (possible queries or items) and 5 million training pairs. The second is a task of large scale image annotation with over 15,000 labels and 7 million training examples.
MUSIC RECOMMENDATION TASK
The first task we conducted experiments on is a large scale music recommendation task. Given a query (seed) artist, one has to recommend to the user other artists that go well together with this artist if one were listening to both in succession, which is the main step in playlisting and artist page recommendation on sites like last.fm, music.google.com and programs such as iTunes and http://the.echonest.com/. We used the "Last.fm Dataset - 1K users" dataset available from http://www.dtic.upf.edu/~ocelma/MusicRecommendationDataset/lastfm-1K.html. This dataset contains (user, timestamp, artist, song) tuples collected from the Last.fm (www.lastfm.com) API, representing the listening history (until May 5th, 2009) for 992 users and 176,948 artists. Two consecutively played artists by the same user are considered as a (query, item) pair. Hence, both q_i and d_i are D = 176,948 sparse vectors with one non-zero value (a one) indicating which artist they are. One in every five days (so that the data is disjoint) was left aside for testing, and the remaining data was used for training and validation. Overall this gave 5,408,975 training pairs, 500,000 validation pairs (for hyperparameter tuning) and 1,434,568 test pairs.
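As an illustration of the data preparation, the following sketch (ours, not the authors' code) derives (query, item) pairs from raw play logs; skipping immediate repeats of the same artist is our own assumption.

```python
from collections import defaultdict

def build_pairs(events):
    """events: iterable of (user, timestamp, artist) tuples.

    Returns (query, item) pairs of consecutively played artists per user.
    """
    by_user = defaultdict(list)
    for user, ts, artist in events:
        by_user[user].append((ts, artist))
    pairs = []
    for plays in by_user.values():
        plays.sort()  # chronological order within each user
        for (_, a), (_, b) in zip(plays, plays[1:]):
            if a != b:  # assumption: skip immediate repeats of one artist
                pairs.append((a, b))
    return pairs
```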
We compare our Latent Structured Ranking approach to the same approach without structure by performing only one iteration of Algorithm 1. We used k = 20 for eq. (4). We also compare to two standard methods of providing latent recommendations, Singular Value Decomposition (SVD) and Non-negative Matrix Factorization (NMF). For SVD the Matlab implementation is used, and for NMF the implementation at http://www.csie.ntu.edu.tw/~cjlin/nmf/ is used.
Main Results
We report results comparing NMF, SVD and our method, Latent Structured Ranking (LaSR), in Table 1. For every test set (query, item) pair we rank the document set D according to the query and record the position of the item in the ranked list. We then measure the recall at 5, 10, 30 and 50. (Note that because there is only one item in the pair, precision at k is equal to recall@k divided by k.) We then average the results over all pairs. For all methods the latent dimension is n = 50. For LaSR, we give results for iterations t = 0, . . . , 2, where t = 0 does not use the structure. LaSR with t = 0 already outperforms SVD and NMF. LaSR optimizes the top of the ranked list at training time (via the WARP loss), whereas SVD and NMF do not, which explains why it performs better on top-of-the-list metrics here. We tested LaSR t = 0 using the AUC loss (10) instead of WARP (11) to check this hypothesis, and we obtained recall at 5, 10, 30 and 50 of 3.56%, 6.32%, 14.8% and 20.3% respectively, which are slightly worse than, but similar to, SVD, thus confirming our hypothesis. For LaSR with t = 1 and t = 2, our method takes into account the structure of the ranked list at inference time; t = 1 outperforms iteration t = 0, which does not use the structure. Further slight gains are obtained with another iteration (t = 2).
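The evaluation protocol can be summarized by the following sketch (ours, for illustration), which ranks the document set for each test query and records whether the held-out item appears in the top k:

```python
import numpy as np

def recall_at_k(score_fn, test_pairs, ks=(5, 10, 30, 50)):
    """score_fn(q_idx) -> scores for all D items; test_pairs: (q_idx, item_idx)."""
    hits = {k: 0 for k in ks}
    for q_idx, item_idx in test_pairs:
        scores = score_fn(q_idx)
        rank = int(np.sum(scores > scores[item_idx]))  # 0-based rank of the item
        for k in ks:
            if rank < k:
                hits[k] += 1
    return {k: hits[k] / len(test_pairs) for k in ks}
```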
Changing the Embedding Dimension

The results so far were all with latent dimension n = 50. It could be argued that LaSR with t > 0 has more capacity (more parameters) than competing methods, and that those methods could gain capacity by increasing their dimension n. We therefore report results for various embedding sizes (n = 25, 50, 100, 200) in Table 2.
The results show that LaSR (t = 0) consistently outperforms SVD and NMF for all the dimensions tried; but even with 200 dimensions, the methods that do not use structure (SVD, NMF and LaSR t = 0) are still outperformed by LaSR with structure (t > 0) using only n = 50 dimensions.
Analysis of Predictions
We give two example queries and the top ranked results for LaSR with and without use of structure ((t = 0) and (t = 1)) in Table 3. The left-hand query is a popular artist, "Bob Dylan". LaSR (t = 0) performs worse than (t = 1), placing "Wilco" in position 1 -- the pair ("Bob Dylan", "Wilco") only appears 10 times in the test set, whereas ("Bob Dylan", "The Beatles") appears 40 times, and LaSR (t = 1) puts the latter in the top position. In general, t = 1 improves the top ranked items over t = 0, removing or demoting weak choices and promoting some better choices. For example, "Sonic Youth", which is a poor match, is demoted out of the top 20. The second query is a less popular artist, "Plaid", who make electronic music. Adding structure to LaSR again improves the results in this case by boosting relatively more popular bands like "Orbital" and "µ-ziq", and relatively more related bands like "Four-Tet" and "Squarepusher", whilst demoting some lesser known bands.

Table 3: Music Recommendation results for our method LaSR with (t = 1) and without (t = 0) using the structure. We show top ranked results for a popular query, "Bob Dylan" (folk rock music), and a less popular query, "Plaid" (electronic music). Total numbers of train and test pairs for given artist pairs are in square brackets, and totals for all artists shown are given in the last row. Artists where the two methods differ are labeled with an asterisk. Adding structure improves the results, e.g. unrelated bands like "Sonic Youth" are demoted in the "Bob Dylan" query, and relatively more popular bands like Orbital and µ-ziq and more related bands like Four-Tet and Squarepusher are boosted for the "Plaid" query.

Table 4: Image Annotation results comparing Wsabie with LaSR. The top 10 labels of each method are shown; the correct label (if predicted) is shown in bold. In many cases LaSR can be seen to improve on Wsabie by demoting bad predictions, e.g. war paint, soccer ball, segway, denture, rottweiler, reindeer, tv-antenna, leopard frog (one example from each of the first 8 images). In the last 3 images neither method predicts the right label (armrest, night snake and heifer), but LaSR seems slightly better (e.g. more cat, snake and cow predictions).

IMAGE ANNOTATION TASK

ImageNet (Deng et al., 2009) (http://www.image-net.org/) is a large scale image database organized according to WordNet (Fellbaum, 1998). WordNet is a graph of linguistic terms, where each concept node consists of a word or word phrase, and the concepts are organized within a hierarchical structure. ImageNet is a growing image dataset that attaches quality-controlled human-verified images to these concepts by collecting images from web search engines and then employing annotators to verify whether the images are good matches for those concepts, discarding them if they are not. For many nouns, hundreds or even thousands of images are labeled. We can use this dataset to test image annotation algorithms. We split the data into train and test and try to learn to predict the label (annotation) given the image. For our experiments, we downloaded the "Spring 2010" release, which consists of 9 million images and 15,589 possible concepts (this is a different set to (Weston et al., 2011), but our baseline results largely agree). We split the data into 80% for training, 10% for validation and 10% for testing.
Following (Weston et al., 2011), we employ a feature representation of the images which is an ensemble of several representations, which is known to perform better than any single representation within the set (see e.g. (Makadia et al., 2008)). We thus combined multiple feature representations which are the concatenation of various spatial (Grauman and Darrell, 2007) and multiscale color and texton histograms (Leung and Malik, 1999), for a total of about 5 × 10^5 dimensions. The descriptors are somewhat sparse, with about 50,000 non-zero weights per image. Some of the constituent histograms are normalized and some are not. We then perform Kernel PCA (Schoelkopf et al., 1999) on the combined feature representation using the intersection kernel (Barla et al., 2003) to produce a 1024-dimensional input vector for training.
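For illustration, the kernel PCA step could be realized as in the sketch below; scikit-learn is assumed here purely for convenience and is not necessarily what the authors used.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def intersection_kernel(X, Y):
    """Histogram intersection kernel: K[i, j] = sum_k min(X[i, k], Y[j, k])."""
    return np.array([[np.minimum(x, y).sum() for y in Y] for x in X])

def embed_features(features, n_components=1024):
    """Project combined histogram features to a compact input vector."""
    K = intersection_kernel(features, features)
    kpca = KernelPCA(n_components=n_components, kernel="precomputed")
    return kpca.fit_transform(K)
```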
We compare our proposed approach to several baselines: one-versus-rest large margin classifiers (One-vs-Rest) of the form $f_i(x) = w_i^\top x$ trained online to perform classification over the 15,589 classes, or the same models trained with a ranking loss instead, which we refer to as Rank SVM, as it is an online version of the pairwise multiclass (ranking) loss of (Weston and Watkins, 1999; Crammer and Singer, 2002). Finally, we compare to Wsabie, an (unstructured) latent ranking method which has yielded state-of-the-art performance on this task. For all methods, hyperparameters are chosen via the validation set.
Results
The overall results are given in Table 5. One-vs-Rest performs relatively poorly (2.83% recall@1), perhaps because there are so many classes (over 15,000) that the classifiers are not well calibrated to each other (as they are trained independently). The multiclass ranking loss of Rank SVM performs much better (5.35% recall@1) but is still outperformed by Wsabie (8.39% recall@1). Wsabie uses the WARP loss to optimize the top of the ranked list, and its good performance can be explained by the suitability of this loss function for measures like recall@k. LaSR with t = 0 is essentially identical to Wsabie in this case, and so we use that model as our "base learner" for iteration 0. LaSR (t = 1), which does use structure, outperforms Wsabie with a recall@1 of 9.45%. Some example annotations are given in Table 4. LaSR seems to provide more consistent results than Wsabie on several queries (with fewer bad predictions in the top k), which improves the overall results, whilst maintaining the right level of diversity on others.
CONCLUSION
In this paper we introduced a method for learning a latent variable model that takes into account the structure of the predicted ranked list of items given the query. The approach is quite general and can potentially be applied to recommendation, annotation, classification and information retrieval tasks. These problems often involve millions of examples or more, both in terms of the number of training pairs and the number of items to be ranked. Hence, many otherwise straightforward structured prediction approaches might not be applicable in these cases. The method we proposed is scalable to these tasks.
Future work could apply latent structured ranking to more applications, for example in text document retrieval. Moreover, it would be interesting to explore using other algorithms as the "base algorithm" which we add the structured predictions to. In this work, we used the approach of (Weston et al., 2011) as our base algorithm, but it might also be possible to make structured ranking versions of algorithms like Non-negative matrix factorization, Latent Semantic Indexing or Singular Value Decomposition as well.
Table 1: Recommendation results on the music recommendation task. We report recall at 5, 10, 30 and 50 for our method and several baselines.

Method        | R@5   | R@10   | R@30  | R@50
NMF           | 3.76% | 6.38%  | 13.3% | 17.8%
SVD           | 4.01% | 6.93%  | 13.9% | 18.5%
LaSR (t = 0)  | 5.60% | 9.49%  | 18.9% | 24.8%
LaSR (t = 1)  | 6.65% | 10.73% | 20.1% | 26.7%
LaSR (t = 2)  | 6.93% | 10.95% | 20.3% | 26.5%
Table 2: Changing the embedding size on the music recommendation task. We report R@5 for various dimensions n.

Method        | n = 25 | n = 50 | n = 100 | n = 200
NMF           | 2.82%  | 3.76%  | 3.57%   | 4.82%
SVD           | 3.61%  | 4.01%  | 4.53%   | 5.28%
LaSR (t = 0)  | 5.23%  | 5.60%  | 6.24%   | 6.42%
We used the "Last.fm Dataset -1K users" dataset
available from http://www.dtic.upf.edu/ ∼ ocelma/
MusicRecommendationDataset/lastfm-1K.html.
This dataset contains (user, timestamp, artist, song)
tuples collected from the Last.fm (www.lastfm.com)
API, representing the listening history (until May
5th, 2009) for 992 users and 176,948
Table 5: Summary of test set results on ImageNet. Recall at 1, 5, 10, Mean Average Precision and Mean Rank are given.

Algorithm    | r@1   | r@5   | r@10  | MAP   | MR
One-vs-Rest  | 2.83% | 8.48% | 13.2% | 0.065 | 667
Rank SVM     | 5.35% | 14.1% | 19.3% | 0.102 | 804
Wsabie       | 8.39% | 19.6% | 26.3% | 0.144 | 626
LaSR (t = 1) | 9.45% | 22.1% | 29.1% | 0.161 | 523
REFERENCES

Bai, B., Weston, J., Grangier, D., Collobert, R., Sadamasa, K., Qi, Y., Cortes, C., and Mohri, M. (2009). Polynomial semantic indexing. In Advances in Neural Information Processing Systems (NIPS 2009).

Bakir, G. H., Hofmann, T., Schölkopf, B., Smola, A. J., Taskar, B., and Vishwanathan, S. V. N. (2007). Predicting Structured Data (Neural Information Processing). The MIT Press.

Barla, A., Odone, F., and Verri, A. (2003). Histogram intersection kernel for image classification. International Conference on Image Processing (ICIP), 3, III-513-16 vol. 2.

Bell, R., Koren, Y., and Volinsky, C. (2009). The BellKor solution to the Netflix prize.

Besag, J. (1986). On the statistical analysis of dirty pictures. Journal of the Royal Statistical Society, Series B, 48, 259-302.

Billsus, D. and Pazzani, M. (1998). Learning collaborative information filters. In Proceedings of the Fifteenth International Conference on Machine Learning, volume 54, page 48.

Blei, D. M., Ng, A., and Jordan, M. I. (2003). Latent Dirichlet allocation. The Journal of Machine Learning Research, 3, 993-1022.

Crammer, K. and Singer, Y. (2002). On the algorithmic implementation of multiclass kernel-based vector machines. The Journal of Machine Learning Research, 2, 265-292.

Daumé, H., Langford, J., and Marcu, D. (2009). Search-based structured prediction. Machine Learning, 75(3), 297-325.

Daumé III, H. and Marcu, D. (2005). Learning as search optimization: Approximate large margin methods for structured prediction. In Proceedings of the 22nd International Conference on Machine Learning, pages 169-176. ACM.

Deerwester, S., Dumais, S. T., Furnas, G. W., Landauer, T. K., and Harshman, R. (1990). Indexing by latent semantic analysis. JASIS.

Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009). ImageNet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition.

Fellbaum, C., editor (1998). WordNet: An Electronic Lexical Database. MIT Press.

Grauman, K. and Darrell, T. (2007). The pyramid match kernel: Efficient learning with sets of features. Journal of Machine Learning Research, 8, 725-760.

Herbrich, R., Graepel, T., and Obermayer, K. (2000). Large margin rank boundaries for ordinal regression. Advances in Large Margin Classifiers, 88(2), 115-132.

Hofmann, T. (1999). Probabilistic latent semantic indexing. In SIGIR 1999, pages 50-57.

Joachims, T. (2002). Optimizing search engines using clickthrough data. In Proceedings of the Eighth ACM SIGKDD, pages 133-142. ACM.

Lacoste-Julien, S., Taskar, B., Klein, D., and Jordan, M. (2006). Word alignment via quadratic assignment. In North American Chapter of the Association for Computational Linguistics (NAACL).

Lafferty, J., McCallum, A., and Pereira, F. (2001). Conditional random fields: Probabilistic models for segmenting and labeling sequence data.

Le, Q. and Smola, A. (2007). Direct optimization of ranking measures. Technical report, National ICT Australia.

Lee, D. and Seung, H. (2001). Algorithms for non-negative matrix factorization. Advances in Neural Information Processing Systems, 13.

Leung, T. and Malik, J. (1999). Recognizing surfaces using three-dimensional textons. In Proc. of 7th Int'l Conf. on Computer Vision, Corfu, Greece.

Makadia, A., Pavlovic, V., and Kumar, S. (2008). A new baseline for image annotation. In European Conference on Computer Vision (ECCV).

Qin, T., Liu, T., Zhang, X., Wang, D., and Li, H. (2008). Global ranking using continuous conditional random fields. In Proceedings of the Twenty-Second Annual Conference on Neural Information Processing Systems (NIPS 2008).

Raman, K., Shivaswamy, P., and Joachims, T. (2012). Learning to diversify from implicit feedback. In WSDM Workshop on Diversity in Document Retrieval.

Rocchio, J. (1971). Relevance feedback in information retrieval.

Schoelkopf, B., Smola, A. J., and Müller, K. R. (1999). Kernel principal component analysis. Advances in Kernel Methods: Support Vector Learning, pages 327-352.

Tsochantaridis, I., Hofmann, T., Joachims, T., and Altun, Y. (2004). Support vector machine learning for interdependent and structured output spaces. In International Conference on Machine Learning (ICML), pages 104-112.

Usunier, N., Buffoni, D., and Gallinari, P. (2009). Ranking with ordered weighted pairwise classification. In L. Bottou and M. Littman, editors, ICML, pages 1057-1064, Montreal. Omnipress.

Volkovs, M. and Zemel, R. (2009). BoltzRank: Learning to maximize expected ranking gain. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 1089-1096. ACM.

Weiss, D., Sapp, B., and Taskar, B. (2010). Sidestepping intractable inference with structured ensemble cascades. Advances in Neural Information Processing Systems, 23.

Weston, J. and Watkins, C. (1999). Support vector machines for multi-class pattern recognition. In Proceedings of the Seventh European Symposium on Artificial Neural Networks, volume 4, pages 219-224.

Weston, J., Bengio, S., and Usunier, N. (2011). Large scale image annotation: Learning to rank with joint word-image embeddings. In International Joint Conference on Artificial Intelligence (IJCAI).

Weston, J., Bengio, S., and Hamel, P. (2012). Large-scale music annotation and retrieval: Learning to rank in joint semantic spaces. Journal of New Music Research.

Xia, F., Liu, T.-Y., Wang, J., Zhang, W., and Li, H. (2008). Listwise approach to learning to rank: Theory and algorithm. In Proceedings of the 25th International Conference on Machine Learning.

Yue, Y., Finley, T., Radlinski, F., and Joachims, T. (2007a). A support vector method for optimizing average precision. In SIGIR, pages 271-278.

Yue, Y., Finley, T., Radlinski, F., and Joachims, T. (2007b). A support vector method for optimizing average precision. In Proceedings of the 30th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 271-278.
DIRECT, a low-cost system for high-speed, low-noise imaging of fluorescent biological samples

Isabell Whiteley* (Department of Bioengineering and Centre for Neurotechnology, Imperial College London, London, UK), Chenchen Song (Department of Brain Sciences, Imperial College London, London, UK), Glenn A. Howe (Department of Bioengineering, Imperial College London, London, UK), Thomas Knöpfel (Centre for Neurotechnology and Department of Brain Sciences, Imperial College London, London, UK), and Christopher J. Rowlands (Department of Bioengineering and Centre for Neurotechnology, Imperial College London, London, UK)

Abstract: A targeted imaging system has been developed for applications requiring recording from stationary samples at high spatiotemporal resolutions. It works by illuminating regions of interest in rapid sequence and recording the signal from the whole field of view onto a single photodetector, and can be implemented at low cost on an existing microscope without compromising existing functionality. The system is characterized in terms of speed, spatial resolution, and tissue penetration depth, before being used to record individual action potentials from ASAP3-expressing neurons in an ex vivo mouse brain slice preparation.
Introduction
There are many advantages to using optical microscopy to investigate biological systems such as cells, tissues and superficial organs. It is fast, can resolve single cells and sub-surface features, and causes little damage to the sample under investigation. Nevertheless, cutting-edge biological applications have ever-more-stringent requirements in terms of speed, limits on photobleaching and tolerance of tissue scattering, all while maintaining the spatial resolution and field of view (FOV) that researchers have come to expect from their microscopes.
Achieving these performance improvements in the general case (for which the structure of the sample is unknown) is an enormous engineering challenge, but in a subset of applications, what matters is not the structure itself, but how it changes over time. This is particularly true in the case of fluorescent sensors, in which the cell does not move on the timescale of the experiment, but does change in emission wavelength, intensity or fluorescence lifetime. For example, neuroscientists increasingly use optical techniques to monitor calcium ion concentrations and even membrane voltage potentials in populations of neurons, by genetically-expressing a fluorescent probe and monitoring the resulting signal using a microscope. Many biological fields use Förster resonance energy transfer (FRET) sensors which work by non-radiative transfer of energy between a donor and an emitter fluorophore (or quencher) causing a change in fluorescence level. FRET sensors can respond very rapidly to external conditions [1]; they are widely used to monitor protein interactions and conformational changes [2], metal ions [3], metabolites and biomarkers [4] and for fluorescence lifetime imaging [5] in live in vivo samples. As many of these applications require long-term monitoring, they require optical techniques that can minimize photobleaching, improve temporal resolution (as high as ~1kHz in the case of some voltage sensors [6]) and reduce the impact of tissue scattering.
So high is the demand for high-speed, targeted systems that a number of techniques have recently been developed that are able to optically target features such as individual neurons within a labelled population, while also achieving improved spatial and/or temporal resolutions. Multiphoton excitation is commonly used in these applications; to increase laser efficiency and maximize the number of neurons targeted, some methods split a two-photon laser beam into beamlets using a spatial light modulator (SLM) [7,8] or with a microlens array [9] and then scan the beamlets across the sample using galvanometric (galvo) scanning mirrors and record the activity with a camera (EM-CCD [7] or sCMOS [8,9]). These scanning setups report reduced photodamage to the samples but are limited in their scanning rates due to the mechanical galvo mirrors and limited camera frame-rates. Other methods, such as two-photon FACED microscopy, also split the laser into beamlets (this time distributed into a tilted line), recording the fluorescence signal onto a photomultiplier tube sampling at hundreds of megasamples per second [10,11]. Nevertheless, the frame rate is still limited by the need to scan the tilted line of foci using a galvo mirror, necessitating a trade-off between temporal and spatial resolutions.
SLMs can also be used in combination with computer generated holography (CGH) directly (i.e. without galvo scanning). This has primarily been used in two-photon optogenetic targeting [12,13]; however, these examples are limited in the number of targets that can be addressed in the same experiment, and the temporal resolution of the systems does not reach the speed necessary for imaging fluorescent voltage sensors and other high-speed probes. SLM-based CGH was also used in single-photon imaging of neurons containing a voltage dye [14].
Here the SLM was able to achieve high-quality lateral (single cell) and axial (~10µm) confinement and reduced baseline fluorescence and photodamage. It recorded onto a sCMOS camera at high speed but required post-processing to remove potential temporal distortions caused by the high-speed imaging rates of the camera. Additionally, only a single target was recorded from during each experiment, and no solution was offered to scale to multiple targets. SLMs, through numerous methods, are able to generate spatially precise targeting; however, they are temporally limited and costly to implement. Temporal focusing, on the other hand, can be used to precisely target thousands of individual neurons at sampling rates up to 160Hz [15]. Though it enables spatially precise lateral and axial resolution, the limited temporal resolution of the highlighted system prevents its use in optical targeting experiments where there are rapid fluctuations in fluorescence in a densely populated or scattering sample; furthermore, as the number of targets is increased, the temporal resolution must be further decreased. Though high-speed multiphoton targeting has been applied over large volumes, the lasers required are fragile and expensive; thus the use of single-photon excitation would be advantageous.
Spatially precise targeting has also been achieved in neurons with a digital micromirror device (DMD), for both optogenetic activation [16] and imaging with genetically encoded voltage indicators (GEVIs) [17]. In both situations, activation and imaging of neurons, the DMD provided precise spatial resolution and reduced photobleaching, but its high speeds were not utilized to increase temporal resolution: in both cases, a single static frame was projected by the DMD to illuminate all targets, rather than switching between targets at high speed. Using a camera to record the activity of the targeted regions is the spatially and temporally limiting factor in GEVI imaging experiments: only the neurons that the camera can see can be targeted, and pixel binning is required to achieve frame rates fast enough to record high-speed fluorescence changes.
Analyzing the techniques above, there is a clear need for a low-cost method for recording changes in fluorescent probes with high sensitivity and ~500-1000Hz bandwidth, while ideally preserving the favorable photobleaching characteristics and tolerance to tissue scattering that existing methods provide compared to simple wide-field microscopy. To address these limitations we present DMD Imaging with Rapid Excitation and Confined Targeting (DIRECT), a method for the high-speed monitoring of temporally-varying fluorescence signals based on strong structural priors. It uses a high-speed DMD to project patterns onto the sample in rapid sequence, recording the fluorescence emission from each projected pattern using a single point detector. The high speed of the DMD permits more than a dozen regions of interest (ROIs) to be probed with effective frame rates of more than a thousand measurements per second. This preserves the spatial resolution (the ability to distinguish one cell precisely from its neighbor) while allowing high-bandwidth recording of fluorescence changes. DIRECT exhibits minimal photobleaching, significantly higher speeds, reduced read noise, less onerous requirements for high-speed data storage, and increased tolerance to tissue scattering relative to conventional wide-field imaging or DMD targeting using a camera, while remaining low-cost, easy to implement and compatible with almost all existing microscope systems.
In this paper we characterize the performance of DIRECT, in particular the recording speed, resistance to sample scattering, and photobleaching reduction. We discuss the design decisions made in its creation, paying particular attention to the choice of detector. Finally, we demonstrate DIRECT on a number of test samples, culminating in the imaging of ASAP3-expressing neurons in an ex vivo mouse brain slice.
Results and Discussion
DIRECT has a number of advantages over conventional camera-based widefield imaging. Here we characterize the speed of recording, tolerance to photobleaching, and tolerance to scattering. We also present an analysis of the achievable resolution, as well as a comparison of different single-point detectors to establish which is most suitable for use in DIRECT. Finally we show how DIRECT has been used to record membrane voltage fluctuations in ex vivo mouse brain samples.
Design
DIRECT ( Fig. 1) was designed to be integrated into standard upright microscopes using widely available, off-the-shelf parts and with minimal disruption to the existing optical pathways in the microscope. It fits into the infinity path of the microscope, between the microscope objective and the tube lens. A custom-machined interface part was used to fix DIRECT's optical pathway to the microscope. This part incorporates Olympus microscope dovetails with a Thor Labs removable filter cube holder (computer aided design (CAD) model available) but can be easily adapted to dovetails from other microscope manufacturers.
A DMD (a type of SLM consisting of an array of micromirrors) is placed conjugate to the sample plane in DIRECT's optical pathway. Binary masks of the desired targets are loaded and displayed by the DMD in quick succession (up to 22kHz frame rate) and projected onto the sample. DMDs provide advantages over other light-shaping devices such as liquid crystal SLMs due to their fast refresh rates (allowing for many target regions to be imaged sequentially) and relatively low cost. After the shaped light reflects off the sample, the resulting fluorescence information is captured by a single-point detector, such as a photodiode, photomultiplier tube (PMT), or silicon photomultiplier (SiPM); a camera is also available for more conventional imaging. A theoretical comparison of different detectors is given in the next section.

Fig. 1 Imaging pathway for targeted illumination. Light from a 488nm laser passes through a beam expander before striking a DMD. The reflected light is projected through a 4f lens system (L1 and L2) and enters the microscope through a tube lens. The light reflects off a dichroic mirror (DM) and through the objective onto the sample. The emitted light returns through the objective and DM, then passes through an emission filter and tube lens. The light is recorded by a detector (PMT, photodiode, SiPM, or camera). A removable cube containing a prism mirror is located after the beam expander to permit illumination with an LED if desired.
Detector comparison
DIRECT can use a variety of different detector types, including photodiodes, photomultipliers, silicon photomultipliers (SiPMs, also known as Multi-Pixel Photon Counters, or MPPCs), avalanche photodiodes and so on. Several of these were available in the laboratory, and their specified dynamic range, sensitivity and speed were compared in order to establish their utility under different imaging conditions. Neurobiological experiments in particular have exacting requirements: a very small change in fluorescence must be recorded at high speed (~1% change in the signal intensity at 1ms per frame [18]). Because fluorescence measurements are ultimately shot-noise limited, the probability distribution of the detected signal is given by a Poisson distribution, the signal-to-noise ratio (SNR) of which can be approximated by $\sqrt{N}$ (where $N$ is the number of detected events, detected photons in this case). Thus the detector must be able to capture at least 10,000 photons to achieve a SNR of 1:1 for a change in fluorescence ($\Delta F/F$) of 1%. Obviously, greater values are preferable.
To compare detectors under various conditions, it is necessary to know the expected photon count per pixel required to overcome the experimental noise. The total noise (defined as the mean deviation of a measurement from its true value) can be approximated as the combination of the detector dark noise and the photon shot noise of the overall measurement, summed in quadrature:
$$\sigma_{total} = \sqrt{\sigma_{shot}^2 + \sigma_{dark}^2} \qquad \text{Eq. 1}$$

Since the signal is expressed as a fractional change in fluorescence $\Delta F_\%$, the total number of fluorescence photons $N = \sigma_{total}/\Delta F_\%$ for a SNR of 1. If the user desires a higher SNR, this expression must be multiplied by the relevant factor. Thus

$$N = \frac{\sigma_{total}}{\Delta F_\%} \qquad \text{Eq. 2}$$

Finally, because $\sigma_{shot}$ is the standard deviation of the shot noise of the measurement (which can be approximated as $\sigma_{shot} = \sqrt{N}$), the following expression can be obtained:

$$N = \frac{\sqrt{N + \sigma_{dark}^2}}{\Delta F_\%} \qquad \text{Eq. 3}$$

which can be rearranged into the form of a quadratic:

$$(\Delta F_\%)^2 N^2 - N - \sigma_{dark}^2 = 0 \qquad \text{Eq. 4}$$

This can be solved using the well-known quadratic formula

$$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} \qquad \text{Eq. 5}$$

Substituting $x = N$, $a = (\Delta F_\%)^2$, $b = -1$ and $c = -\sigma_{dark}^2$, and ignoring the negative solution resulting from subtracting the square root term:

$$N = \frac{1 + \sqrt{1 + 4\,(\Delta F_\%)^2\, \sigma_{dark}^2}}{2\,(\Delta F_\%)^2} \qquad \text{Eq. 6}$$

For comparison, the same measurement can also be performed using a camera. The noise levels of a camera are slightly different though; because camera pixels have limited well depth $W$, and per-pixel read noise $\sigma_{read}$ is approximately constant regardless of integration time, many pixels must be summed together in order to reach the necessary photon counts; the total noise is therefore the sum of the noise from each pixel in quadrature. The number of pixels needed to capture a total of $N$ photons is, of course, heavily dependent on the spatial distribution of the fluorescence signal, but the lower bound is simply $N/W$. Consequently, a lower bound on $\sigma_{dark}$ can be given as:

$$\sigma_{dark} = \sqrt{\frac{N}{W}\, \sigma_{read}^2} \qquad \text{Eq. 7}$$

For a camera, the quadratic expression therefore simplifies to

$$(\Delta F_\%)^2 N^2 - N - \frac{N}{W}\sigma_{read}^2 = (\Delta F_\%)^2 N^2 - N\left(1 + \frac{\sigma_{read}^2}{W}\right) = 0 \qquad \text{Eq. 8}$$

Ignoring the trivial solution $N = 0$, the required number of captured photons for a camera is thus given by:

$$N = \frac{1 + \sigma_{read}^2/W}{(\Delta F_\%)^2} \qquad \text{Eq. 9}$$
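For illustration, Eq. 6 and Eq. 9 can be implemented directly, as in the Python sketch below (ours); the detector parameter values shown are placeholders rather than the values from Table 1.

```python
import numpy as np

def photons_single_point(dF, sigma_dark):
    """Eq. 6: photons required for SNR = 1 with a single-point detector."""
    return (1 + np.sqrt(1 + 4 * dF**2 * sigma_dark**2)) / (2 * dF**2)

def photons_camera(dF, read_noise, well_depth):
    """Eq. 9: lower bound on photons for a camera (pixels filled to full well)."""
    return (1 + read_noise**2 / well_depth) / dF**2

# Example: a 1% fluorescence change with hypothetical detector parameters.
print(photons_single_point(0.01, sigma_dark=100.0))              # ~1.6e4 photons
print(photons_camera(0.01, read_noise=2.0, well_depth=30000.0))  # ~1.0e4 photons
```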
The results are summarized in Fig. 2; all cases describe a system in which integrated photon counts from 10 different areas must be recorded at 1000fps. A detailed description of each detector can be found in Table 1. For cases where the fractional change in fluorescence $\Delta F_\%$ is low (on the order of 1%, consistent with early fluorescent voltage sensors like ArcLight [19]), there are no appreciable differences between detectors. When the fractional change in fluorescence reaches ~5%, the performance of the PMT and SiPM are broadly equivalent, but there is a more pronounced performance deficit for the photodiode. Once the fractional change in fluorescence reaches 50%, however (consistent with genetically-encoded calcium indicators such as jGCaMP7 [20]), the PMT has a clear performance advantage. Note that using a camera does not appear to improve sensitivity over that of a PMT (especially since real fluorescence distributions will be substantially more varied than the ideal case modelled), and in fact performs slightly worse than the PMT (although this performance deficit is reduced for higher $\Delta F_\%$).

Fig. 2 Performance comparison between detectors under different imaging conditions. Simulated frame rate is 1000fps, and there are 10 independent illuminated areas. For cases where the fractional change in fluorescence is small, photon shot noise dominates and all detectors perform similarly. In cases where the fractional change is large, however, detector read noise is significant and detector performance varies. Note also that the number of photons required to overcome photon shot noise decreases as the fractional change in fluorescence increases, as expected.
Projection speed
The number of ROIs that can be measured and the rate at which these measurements can be taken is controlled directly by the frame rate of the DMD. The frame exposure time for each mask on the DMD was manually adjusted within the software used to run the experiments. The minimum exposure time at which the DMD was used was 100µs per frame (10kHz frame rate), as the DMD firmware became unstable at higher rates, though it is possible to achieve a frame rate of up to 22kHz. The inter-frame time (i.e. the time between the masks being displayed while the DMD was switching) was measured. At frame exposure times of 100µs, 1000µs, 10000µs, and 100000µs, the inter-frame time between masks was 60µs. This time is included in the frame exposure times of the DMD. Over the course of a 10 second trial targeting 10 regions, and at a frame rate of 10kHz, each region is targeted at a 1kHz rate and only 6% of the experimental time per sample is taken by the DMD switching between masks. This implies that fluorescence signals with a bandwidth of 500Hz (after considering Nyquist sampling) can be recorded, which comfortably exceeds the bandwidth of most fluorescent indicators in biological samples [6].
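The timing budget above can be checked with simple arithmetic; the following sketch (ours, for illustration) computes the effective per-ROI sampling rate and the resolvable (Nyquist) bandwidth.

```python
def roi_timing(n_rois, dmd_frame_rate_hz=10_000):
    """Effective per-ROI sampling rate and resolvable (Nyquist) bandwidth."""
    per_roi_rate_hz = dmd_frame_rate_hz / n_rois
    return per_roi_rate_hz, per_roi_rate_hz / 2

print(roi_timing(10))  # -> (1000.0, 500.0): 1 kHz per ROI, 500 Hz bandwidth
```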
Resolution and scattering tolerance
Because the system was optimized such that the DMD image substantially covered the field of view (rather than to maximize spatial resolution at the cost of a reduced field of view), the achievable resolution was limited by the size of the projected image of the DMD mirrors. This was first tested under ideal, non-scattering conditions. To test whether the relay optics were sufficient to maintain the resolution set by the DMD pixel size, a thin sample consisting of a monolayer of 100nm fluorescent beads was prepared. Single-mirror-width lines were projected onto the sample; images were captured of a single line, two adjacent lines, two lines separated by one row of off mirrors, and finally two lines separated by two rows of off mirrors (Fig. 3 A-D). The Sparrow limit was selected as the resolution criterion, as it is more stringent than the more common Rayleigh criterion [21] while being simple and unambiguous to assess. The Sparrow limit is reached when the dip in light intensity between two points can no longer be resolved; it assumes uniform light intensity for both points. In the targeting resolution experiments, the two projected lines adjacent to each other, with no off mirrors in between, are resolvable by the Sparrow criterion, as the two separate intensity peaks are clearly visible (Fig. 3 B,F). The gap that is resolved is the gap between the mirrors of the DMD; thus we can conclude that the resolution of DIRECT is limited only by the mechanical design of the DMD.
The average pixel intensity of the projected lines was measured (Fig. 3 E-H), and the full-width, half-maximum (FWHM) of the plots was acquired and compared to the theoretical FWHMs of a diffraction-limited system. For a single row of mirrors, the measured FWHM was 1.69µm, while the theoretical value was 0.71µm at the sample plane. The acquired FWHMs for the other instances were 2.60µm, 3.23µm, and 3.80µm respectively. Across the different images, we found the FWHM of the acquired data to be on average 1.04µm ± 0.094µm wider than the theoretical FWHMs. We believe that much of the increase in width is due to light scatter from the fluorescent bead sample.

DIRECT was designed to be used for biological tissues and therefore needs to be able to target ROIs through densely scattering samples. Because all scattered photons are integrated onto a single detector (so scattering of emission photons has negligible effect), DIRECT is expected to have heightened tolerance to tissue scattering compared to techniques like confocal microscopy, which are sensitive to scattering of the emission photons. Nevertheless, because the optical properties of tissue samples are difficult to control, scattering phantoms with controllable levels of light scattering were used as a proxy. As in the above targeting resolution experiment, lines were projected by the DMD at different separation distances and imaged onto a thin layer of fluorescent microspheres. Above the fluorescent spheres, dilutions of Intralipid 20% were used as phantoms to represent different levels of scattering [22] (Fig. 4). The upright microscope's existing epifluorescence camera was supplemented by a second camera in a transmission geometry. Briefly, a custom-machined adaptor replaced the condenser, allowing an objective to be placed (in a geometry reminiscent of an inverted microscope) under the sample, imaging the fluorescent microspheres. An elliptical mirror in the adaptor enabled the light from the "inverted" objective to be projected through a tube lens onto a camera (Fig. 4).
In densely scattering situations, DIRECT was still able to achieve precise targeting resolution even when the reflected image was not visible through the scattering. When used in combination with single-point detectors such as photodiodes or PMTs, DIRECT can accurately target and record from ROIs whose exact shape is unknown (see section 2.2).
Photobleaching
Photobleaching is a persistent issue in many biological experiments [23]. To reduce it, the light source power can be reduced, but this can lead to an increase in background noise. In a shot-noise-limited system, the noise scales as approximately the square root of the signal intensity, so decreasing the excitation power reduces the signal-to-noise ratio. Additionally, regions that are not actively being recorded from are still exposed to light and subject to photobleaching, which is damaging to the sample and can limit experimental utility. Because DIRECT does not illuminate cells that are not being recorded from, it should substantially reduce off-target photobleaching.
Photobleaching rates for widefield fluorescence and DIRECT were compared. A series of images were captured over 20 minutes using both techniques: uniform illumination in the case of widefield fluorescence imaging, or switching between ROIs in the case of DIRECT. The average pixel value of each captured image or targeted region within the image was calculated and normalized to the exposure time, and then exponential decay curves were fitted to the averaged data ( Fig. 6). At the same laser power and switching between three ROIs (~ 55 µm diameter circles) at 100 frames per second, DIRECT has an approximate twofold improvement in rate of decay compared to widefield imaging (decay rate of DIRECT = -2.34×10 -3 , decay rate of widefield = -4.63×10 -3 ) (Fig. 6A,C). When the laser power was increased by a factor of three to account for the reduced amount of time DIRECT spends illuminating each target, we unexpectedly found that the decay rate was still approximately two times better than widefield (decay rate of DIRECT at increased power = -1.98×10 -3 ) (Fig. 6B). Speculation on the mechanism of this improved tolerance to on-target photobleaching is beyond the scope of this paper, but nevertheless provides further support for the benefits of using DIRECT.
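A sketch (ours) of the decay-curve fitting: a single exponential with an offset is assumed here, since the paper does not state the exact model used.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_decay_rate(t, intensity):
    """Fit I(t) = A * exp(k * t) + c and return the decay rate k (k < 0)."""
    def model(t, A, k, c):
        return A * np.exp(k * t) + c
    p0 = (intensity[0] - intensity[-1], -1e-3, intensity[-1])  # rough initial guess
    (A, k, c), _ = curve_fit(model, t, intensity, p0=p0)
    return k
```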
Voltage Imaging
While DIRECT can be used for a variety of applications, it is particularly suited to the high-speed, high-dynamic-range measurements needed for imaging genetically-encoded fluorescent voltage indicators in brain tissue. For this demonstration, DIRECT was used to selectively target and record from multiple neurons simultaneously in an ex vivo mouse brain sample containing ASAP3-expressing neurons; ASAP3 is a genetically-encoded voltage sensor which changes fluorescence in response to changes in cell membrane potential [24]. While many genetically-encoded voltage sensors operate in this manner, typically the actual fluctuations are small and the sensors are prone to bleaching [18]. They therefore require low background noise and high-speed imaging to resolve the changes. The imaging system must also achieve a high spatial resolution to avoid bleaching the voltage-indicator-containing neurons that are not being recorded from. DIRECT is an ideal system for recording from voltage indicators, as its high speeds can capture the fluctuations in fluorescence and its targeting cuts background noise and off-target bleaching, while also reducing on-target photobleaching by decreasing the time the light spends on each target.

DIRECT vs widefield imaging. Before recording data at high speed, it is illustrative to assess just the effect of the targeted ROIs, using a camera rather than a single-point detector. ASAP3-expressing neurons across the field of view of the camera were targeted and a mask containing the selected neurons was generated. The resulting image of the projected mask was compared to a widefield image taken of the same FOV. The images were normalized such that the targeted neurons had the same average intensity, and the background/non-targeted regions were compared (Fig. 7). In the widefield image, the target intensity was negligibly higher than the background intensity (target-to-background ratio = 0.9293), whereas the ratio of the intensity of the targeted region to the non-target region for DIRECT was much larger (target-to-background ratio = 6.7440). While the intensity of the light inside the ROIs was similar, the intensity of the background of the widefield image was much greater than that of the ROI image. By using DIRECT, the background noise of the image was drastically reduced while the target intensity was unchanged. This allows for recordings of voltage activity with less noise, while preserving other fluorescence-labelled neurons for further experimental recording.

Fluorescence activity detection. As a final demonstration, DIRECT was used to record electrically-induced action potentials in ASAP3-containing neurons. Masks of each targeted neuron were generated and projected onto the sample by the DMD for 100µs before switching to the next mask. The emitted photons were collected by a PMT. Ten neurons across the FOV were targeted (Fig. 8A) and the PMT recorded at 1MS/s; the high sampling rate enables accurate off-line synchronization of the measured data. This gave an effective sampling rate of 1kHz per neuron from the DMD. An electrode was placed into the sample in close proximity to the targeted neurons and a short electrical pulse was given, followed by 5 consecutive pulses 0.5 sec later. Activity traces of the 10 neurons were separated (Fig. 8B,E), and the ΔF/F and SNR for the first action potential were calculated.
The average SNR for the first action potential was 6.23 (σ = 1.59, n = 10 neurons) and the mean ΔF/F of the ASAP3 indicator during the first action potential was −7.62% (σ = 0.0098, n = 10 neurons) (Fig. 8C,D). The average intensity of each targeted neuron differed due to the different axial depths of the neurons; however, the signal-to-noise ratio was high enough to resolve the changes in fluorescence that occurred due to electrical stimulation, even at the lowest baseline fluorescence values. By utilizing DIRECT and switching between the different targets, the individual activity traces of each neuron could be easily separated, and single action potentials for each target were resolved. DIRECT combined with a PMT enabled neurons at different depths to be recorded from simultaneously, despite the precise shape of each neuron being unknown.
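For illustration, ΔF/F and SNR can be computed from a demultiplexed per-neuron trace as in the sketch below (ours); the choice of baseline window and the definition of noise as the baseline standard deviation are our own assumptions.

```python
import numpy as np

def df_f_and_snr(trace, baseline, event):
    """trace: 1-D per-neuron fluorescence samples (e.g. at 1 kHz).

    baseline, event: index slices. Returns (peak dF/F, SNR vs baseline noise).
    """
    f0 = np.mean(trace[baseline])
    df_f = (trace - f0) / f0
    noise = np.std(df_f[baseline])
    window = df_f[event]
    peak = window[np.argmax(np.abs(window))]  # signed peak (ASAP3 goes negative)
    return peak, abs(peak) / noise
```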
The total number of ROIs that can be recorded from during a single experiment without losing any activity information is determined by the minimum sample rate that can be used, which in turn is determined by the rise and fall times of the fluorescence indicators used. In the above experiments, the system is able to unambiguously resolve signals with a bandwidth of 500Hz (Nyquist sampling). If this sampling rate is retained, then DIRECT is able to target up to 22 ROIs (based on the DMD switching rate).
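The ROI budget follows directly from the numbers above; a minimal sketch of the arithmetic, using only the values quoted in the text:

```python
# ROI budget: how many targets can be multiplexed while still
# Nyquist-sampling each fluorescence trace.
dmd_switch_rate_hz = 22_000    # maximum DMD mask switching rate (22 kHz)
signal_bandwidth_hz = 500      # signal bandwidth to resolve unambiguously

per_roi_rate_hz = 2 * signal_bandwidth_hz            # Nyquist rate per ROI (1 kHz)
max_rois = dmd_switch_rate_hz // per_roi_rate_hz     # -> 22 ROIs

print(f"Each ROI needs {per_roi_rate_hz} Hz; up to {max_rois} ROIs fit.")
```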
Methods
Optical design
The DIRECT system is described as follows: light from a 488nm laser (Coherent Sapphire 488-200) passes through a 4.5× beam expander (Thorlabs A220TM-A and LBF254-050-A) and strikes a DMD (Vialux V-7001) located conjugate to the sample plane of the microscope. An alternative illumination path is also available, for light-emitting diode (LED) rather than laser illumination; after the beam expander but before the DMD, a mirror (Thorlabs MRA25-E02, placed in a removable cube: Thorlabs DFM1B-M and Thorlabs DFM1T2 cube insert) can be placed to switch to the LED.
The light reflected from the DMD passes through a 4f lens system (for characterization experiments: four Thorlabs LA1256-A lenses in Plössl pairs; for voltage imaging experiments: two Thorlabs LA1417-A lenses) onto an intermediate plane (where optionally a mask may be placed) and into a tube lens (Olympus SWTLU-C) before reflecting off a dichroic mirror (Semrock Di03-R488-t1-25x36 mounted in a Thorlabs DFM1/M removable filter cube, which is in turn mounted in a custom holder with Olympus dovetails) and through the objective (20x Olympus UPlanSApo for fluorescent bead experiments; either 16x Nikon CFI75 LWD 16X W or 60x Olympus LUMPLFLN60XW for voltage imaging experiments) onto the sample. The light emitted from the sample passes back through the dichroic and an emission filter (Semrock ff03-525/50-25), and is recorded by either a camera (Basler acA1920-155um or IDS U3-3080SE-M-GL), a photodiode recording at 1kHz (Photonics 0210007), a silicon photomultiplier (SiPM, Hamamatsu C13366-3050GA), or a PMT (Hamamatsu H9305-03). In all cases, signals from the non-camera detectors were digitized using an ADC module / oscilloscope (Pico Technology Picoscope 5444D).
DIRECT is intended to be inserted into the infinity path of a standard fluorescence microscope; in our demonstration, we inserted it between the built-in fluorescence illuminator (Olympus BX-RFAA) and the trinocular head (Olympus U-TR30-2) of an Olympus BX51WI or BX61 using a custom-machined dichroic holder with Olympus dovetails. CAD models for all custom parts are available at https://www.imperial.ac.uk/rowlands-lab/; an optical diagram can be seen in Fig. 1. The system was designed and optimized using Autodesk Inventor Professional 2023 and Zemax OpticStudio.
Experimental control and ROI generation
The DMD allows arbitrary binary patterns to be projected onto the sample at high speed. This can be used to project individual ROIs. A custom software interface written in LabVIEW 2018 calling the functions from the Vialux ALP4.3 Dynamic Link Library (DLL) was used to upload binary sequences to the DMD; ROIs and patterns could be generated, and the rate of switching between each frame of the sequence could be selected for each experiment.
For an ROI projection experiment (where a sequence of ROIs was illuminated in quick succession), a widefield image was taken of the full FOV of the sample with no binning. The picture was loaded into the software interface and regions of interest were hand-drawn around the selected targets. Each region of interest was converted into a binary mask and uploaded to the DMD. To confirm the accuracy of the ROIs, all masks were combined into a single mask, which was then projected onto the sample, and a picture was captured.
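The ROI-drawing tool itself is written in LabVIEW, but the mask-generation step is conceptually simple; the following is a hedged Python stand-in, not the actual software. The DMD resolution, the example ROI vertices, and the use of scikit-image are all assumptions for illustration:

```python
# Sketch: turn hand-drawn polygon ROIs into per-target binary masks at the
# DMD resolution, plus one combined mask for the accuracy-check projection.
import numpy as np
from skimage.draw import polygon  # assumes scikit-image is available

DMD_SHAPE = (768, 1024)  # (rows, cols) of the DMD mirror array (placeholder)

def roi_to_mask(vertices_rc, shape=DMD_SHAPE):
    """vertices_rc: list of (row, col) polygon vertices for one ROI."""
    mask = np.zeros(shape, dtype=bool)
    rows = [v[0] for v in vertices_rc]
    cols = [v[1] for v in vertices_rc]
    rr, cc = polygon(rows, cols, shape=shape)
    mask[rr, cc] = True
    return mask

rois = [[(100, 100), (100, 140), (140, 140), (140, 100)],   # example ROI 1
        [(300, 500), (300, 540), (340, 520)]]               # example ROI 2
masks = [roi_to_mask(v) for v in rois]     # one mask per ROI, uploaded in sequence
combined = np.logical_or.reduce(masks)     # single mask to verify targeting
```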
Synchronization
Because the DMD could not be synchronized to the photodetector / ADC clock (or vice versa), synchronization between the two had to be performed offline. A start trigger was used to initialize the projection of the pattern sequence, after which the DMD and ADC board then ran asynchronously. Because the DMD was reset between frames (i.e. all mirrors returned to an 'off' position) the signal on the detector dropped to zero every frame, before rapidly increasing to a nonzero value, acting as a built-in clock. The ADC sampled the signal much faster than this clock, and could therefore observe the rapid increase and decrease. A custom MATLAB script was used to recover the clock (also compensating for slow drift between the DMD and ADC clocks); the script could optionally take an average of each 'on' signal for denoising and data reduction purposes.
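The published clock-recovery script is in MATLAB; a minimal Python sketch of the same idea is shown below. It assumes the detector trace drops near zero at every DMD reset and averages each 'on' segment to one value per frame; the threshold fraction is an assumption:

```python
# Sketch of offline synchronization: recover frame boundaries from the
# built-in "clock" (the near-zero dips at each DMD reset), then average
# each 'on' segment for denoising and data reduction.
import numpy as np

def recover_frames(signal, thresh_frac=0.1):
    """Split an ADC trace into per-frame means using the reset dips."""
    thresh = thresh_frac * np.max(signal)
    off = signal < thresh                            # samples inside the dips
    starts = np.flatnonzero(off[:-1] & ~off[1:]) + 1 # dip ends -> frame starts
    ends = np.flatnonzero(~off[:-1] & off[1:]) + 1   # dip begins -> frame ends
    frames = []
    for s in starts:
        e = ends[ends > s]
        if e.size == 0:
            break
        frames.append(signal[s:e[0]].mean())         # one value per frame
    return np.asarray(frames)
```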
Projection Speed
Masks of differing sizes were projected by the DMD at frame exposure times of 100µs, 1000µs, 10000µs, and 100000µs onto a fluorescent sample. Fluorescence was detected using a photodiode (Laser components Ltd Photoreceiver LCA-S-400K-SI) and oscilloscope (Picoscope 2204) sampling at 100kS/s. Mask switching was detected by identifying troughs between changing voltage levels as described previously.
Photobleaching
To assess on- and off-target photobleaching, a sample consisting of a close-packed array of 10× diluted 100nm fluorescent microspheres (Fluoresbrite YG Carboxylate Microspheres 0.10µm) was placed in the field of view. A reference image was taken of the whole field of view with all pixels illuminated. ROIs consisting of three ~55µm diameter circles were projected onto the sample in quick succession (10kHz), with the sequence repeating continuously throughout the experiment. Camera frames were taken continuously (exposure time 15ms, laser intensity at the sample 60mW over a ~425×350µm area) and the exposure continued for a period of 20 minutes. Finally, a comparison image was taken of the whole field of view (once again with all pixels illuminated). The experiment was then repeated with the laser power increased by a factor of three (exposure time 15ms), accounting for the threefold reduction in exposure duration caused by the sequential ROI exposures. A final experiment was done with all pixels illuminated for the whole duration of the experiment (exposure time 15ms), simulating widefield camera exposure. In each case a new, unexposed region of the sample was used.
Projection resolution and scattering tolerance
To assess the resolution of the targeting system under various imaging conditions, lines were projected by the DMD and imaged onto a camera (IDS U3-3080SE-M-GL) rather than a single point detector, in order to eliminate the effect of sample heterogeneity. Tests were performed by projecting two single-mirror-width lines onto a sample with rows of OFF mirrors between them. The sample was composed of a microscope slide with a thin layer of dried fluorescent microspheres (Fluoresbrite YG Carboxylate Microspheres 0.10µm) and a coverslip. The average pixel intensity was taken horizontally across the vertically projected lines to determine the intensity profile and separation of the lines.
The resolution of the targeting was assessed in controlled scattering conditions. An inverted microscope (Fig. 4) consisting of a microscope objective (Olympus UPlanApo 20x), elliptical mirror (Thorlabs BBE1-E02), tube lens (Thorlabs TTL200MP), emission filter (Chroma HQ535/50 x), and camera (IDS U3-3080SE-M-GL) was placed below the sample to acquire the transmission image of the pattern after passing through a scattering medium. The inverted microscope was fitted to the upright microscope setup at the location of the condenser. The condenser lens was removed from its mount and a custom milled dovetail-adaptor, holding the objective and elliptical mirror, was placed. The above projection resolution experiment was repeated, this time through the scattering sample and both transmission and reflection images were captured. Scattering phantoms were created using dilutions of Intralipid 20% (Sigma-Aldrich 68890-65-3). Dilutions used were 0.1%, 0.5%, 1%, 2%, 4%, 8%, and 10%. The sample consisted of a thin layer of fluorescent microspheres (Fluoresbrite YG Carboxylate Microspheres 0.10µm) between two coverslips, a 200µm thick spacer with a 5mm diameter well containing a scattering phantom and a third coverslip on top (Fig. 4). Spacer was made using made 3M 9088 White Double Sided Plastic Tape; the well was made using a 5mm hole punch.
In Vitro slice preparation

Adult C57BL/6 mice were injected with 1µl of AAV9.mDlx-ASAP3-Kv into the somatosensory cortex. After 3 weeks expression time, mice were terminally anaesthetized with ketamine/xylazine and transcardially perfused with ice-cold sucrose-based cutting solution (osmolarity of 300-310), with the composition (in mM): 3 KCl, 26 NaHCO3, 1.25 NaH2PO4, 3 Na pyruvate, 0.5 CaCl2, 4 MgCl2, 190 sucrose, and 25 dextrose (pH 7.4, bubbled with carbogen), and the brain was extracted. Coronal slices of 250µm thickness were cut with a Leica TS1200 vibratome and immediately transferred to holding artificial cerebrospinal fluid (ACSF, osmolarity of 300-310) at 34°C, with the composition (in mM): 126 NaCl, 3.5 KCl, 26 NaHCO3, 1.25 NaH2PO4, 2 CaCl2, 2 MgSO4 and 10 dextrose (pH 7.4, bubbled with carbogen), and allowed to recover for 30 min before transferring to room temperature. Recording ACSF (osmolarity of 300-310) was heated to 34°C and was similar to holding ACSF in composition except with 1.2 CaCl2 and 1 MgSO4.
Neurophysiological activity recordings
DIRECT vs Widefield
To assess the functional use of DIRECT for biological applications in scattering tissue, the spatial resolution of DIRECT was compared to widefield imaging in an in vitro mouse brain sample with neurons containing the voltage indicator ASAP3. Previous papers have also shown the reduction of background light using a DMD to target neurons [17], and this was also confirmed with DIRECT. A mask of four neurons was projected by the DMD, and a widefield image of the same FOV was also captured onto a camera (Basler acA1920-155um).

Neurophysiological activity. A widefield image was captured using the laser at 20mW and ROIs were drawn around the targeted neurons as described in the previous section. During the experiment, the laser power was set to 200mW. Multi-ROI PMT recordings with extracellular stimulation: an extracellular electrode was placed in close proximity to the targeted neurons. Individual ROI masks of each target were loaded into the DMD and the experiment was triggered to begin. A single pulse was given by the electrode 0.5 seconds after the experiment began. It was followed by five consecutive pulses at one second intervals. The pulse current was 10-100µA with a 200µs pulse duration. The five consecutive pulses were given at 20Hz. The DMD switched between all ROI masks in a continuous pattern with a mask exposure time of 100µs. Emitted photons were recorded by a PMT with an oscilloscope (PicoScope 5444D PC Oscilloscope 200 MHz 4 channel, Pico Technology) sampling at 1MHz for two seconds.
Data analysis
All system characterization, image, and neurophysiological data analysis was performed in Matlab R2022a.
System Characterization
Projection speed. Multi-ROI Picoscope data was converted to Matlab files; the troughs between ROIs were identified and the average time of the troughs was taken.
Photobleaching assessment. For ROI images, the mask used to project the ROIs was used to isolate the targeted regions of the sample, and the average value of the targeted regions was taken.
For full-FOV images, the average value of each image gathered in the time series was taken. The average values of each image were normalized to the starting image value. Photobleaching was assumed to be monoexponential with an offset to account for camera pixel offsets. The time series means were therefore curve-fitted using a nonlinear least-squares method to the following equation:
f(t) = a exp(−bt) + c    (Eq. 10)

where f(t) is the normalized mean intensity at time t, and a, b, and c are fitted constants.
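The original analysis was done in MATLAB; the following is an equivalent SciPy sketch of the same fit. The time axis and data here are placeholders generated to illustrate usage:

```python
# Sketch of the photobleaching fit (Eq. 10) via nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def monoexp(t, a, b, c):
    return a * np.exp(-b * t) + c   # Eq. 10: offset c absorbs pixel offsets

t = np.linspace(0, 1200, 400)                       # placeholder time axis (s)
mean_intensity = monoexp(t, 0.9, 2.3e-3, 0.1)       # placeholder decay data
mean_intensity += np.random.normal(0, 0.01, t.size)

(a, b, c), _ = curve_fit(monoexp, t, mean_intensity, p0=(1.0, 1e-3, 0.0))
print(f"decay rate = {-b:.3e} per second")          # compare with Fig. 6
```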
For off-target results, the non-targeted regions were isolated in the full-FOV before and after images, and the average was taken and normalized to the starting value. The change was calculated by subtracting the after value from the before value.

Resolution and scattering tolerance. Images were loaded, normalized, and lines were manually isolated. The average pixel intensity was calculated across rows of the image. The FWHM was measured by calculating the half maximum and interpolating the width at that value. Theoretical mirror widths were calculated for a diffraction-limited system. To model the imaging of the projected DMD patterns under diffraction-limited conditions, the imaging pipeline was simulated using a Fourier optics approach. A simulated PSF was constructed assuming a diffraction-limited optical system, along with a grid of pixels based on the detector and DMD pixel widths. This pixel grid was sufficiently subsampled to facilitate the inclusion of intra-mirror gaps of the given DMD fill factor. Columns of pixels in this grid, representing columns of DMD mirrors, were subsequently 'illuminated', thereby generating the projected DMD pattern. This projected pattern was then convolved with the PSF to simulate the final diffraction-limited image of the DMD pattern.
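A minimal numpy sketch of that simulation pipeline follows. The Gaussian PSF (a stand-in for a true diffraction-limited PSF), the grid sizes, and the gap width are all assumptions for illustration:

```python
# Sketch: tile sub-sampled mirror cells (with dead gaps standing in for
# the DMD fill factor), switch on selected mirror columns, and convolve
# with a PSF to get the diffraction-limited image of the pattern.
import numpy as np
from scipy.signal import fftconvolve

SUB = 10          # sub-pixels per mirror edge, resolving intra-mirror gaps
N_MIRRORS = 64    # mirrors per side of the simulated grid
GAP = 1           # dead sub-pixels on each mirror edge (placeholder)

cell = np.zeros((SUB, SUB))
cell[GAP:SUB - GAP, GAP:SUB - GAP] = 1.0   # reflective area of one mirror

def column_pattern(on_columns):
    """Tile mirror cells and keep only the selected mirror columns 'on'."""
    grid = np.tile(cell, (N_MIRRORS, N_MIRRORS))
    keep = np.zeros(N_MIRRORS * SUB, dtype=bool)
    for c in on_columns:
        keep[c * SUB:(c + 1) * SUB] = True
    grid[:, ~keep] = 0.0
    return grid

def gaussian_psf(sigma, size=41):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

pattern = column_pattern([30, 33])             # two lines, two off columns apart
image = fftconvolve(pattern, gaussian_psf(4.0), mode="same")
profile = image.mean(axis=0)                   # intensity profile across the lines
```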
Neurophysiological Activity
Individual traces of neurophysiological data recorded on the PMT were separated using a lab-built function. The function fits a square wave (with variable period and duty cycle) to the data to separate each ROI activity trace. The data from each segment separated by the square wave is averaged to a single data point, and the points are placed into a vector for each ROI. Data is presented in raw format, or with a moving-average smoothing filter with a span of 10 data points for multi-neuron PMT recordings. Action potentials for ΔF/F measurements were calculated by taking the ΔF/F of each value in a subset of the data known to contain the first action potential and selecting the maximum resulting value. The SNR was calculated by taking the maximum ΔF/F and dividing by the standard deviation of the baseline signal.
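A hedged Python sketch of the per-neuron metrics follows (the published analysis was done in MATLAB). The window slices and the synthetic trace are placeholders; since ASAP3 dims on depolarization, the peak is taken by absolute value:

```python
# Sketch: dF/F and SNR of the first action potential from one
# demultiplexed ROI trace (one sample per DMD frame, ~1 kHz).
import numpy as np

def dff_and_snr(trace, baseline_sl, ap_sl):
    f0 = trace[baseline_sl].mean()               # baseline fluorescence
    dff = (trace[ap_sl] - f0) / f0               # dF/F within the AP window
    peak = dff[np.argmax(np.abs(dff))]           # largest transient (negative here)
    baseline_dff = trace[baseline_sl] / f0 - 1   # baseline in dF/F units
    snr = np.abs(peak) / baseline_dff.std()      # |max dF/F| / baseline std
    return peak, snr

trace = np.random.normal(1.0, 0.01, 2000)        # placeholder trace
trace[510:515] -= 0.08                           # fake -8% dF/F transient
peak, snr = dff_and_snr(trace, slice(0, 400), slice(480, 560))
```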
Conclusion
In summary, a new targeted imaging system, DIRECT, has been developed for applications in which optical measurements must be taken at impractically high frame rates across a wide FOV, without compromising spatial resolution. The system is simple to implement and can be retrofitted, with only minor modification, to almost any microscope, with minimal disruption to existing functionality. DIRECT can project patterns as small as a single mirror width of the DMD, allowing for precise targeting resolution. The speed of the DMD allows for target switching as fast as 22kHz, well above the bandwidth of most fluorescent biological probes; this was demonstrated by recording from ASAP3-expressing cells in an ex vivo slice preparation.
Funding. IW is grateful for an EPSRC-funded studentship from the Center for Doctoral Training in Neurotechnology at Imperial College London. CJR acknowledges funding from EPSRC (EP/S016538/1), BBSRC (BB/T011947/1), Wellcome Trust (212490/Z/18/Z), Cancer Research UK (29694 and EDDPMA-May22\100059), the Royal Society (RGS\R2\212305), the Chan Zuckerberg Initiative (2020-225443 and 2020-225707) and the Imperial College Excellence Fund for Frontier Research.
Fig. 3. Spatial resolution of DIRECT's targeting. A. a single pixel width line, B. a double pixel width line, C. two single pixel width lines with a line of off pixels in between, D. two single pixel width lines with two lines of off pixels in between. E-H. average intensity plots of the above lines; the mean of all the pixel values in each column are plotted to ascertain whether there is a reduction in intensity between the projected image of the mirrors.
Fig. 4. Experimental design for scattering tolerance assessment. Left: upright and inverted microscopes. The inverted microscope is fitted onto the existing upright microscope at the location of the condenser lens using a custom-milled objective holder with an Olympus condenser dovetail. Right: scattering sample design. From bottom up: coverslip, thin layer of fluorescent microspheres, coverslip, 200µm thick spacer with 5mm diameter well containing Intralipid 20% phantom, and coverslip.
Fig. 5. Transmission and reflection images of two single-mirror-width lines, separated by two columns of off mirrors, projected through a scattering phantom. Dilutions of Intralipid 20% were used to increase scattering density. Plots show the average pixel intensity of the projected lines. Scale bar = 5µm.
Fig. 6. Photobleaching decay rates using average pixel values of targeted regions with DIRECT versus full-FOV widefield imaging. A. a three-ROI photobleaching decay curve at 60mW laser power, decay rate = −2.342×10^-3; B. a three-ROI photobleaching decay curve at 180mW laser power, decay rate = −1.976×10^-3; C. widefield photobleaching decay curve at 60mW laser power, decay rate = −4.628×10^-3.
Fig. 7. Widefield vs DIRECT imaging. Representative example of widefield voltage imaging compared to DIRECT's targeted illumination. A. Widefield image of an in vitro sample with GEVI-containing neurons; red arrows on the widefield example indicate neurons selected for targeting. B. The same in vitro sample with targeted imaging of the selected neurons, demonstrating the removal of background fluorescence resulting from the use of targeted imaging.
Fig. 8. Single trial multi-cell recording. A. Targeted illumination of the sample; green circles are the drawn ROIs for each neuron. B. Single trial traces of each ROI on the left during the extracellular stimulation experiment. The electrode was placed in proximity to the targeted neurons. A single pulse was given at approximately 0.5s into the experiment, followed by 5 consecutive pulses half a second later. Action potentials are visible at each pulse for each neuron. C,D. Signal-to-noise ratio and change in fluorescence for the first action potential of each neuron (n = 10). E. Close view example of single trial data in B; black arrows indicate the voltage response to each extracellular electrical pulse. Blue is raw data; the orange line is smoothed data using a moving average filter with a span of 10 data points.
Table 1. Properties of selected detectors.

Detector                              | Read noise              | Maximum signal
PMT: Hamamatsu H9305-03               | 3720 photons/s (a)      | 6.24×10^8 photons/s, 10^5 gain (b)
Photodiode: Femto LCA-S-400K-SI-FST   | 2,680,000 photons/s     | 4.29×10^12 photons/s (c)
SiPM: Hamamatsu C13366-3050GA         | 937,000 photons/s       | 12.6×10^9 photons/s
Camera: Photometrics Kinetix          | 2 photons/pixel/frame   | 200 photons/pixel/frame

(a) Read noise calculated as dark current / radiant sensitivity.
(b) Maximum signal calculated as (max current / charge of an electron) / electron multiplication gain.
(c) Maximum signal calculated as max current / charge of an electron.
Acknowledgments. The authors are grateful to Dr Debora Machado Andrade Schubert for proofreading and giving feedback.

Disclosures. The authors declare no conflicts of interest.

Data availability. Data is available upon request. CAD models are available at https://www.imperial.ac.uk/rowlands-lab/.
References

1. Y. Gong, C. Huang, J. Z. Li, B. F. Grewe, Y. Zhang, S. Eismann, and M. J. Schnitzer, "High-speed recording of neural spikes in awake mice and flies with a fluorescent voltage sensor," Science 350, 1361-1366 (2015).
2. M. J. Lohse, M. Bünemann, C. Hoffmann, J.-P. Vilardaga, and V. O. Nikolaev, "Monitoring receptor signaling by intramolecular FRET," Curr. Opin. Pharmacol. 7, 547-553 (2007).
3. A. M. Hessels and M. Merkx, "Genetically-encoded FRET-based sensors for monitoring Zn2+ in living cells," Metallomics 7, 258-266 (2015).
4. Mohd. Mohsin, A. Ahmad, and M. Iqbal, "FRET-based genetically-encoded sensors for quantitative monitoring of metabolites," Biotechnol. Lett. 37, 1919-1928 (2015).
5. R. N. Day and M. W. Davidson, "Fluorescent proteins for FRET microscopy: Monitoring protein interactions in living cells," BioEssays 34, 341-350 (2012).
6. Y. Bando, C. Grimm, V. H. Cornejo, and R. Yuste, "Genetic voltage indicators," BMC Biol. 17, 71 (2019).
7. Y. Shao, J. Q. D.d.s, X. Peng, H. Niu, W. Qin, H. Liu, and B. Z. Gao, "Addressable multiregional and multifocal multiphoton microscopy based on a spatial light modulator," J. Biomed. Opt. 17, 030505 (2012).
8. A. M. Packer, L. E. Russell, H. W. P. Dalgleish, and M. Häusser, "Simultaneous all-optical manipulation and recording of neural circuit activity with cellular resolution in vivo," Nat. Methods 12, 140-146 (2015).
9. T. Zhang, O. Hernandez, R. Chrapkiewicz, A. Shai, M. J. Wagner, Y. Zhang, C.-H. Wu, J. Z. Li, M. Inoue, Y. Gong, B. Ahanonu, H. Zeng, H. Bito, and M. J. Schnitzer, "Kilohertz two-photon brain imaging in awake mice," Nat. Methods 16, 1119-1122 (2019).
10. J.-L. Wu, Y.-Q. Xu, J.-J. Xu, X.-M. Wei, A. C. Chan, A. H. Tang, A. K. Lau, B. M. Chung, H. Cheung Shum, E. Y. Lam, K. K. Wong, and K. K. Tsia, "Ultrafast laser-scanning time-stretch imaging at visible wavelengths," Light Sci. Appl. 6, e16196 (2017).
11. J. Wu, Y. Liang, S. Chen, C.-L. Hsu, M. Chavarha, S. W. Evans, D. Shi, M. Z. Lin, K. K. Tsia, and N. Ji, "Kilohertz two-photon fluorescence microscopy imaging of neural activity in vivo," Nat. Methods 17, 287-290 (2020).
12. O. A. Shemesh, D. Tanese, V. Zampini, C. Linghu, K. Piatkevich, E. Ronzitti, E. Papagiakoumou, E. S. Boyden, and V. Emiliani, "Temporally precise single-cell-resolution optogenetics," Nat. Neurosci. 20, 1796-1806 (2017).
13. A. Forli, M. Pisoni, Y. Printz, O. Yizhar, and T. Fellin, "Optogenetic strategies for high-efficiency all-optical interrogation using blue-light-sensitive opsins," eLife 10, e63359 (2021).
14. A. J. Foust, V. Zampini, D. Tanese, E. Papagiakoumou, and V. Emiliani, "Computer-generated holography enhances voltage dye fluorescence discrimination in adjacent neuronal structures," Neurophotonics 2, 021007 (2015).
15. R. Prevedel, A. J. Verhoef, A. J. Pernía-Andrade, S. Weisenburger, B. S. Huang, T. Nöbauer, A. Fernández, J. E. Delcour, P. Golshani, A. Baltuska, and A. Vaziri, "Fast volumetric calcium imaging across multiple cortical layers using sculpted light," Nat. Methods 13, 1021-1028 (2016).
16. Y. Adam, J. J. Kim, S. Lou, Y. Zhao, M. E. Xie, D. Brinks, H. Wu, M. A. Mostajo-Radji, S. Kheifets, V. Parot, S. Chettih, K. J. Williams, B. Gmeiner, S. L. Farhi, L. Madisen, E. K. Buchanan, I. Kinsella, D. Zhou, L. Paninski, C. D. Harvey, H. Zeng, P. Arlotta, R. E. Campbell, and A. E. Cohen, "Voltage imaging and optogenetics reveal behaviour-dependent changes in hippocampal dynamics," Nature 569, 413-417 (2019).
17. S. Xiao, E. Lowet, H. J. Gritton, P. Fabris, Y. Wang, J. Sherman, R. Mount, H. Tseng, H.-Y. Man, J. Mertz, and X. Han, "Large-scale voltage imaging in the brain using targeted illumination," bioRxiv 2021.04.05.438451 (2021).
18. T. Knöpfel and C. Song, "Optical voltage imaging in neurons: moving from technology development to practical tool," Nat. Rev. Neurosci. 20, 719-727 (2019).
19. L. Jin, Z. Han, J. Platisa, J. R. A. Wooltorton, L. B. Cohen, and V. A. Pieribone, "Single action potentials and subthreshold electrical events imaged in neurons with a novel fluorescent protein voltage probe," Neuron 75, 779-785 (2012).
20. H. Dana, Y. Sun, B. Mohar, B. K. Hulse, A. M. Kerlin, J. P. Hasseman, G. Tsegaye, A. Tsang, A. Wong, R. Patel, J. J. Macklin, Y. Chen, A. Konnerth, V. Jayaraman, L. L. Looger, E. R. Schreiter, K. Svoboda, and D. S. Kim, "High-performance calcium sensors for imaging activity in neuronal populations and microcompartments," Nat. Methods 16, 649-657 (2019).
21. J. S. Silfies, S. A. Schwartz, and M. W. Davidson, "The Diffraction Barrier in Optical Microscopy," https://www.microscopyu.com/techniques/super-resolution/the-diffraction-barrier-in-optical-microscopy.
22. P. Di Ninni, F. Martelli, and G. Zaccanti, "Effect of dependent scattering on the optical properties of Intralipid tissue phantoms," Biomed. Opt. Express 2, 2265-2278 (2011).
23. A. Diaspro, G. Chirico, C. Usai, P. Ramoino, and J. Dobrucki, "Photobleaching," in Handbook of Biological Confocal Microscopy, J. B. Pawley, ed. (Springer US, 2006), pp. 690-702.
24. V. Villette, M. Chavarha, I. K. Dimov, J. Bradley, L. Pradhan, B. Mathieu, S. W. Evans, S. Chamberland, D. Shi, R. Yang, B. B. Kim, A. Ayon, A. Jalil, F. St-Pierre, M. J. Schnitzer, G. Bi, K. Toth, J. Ding, S. Dieudonné, and M. Z. Lin, "Ultrafast Two-Photon Imaging of a High-Gain Voltage Indicator in Awake Behaving Mice," Cell 179, 1590-1608.e23 (2019).
| []
|
[
"TiDy-PSFs: Computational Imaging with Time-Averaged Dynamic Point-Spread-Functions",
"TiDy-PSFs: Computational Imaging with Time-Averaged Dynamic Point-Spread-Functions"
]
| [
"Sachin Shah ",
"Sakshum Kulshrestha [email protected] ",
"Christopher A Metzler [email protected] ",
"\nUniversity of Maryland\nCollege Park\n",
"\nUniversity of Maryland\nCollege Park\n",
"\nUniversity of Maryland\nCollege Park\n"
]
| [
"University of Maryland\nCollege Park",
"University of Maryland\nCollege Park",
"University of Maryland\nCollege Park"
]
| []
| Point-spread-function (PSF) engineering is a powerful computational imaging techniques wherein a custom phase mask is integrated into an optical system to encode additional information into captured images. Used in combination with deep learning, such systems now offer stateof-the-art performance at monocular depth estimation, extended depth-of-field imaging, lensless imaging, and other tasks. Inspired by recent advances in spatial light modulator (SLM) technology, this paper answers a natural question: Can one encode additional information and achieve superior performance by changing a phase mask dynamically over time? We first prove that the set of PSFs described by static phase masks is non-convex and that, as a result, time-averaged PSFs generated by dynamic phase masks are fundamentally more expressive. We then demonstrate, in simulation, that time-averaged dynamic (TiDy) phase masks can offer substantially improved monocular depth estimation and extended depth-of-field imaging performance.AbstractThis supplement includes a more general proof that the set of PSFs described by a single phase mask is non-convex. It also includes an extended discussion of the benefits a single-shot time-averaged systems has over a multi-shot burst imaging system. | 10.48550/arxiv.2303.17583 | [
"https://export.arxiv.org/pdf/2303.17583v1.pdf"
]
| 257,834,039 | 2303.17583 | 8fdf40e5a77b7654a3922058cf645dbd4b3d077c |
TiDy-PSFs: Computational Imaging with Time-Averaged Dynamic Point-Spread-Functions
Sachin Shah
Sakshum Kulshrestha [email protected]
Christopher A Metzler [email protected]
University of Maryland
College Park
University of Maryland
College Park
University of Maryland
College Park
TiDy-PSFs: Computational Imaging with Time-Averaged Dynamic Point-Spread-Functions
Point-spread-function (PSF) engineering is a powerful computational imaging technique wherein a custom phase mask is integrated into an optical system to encode additional information into captured images. Used in combination with deep learning, such systems now offer state-of-the-art performance at monocular depth estimation, extended depth-of-field imaging, lensless imaging, and other tasks. Inspired by recent advances in spatial light modulator (SLM) technology, this paper answers a natural question: Can one encode additional information and achieve superior performance by changing a phase mask dynamically over time? We first prove that the set of PSFs described by static phase masks is non-convex and that, as a result, time-averaged PSFs generated by dynamic phase masks are fundamentally more expressive. We then demonstrate, in simulation, that time-averaged dynamic (TiDy) phase masks can offer substantially improved monocular depth estimation and extended depth-of-field imaging performance.

Abstract (supplement). This supplement includes a more general proof that the set of PSFs described by a single phase mask is non-convex. It also includes an extended discussion of the benefits a single-shot time-averaged system has over a multi-shot burst imaging system.
Introduction
Extracting depth information from an image is a critical task across a range of applications including autonomous driving [26, 30], robotics [21, 31], microscopy [7, 18], and augmented reality [28, 14]. To this end, researchers have developed engineered phase masks and apertures which serve to encode depth information into an image [12, 23]. To optimize these phase masks, recent works have exploited deep learning: by simultaneously optimizing a phase mask and a reconstruction algorithm, "end-to-end learning" is able to dramatically improve system performance [29, 24].

* These authors contributed equally to this work.

Figure 1. Time-averaged Dynamic PSFs. Top: Phase mask sequence that was optimized to perform simultaneous extended depth-of-field imaging and monocular depth estimation. Middle: Proposed TiDy PSFs at specific depths. Bottom left: Depth estimation and all-in-focus imaging performance improve as one averages over more phase masks. Bottom right: Depth-encoded image and reconstructed depth map.
Most existing works have focused on learning or optimizing a single phase mask for passive depth perception. We conjecture that this restriction leaves much room for improvement. Perhaps by using an SLM to introduce a sequence of phase masks over time, one could do much better.
Supporting this idea is the fact, which we prove in Theorem 2, that the set of PSFs described by a single phase mask is non-convex. This implies that time-averaged PSFs, which span the convex hull of this set, can be significantly more expressive. In this work, we exploit the PSF non-convexity by developing a multi-phase mask end-to-end optimization approach for learning a sequence of phase masks whose PSFs are averaged over time.
This work's central contributions are as follows:
• We prove that the set of PSFs generated by a single phase mask is non-convex. Thus, dynamic phase masks offer a fundamentally larger design space.
• We extend the end-to-end learning optics and algorithm design framework to design a dynamic set of phase masks.
• We demonstrate, in simulation, that time-averaged PSFs can achieve superior monocular depth estimation and extended depth-of-field imaging performance.
Background
Image Formation Model. One can simulate the formation of an image in a camera by discretizing an RGB image by depth, convolving each depth with its corresponding PSF, and compositing the outputs to form the signal on the sensor. This process can be represented by the equation
I = Σ_{d=1}^{D} O_d ⊙ (L ∗ h_d)    (1)
where L represents the all-in-focus image, {1, · · · , D} represents a set of discrete depth layers, O_d is the occlusion mask at depth d, and the set {h_1, · · · , h_D} represents the depth-dependent PSF, i.e., the camera's response to point sources at various depths [9]. Other works assume no depth discontinuities [24] or add additional computation to improve blurring at depth boundaries [10]. Our model is similar to those used in [29, 3].
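A minimal PyTorch sketch of Eq. (1) follows (PyTorch is the paper's stated implementation framework; the tensor shapes and odd PSF size are assumptions):

```python
# Sketch of layered image formation: blur each depth layer with its PSF
# and composite with the occlusion masks.
import torch
import torch.nn.functional as F

def form_image(L, O, h):
    """L: (C, H, W) all-in-focus image; O: (D, H, W) occlusion masks;
    h: (D, C, k, k) depth-dependent PSFs with odd k. Returns (C, H, W)."""
    C = L.shape[0]
    I = torch.zeros_like(L)
    for d in range(O.shape[0]):
        # depthwise conv: each color channel blurred by this depth's PSF
        blurred = F.conv2d(L.unsqueeze(0), h[d].unsqueeze(1),
                           padding=h.shape[-1] // 2, groups=C)[0]
        I = I + O[d] * blurred            # mask and composite, as in Eq. (1)
    return I
```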
PSF Formation Model.
A PSF h_d can be formed as a function of the distance d and the phase modulation φ_M caused by height variation on a phase mask:

h_d = |F[A ⊙ exp(iφ_DF(d) + iφ_M)]|²    (2)
where φ_DF(d) is the defocus aberration due to the distance d between the focus point and the depth plane. Note that because this PSF depends on depth, it can be used to encode depth information into I [8].
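A sketch of Eq. (2) in PyTorch is shown below. The quadratic defocus phase, the grid normalization, and the zero-padding factor are simplifying assumptions, not the paper's exact implementation:

```python
# Sketch: PSF of a phase mask at a given depth, per Eq. (2).
import torch

def psf_at_depth(A, phi_mask, depth, pad=2, alpha=1.0):
    """A: (N, N) binary aperture; phi_mask: (N, N) mask phase in radians."""
    N = A.shape[0]
    yy, xx = torch.meshgrid(torch.linspace(-1, 1, N),
                            torch.linspace(-1, 1, N), indexing="ij")
    phi_df = alpha * depth * (xx**2 + yy**2)      # assumed defocus phase
    pupil = A * torch.exp(1j * (phi_df + phi_mask))
    field = torch.fft.fftshift(torch.fft.fft2(pupil, s=(pad * N, pad * N)))
    psf = field.abs() ** 2
    return psf / psf.sum()                        # normalize energy
```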
The key idea behind PSF engineering and end-to-end learning is that one can use the aforementioned relationships to encode additional information into a captured image I by selecting a particularly effective mask φ_M.
Related Work
Computational Optics for Depth Tasks
Optics-based approaches for depth estimation use sensors and optical setups to encode and recover depth information. Modern methods have used the depth-dependent blur caused by an aperture to estimate the depth of pixels in an image. These approaches compare the blur at different ranges to the expected blur caused by an aperture focused at a fixed distance [25]. Later groups improved on this idea by implementing coded apertures, retaining more high-frequency information about the scene to disambiguate depths [12]. Similar to depth estimation tasks, static phase masks have been used to produce tailored PSFs more invariant to depth, allowing for extended depth-of-field imaging [6]. However, these optically driven approaches have been surpassed in performance by modern deep neural networks, which allow for joint optimization of optical elements and neural reconstruction networks.
Deep Optics
Many methods have engineered phase masks with specific depth qualities. By maximizing Fisher information for depth, the coded image theoretically will contain as many depth cues as possible [22], and by minimizing Fisher information, one may achieve an extended depth-of-field image [6]. Deep learning techniques can be used to jointly train the optical parameters and neural-network-based estimation methods. The idea is that one can "code" an image to retain additional information about a scene, and then use a deep neural network to produce reconstructions. By using a differentiable model for light propagation, back-propagation can be used to update phase mask values simultaneously with neural network parameters. This approach was demonstrated for extended depth-of-field imaging [24, 10, 13], depth estimation [29, 3, 10], and holography [5, 4]. While these previous approaches successfully improved performance, they focused on enhancing a single phase mask. We build on these works by simultaneously optimizing multiple phase masks, which allows us to search over a larger space of PSFs.
Theory
Micro-electromechanical SLMs offer high frame rates but have limited phase precision due to heavy quantization [1]. As [4] noted, intensity averaging of multiple frames can improve quality by increasing effective precision to overcome quantization. Our key insight is that even as SLM technology improves, intensity averaging yields a more expressive design space than a single phase mask. This is supported by the claim that the set of PSFs that can be generated by a single phase mask is non-convex. We provide a rigorous proof of this claim as follows.
f(M) = A ⊙ exp(iD + icM)    (3)

where ⊙ denotes entry-wise multiplication, and D ∈ R^{N×N} and c ∈ R − {0} (the reals except for 0) are fixed constants.
Definition 4. Let g : T_A(N) → R^{N×N} be defined by

g(X) = (|F(X)| ⊙ |F(X)|) / ‖F(X)‖_F²    (4)

where F denotes the discrete Fourier Transform with sufficient zero-padding, |·| denotes entry-wise absolute value, and ‖·‖_F denotes the Frobenius norm.
Lemma 1. From Fourier optics theory [8], any single phase mask's PSF at a specific depth can be written as PSF = g ∘ f.
Theorem 2. The range of PSF is not a convex set.
Proof. f is clearly surjective, so it suffices to argue that the range of g is not convex. Assume by way of contradiction that the range of g is convex. Then, for all X^(1), …, X^(k) ∈ T_A(N) there exists Y ∈ T_A(N) such that g(Y) = (1/k) Σ_{i=1}^{k} g(X^(i)). By Parseval's Theorem,

‖F(X)‖_F² = N² ‖X‖_F² = N² Σ_{i=0}^{N} Σ_{j=0}^{N} A_{i,j}    (5)

so the condition is

|F(Y)| ⊙ |F(Y)| = (1/k) Σ_{i=1}^{k} |F(X^(i))| ⊙ |F(X^(i))|    (6)

or equivalently

F(Y) ⊙ conj(F(Y)) = (1/k) Σ_{i=1}^{k} F(X^(i)) ⊙ conj(F(X^(i))).    (7)

Then the cross-correlation theorem reduces it to

F(Y ⋆ Y) = (1/k) Σ_{i=1}^{k} F(X^(i) ⋆ X^(i))    (8)

where ⋆ denotes cross-correlation. Because the Fourier Transform is linear, we finally have

Y ⋆ Y = (1/k) Σ_{i=1}^{k} X^(i) ⋆ X^(i).    (9)
Therefore, the convexity of the range of g is equivalent to the convexity of the set {X ⋆ X : X ∈ T_A(N)}. We will show the set's projection onto a particular coordinate is not convex.
(X ⋆ X)_{s,r} = Σ_{i=0}^{N} Σ_{j=0}^{N} conj(X_{i,j}) X_{i+s,j+r}    (10)
where we adopt the convention that X_{s,r} = 0 when s, r > N or s, r < 0. Take the points u and v from the definition of A (1). Also observe that correlation can be represented geometrically as shifting X over X. In this representation, notice that as the shift (s, r) approaches v − u, the non-zero overlap between X and X shifted by (s, r) approaches 1 by construction. That is, when L_1 is shifted to overlap L_2, u and v will be the only non-zero overlaps between the shifted and original non-zero points (Figure 3). No other non-zero points can overlap above or below L_2 by definition of S. Therefore, (X ⋆ X)_{v−u} becomes
conj(X_u) X_v + Σ_{i=1}^{N²−1} 0.    (11)

Because conj(X_u) X_v ∈ T, (X ⋆ X)_{v−u} ∈ T, which is a non-convex set.

Figure 3. Only u and v overlap once the shift is applied.
Therefore, the set of cross-correlations of values on the complex unit circle masked by A is also not convex, and so is the range of PSF.
Time-averaged PSFs span the convex hull of the set of static-mask PSFs, meaning there exist some PSFs achievable only through intensity-averaging the PSFs from a sequence of phase masks. This implies multi-phase mask learning may reach a better minimum.
Multi-Phase Mask Optimization
Optical Forward Model
Similar to PhaseCam3D [29], we model light propagation using Fourier optics theory [8]. In contrast to previous work, we compute the forward model (1) for multiple phase masks, producing a stack of output images which, when averaged, form our coded image. This coded image simulates the recorded signal from imaging a scene using a sequence of phase masks in a single exposure (Figure 4).
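Combining the two sketches above gives a hedged version of the multi-mask forward model; the crop size and the hypothetical `form_image` and `psf_at_depth` helpers are assumptions:

```python
# Sketch: one coded image per phase mask, averaged to emulate a single
# exposure over the whole mask sequence.
import torch

def coded_image(L, O, masks, A, depths, k=31):
    outs = []
    for phi in masks:                                   # one capture per mask
        h = torch.stack([psf_at_depth(A, phi, d) for d in depths])  # (D, P, P)
        c = h.shape[-1] // 2                            # crop to an odd kernel
        h = h[:, c - k // 2: c + k // 2 + 1, c - k // 2: c + k // 2 + 1]
        h = h / h.sum(dim=(-2, -1), keepdim=True)       # renormalize energy
        h = h.unsqueeze(1).repeat(1, L.shape[0], 1, 1)  # share PSF across channels
        outs.append(form_image(L, O, h))
    return torch.stack(outs).mean(dim=0)                # time-averaged exposure
```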
Specialized Networks
For the monocular depth estimation task, we use the MiDaS Small network [20]. This is a well-known convolutional monocular depth estimation network designed to take in natural images and output relative depth maps. The network is trained end-to-end with the phase masks. A mean-squared error (MSE) loss term is defined in terms of the depth reconstruction prediction D̂ and the ground truth depth map D,
L_Depth = (1/N) ‖D − D̂‖₂²    (12)
where N is the number of pixels. This process allows for the simultaneous optimization of the phase masks as well as fine tuning MiDaS to reconstruct from our coded images.
For the extended depth-of-field task, we use an Attention U-Net [17] to reconstruct all-in-focus images. The network is optimized jointly with the phase mask sequence. To learn a reconstruction Î similar to the all-in-focus ground truth image I, we define the loss term using the MSE error
L_AiF = (1/N) ‖I − Î‖₂²    (13)
where N is the number of pixels.
Joint Task Optimization
We also present an alternative to the specialized networks: a single network jointly trained for monocular depth estimation and extended depth-of-field using a sequence of phase masks. This network has a basic Attention U-Net architecture outputting 4 channels representing depth maps as well as all-in-focus images. Similar to prior works, we use a combined loss function, adding a coefficient to weight the losses for each individual task:
L_total = λ_Depth L_Depth + λ_AiF L_AiF.    (14)
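A minimal sketch of the joint objective in Eqs. (12)-(14), assuming predictions and targets as plain tensors:

```python
# Sketch: combined depth + all-in-focus loss.
import torch
import torch.nn.functional as F

def total_loss(depth_pred, depth_gt, aif_pred, aif_gt,
               lambda_depth=1.0, lambda_aif=1.0):
    l_depth = F.mse_loss(depth_pred, depth_gt)   # Eq. (12)
    l_aif = F.mse_loss(aif_pred, aif_gt)         # Eq. (13)
    return lambda_depth * l_depth + lambda_aif * l_aif   # Eq. (14)
```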
Experiments
Training Details
We use the FlyingThings3D dataset from the Scene Flow Datasets [15], which uses synthetic data generation to obtain all-in-focus RGB images and disparity maps. We use the cropped 278 × 278 all-in-focus images from [29]. In total, we use 5077 training patches and 419 test patches.
Both the optical layer and reconstruction networks are differentiable, so the phase mask sequence and neural network can be optimized through back-propagation. Each part is implemented in PyTorch. During training, we use the Adam [11] optimizer with parameters β₁ = 0.99 and β₂ = 0.999. The learning rate for the phase masks is 10^-8, the learning rate for the reconstruction network is 10^-4, and the batch size is 32. Finally, training and testing were performed on NVIDIA Quadro P6000 GPUs.
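The two learning rates map naturally onto Adam parameter groups; a minimal sketch (the mask initialization range and the stand-in network are placeholders):

```python
# Sketch: joint optimization of phase masks and network with separate
# learning rates via Adam parameter groups.
import torch

masks = [torch.nn.Parameter(torch.rand(23, 23) * 1.2e-6) for _ in range(5)]
net = torch.nn.Conv2d(3, 4, 3, padding=1)        # stand-in for the U-Net

optimizer = torch.optim.Adam(
    [{"params": masks, "lr": 1e-8},              # phase mask learning rate
     {"params": net.parameters(), "lr": 1e-4}],  # network learning rate
    betas=(0.99, 0.999))
```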
We parameterize 23 × 23 phase masks pixel-wise, as [13] found pixel-wise parameterization to produce the best overall performance. The monocular depth estimation task uses the MiDaS Small architecture with pretrained monocular depth estimation weights downloadable from PyTorch [20]. The extended depth-of-field task pretrains an Attention U-Net with a fixed Fresnel lens for 300 epochs. For the joint task, we set λ_Depth = λ_AiF = 1 to balance overall performance, and we pretrain the Attention U-Net for 300 epochs with a fixed Fresnel lens. In simulation, the red, blue, and green channels are approximated by discretized wavelengths of 610 nm, 530 nm, and 470 nm, respectively. Additionally, the depth range is discretized into 21 bins on the interval [−20, 20], which is larger than in previous works.

Figure 4. A sequence of phase masks is used to generate a sequence of depth-dependent PSFs. These PSFs are convolved with depth-masked clean images to simulate depth-dependent convolution. The images produced by each phase mask are averaged to create a coded image which is fed into an Attention U-Net. The reconstruction loss is back-propagated end-to-end through the network and the optical model to design phase masks and algorithms capable of performing monocular depth estimation and extended depth-of-field imaging simultaneously.
Evaluation Details
For ablation studies on our method, we used the testing split of the FlyingThings3D set for both monocular depth estimation and extended depth-of-field imaging [15]. For comparisons to existing work, we also tested our monocular depth estimation network on the labeled NYU Depth v2 set [16]. The ground truth depth maps were translated to layered masks for the clean images by bucketing the depth values into 21 bins, allowing us to convolve each depth in an image with the required PSF. We use root mean squared error (RMSE) between ground truth and estimated depth maps for depth estimation, and RMSE between ground truth and reconstructed all-in-focus images for extended depth-of-field imaging. We also use peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) [27] for extended depth-of-field imaging.
Ablation Studies
Effect of Phase Mask Sequence Length
For both all-in-focus imaging and depth estimation, we vary the phase mask count that the end-to-end system is trained with to gauge the benefits of using multiple phase masks. The forward model and initial phase masks were held standard while the phase mask count was varied. The resulting networks were evaluated at convergence. For the extended depth-of-field task, the masks were all initialized with random noise uniform from 0 to 1.2 × 10^-6. For the depth estimation task, the masks were initialized with the Fisher mask with added Gaussian noise parameterized by a 5.35 × 10^-7 mean and 3.05 × 10^-7 standard deviation.
End-to-end optimization on each task with a specialized network yielded improved performance as the phase mask count increased, visualized in Figure 5. This result implies that sequences of phase masks are successful in making the PSF space more expressive. Additionally, even for the more complex joint task, learning a system that can produce both all-in-focus images and depth maps, error decreases with phase mask count until a plateau, visualized in Figure 6.

Figure 5. RMSE for specialized tasks for each phase mask sequence length. RMSE decreases with respect to phase mask sequence length for both specialized extended depth-of-field imaging and monocular depth estimation tasks. 0 phase masks refers to a reconstruction neural network with a fixed Fresnel lens.
All-in-focus without Reconstruction Networks
A phase mask generating a PSF of the unit impulse function at every depth would be ideal for extended depth-of-field imaging, as each depth would be in focus. If possible, this phase mask would not require any digital processing. We optimize phase mask sequences of varying lengths to produce an averaged PSF close to the unit impulse function for all depths. For each sequence length, phase masks are optimized using the MSE loss between the unit impulse function and the averaged PSF at each depth until convergence. We ran 1000 trials of random phase mask initialization for each length. Observe that a side effect of longer phase mask sequences is training stability. The range of RMSE between the simulated capture image and the ground truth all-in-focus image decreases as the sequence length increases (Figure 7). This indicates training longer sequences is more resilient to initialization.

Figure 6. RMSE for joint optimization of monocular depth estimation and extended depth-of-field imaging for each phase mask sequence length. RMSE decreases with respect to phase mask sequence length for this complex joint task, demonstrating the benefit of multi-phase mask learning. 0 phase masks refers to a reconstruction neural network with a fixed Fresnel lens.

Figure 7. All-in-focus imaging RMSE distribution for each phase mask length without a reconstruction network. The best RMSE for each phase mask count has low correlation with respect to phase mask sequence length, but the variance of RMSE decreases.
Phase Mask Initialization for Depth Perception
Deep optics for depth perception can be very dependent on the initialization of optical parameters before training [29]. To find the extent of the effect of mask initialization on performance, we varied the initial phase masks while keeping the number of masks, the optical model, and the duration of training fixed. We trained for 200 epochs. We tested four initializations of sequences of 5 phase masks, as shown in Figure 8. The first was uniformly distributed noise from 0 to 1.2 × 10^-6. The second set the first mask in the sequence to a Fisher mask while the rest were uniform noise. The third set each mask to a rotation of the Fisher mask and added Gaussian noise, parameterized by a 5.35 × 10^-7 mean and 3.05 × 10^-7 standard deviation, to 4 masks. Lastly, we set each mask to a rotation of the Fisher mask and added noise to only the last two masks in the sequence. Of the four initializations, it is clear that the 3 Fisher masks and 2 Fisher masks with noise performed the best (Table 1).

Figure 8. Visualization of phase mask initializations. Each row represents a different initial phase mask sequence.

Initialization                 | RMSE↓
1 Fisher + All noise           | 0.0329
1 Fisher + Fisher w/ Noise     | 0.0271
All noise                      | 0.0254
3 Fisher + Fisher w/ Noise     | 0.0207

Table 1. Quantitative evaluation of phase mask initializations. Four sequence initializations are evaluated on the monocular depth estimation task. Ultimately, 3 Fisher masks and 2 noisy Fisher masks have the best performance after training.
Modeling State Switching in SLMs
Our optical forward model assumes an SLM can swap between two phase patterns instantly. In practice, however, some light will be captured during the intermediate states between phase patterns. These intermediate states, in the worst case, could be random phase patterns, effectively adding noise to our coded images. We model these intermediate states by averaging output images produced by the phase masks and the randomized phase patterns, weighted by the time that they are displayed for. We model the total exposure time as 100ms, with various durations of switching times from 1 to 16ms per swap. We evaluate our jointly optimized network on these new, noisier coded images without any additional training (Figure 12). Observe that because the 5 phase mask system includes more swaps, performance degrades faster than for systems with fewer phase masks. However, for short switching times, 5 phase masks still outperforms the others without needing any fine-tuning.

Figure 9. Qualitative results of a specialized network on extended depth-of-field imaging. Both 1 and 5 phase mask systems are evaluated on FlyingThings3D. Error is computed pixel-wise between the ground truth all-in-focus image and the reconstructed output and is boosted by a factor of 3. Notice that the 5 phase mask system introduces minimal error.
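A hedged sketch of that time-weighted mixing; the function name, shapes, and the assumption that all intermediate states dwell equally long are illustrative only:

```python
# Sketch: exposure as a time-weighted mix of intended coded images and
# captures formed under random intermediate phase patterns.
import torch

def coded_image_with_switching(mask_images, random_images, t_mask, t_switch):
    """mask_images / random_images: (K, C, H, W) simulated captures for the
    K masks and K intermediate states; t_mask, t_switch: dwell times."""
    total = t_mask + t_switch
    return (t_mask / total) * mask_images.mean(0) \
         + (t_switch / total) * random_images.mean(0)
```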
Results
We compare our time-averaged dynamic PSF method to state-of-the-art methods for both extended depth-of-field imaging and monocular depth estimation. The relevant works we compare to are as follows:
1. PhaseCam3D [29] used a 23 × 23 phase mask based on 55 Zernike coefficients. The phase mask parameters were then end-to-end optimized with a U-Net reconstruction network to perform depth estimation.
2. Chang et al. [3] used a singlet lens introducing chromatic aberrations with radially symmetric PSFs. Similar to [29], the lens parameters were also then end-to-end optimized.
3. Ikoma et al. [10] used a radially symmetric diffractive optical element (DOE). The blurred image was preconditioned with an approximate inverse of the PSF depth-dependent blur. The RGB image stack was fed into a U-Net to produce both an all-in-focus image and a depth map. The DOE and U-Net parameters were optimized in an end-to-end fashion.

Figure 10. Qualitative results of specialized networks on monocular depth estimation. Performance using the five phase mask method outperforms one phase mask on both datasets.

Figure 11. Qualitative results of a jointly optimized system for extended depth-of-field imaging and monocular depth estimation. Both one and five phase mask networks are evaluated on the FlyingThings3D dataset. Notice that five masks produce fewer artifacts than a single mask.

Figure 12. Effect of switching time on joint system performance. Reconstruction error across phase mask counts as a function of switching time with a 100ms overall exposure. Performance of the jointly optimized system degrades as the switching time between phase masks increases, as expected. Our system still performs well when the time spent switching is less than 25% of the overall exposure.
4. Liu et al. [13] used various phase mask parameterizations with the same U-Net architecture as [10]. One method used pixel-wise height maps (PW) and the other introduced orbital angular momentum (OAM).
5. Sitzmann et al. [24] implements a single DOE based on Zernike coefficients, and solves the Tikhonov-regularized least-squares problem to reconstruct an all-in-focus image.
6. MiDaS [19] and ZoeDepth [2] are state-of-the-art single-shot monocular depth estimation methods with all-in-focus images as inputs.
Because both [10] and [13] simultaneously learn all-in-focus images and depth maps, when comparing against our specialized methods, we take their best performing weighting of each task.
Individual Tasks. For monocular depth estimation, our specialized method using a sequence of 5 phase masks trained for 300 epochs outperforms prior work on FlyingThings3D (Table 2). Additionally, our approach performs significantly better and achieves lower error than previous methods on NYUv2 without any additional fine tuning. For extended depth-of-field, our specialized method using a sequence of 5 phase masks outperforms prior work on FlyingThings3D (Table 3). This demonstrates the benefit of multi-phase mask learning on computational imaging tasks.
Multi-Objective Optimization. We also evaluate our method against other joint all-in-focus and depth map learning approaches. This problem is challenging because encoding good depth cues for producing depth maps is antithetical to producing an all-in-focus image. Our combined 5 phase mask approach, trained for 300 epochs, outperforms prior jointly trained approaches (Table 4).

Method            | FlyingThings3D | NYUv2
PhaseCam3D [29]   | 0.521          | 0.382
Chang et al. [3]  | 0.490          | 0.433
Ikoma et al. [10] | 0.184          | -
MiDaS [19]        | -              | 0.357
ZoeDepth [2]      | -              | 0.277
TiDy (1)          | 0.026          | 0.259
TiDy (5)          | 0.019          | 0.175

Table 2. RMSE comparison of monocular depth estimation methods. We present quantitative results on two datasets to compare to state-of-the-art optical and single-shot monocular depth estimation methods. Our method performs best, with our 5 phase mask system achieving the lowest error on both datasets.

Table 3. Comparison of extended depth-of-field imaging methods. We present quantitative results on FlyingThings3D to compare to the state-of-the-art. Our method performs best, with our 5 phase mask system achieving the best PSNR.

Table 4. Comparison of multi-objective optimization of extended depth-of-field imaging and depth estimation methods. We compare quantitative results on FlyingThings3D to the state-of-the-art. Our method performs best, with our 5 phase mask system achieving the best balance between objectives.
Limitations
While we were successful in learning dynamic phase masks to improve state-of-the-art performance on imaging tasks, our method still carries some limitations. First, our optical model assumes perfect switching between phase masks during training. While evaluation with non-zero switching times showed little degradation of performance, accounting for state switching while training could produce phase masks that are more performant. Our optical model also simulates depths as layered masks over an image, which does not account for blending at depth boundaries. Additionally, our method assumes that scenes are static for the duration of a single exposure. Lastly, though their prices are falling, SLMs are still quite expensive and bulky.
Conclusion
This work is founded upon the insight that the set of PSFs that can be described by a single phase mask is nonconvex and that, as a result, time-averaged PSFs are fundamentally more expressive. We demonstrate that one can learn a sequence of phase masks that, when one dynamically switches between them over time, can substantially improve computational imaging performance across a range of tasks, including depth estimation and all-in-focus imaging. Our work unlocks an exciting new direction for PSF engineering and computational imaging system design.
Generalized Proof of PSF Non-convexity
This proof, similar to the one included in the main paper, reduces the convexity of the PSF set to the convexity of cross-correlation. We generalize the result to any aperture by showing there always exists a shift such that the overlap of any set of points with its shifted copy is a single element.
setmax(S, v) = {x ∈ S : x · v = max_{y ∈ S} y · v}   (1)
setmin(S, v) = {x ∈ S : x · v = min_{y ∈ S} y · v}.   (2)
setmax produces the set of all points in S that are furthest in direction v, and setmin similarly produces the set of all points that are furthest in the opposite direction of v.
Lemma 1. For all finite non-empty sets of points S, there exists some shift δ such that card(S ∩ (S + δ)) = 1, where S + δ = {x + δ : x ∈ S}, and card(·) denotes the cardinality of a set. That is, there exists some shift such that S and S shifted overlap at exactly one point.
Proof. Consider the set of all directions without a unique maximizer,
V = {v ∈ D : card(setmax(S, v)) > 1}.   (3)
Notice that for all v ∈ V, we can treat v as a normal vector to the line formed by points in setmax(S, v) (Figure 1). V is the set of normal vectors whose corresponding line intersects multiple points of S. We can upper bound card(V) as the number of unique lines that intersect two points in S.
card(V) ≤ card({ \overrightarrow{xy} : x, y ∈ S }) < ∞   (4)
Therefore, V is a finite set (whereas D, the set of all unit vectors, is clearly an infinite set). Then, there always exists some u such that u ∈ D and u ∉ V. Because u ∉ V, card(setmax(S, u)) = 1, the direction u has a unique maximizer. Let m be the single element of setmax(S, u), and choose δ ∈ (m − setmin(S, u)). δ is the difference between u's unique maximizer and one of u's minimizers. Observe that setmax(S, u) and setmin(S, u) define the extents of S in the direction u (Figure 2). Therefore, when applying the shift δ, only the furthest point in S in direction u and −u will overlap (Figure 3). Let T include all points from S except m. Then, T and T + δ are disjoint by definition. Therefore, S ∩ (S + δ) = {m}, which is a single element.
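Lemma 1 is constructive: a direction with a unique maximizer yields the required shift directly. The following small numerical sketch (an illustration under stated assumptions, not part of the original proof; numpy is assumed to be available) builds such a shift for a random integer point set and verifies that the set and its shifted copy share exactly one point.

import numpy as np

# Illustration of Lemma 1: for a finite point set S, a generic direction u has
# a unique maximizer, and shifting S by delta = (argmax) - (argmin) leaves a
# single overlapping point, i.e. card(S ∩ (S + delta)) = 1.
rng = np.random.default_rng(0)
S = {tuple(p) for p in rng.integers(-5, 6, size=(20, 2))}

pts = np.array(sorted(S))
u = np.array([1.0, np.pi])                 # irrational slope: no ties on integer points
proj = pts @ u
delta = pts[np.argmax(proj)] - pts[np.argmin(proj)]
shifted = {tuple(p + delta) for p in pts}
print(S & shifted)                         # prints a single point

Any direction outside the finite set V works equally well; the irrational slope merely guarantees that both extremizers are unique for integer coordinates.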
The following is similar to the proof included in the main paper; however, we relax the condition on A to be any arbitrary aperture. Therefore, this proof of PSF non-convexity produces a more general result.
f(M) = A ⊙ exp(iD + icM)   (5)

where ⊙ denotes entry-wise multiplication, D ∈ ℝ^{N×N} and c ∈ ℝ ∖ {0} (the reals except for 0) are fixed constants, and A ∈ {0, 1}^{N×N} is the aperture.

Definition 5. Let g : T_A(N) → ℝ^{N×N} be defined by

g(X) = (|F(X)| ⊙ |F(X)|) / ‖F(X)‖²_F   (6)

where F denotes the discrete Fourier transform with sufficient zero-padding, |·| denotes entry-wise absolute value, and ‖·‖_F denotes the Frobenius norm.
Lemma 2. From Fourier optics theory [1], any single phase mask's PSF at a specific depth can be written as PSF = g ∘ f.
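To make the objects in Lemma 2 concrete, the following minimal numpy sketch evaluates PSF = g(f(M)). The grid size, the circular aperture, and the placeholder fixed phase D are illustrative assumptions, not values from the paper.

import numpy as np

N = 64
y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
A = (x**2 + y**2 <= 1.0).astype(float)        # binary aperture A in {0,1}^{N x N}
D = 2 * np.pi * (x**2 + y**2)                 # placeholder fixed phase (defocus-like)
c = 1.0

def f(M):
    # Pupil function of Eq. (5): unit-circle values masked by the aperture.
    return A * np.exp(1j * (D + c * M))

def g(X):
    # Eq. (6): squared Fourier magnitude normalised by the Frobenius norm,
    # with explicit zero-padding standing in for "sufficient zero-padding".
    F = np.fft.fftshift(np.fft.fft2(X, s=(2 * N, 2 * N)))
    I = np.abs(F) ** 2
    return I / I.sum()

psf = g(f(np.zeros((N, N))))                  # PSF of a flat phase mask
print(psf.shape, psf.sum())                   # (128, 128); sums to 1 by construction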
Theorem 3. The range of PSF is not a convex set.
Proof. f is clearly surjective, so it suffices to argue the range of g is not convex. Assume by way of contradiction that the range of g is convex. Then, for all X^{(1)}, …, X^{(k)} ∈ T_A(N) there exists Y ∈ T_A(N) such that g(Y) = (1/k) Σ_{i=1}^{k} g(X^{(i)}). By Parseval's Theorem,
‖F(X)‖²_F = N² ‖X‖²_F = N² Σ_{i=0}^{N} Σ_{j=0}^{N} A_{i,j}   (7)
so the condition is

|F(Y)| ⊙ |F(Y)| = (1/k) Σ_{i=1}^{k} |F(X^{(i)})| ⊙ |F(X^{(i)})|   (8)

or equivalently

F(Y) ⊙ \overline{F(Y)} = (1/k) Σ_{i=1}^{k} F(X^{(i)}) ⊙ \overline{F(X^{(i)})}.   (9)
Then the cross-correlation theorem reduces it to

F(Y ⋆ Y) = (1/k) Σ_{i=1}^{k} F(X^{(i)} ⋆ X^{(i)})   (10)
where ⋆ denotes cross-correlation. Because the Fourier transform is linear, we finally have

Y ⋆ Y = (1/k) Σ_{i=1}^{k} X^{(i)} ⋆ X^{(i)}.   (11)
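The step from Eq. (9) to Eq. (11) rests on the cross-correlation (Wiener-Khinchin) theorem. As a sanity check (an independent numeric sketch with arbitrary random data, not the paper's code), the snippet below confirms that for a zero-padded unit-circle-valued array, the inverse DFT of \overline{F(X)} ⊙ F(X) reproduces the autocorrelation X ⋆ X computed from its definition.

import numpy as np

rng = np.random.default_rng(1)
N, M = 4, 8                                    # pad to M >= 2N so circular == linear
A = rng.integers(0, 2, (N, N))
Xp = np.zeros((M, M), complex)
Xp[:N, :N] = A * np.exp(2j * np.pi * rng.random((N, N)))   # support on the unit circle

F = np.fft.fft2(Xp)
via_fft = np.fft.ifft2(np.conj(F) * F)         # inverse DFT of conj(F(X)) . F(X)

direct = np.zeros((M, M), complex)             # (X * X)_{s,r} = sum conj(X_{i,j}) X_{i+s,j+r}
for s in range(M):
    for r in range(M):
        direct[s, r] = np.sum(np.conj(Xp) * np.roll(Xp, (-s, -r), axis=(0, 1)))

print(np.allclose(via_fft, direct))            # True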
Therefore, the convexity of the range of g is equivalent to the convexity of the set {X ⋆ X : X ∈ T_A(N)}. We will show the set's projection onto a particular coordinate is not convex.
(X ⋆ X)_{s,r} = Σ_{i=0}^{N} Σ_{j=0}^{N} \overline{X_{i,j}} X_{i+s,j+r}   (12)

where we adopt the convention that X_{s,r} = 0 when s, r > N or s, r < 0. Observe that cross-correlation can be represented geometrically as shifting X over X. Let S be the set of coordinates with non-zero entries in X.
Applying Lemma 1 to S shows that X and its shifted copy will overlap at exactly one point. Select points v, u ∈ S such that v − u = δ; then,

(X ⋆ X)_δ = \overline{X_u} X_v + Σ_{i=1}^{N²−1} 0.   (13)
Because \overline{X_u} X_v ∈ T, (X ⋆ X)_δ ∈ T, which is a non-convex set. Therefore, the set of correlations of values on the complex unit circle masked by A is also not convex. Consequently, the range of PSF is not a convex set.
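The single surviving term in Eq. (13) can also be observed numerically. The sketch below (an independent construction combining Lemma 1 with the definition of cross-correlation; the aperture is random and illustrative) shows that the autocorrelation at the Lemma-1 shift has unit modulus, i.e. it lies on the non-convex set T.

import numpy as np

rng = np.random.default_rng(2)
N = 6
A = rng.integers(0, 2, (N, N))
A[0, 0] = 1                                    # guarantee a non-empty support
X = A * np.exp(2j * np.pi * rng.random((N, N)))

S = np.argwhere(A == 1)                        # support coordinates
u = np.array([1.0, np.pi])                     # generic direction: unique extremizers
delta = S[np.argmax(S @ u)] - S[np.argmin(S @ u)]

val = sum(np.conj(X[i, j]) * X[i + delta[0], j + delta[1]]
          for i, j in S
          if 0 <= i + delta[0] < N and 0 <= j + delta[1] < N)
print(abs(val))                                # 1.0: exactly one term survives, so (X * X)_delta lies in T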
Discussion: Time Averaging Compared to Multi-Shot Sequences
Our optical model images a static scene through multiple phase masks which we switch between over the course of a single exposure (Figure 4a). A natural question, then, is why limit ourselves to a single exposure. Why not capture a burst of images, each with a different phase mask (Figure 4b)?
While it is true that superimposing the outputs of multiple PSFs creates challenges in disambiguating the contributions of individual phase masks, it also offers several benefits. First, because we only capture a single frame, our system requires less memory and I/O. Second, imaging in a single exposure is more light efficient: over a fixed time interval, a single exposure captures the entirety of the light from the scene, whereas a multi-shot approach would miss photons during readout between shots.
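The light-efficiency argument is a simple duty-cycle calculation. The toy numbers below are purely illustrative (they are not measurements from the system described here) and only make the readout-loss comparison concrete.

# Toy duty-cycle comparison: a k-shot burst loses photons during readout gaps,
# while a single time-averaged exposure collects light for the full budget.
T, k, t_read = 100.0, 5, 5.0                   # ms budget, shots, readout per gap (illustrative)
single_exposure = T / T                        # duty cycle 1.0
multi_shot = (T - (k - 1) * t_read) / T        # (k - 1) readout gaps go dark
print(single_exposure, multi_shot)             # 1.0 vs 0.8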
Definition 1. A ∈ {0, 1}^{N×N} is some valid aperture with a non-zero region S such that there exist lines L₁ and L₂ between which S can be contained, with L₁ ∥ L₂, and u = S ∩ L₁ and v = S ∩ L₂ are single points (Figure 2).
Figure 2. Example aperture that satisfies constraints on A. The aperture is fitted between parallel lines L₁ and L₂, which only intersect the aperture at one point each. Common aperture shapes fit into these constraints.

This definition of A supports most commonly used apertures including, but not limited to, circles, squares, and n-sided regular polygons. See supplement for a proof for all shapes.
Definition 2. Let T_A(N) be the set of N × N matrices in T^{N×N} with non-zero support A, i.e. the matrix is supported only where A = 1, where T is the complex unit circle. The PSF induced by a phase mask M can be modeled as the squared magnitude of the Fourier transform of the pupil function f [29].
Definition 3. Let f : ℝ^{N×N} → T_A(N) be defined by
Figure 3. Geometric interpretation of the correlation (X ⋆ X)_{v−u}. The figure represents the correlation step when the shift is v − u.
Figure 4. Multi-phase mask forward model overview.
Definition 1. Let D = {v ∈ ℝ² : ‖v‖ = 1} be the set of all unit vector directions.

Definition 2. Let setmax and setmin be defined by,
Definition 3. Let T_A(N) be the set of N × N matrices in T^{N×N} with non-zero support A, i.e. the matrix is supported only where A = 1, where T is the complex unit circle.
Figure 1: Example of vectors in V. Observe that each vector v₁, v₂, v₃ is perpendicular to a side.
Figure 2: Example of S and a valid direction u. Observe that there is only one point furthest in direction u, but there can be multiple points furthest in the opposite direction −u.

The PSF induced by a phase mask M can be modeled as the squared magnitude of the Fourier transform of the pupil function f [2].
Definition 4. Let f : ℝ^{N×N} → T_A(N) be defined by
Figure 3: Example of overlap between S and S + δ.
[21] Anupa Sabnis and Leena Vachhani. Single image based depth estimation for robotic applications. In 2011 IEEE Recent Advances in Intelligent Computational Systems, pages 102-106, 2011.
[22] Yoav Shechtman, Steffen J. Sahl, Adam S. Backer, and W. E. Moerner. Optimal point spread function design for 3d imaging. Phys. Rev. Lett., 113:133902, September 2014.
[23] Yoav Shechtman, Lucien Weiss, Adam Backer, Steffen Sahl, and William Moerner. Precise three-dimensional scan-free multiple-particle tracking over large axial ranges with tetrapod point spread functions. Nano Letters, 15, May 2015.
[24] Vincent Sitzmann, Steven Diamond, Yifan Peng, Xiong Dun, Stephen Boyd, Wolfgang Heidrich, Felix Heide, and Gordon Wetzstein. End-to-end optimization of optics and image processing for achromatic extended depth of field and super-resolution imaging. ACM Trans. Graph., 37(4), July 2018.
[25] Huixuan Tang, Scott Cohen, Brian Price, Stephen Schiller, and Kiriakos N. Kutulakos. Depth from defocus in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
[26] Yan Wang, Wei-Lun Chao, Divyansh Garg, Bharath Hariharan, Mark Campbell, and Kilian Weinberger. Pseudo-lidar from visual depth estimation: Bridging the gap in 3d object detection for autonomous driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[27] Zhou Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600-612, 2004.
[28] Woontack Woo, Wonwoo Lee, and Nohyoung Park. Depth-assisted real-time 3d object detection for augmented reality. In International Conference on Artificial Reality and Telexistence (ICAT), 2011.
[29] Yicheng Wu, Vivek Boominathan, Huaijin Chen, Aswin Sankaranarayanan, and Ashok Veeraraghavan. PhaseCam3D - learning phase masks for passive single view depth estimation. In 2019 IEEE International Conference on Computational Photography (ICCP), pages 1-12, 2019.
[30] Feng Xue, Guirong Zhuo, Ziyuan Huang, Wufei Fu, Zhuoyue Wu, and Marcelo H. Ang. Toward hierarchical self-supervised monocular absolute depth estimation for autonomous driving applications. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 2330-2337. IEEE, 2020.
[31] Menglong Ye, Edward Johns, Ankur Handa, Lin Zhang, Philip Pratt, and Guang-Zhong Yang. Self-supervised siamese learning on stereo image pairs for depth estimation in robotic surgery, 2017.
Acknowledgements. C.M. was supported in part by the AFOSR Young Investigator Program Award FA9550-22-1-0208.
[1] Terry A. Bartlett, William C. McDonald, and James N. Hall. Adapting Texas Instruments DLP technology to demonstrate a phase spatial light modulator. In Michael R. Douglass, John Ehmke, and Benjamin L. Lee, editors, Emerging Digital Micromirror Device Based Systems and Applications XI, volume 10932, page 109320S. International Society for Optics and Photonics, SPIE, 2019.
[2] Shariq Farooq Bhat, Reiner Birkl, Diana Wofk, Peter Wonka, and Matthias Müller. ZoeDepth: Zero-shot transfer by combining relative and metric depth, 2023.
[3] Julie Chang and Gordon Wetzstein. Deep optics for monocular depth estimation and 3d object detection. In Proc. IEEE ICCV, 2019.
[4] Suyeon Choi, Manu Gopakumar, Yifan Peng, Jonghyun Kim, Matthew O'Toole, and Gordon Wetzstein. Time-multiplexed neural holography: A flexible framework for holographic near-eye displays with fast heavily-quantized spatial light modulators. In Proceedings of the ACM SIGGRAPH, pages 1-9, 2022.
[5] Suyeon Choi, Manu Gopakumar, Yifan Peng, Jonghyun Kim, and Gordon Wetzstein. Neural 3d holography: Learning accurate wave propagation models for 3d holographic virtual and augmented reality displays. ACM Trans. Graph., 40(6), December 2021.
[6] Edward R. Dowski and W. Thomas Cathey. Extended depth of field through wave-front coding. Appl. Opt., 34(11):1859-1866, April 1995.
[7] Robert Fischer, Yicong Wu, Pakorn Kanchanawong, Hari Shroff, and Clare Waterman-Storer. Microscopy in 3d: A biologist's toolbox. Trends in Cell Biology, 21:682-91, October 2011.
[8] Joseph W. Goodman. Introduction to Fourier Optics. Freeman, 2017.
[9] Samuel W. Hasinoff and Kiriakos N. Kutulakos. A layer-based restoration framework for variable-aperture photography. In 2007 IEEE 11th International Conference on Computer Vision, pages 1-8, 2007.
[10] Hayato Ikoma, Cindy M. Nguyen, Christopher A. Metzler, Yifan Peng, and Gordon Wetzstein. Depth from defocus with learned optics for imaging and occlusion-aware depth estimation. IEEE International Conference on Computational Photography (ICCP), 2021.
[11] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
[12] Anat Levin, Rob Fergus, Frédo Durand, and William T. Freeman. Image and depth from a conventional camera with a coded aperture. In ACM SIGGRAPH 2007 Papers, SIGGRAPH '07, page 70-es, New York, NY, USA, 2007. Association for Computing Machinery.
[13] Xin Liu, Linpei Li, Xu Liu, Xiang Hao, and Yifan Peng. Investigating deep optics model representation in affecting resolved all-in-focus image quality and depth estimation fidelity. Opt. Express, 30(20):36973-36984, September 2022.
[14] Yawen Lu, Sophia Kourian, Carl Salvaggio, Chenliang Xu, and Guoyu Lu. Single image 3d vehicle pose estimation for augmented reality. In 2019 IEEE Global Conference on Signal and Information Processing (GlobalSIP), pages 1-5, 2019.
[15] Nikolaus Mayer, Eddy Ilg, Philip Häusser, Philipp Fischer, Daniel Cremers, Alexey Dosovitskiy, and Thomas Brox. A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4040-4048, 2016.
[16] Nathan Silberman, Derek Hoiem, Pushmeet Kohli, and Rob Fergus. Indoor segmentation and support inference from rgbd images. In ECCV, 2012.
[17] Ozan Oktay, Jo Schlemper, Loic Le Folgoc, Matthew Lee, Mattias Heinrich, Kazunari Misawa, Kensaku Mori, Steven McDonagh, Nils Y. Hammerla, Bernhard Kainz, Ben Glocker, and Daniel Rueckert. Attention U-Net: Learning where to look for the pancreas. In Medical Imaging with Deep Learning, 2018.
[18] Luca Palmieri, Gabriele Scrofani, Nicolò Incardona, Genaro Saavedra, Manuel Martínez-Corral, and Reinhard Koch. Robust depth estimation for light field microscopy. Sensors, 19(3), 2019.
[19] René Ranftl, Katrin Lasinger, David Hafner, Konrad Schindler, and Vladlen Koltun. Towards robust monocular depth estimation: Mixing datasets for zero-shot cross-dataset transfer. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020.
[20] René Ranftl, Katrin Lasinger, David Hafner, Konrad Schindler, and Vladlen Koltun. Towards robust monocular depth estimation: Mixing datasets for zero-shot cross-dataset transfer. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(3):1623-1637, 2022.

Supplementary References
[1] Joseph W. Goodman. Introduction to Fourier Optics. Freeman, 2017.
[2] Yicheng Wu, Vivek Boominathan, Huaijin Chen, Aswin Sankaranarayanan, and Ashok Veeraraghavan. PhaseCam3D - learning phase masks for passive single view depth estimation. In 2019 IEEE International Conference on Computational Photography (ICCP), pages 1-12, 2019.

Figure 4: Time averaging and multi-shot optical systems. (a) Time averaging phase masks. (b) Multi-shot phase masks. Observe that multi-shot systems capture multiple coded images, while time averaging only captures one. This means our system is more light and memory efficient.
| []
|
[
"Elemental chalcogens as a minimal model for chiral charge and orbital order",
"Elemental chalcogens as a minimal model for chiral charge and orbital order"
]
| [
"Ana Silva \nInstitute for Theoretical Physics\nInstitute of Physics\nUniversity of Amsterdam\n1090 GLAmsterdamThe Netherlands\n",
"Jans Henke \nInstitute for Theoretical Physics\nInstitute of Physics\nUniversity of Amsterdam\n1090 GLAmsterdamThe Netherlands\n",
"Jasper Van Wezel \nInstitute for Theoretical Physics\nInstitute of Physics\nUniversity of Amsterdam\n1090 GLAmsterdamThe Netherlands\n"
]
| [
"Institute for Theoretical Physics\nInstitute of Physics\nUniversity of Amsterdam\n1090 GLAmsterdamThe Netherlands",
"Institute for Theoretical Physics\nInstitute of Physics\nUniversity of Amsterdam\n1090 GLAmsterdamThe Netherlands",
"Institute for Theoretical Physics\nInstitute of Physics\nUniversity of Amsterdam\n1090 GLAmsterdamThe Netherlands"
]
| []
| Helices of increased electron density can spontaneously form in materials containing multiple, interacting density waves. Although a macroscopic order parameter theory describing this behaviour has been proposed and experimentally tested, a detailed microscopic understanding of spiral electronic order in any particular material is still lacking. Here, we present the elemental chalcogens Selenium and Tellurium as model materials for the development of chiral charge and orbital order. We formulate minimal models capturing the formation of spiral structures both in terms of a macroscopic Landau theory and a microscopic Hamiltonian. Both reproduce the known chiral crystal structure and are consistent with its observed thermal evolution and behaviour under applied pressure. The combination of microscopic and macroscopic frameworks allows us to distil the essential ingredients in the emergence of helical charge order, and may serve as a guide to understanding spontaneous chirality both in other specific materials and throughout materials classes. | 10.1103/physrevb.97.045151 | [
"https://arxiv.org/pdf/1704.00075v1.pdf"
]
| 111,381,005 | 1704.00075 | 7e5689034ab3f89c275d3f7e6f4c833ccf82fef5 |
Elemental chalcogens as a minimal model for chiral charge and orbital order
31 Mar 2017
Ana Silva
Institute for Theoretical Physics
Institute of Physics
University of Amsterdam
1090 GLAmsterdamThe Netherlands
Jans Henke
Institute for Theoretical Physics
Institute of Physics
University of Amsterdam
1090 GLAmsterdamThe Netherlands
Jasper Van Wezel
Institute for Theoretical Physics
Institute of Physics
University of Amsterdam
1090 GLAmsterdamThe Netherlands
Elemental chalcogens as a minimal model for chiral charge and orbital order
31 Mar 2017(Dated: April 4, 2017)
Helices of increased electron density can spontaneously form in materials containing multiple, interacting density waves. Although a macroscopic order parameter theory describing this behaviour has been proposed and experimentally tested, a detailed microscopic understanding of spiral electronic order in any particular material is still lacking. Here, we present the elemental chalcogens Selenium and Tellurium as model materials for the development of chiral charge and orbital order. We formulate minimal models capturing the formation of spiral structures both in terms of a macroscopic Landau theory and a microscopic Hamiltonian. Both reproduce the known chiral crystal structure and are consistent with its observed thermal evolution and behaviour under applied pressure. The combination of microscopic and macroscopic frameworks allows us to distil the essential ingredients in the emergence of helical charge order, and may serve as a guide to understanding spontaneous chirality both in other specific materials and throughout materials classes.
The bulk transition metal dichalcogenide 1T -TiSe 2 has been shown, uniquely, to harbour a charge density wave transition that breaks inversion symmetry in a chiral way [1][2][3][4][5]. The spontaneous formation of helicity is wellknown in magnetic materials, in which spins may wind around a propagation direction to yield spirals of magnetisation. In contrast, helices of increased electronic density necessarily require the onset of charge order to be accompanied by a simultaneous onset of orbital order [3,6,7]. Although this restricts the class of materials in which chiral charge order may appear [8], it has nevertheless been theoretically suggested to play an important role in determining material properties of various transition metal dichalcogenides [8][9][10], and even cuprate high-temperature superconductors [11][12][13].
Chiral charge order was suggested to be present in 1T -TiSe 2 based on indirect experimental evidence [1]. In addition, several predictions arising from a theoretical model of chiral charge order, based on a Ginzburg-Landau free energy expansion, have been experimentally confirmed [3][4][5]. Nevertheless, it has proven difficult to obtain direct experimental confirmation of the broken inversion symmetry. The main reason for this is believed to be the presence of small, nanometer wide, domains of varying handedness [5], which are averaged over by almost all direct bulk probes. A microscopic understanding of the chiral phase transition, going beyond the predictions of macroscopic order parameter theory, is thus essential for guiding further experiments into this novel type of charge and orbital order.
Despite its relatively simple crystal structure, the ordered state of 1T -TiSe 2 involves too many orbitals and electronic bands for the construction of a microscopic theory to be a straightforward exercise, or to lead to intuitive insight into the mechanism underlying the formation of chiral charge order. Here, we therefore take an alternative approach, and construct a minimal microscopic model for the appearance of spiral chains in the atomic structure of the elemental chalcogens Se and Te. These materials do not exhibit a charge ordering transition at any temperature, but their atomic lattices are well known to be chiral at ambient conditions. The handedness of a given sample of Te or Se can be straightforwardly determined by measuring either its diffraction pattern or its optical activity [14]. The crystal structure of Se and Te can be understood as consisting of short bonds arranged along helices in a simple cubic parent structure, as shown schematically in Fig. 1a [14]. This picture can in fact be taken literally, and the spiral bond order can be shown to be an instability of a hypothetic parent phase with simple cubic lattice structure [2,6,7]. The charge ordering transition leading from the simple cubic to the chiral phase is of the same type as the chiral transition in 1T -TiSe 2 [3], but as we show here, it can be understood on the level of an explicit microscopic model, explaining how different types of electron-phonon coupling and Coulomb interactions conspire to form the spiral structure. The minimal model constructed in this work is thus presented as a prototype description for the formation of chiral charge and orbital order in general.

FIG. 1: Chiral charge and orbital order in elemental chalcogens. a) The chiral crystal structure of Se and Te can be understood as a spiral arrangement of short bonds in a simple cubic parent lattice. All atoms in the crystal are of the same type. Different colours indicate the three possible local configurations of short bonds, and the chiral unit cell includes one atom of each color. b) Because the short bonds involve charge transfer between specific orbitals only, the chiral crystal lattice is also orbital ordered. Indicated here are the least occupied orbitals. The shaded planes connect like orbitals and are included as a guide to the eye. They are perpendicular to the spiral axis of the crystal structure.
Intuitive picture. Before presenting both a macroscopic Ginzburg-Landau and a microscopic mean-field theory for the formation of chiral charge order in elemental chalcogens, we will first give an intuitive picture showcasing their basic ingredients. The starting point for this is a simple cubic lattice structure. Both Se and Te actually possess the chiral crystal structure shown in Fig. 1a for any temperature at ambient pressure. Upon melting Te however, short-ranged chiral order in the fluid phase has been found to disappear, and a homogeneous metallic phase to form instead, not much above the melting point [15,16]. This observation can be understood as a latent structural phase transition in the crystal lattice of Te, which is preempted by the material melting before the transition temperature can be reached. The crystal structure of the hypothetical high-temperature phase can in that case be assumed to be simple cubic, since the element Po, which sits just below Te in the periodic table and thus has the same configuration of valence electrons, crystallises into a simple cubic rather than a chiral structure [17]. The expected phase transition into a chiral orbital ordered phase in Po is suppressed by the presence of strong spin-orbit coupling [18], which allows the parent simple cubic structure to emerge.
Elemental chalcogens have four valence electrons in their outermost p-shell (2/3 filling). Within a simple cubic lattice potential the p x , p y , and p z orbitals are degenerate, and may be chosen to point along the crystallographic x, y, and z axes respectively. Since their wave functions are elongated in a single direction, the overlap between neighbouring p x orbitals on the x axis will be larger than that between neighbouring p x orbitals on the y or z axes. Taking this difference to the extreme limit, we will consider a minimal model in which the overlap between orbitals aligned in a head-to-toe manner is non-zero, and all other overlaps are neglected. Although quantitatively unrealistic, this assumption does not change the qualitative physics of the chiral phase transition, and naturally leads to an intuitive minimal model.
In a tight-binding model starting from the simplified overlap integrals, an electron in a p_i orbital, with i ∈ {x, y, z}, can only hop in the i direction, onto a neighbouring p_i orbital. The simple cubic lattice is thus filled with independent one-dimensional chains of p_x orbitals, interwoven by similar one-dimensional chains in the y and z directions. As there is no inter-chain hopping in any direction, the electronic structure consists of three one-dimensional bands, oriented along three orthogonal directions. Within the cubic first Brillouin zone each 1D band introduces a pair of parallel planar Fermi surfaces, whose intersections again form a cube. The Fermi surface is extremely well-nested, and a charge density wave instability is thus expected to emerge [19]. In fact, a single nesting vector Q, corresponding to a body diagonal of the cube of intersecting Fermi surfaces, connects each point on any of the Fermi surface sheets to a point within a parallel sheet. The dominant instability will therefore be towards the formation of charge density waves ρ_j(x) = ρ₀ + A cos(Q · x) in each of the three orbital sectors (labeled by j), which all share the same propagation direction Q. Here ρ₀ is the average charge density in the normal state, and A is the amplitude of the charge modulation.
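The nesting property of the one-dimensional bands can be checked directly. The sketch below is an independent numeric check, assuming the cosine dispersion ε(k) = 2t cos k with t = 1 that follows from the tight-binding model introduced later; it shows that the 1D component of Q maps states just below the Fermi level onto states just above it, to linear order in the distance from k_F.

import numpy as np

# Nesting check for one 1D chain at 2/3 filling (illustrative, t = 1).
eps = lambda k: 2 * np.cos(k)
kF = np.pi / 3                                 # Fermi point for 2/3 filling of the band
Q1d = 4 * np.pi / 3                            # 1D component of the nesting vector (mod 2*pi)
EF = eps(kF)
for dk in (0.01, 0.05, 0.1):
    print(eps(kF + dk) - EF, eps(kF + dk + Q1d) - EF)
# Opposite signs and equal magnitude to linear order: filled states map onto
# empty ones, the hallmark of a density wave instability.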
The presence of a non-zero electron-phonon coupling causes the atomic lattice to deform in response to the charge modulations. The resulting displacement waves u_j(x) = ũ e_j sin(Q · x) have the same wave vector as the charge modulations, but a polarization e_j whose direction is determined by the anisotropy of the local electron-phonon coupling matrix elements [3]. In a chain of p_x orbitals without any overlap in directions other than x, the electron-phonon coupling is maximally anisotropic, and the displacement direction e will be purely along x. Within the simple cubic lattice, each atom is affected simultaneously by three displacement waves, corresponding to the charge density waves in the three p-orbitals on the atom. The actual displacement is then simply the sum of the three orthogonal components u_j.
The charge density wave in each orbital chain can be shifted along its propagation direction by the addition of a phase: ρ_j(x) ∝ cos(Q · x + ϕ_j). The vector Q shows the charge order in Se and Te to have period three in all directions. The coupling of charge and lattice modulations then restricts the phase ϕ_j to be a multiple of π/3, so that the point of highest charge always sits either on an atomic site or bond (resulting in a site-centered charge density wave, or a bond-centered charge density wave [20]). If additionally we consider a Coulomb interaction between the charges in orthogonal orbitals on the same site, the charge maxima along one chain will prefer to avoid the charge maxima along other chains. This effectively couples the phases in different orbital sectors, so that a configuration with ϕ_i − ϕ_{i+1} = ±2π/3 becomes energetically favourable. The final configuration with three orthogonal charge density waves shifted with respect to each other produces precisely the charge redistribution and lattice deformations shown in Fig. 1a, which agree with the experimentally observed crystal structure of Se and Te [2,6].
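The geometry of the resulting state can be made explicit with a few lines of code. The sketch below (an independent illustration of the construction just described, with unit displacement amplitude as an arbitrary choice) sums the three orthogonal displacement waves with phases shifted by 2π/3 and shows that the total displacement vector rotates by 120° per lattice step: a helix winding about the body diagonal.

import numpy as np

Q = 2 * np.pi / 3 * np.ones(3)                 # period-3 modulation along each axis
phi = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])

def displacement(site):
    # u(x) = sum over j of e_j sin(Q . x + phi_j); component j is the p_j-chain contribution.
    return np.sin(Q @ site + phi)

for m in range(4):                             # any site with n1 + n2 + n3 = m gives the same u
    print(m, np.round(displacement(np.array([m, 0, 0])), 3))
# The components always sum to zero (u is perpendicular to (1,1,1)) and the
# vector is cyclically permuted from one step to the next: a 120-degree
# rotation, i.e. a period-3 helix of atomic shifts.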
Notice that because the charge maxima in orthogonal orbital chains avoid one another, each atom in the final structure ends up with a single p orbital being more occupied than the other ones. The chiral charge ordered structure is therefore also automatically an orbital ordered phase, as shown in Fig. 1b. The handedness of the crystal structure can thus be equally interpreted as signifying the rotation of the least occupied orbital wave function upon traversing the crystal.
Macroscopic order parameter theory. To understand the emergence of chirality from the interplay between three order parameters in different orbital sectors in more detail, we first construct a Landau free energy describing the transition. The dimensionless order parameters α_j(x) represent the (periodic) modulation of the average charge density within a given chain: ρ_j(x) = ρ₀(1 + α_j(x)). Starting from three noninteracting orbital sectors, the first contribution to the free energy is just the sum of the Landau free energies F_j for three independent charge density waves, with

F_j = ∫ d³x [ a(x) α_j² + b(x) α_j³ + c(x) α_j⁴ ].
Notice that the presence of a discrete lattice is taken into account by expanding the coefficients in the free energy in terms of the reciprocal lattice vectors, so that for example a(x) = a₀ + a₁ Σ_n e^{i G_n · x} + … [21]. Terms with n > 0 originate from the electron-phonon coupling in a more microscopic model. In the following, we take into account the first order contributions only.
The on-site Coulomb interaction between electrons in orthogonal orbitals provides the additional interaction terms F_Coul = Σ_j ∫ d³x A₀ α_j α_{j+1}, which couple the three order parameters. The periodic charge distributions can be written as α_j(x) = ψ₀ cos(Q · x + ϕ_j), with the amplitude ψ₀ equal for all three order parameters, and ϕ_j the spatial shift of the charge density wave along j with respect to the atomic lattice. Performing the spatial integration over positions x in the expression for the full free energy F results in an expression that depends both on the amplitude and phases of the order parameters:
F = (3/2) a₀ ψ₀² + (9/8) c₀ ψ₀⁴ + (1/4) b₁ ψ₀³ Σ_j cos(3ϕ_j) + (1/2) A₀ ψ₀² Σ_j cos(ϕ_j − ϕ_{j+1}).   (1)
As usual, the temperature dependence of the quadratic term proportional to a₀ determines when ψ₀ first obtains a non-zero value, and charge order sets in. The combination of the final two terms, arising from the electron-phonon coupling and the Coulomb interaction respectively, determines the values of the phases ϕ_j in the presence of a non-zero order parameter. They can be simultaneously minimised by first of all taking ϕ₁ = nπ/3, where n is an odd or even integer depending on the sign of b₁. Physically, this difference corresponds to the charge order being either site or bond centered. Additionally, the relative phase differences should be chosen as ϕ_j − ϕ_{j+1} = ±2π/3. These solutions are then precisely the left and right handed chiral configurations consisting of mutually shifted one-dimensional charge density waves discussed in the previous section, one of which is shown in Fig. 1a.
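A brute-force minimisation of Eq. (1) reproduces this conclusion. In the sketch below the coefficients are arbitrary illustrative choices (a₀ < 0 places the system below the transition, b₁ < 0 selects even multiples of π/3); it is not a fit to Se or Te.

import itertools
import numpy as np

a0, b1, c0, A0 = -1.0, -0.5, 1.0, 0.3          # illustrative coefficients only

def F(psi, phi):
    # Free energy of Eq. (1) for amplitude psi and the three phases phi.
    phi = np.array(phi)
    return (1.5 * a0 * psi**2 + 9 / 8 * c0 * psi**4
            + 0.25 * b1 * psi**3 * np.sum(np.cos(3 * phi))
            + 0.5 * A0 * psi**2 * sum(np.cos(phi[j] - phi[(j + 1) % 3]) for j in range(3)))

grid = [k * np.pi / 3 for k in range(6)]       # phases restricted to multiples of pi/3
best = min(((F(psi, phi), psi, phi)
            for psi in np.linspace(0.1, 2.0, 60)
            for phi in itertools.product(grid, repeat=3)),
           key=lambda t: t[0])
print(best)                                    # the minimising phases differ pairwise by +-2*pi/3

Both handednesses appear as degenerate minima; the grid search simply reports one representative.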
It is instructive to compare the free energy of Eq. (1) to the one given in Ref. 3, describing the charge ordered phase in 1T -TiSe 2 . The higher complexity of the atomic configuration in TiSe 2 , as compared to the pure elements Se and Te, results in three charge density waves along different propagation directions, as well as a difference in strength of the electron-phonon coupling between Ti and Se sites. Nevertheless, the route to chiral charge and orbital order is largely the same as that observed in Eq. (1) for Se and Te. The onset of charge order is determined by terms that do not involve the phases of the individual charge density wave components. Instead, a term arising from the electron-phonon coupling favours values of the phases in individual sectors to be such that charge maxima fall on top of atomic sites or bonds. The on-site Coulomb interaction finally, provides a coupling between charge density waves in different orbital sectors. The coupling may be indirect in the case of bond-centered charge order, like in TiSe 2 , where the variation of bond densities implies charge redistributions on the atomic sites, which are then subject to local Coulomb interactions. The coupling between orbital sectors leads to relative phase shifts between them, and hence the emergence of a chiral charge and orbital ordered pattern.
Applied pressure. Upon the application of large uniform pressure, Se and Te undergo a series of structural transitions into non-chiral phases [22,23]. Within the minimal model considered here, the suppression of chirality under pressure can be captured by introducing a pressure dependence of the critical temperature. Using the values for the phases appropriate in the chiral state, the quadratic coefficient in the free energy then becomes:
a₀(T, P) = 3b₁²/(32c₀) + A₀/2 + α (T/T_C^0 + P/P_C^0 − 1)   (2)
Here P_C^0 is the critical pressure at zero temperature, while T_C^0 is the critical temperature at zero applied pressure. Notice that for the purposes of this minimal model, the relation between critical temperature and pressure is assumed to be linear, and that the high-pressure, non-chiral phase can only be simple cubic in structure. In spite of these simplifications, the free energy expansion captures the suppression of chirality by pressure, and may be straightforwardly extended to qualitatively examine the result of applying for example uniaxial rather than uniform pressure.
Anisotropic phases can be included in the minimal model by allowing the amplitudes ψ_j of charge density waves in different orbital sectors to develop independently, and for each to have its own critical temperature, depending on the amount of pressure applied along its particular axis. The expression for the free energy then becomes:
F = Σ_j [ (1/2) a₀(T, P_j) ψ_j² + (3/8) c₀ ψ_j⁴ + (1/4) b₁ ψ_j³ cos(3ϕ_j) + (1/2) A₀ ψ_j ψ_{j+1} cos(ϕ_j − ϕ_{j+1}) ].   (3)
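Minimising Eq. (3) numerically with an anisotropic a₀(T, P_j) illustrates the suppression of a single order parameter discussed next. The sketch below reuses the same illustrative coefficients as before, measures T and P in units of T_C^0 and P_C^0, fixes the phases at their chiral values, and assumes scipy is available.

import numpy as np
from scipy.optimize import minimize

b1, c0, A0, alpha = -0.5, 1.0, 0.3, 2.0        # illustrative coefficients only
phi = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])   # chiral phase values

def a0(T, P):                                  # linearised Eq. (2), T and P in reduced units
    return 3 * b1**2 / (32 * c0) + A0 / 2 + alpha * (T + P - 1.0)

def F(psi, P_axes, T=0.2):
    # Free energy of Eq. (3) with axis-dependent quadratic coefficients.
    psi = np.abs(psi)
    return sum(0.5 * a0(T, P_axes[j]) * psi[j]**2 + 3 / 8 * c0 * psi[j]**4
               + 0.25 * b1 * psi[j]**3 * np.cos(3 * phi[j])
               + 0.5 * A0 * psi[j] * psi[(j + 1) % 3] * np.cos(phi[j] - phi[(j + 1) % 3])
               for j in range(3))

for Px in (0.0, 1.5):                          # extra uniaxial pressure along x
    res = minimize(F, x0=[0.5, 0.5, 0.5], args=([0.3 + Px, 0.3, 0.3],), method="Nelder-Mead")
    print(Px, np.round(np.abs(res.x), 3))      # large Px: psi_x nearly vanishes, psi_y and psi_z survive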
Applying pressure along a single axis only, the charge order in a single direction may be suppressed without destroying the order in orthogonal directions. The result is a phase of stacked planes each containing zig-zag charge order, as indicated schematically in Fig. 2, along with the phase diagram resulting from this minimal model. The anisotropic structure resulting from the minimal model agrees both with the predictions of an earlier semiclassical approach in terms of so-called vector charge density waves [6], and with the experimental observation of layered structures in Se under high pressures [22,23].

Microscopic model. To see how the terms in the Landau free energy emerge from the interplay of microscopic degrees of freedom, we construct a minimal Hamiltonian model for Se and Te. The starting point is again a two-third filled p-shell within the simple cubic lattice. We then include a tight-binding approximation for the bare electronic band structure, an on-site Coulomb interaction, and the influence of lattice distortions on both the kinetic and potential energies of electrons.
Including hopping only between neighbouring orbitals that are aligned head-to-toe in the simple cubic lattice, the tight-binding part of the Hamiltonian can be written as H_TB = t Σ_{x,j} ĉ†_j(x) ĉ_j(x + a_j) + H.c., where ĉ†_j(x) creates an electron in orbital j on position x, and a_j is the simple cubic lattice vector in direction j. The overlap integral t is positive because the overlapping orbital lobes on neighbouring sites have opposite signs. The Coulomb interaction acts on-site, through the interaction
H_Coul = V Σ_{x,j} ĉ†_j(x) ĉ_j(x) ĉ†_{j+1}(x) ĉ_{j+1}(x).
The displacement û_j(x) of the atom on position x in the direction of j may be written in terms of the phonon operator b̂†_j(x), which is taken to be a dispersionless Einstein mode with H_boson = ℏω Σ_{q,j} b̂†_j(q) b̂_j(q). The electron-phonon coupling consists of two terms:
H^(1)_e-ph = α^(1) Σ_{x,j} [û_j(x) − û_j(x + a_j)] · [ĉ†_j(x) ĉ_j(x + a_j) + ĉ†_j(x + a_j) ĉ_j(x)]

H^(2)_e-ph = α^(2) Σ_{x,j} [û_j(x + a_j) − û_j(x − a_j)] ĉ†_j(x) ĉ_j(x).   (4)
The first type of electron-phonon coupling represents the change of electronic kinetic energy with varying bond length. If an interatomic distance is decreased, the orbital overlap across the affected bond increases. The second process reflects the change of electronic potential energy with a variation in local ionic density. If a given atom is approached more closely by its two neighbours, the potential energy of electrons located on the central position is lowered to compensate for the larger density of positive core charges. The full Hamiltonian can be diagonalised in the mean field approximation by introducing Ansatz averages reflecting the possible ordered states found in the Landau free energy analysis:
⟨ĉ†_j(x) ĉ_j(x)⟩ = ρ₀ + A cos(Q · x + ϕ_j)
⟨ĉ†_j(x) ĉ_j(x + a_j)⟩ = σ₀ + B cos(Q · (x + a_j/2) + χ_j)
⟨û_j(x)⟩ = ũ sin(Q · x + φ_j).   (5)
Here A is the mean field expectation value describing modulations of the on-site charge density, B represents the bond-density variations, and ũ measures the variations in atomic positions. The mean fields A and B can be directly related to the macroscopic order parameter α appearing in the Landau free energy expansions above. Using these definitions, the Hamiltonian decomposes into a fermionic and a bosonic part. The latter can be straightforwardly diagonalised by introducing shifted boson operators [24], and relates the atomic displacements to the electronic order parameters by setting

ũ = (2√3 / ℏω) [2 B α^(1) e^{i(χ_j − φ_j)} − A α^(2) e^{i(ϕ_j − φ_j)}].

Demanding the displacement to be real restricts the difference between the phases of any two order parameters to be an integer multiple of π.
The fermionic part of the mean field Hamiltonian can be diagonalised numerically, and the ground state values of the phases and order parameters determined self-consistently. In the presence of electron-phonon coupling, but with no on-site Coulomb interaction, the three orbital sectors are independent from one another and each develops an individual charge density wave. The phases are simply ϕ_j = n_j π/3, with n_j integer, which includes both non-chiral solutions in which the n_j are all equal, and chiral ones. For any non-zero value of the Coulomb interaction this degeneracy is lifted, and the left- and right-handed chiral charge ordered configurations with ϕ_j − ϕ_{j+1} = ±2π/3 become the lowest energy states.
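As a drastically reduced illustration of such a self-consistency loop (an independent toy model, not the three-orbital calculation of the text): a single 1D chain at 2/3 filling with an on-site potential fed back from its own density already stabilises a period-3 charge modulation for sufficiently strong coupling. All parameters are illustrative.

import numpy as np

L, t, g, n_occ = 30, 1.0, 3.0, 20              # ring of L sites, 2/3 filling, illustrative coupling g
rho = 2 / 3 + 0.05 * np.cos(2 * np.pi * np.arange(L) / 3)   # seed a period-3 modulation

for _ in range(300):
    H = np.zeros((L, L))
    for i in range(L):
        H[i, (i + 1) % L] = H[(i + 1) % L, i] = t           # hopping on the ring
    H += np.diag(-g * (rho - rho.mean()))                   # Hartree-like mean-field potential
    E, U = np.linalg.eigh(H)
    new_rho = np.sum(np.abs(U[:, :n_occ]) ** 2, axis=1)     # density from the filled states
    if np.allclose(new_rho, rho, atol=1e-9):
        break
    rho = 0.5 * (rho + new_rho)                             # damped self-consistent update

print(np.round(rho[:6], 3))                    # for this coupling a period-3 modulation survives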
For each handedness, the phases ϕ_j may be odd or even multiples of π/3. These solutions correspond to the location of maximum charge in each charge density wave being either bond-centered or site-centered, as indicated in the insets of Fig. 3. Which of these phases has the lowest energy depends on the balance between the different types of electron-phonon coupling. As shown in the phase diagram of Fig. 3, the bond-centered solution dominates for large α^(1), while the site-centered one is consistent with large α^(2). The atomic structure observed in elemental Se and Te corresponds to the bond-centered charge order [14], with α^(1) prevailing.
Within the chiral phase, the short bonds in the three orbital chain directions connect to form a spiral. The resulting enlarged unit cell and atomic structure agree with the experimentally observed structure for Te and Se, shown schematically in Fig. 1a. The displacements in the x, y and z directions arise from charge order in chains of p_x, p_y, and p_z orbitals respectively. The modulation of charge density can thus also be seen as a spatial modulation of orbital occupation. Because the charge density waves in the three orthogonal directions are shifted by 2π/3 with respect to each other, each site in the atomic lattice of Se and Te has precisely one less-occupied p-orbital next to two others that remain equally occupied. Drawing only the least occupied orbitals results in Fig. 1b, which clearly shows that the chiral charge ordered state is also an orbital ordered state. The handedness of the orbital order is the same as that of the structural order, and can be seen for example by following the rotation of the least occupied orbital as one progresses through the crystal along the ordering Q vector (a body diagonal of the original simple cubic structure). The emergence of orbital order in conjunction with chiral charge order is inevitable, since both arise from the same relative phase shifts between charge density waves in distinct orbital sectors.
Discussion. Indirect evidence for the emergence of chiral charge and orbital order has been found in the low-temperature phase of the layered transition metal dichalcogenide 1T -TiSe 2 [1,3,4]. In addition, experimental predictions from a macroscopic Landau theory for the chiral state in this material were successfully tested [4]. The broken inversion symmetry, however, has not been observed directly by any experiment yet. Probing the bulk helicity is likely complicated by the presence of small domains of opposite handedness, of which indirect signatures are seen in scanning-tunneling microscopy experiments [5]. In addition, the interplay between many different orbitals located throughout the chiral unit cell prevent overly simplified theoretical models from being applicable and complicate the extraction of physical insight from realistic microscopic models [25,26].
Having an alternative model material, which harbours a similar chiral state but is structurally simple and wellunderstood, is therefore crucial to aid in building a general understanding of the novel charge and orbital ordered phase. Here, we argue that the elemental chalcogens Tellurium and Selenium constitute precisely such model materials. We construct minimal models for these materials capturing the essential ingredients in the formation of their chiral structures, both in terms of a macroscopic Landau theory and a microscopic mean-field description.
Comparing the results presented here to the chiral phase of 1T -TiSe 2 highlights both the universal and material-specific aspects of inversion symmetry breaking through combined charge and orbital order. In both cases, the starting point is a material with multiple density wave instabilities in its electronic structure, residing in distinct orbital sectors. The different orbital orientations lead to differently polarised displacement waves in both materials. The on-site Coulomb repulsion then causes maxima of different density waves to repel each other. This results in shifts or relative phase differences between the density waves, which break inversion symmetry and yield the known chiral crystal structure. Because the density waves originate in distinct orbital sectors, the relative phase differences imply simultaneous charge and orbital order.
The driving mechanism underlying the density wave instabilities in 1T -TiSe 2 likely differs from that in the elemental chalcogens [24,27], but plays no role in determining whether or not the combined state will be chiral. In contrast to the chalcogens, the propagation vectors for different density waves in TiSe 2 are all different. The order is also site-centered in Se and Te, but bond-centered in TiSe 2 . Finally, the site-centered Coulomb repulsion providing the coupling between different density waves, yields an indirect interaction between the bond-centered charges in the case of TiSe 2 .
Whether or not the phase shifts induced by local Coulomb interactions break inversion symmetry, depends sensitively on the crystal structure in which they reside [8,10]. If the crystal structure in the presence of the phase shifts includes a mirror symmetry, the result is not chiral even if inversion symmetry is broken. The mechanism described above then instead causes the formation of a polar charge and orbital ordered state, as seen for example in 2H -TaS 2 [9,10]. In the case of TiSe 2 as well as Se and Te however, the crystal symmetries favour the formation of a chiral charge and orbital ordered state.
The theoretical understanding developed here, of how chiral charge and orbital order emerges in elemental Se and Te, can be used as a guiding principle for the understanding of similar phases in other materials. These may include other elements and transition metal dichalcogenides, but the simplicity of the minimal models presented in this work suggests the main mechanism to be applicable generically to materials harbouring multiple simultaneous density wave instabilities. As long as charge order develops in distinct orbital sectors that are coupled by a local interaction, relative phase shifts will occur and generically lead to the spontaneous breakdown of inversion symmetry.
FIG. 2: Schematic phase diagram indicating the relative stability of the chiral and zig-zag phases within the free energy of Eq. (3). The zig-zag phase consists of planes perpendicular to the direction of applied uniaxial strain with zig-zag chains of short bonds, as indicated in the inset. The applied pressure along the x axis is parameterised as P + P_x, with P the uniform pressure and P_x the uniaxial strain component. The applied pressure along the y and z axes is simply P. Notice that the linearity of the critical lines and planes in this phase diagram is a direct consequence of the simplified thermal and pressure dependences assumed in Eq. (2).
FIG. 3: The ground state phase diagram as a function of the two contributions to the electron-phonon coupling in Eq. (4). The vertical axis measures the charge density wave order parameter B, while the colouring indicates the normalised atomic displacement. Each kind of electron-phonon coupling favours a particular type of chiral charge and orbital ordered state, both of which can be constructed from one-dimensional chains with relative phase shifts. The two kinds of chains appropriate for the two types of electron-phonon coupling are shown schematically in the region where they dominate. Increased bond density is indicated by double lines, while increased on-site electronic density results from charge transfer along the curved arrows.
[1] J. Ishioka, Y. H. Liu, K. Shimatake, T. Kurosawa, K. Ichimura, Y. Toda, M. Oda, and S. Tanda, Phys. Rev. Lett. 105, 176401 (2010).
[2] J. van Wezel and P. Littlewood, Physics 3, 87 (2010).
[3] J. van Wezel, EPL 96, 67011 (2011).
[4] J.-P. Castellan, S. Rosenkranz, R. Osborn, Q. Li, K. E. Gray, X. Luo, U. Welp, G. Karapetrov, J. P. C. Ruff, and J. van Wezel, Phys. Rev. Lett. 110, 196404 (2013).
[5] M. Iavarone, R. Di Capua, X. Zhang, M. Golalikhani, S. A. Moore, and G. Karapetrov, Phys. Rev. B 85, 155103 (2012).
[6] H. Fukutome, Prog. Theor. Phys. 71, 1 (1984).
[7] Y. Shimoi and H. Fukutome, Prog. Theor. Phys. 87, 307 (1992).
[8] J. van Wezel, Physica B 407, 1779 (2012).
[9] I. Guillamón, H. Suderow, J. G. Rodrigo, S. Vieira, P. Rodière, L. Cario, E. Navarro-Moratalla, C. Martí-Gastaldo, and E. Coronado, New J. Phys. 13, 103020 (2011).
[10] J. van Wezel, Phys. Rev. B 85, 035131 (2012).
[11] P. Hosur, A. Kapitulnik, S. A. Kivelson, J. Orenstein, and S. Raghu, Phys. Rev. B 87, 115116 (2013).
[12] P. Hosur, A. Kapitulnik, S. A. Kivelson, J. Orenstein, S. Raghu, W. Cho, and A. Fried, Phys. Rev. B 91, 039908 (2015).
[13] M. Gradhand and J. van Wezel, Phys. Rev. B 92, 041111(R) (2015).
[14] Y. Tanaka, S. P. Collins, S. W. Lovesey, M. Matsumami, T. Moriwaki, and S. Shin, J. Phys. Cond. Mat. 22, 122201 (2010).
[15] R. Bellissent and G. Tourand, J. Non-Crys. Sol. 35, 1221 (1980).
[16] M. Inui, T. Noda, and K. Tamura, J. Non-Crys. Sol. 205-207, 261 (1996).
[17] W. H. Beamer and C. R. Maxwell, J. Chem. Phys. 14, 569 (1946).
[18] C.-J. Kang, K. Kim, and B. I. Min, Phys. Rev. B 86, 054115 (2012).
[19] R. E. Peierls, More Surprises in Theoretical Physics (Princeton University Press, 1991), ISBN 978-0-691-02522-3.
[20] D. V. Efremov, J. van den Brink, and D. I. Khomskii, Nat. Mater. 3, 853 (2004).
[21] W. L. McMillan, Phys. Rev. B 14, 1496 (1976).
[22] Y. Akahama, M. Kobayashi, and H. Kawamura, Phys. Rev. B 47, 20 (1993).
[23] O. Degtyareva, E. Gregoryanz, H. K. Mao, and R. J. Hemley, High Press. Res. 25, 17 (2005).
[24] J. van Wezel, P. Nahai-Williamson, and S. S. Saxena, Phys. Rev. B 81, 165109 (2010).
[25] B. Zenker, H. Fehske, H. Beck, C. Monney, and A. R. Bishop, Phys. Rev. B 88, 075138 (2013).
[26] S. Zhu and J. van Wezel, to be published (2017).
[27] A. Kogar, S. Vig, M. S. Rak, A. A. Husain, F. Flicker, Y. I. Joe, L. Venema, G. J. MacDougall, T. C. Chiang, E. Fradkin, et al., arXiv:1611.04217 [cond-mat] (2016).
| []
|
[
"Quasiparticle electronic band structure of the alkali metal chalcogenides",
"Quasiparticle electronic band structure of the alkali metal chalcogenides"
]
| [
"S V Syrotyuk \nLviv Polytechnic National University\n12 S. Bandera Str79013LvivUkraine\n",
"V M Shved \nLviv Polytechnic National University\n12 S. Bandera Str79013LvivUkraine\n"
]
| [
"Lviv Polytechnic National University\n12 S. Bandera Str79013LvivUkraine",
"Lviv Polytechnic National University\n12 S. Bandera Str79013LvivUkraine"
]
| [
"Condensed Matter Physics"
]
| The electronic energy band spectra of the alkali metal chalcogenides M 2 A (M: Li, Na, K, Rb; A: O, S, Se, Te) have been evaluated within the projector augmented waves (PAW) approach by means of the ABINIT code. The Kohn-Sham single-particle states have been found in the GGA (the generalized gradient approximation) framework. Further, on the basis of these results the quasiparticle energies of electrons as well as the dielectric constants were obtained in the GW approximation. The calculations based on the Green's function have been originally done for all the considered M 2 A crystals, except Li 2 O. | 10.5488/cmp.18.33702 | null | 111,385,975 | 1510.06546 | 8869c0cd5b7679893fe9aae58aa4ea25d7529a87 |
Quasiparticle electronic band structure of the alkali metal chalcogenides
2015
S V Syrotyuk
Lviv Polytechnic National University
12 S. Bandera Str79013LvivUkraine
V M Shved
Lviv Polytechnic National University
12 S. Bandera Str79013LvivUkraine
Quasiparticle electronic band structure of the alkali metal chalcogenides
Condensed Matter Physics
18, 33702 (2015); doi:10.5488/CMP.18.33702
Received December 29, 2014, in final form March 16, 2015
Key words: electronic structure, GGA, GWA, projector augmented wave method, dielectric constant
PACS: 71.15.Mb, 71.15.Ap, 71.15.Nc, 71.20.Nr, 78.20.Ci, 71.15.Qe
The electronic energy band spectra of the alkali metal chalcogenides M 2 A (M: Li, Na, K, Rb; A: O, S, Se, Te) have been evaluated within the projector augmented waves (PAW) approach by means of the ABINIT code. The Kohn-Sham single-particle states have been found in the GGA (the generalized gradient approximation) framework. Further, on the basis of these results the quasiparticle energies of electrons as well as the dielectric constants were obtained in the GW approximation. The calculations based on the Green's function have been originally done for all the considered M 2 A crystals, except Li 2 O.
Introduction
The alkali metal chalcogenides M₂A (M: Li, Na, K, Rb; A: O, S, Se, Te) are found to crystallize in the cubic anti-fluorite (anti-CaF₂-type) structure at ambient conditions. They have drawn considerable attention from researchers due to their possible applications in power sources, fuel cells, gas detectors and ultraviolet space technology devices [1].

The properties of the M₂O crystals have been extensively studied experimentally [2], whereas the sulfides, selenides and tellurides of the alkali metals have received less experimental attention. The electronic energy band spectra of the M₂A crystals have been evaluated using the full potential linearized augmented plane waves plus local orbitals (FP APW+lo) method based on DFT [1]. However, it is well known that the band gap values resulting from this approach are substantially underestimated. A proper way of calculating single-particle excitation energies, or quasiparticle energies, is provided by the Green's function theory. Here, the GW approximation (GWA) is used, which is the simplest working approximation beyond the Hartree-Fock approach that takes screening into account [3].
Calculations of the electron energy spectrum beyond the local (LDA) or quasilocal (GGA) approximations have been made only for the crystal Li₂O [4]. The Kohn-Sham ground-state data were evaluated [4] on the norm-conserving pseudopotential basis, and on this basis quasiparticle corrections to the eigenenergies were obtained using the GWA.

Then, the Bethe-Salpeter equation, which includes the screened electron-hole interaction as well as the unscreened electron-hole exchange term [4], was solved, and the lowest exciton eigenvalue was found at 6.6 eV. This value compares well with the measured optical absorption energy of about 6.6 eV. The GW corrections open the gap at the Γ point by 2.1 eV, yielding a minimum direct gap of 7.4 eV. Therefore, the difference between the GW energy and the excitonic energy gives the exciton binding energy of 0.8 eV.

The above listed applications of the alkali metal chalcogenides do not exhaust the potential capabilities of these crystals. In fact, recently registered patents suggest a possible use of these crystals, doped with d- or f-transition elements, in spintronics [5]. Finally, it is worth mentioning an interesting theoretical prediction of the occurrence of a ferromagnetic half-metallic ordering in these crystals caused by doping with the nonmagnetic elements C, Si, Ge, Sn and Pb [6].

The compounds considered here have large lattice constants. As a result, the hybridization between the respective orbitals of an impurity and a parent atom is weak. Thus, an alkali metal atom, such as K, Na, Li or Rb, can be substituted with any of the 3d, 4d and 5d transition metal elements and the rare-earth 4f elements [5]. The transition metal element is incorporated in the alkali chalcogenide compound in the form of a solid solution. The substitution of the alkali metal with the d or f transition element is performed at up to about 25% through a non-equilibrium crystal growth process at a low temperature to provide a ferromagnetic characteristic.

Taking into account the importance of these materials for practical applications, we conclude that a more precise calculation of the parameters of their electron energy spectra is a timely problem, to which we now turn.
Calculation
The first stage is to calculate the electron energy spectrum and eigenfunctions in the generalized gradient approximation (GGA). For this purpose, the Kohn-Sham equations are solved in a self-consistent way [7,8]:

\left[ -\nabla^2 + V_{\mathrm{ext}} + V_{\mathrm{H}} + V_{xc} \right] \psi^{\mathrm{GGA}}_{n\mathbf{k}}(\mathbf{r}) = \varepsilon^{\mathrm{GGA}}_{n\mathbf{k}} \, \psi^{\mathrm{GGA}}_{n\mathbf{k}}(\mathbf{r}) . \quad (2.1)

At the second stage, the quasiparticle energies and wave functions are found from the equation

\left[ -\nabla^2 + V_{\mathrm{ext}}(\mathbf{r}) + V_{\mathrm{H}}(\mathbf{r}) \right] \psi^{qp}_{n\mathbf{k}}(\mathbf{r}) + \int \Sigma\left( \mathbf{r}, \mathbf{r}', \varepsilon^{qp}_{n\mathbf{k}} \right) \psi^{qp}_{n\mathbf{k}}(\mathbf{r}') \, \mathrm{d}\mathbf{r}' = \varepsilon^{qp}_{n\mathbf{k}} \, \psi^{qp}_{n\mathbf{k}}(\mathbf{r}) , \quad (2.2)

where \Sigma(\mathbf{r}, \mathbf{r}', \varepsilon^{qp}_{n\mathbf{k}}) is the non-local self-energy operator. The wave functions can be expanded as follows:

|\psi^{qp}_{n\mathbf{k}}\rangle = \sum_{n'} a_{nn'} \, |\psi^{\mathrm{GGA}}_{n'\mathbf{k}}\rangle . \quad (2.3)

In this basis, the matrix elements of the quasiparticle Hamiltonian read

H_{nn'}(E) = \varepsilon^{\mathrm{GGA}}_{n\mathbf{k}} \, \delta_{nn'} + \langle \psi^{\mathrm{GGA}}_{n\mathbf{k}} | \Sigma(E) - V_{xc} | \psi^{\mathrm{GGA}}_{n'\mathbf{k}} \rangle , \quad (2.4)

where the perturbation is written as \Sigma(E) - V_{xc}.
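In practice, equation (2.4) is evaluated perturbatively. As an illustration (our addition — the linearized one-shot form below is standard GW practice and is not spelled out in this paper), the diagonal quasiparticle correction can be written as

\varepsilon^{qp}_{n\mathbf{k}} \simeq \varepsilon^{\mathrm{GGA}}_{n\mathbf{k}} + Z_{n\mathbf{k}} \, \langle \psi^{\mathrm{GGA}}_{n\mathbf{k}} | \Sigma(\varepsilon^{\mathrm{GGA}}_{n\mathbf{k}}) - V_{xc} | \psi^{\mathrm{GGA}}_{n\mathbf{k}} \rangle , \qquad Z_{n\mathbf{k}} = \left[ 1 - \left. \frac{\partial \Sigma}{\partial E} \right|_{E = \varepsilon^{\mathrm{GGA}}_{n\mathbf{k}}} \right]^{-1} ,

where Z_{n\mathbf{k}} is the quasiparticle renormalization factor.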
We have generated the PAW functions for the following valence basis states: 1s²2s¹2p⁰ for Li, 2s²2p⁶3s¹3p⁰ for Na, 3s²3p⁶4s¹4p⁰ for K, 4s²4p⁶5s¹5p⁰ for Rb, 2s²2p⁴ for O, 2s²2p⁶3s²3p⁴ for S, 3s²3p⁶4s²4p⁴ for Se, and 4s²4p⁶5s²5p⁴ for Te. All the PAW basis functions were obtained using the program atompaw [11]. The radii of the augmentation spheres are 1.6, 1.65, 2.3, 2.6, 1.45, 1.4, 1.8, and 2.4 a.u. for Li, Na, K, Rb, O, S, Se, and Te, respectively. The values of the experimental lattice constants of the M₂A crystals used in the calculations equal [1] 8.642, 10.488, 12.170, and 12.741 a.u. for Li₂O, Na₂O, K₂O, and Rb₂O, respectively; 10.790, 12.332, 13.967, and 14.456 a.u. for Li₂S, Na₂S, K₂S, and Rb₂S, respectively; 11.342, 12.894, 14.967, and 15.154 a.u. for Li₂Se, Na₂Se, K₂Se, and Rb₂Se, respectively; and 12.315, 13.850, 15.435, and 16.044 a.u. for Li₂Te, Na₂Te, K₂Te, and Rb₂Te, respectively.

The electronic energy bands and DOS have been evaluated by means of the ABINIT code [12]. Integration over the Brillouin zone was performed on Monkhorst-Pack [13] grids of 6 × 6 × 6 and 8 × 8 × 8 in the GWA and GGA calculations, respectively. The iterations were performed to ensure the calculation of the total energy of the crystal with an accuracy of 10⁻⁸ Ha. The symmetry of the considered M₂A crystals is described by the space group Fm3m (number 225), and the Bravais lattice is cF (face-centered cubic).
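For readers who wish to reproduce the k-point sampling, the sketch below generates the fractional coordinates of an n × n × n Monkhorst-Pack grid from the standard rule; this is our illustrative addition (in ABINIT the grid is requested via input variables, not constructed by hand):

```python
import numpy as np

def monkhorst_pack(n):
    """Fractional coordinates of an n x n x n Monkhorst-Pack k-point grid,
    using k_r = (2r - n - 1) / (2n), r = 1..n, along each reciprocal axis
    (Monkhorst & Pack [13])."""
    pts = np.array([(2.0 * r - n - 1.0) / (2.0 * n) for r in range(1, n + 1)])
    return np.array(np.meshgrid(pts, pts, pts)).reshape(3, -1).T

# 8x8x8 grid as used here for the GGA step; 6x6x6 for the GW step.
kpts = monkhorst_pack(8)
print(kpts.shape)  # (512, 3)
```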
Electronic properties
The total density of electronic states (DOS) of the Li₂Se crystal is shown in figure 1. As can be seen, the wave functions of electrons in all energy bands are hybridized. This is indicated by the marks appearing next to the peaks on the DOS curves. For example, in the bottom of the valence band of the Li₂Se crystal, the s-states of Se dominate, and the contributions to the DOS of the p and s electrons of Li are less significant. The dispersion curves, shown in figure 2, indicate that the crystal Li₂Se is a semiconductor with an indirect gap Γ–X.

Figure 1 shows the electronic DOS of the Li₂Se crystal, evaluated within the GGA on the PAW basis. As can be seen, the bottom of the valence band is characterized by a small dispersion, and the corresponding curve is localized in a narrow strip about 0.53 eV wide. The value of the corresponding parameter obtained within the GWA (see figure 2) is slightly greater and equals 0.72 eV. The widths of the upper parts of the valence bands are characterized by the values of 2.91 and 3.49 eV obtained in the GGA and GWA, respectively. The width of the valence band found in the GGA equals 10.90 eV, and the corresponding value obtained in the GWA is 10.53 eV.

Now, let us analyze the results of the calculation obtained for the Na₂Se crystal, considering the DOS in figure 3 and the dispersion curves in figure 4. They show that this crystal is characterized by a direct gap at the point Γ. As can be seen from figure 3 (GGA), the bottom of the valence band is characterized by a small dispersion, and the corresponding curve is localized in a narrow strip about 0.29 eV wide. The analogous parameter obtained in the GWA (figure 4) is a little greater and equals 0.44 eV. The widths of the upper part of the valence band obtained within the GGA and GWA are equal to 1.71 and 2.18 eV, respectively. The total width of the valence band found in the GGA equals 10.04 eV, and the corresponding value obtained in the GWA is 11.13 eV.

Figures 5 and 6 show the electronic energy bands of the K₂Se crystal evaluated within the GGA and GWA, respectively. As can be seen, the crystal has an indirect gap Γ–X. The lowest bands calculated within the GGA and GWA are localized in very narrow strips of 0.27 and 0.34 eV, respectively. They consist of the core states of the K atom. The bottom of the valence band corresponding to the GGA and GWA is localized within a very narrow strip of about 0.14 and 0.19 eV wide, respectively. The strips containing the upper parts of the valence band calculated by means of the GGA and GWA are 0.66 and 0.90 eV wide, respectively. The total width of the valence band obtained in the GGA and GWA is 9.41 and 9.49 eV, respectively.

At last, let us turn to the analysis of the results found for the crystal Rb₂Se, represented in figures 7 and 8. As can be seen from these figures, the Rb₂Se crystal has an indirect gap Γ–X. In the lowest bands, the core p-states of Rb dominate; they lie in strips of width 0.80 and 0.92 eV, respectively. The bottom of the valence band is created by the s-states of selenium. It is located just above the core p-states of rubidium. The widths of the bottoms of the valence bands are 0.407 and 0.413 eV, obtained within the GGA and GWA, respectively. The top of the valence band consists mainly of s- and p-states of selenium. The widths of the corresponding bands are equal to 0.75 and 1.00 eV, respectively. Finally, the full width of the valence band, obtained in the GGA and GWA, equals 9.52 and 9.64 eV, respectively.

Now, consider the results of the calculation presented in table 1. The values of the band energies obtained in the FP APW [1] approach by means of the WIEN2K code, and evaluated here in the PBE PAW framework with the ABINIT code, are substantially underestimated. Let us first consider the properties of the Li₂O crystal, for which the experimental value of the optical absorption energy is known [4].
The value of the X–Γ gap found in [1] within the DFT is 4.96 eV, and our value equals 5.07 eV. The value of this parameter calculated here within the GWA equals 7.55 eV, whereas the experimental value of the optical absorption energy is 6.6 eV. It is now possible to estimate the binding energy of an exciton, which is simply the difference between the last two energies, i.e., 0.95 eV. The corresponding value found recently from the Bethe-Salpeter equation is 0.98 eV [14]. Table 1 shows that all the Li₂A, K₂A and Rb₂A crystals have an indirect band gap X–Γ, whereas the Na₂A crystals are characterized by a direct gap Γ–Γ. Table 1 also shows that the values of the direct and indirect gaps in the Li₂A crystals decrease monotonically upon the replacement of the second element O → S → Se → Te. A similar behavior is also shown by the direct gap X–X in the Na₂A crystals. Now, consider the results of the calculation presented in table 2. As can be seen, the most significant changes in the energy gaps ∆E are obtained for the crystal Li₂O. Note that the values of the changes in the energy gaps ∆E obtained for each crystal are all different. The greatest change ∆E of the direct gap Γ–Γ is obtained for the Li₂O crystal, and the smallest one is found for the Li₂Te crystal.
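The exciton estimate quoted above, written out explicitly (our addition for clarity):

E_b = E^{\mathrm{GWA}}_{X-\Gamma} - E_{\mathrm{opt}} = 7.55\,\mathrm{eV} - 6.6\,\mathrm{eV} = 0.95\,\mathrm{eV}.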
The macroscopic dielectric function ε_M^LF(ω), including local field effects, is related to the inverse of the microscopic dielectric matrix [12]:

\varepsilon^{\mathrm{LF}}_{M}(\omega) = \lim_{\mathbf{q} \to 0} \frac{1}{\varepsilon^{-1}_{00}(\mathbf{q}, \omega)} .

If local fields are neglected (no local fields, NLF), the irreducible polarizability is computed in the independent particle approximation. In this case,

\varepsilon^{\mathrm{NLF}}_{M}(\omega) = \lim_{\mathbf{q} \to 0} \varepsilon_{00}(\mathbf{q}, \omega) .

The value ε_M(0) is the static dielectric constant ε_∞ presented in table 3. The value of the dielectric constant for the Li₂O crystal obtained here is 2.65, and the one evaluated in work [16] is 2.62; the corresponding experimental result equals 2.68 [4]. The convergence of the values of the dielectric constants listed in table 3 served as an additional criterion for the choice of the plane-wave basis in the Kohn-Sham problem and in the calculation of the exchange Σ_x and correlation Σ_c parts [12] of the self-energy.
Conclusions
The electron energy spectra of the M₂A crystals have been calculated for the first time with quasiparticle corrections within the GW approach. The results obtained herein show that the values of the interband gaps found without the quasiparticle corrections are usually underestimated by 20–50 percent (see table 2). All the Na₂A crystals considered here are characterized by direct gaps Γ–Γ; the rest of the M₂A crystals have indirect gaps X–Γ. The non-local self-energy operator Σ in equation (2.2) was evaluated without application of the plasmon pole model. The GW calculations have been carried out with the ABINIT code, employing the contour deformation method [12,15]. As can be seen from table 2, the corrections ∆E depend noticeably on the wave vector. Therefore, the scissor operator is not a good approximation for the crystals considered here. The long-wave limits of the dielectric constants of the considered crystals have been evaluated for the first time. The value found for the Li₂O crystal compares well with the experimental one. Table 3 shows that nine of the crystals listed therein have dielectric constants below 3.0. We can assume that the exciton binding energy possessed by them is in the range from about 0.5 to 1.0 eV. Thus, the band gap calculated in the GWA would exceed the experimental value of the optical absorption energy by the binding energy of the exciton [4,14]. We hope that the results obtained here will stimulate the experimental study of these materials, which is important for practical applications.
Figure 1. The total DOS of Li₂Se obtained in the GGA.
Figure 2. The band structure of Li₂Se obtained in the GWA.
Figure 3. The total DOS of Na₂Se obtained in the GGA.
Figure 4. The band structure of Na₂Se obtained in the GWA.
Figure 5. The total DOS of K₂Se obtained in the GGA.
Figure 6. The band structure of K₂Se obtained in the GWA.
Figure 7. The total DOS of Rb₂Se obtained in the GGA.
Figure 8. The band structure of Rb₂Se obtained in the GWA.
[Figure 1 plot: total and partial DOS curves for Li₂Se (Li s, Li p, Se s, Se p); axes: DOS (st./eV) versus E − E_F (eV).]
Table 1. The calculated electronic band gaps of the crystals M₂A, in eV.

             PBE FP APW [1]             E_GGA                     E_GWA
           O     S     Se    Te       O     S     Se    Te       O     S     Se    Te
Li₂A
  Γ−Γ    5.15  4.19  3.45  3.19    5.52  4.26  3.67  3.79    8.46  6.03  5.32  4.05
  X−Γ    4.96  3.36  2.93  2.46    5.07  3.47  3.04  2.59    7.55  4.73  4.36  3.59
  X−X    6.31  4.77  4.36  3.69    6.48  4.88  4.48  4.05    9.35  6.54  6.12  5.35
Na₂A
  Γ−Γ    1.83  2.40  2.09  2.11    2.00  2.56  2.25  2.51    3.93  4.24  3.90  4.00
  X−Γ    4.61  2.85  2.58  2.72    4.74  3.93  3.55  3.13    6.48  5.24  4.94  4.28
  X−X    4.86  4.27  3.96  3.93    4.98  4.37  4.06  3.72    6.81  5.82  5.59  5.02
K₂A
  Γ−Γ    5.14  2.40  2.32  2.28    2.34  2.68  2.19  2.60    3.73  4.10  3.65  4.02
  X−Γ    1.71  2.24  2.03  2.02    1.86  2.47  2.11  2.57    3.09  3.82  3.58  4.00
  X−X    3.23  3.41  3.22  2.94    3.22  3.54  3.36  3.22    4.59  4.83  4.78  4.54
Rb₂A
  Γ−Γ    1.88  2.28  2.21  2.18    2.40  2.73  2.42  2.56    3.59  4.03  3.85  3.93
  X−Γ    1.31  1.94  1.88  1.96    1.78  2.37  2.08  2.33    2.62  3.50  3.40  3.62
  X−X    2.69  3.11  3.15  3.02    2.97  3.40  3.14  3.04    3.59  4.45  4.42  4.24

Table 2. The differences between the energy gaps of the crystals M₂A evaluated within the GWA and GGA approaches: ∆E = E_GWA − E_GGA (in eV) and the relative change ∆E/E_GWA.
             ∆E = E_GWA − E_GGA, eV          ∆E/E_GWA
           O     S     Se    Te       O     S     Se    Te
Li₂A
  Γ−Γ    2.94  1.77  1.65  0.26    0.35  0.29  0.31  0.06
  X−Γ    2.48  1.26  1.32  1.00    0.33  0.27  0.30  0.28
  X−X    2.87  1.66  1.64  1.30    0.31  0.25  0.27  0.24
Na₂A
  Γ−Γ    1.93  1.68  1.65  1.49    0.49  0.40  0.42  0.37
  X−Γ    1.74  1.31  1.39  1.15    0.27  0.25  0.28  0.27
  X−X    1.83  1.45  1.53  1.30    0.27  0.25  0.27  0.26
K₂A
  Γ−Γ    1.39  1.42  1.46  1.42    0.37  0.35  0.40  0.35
  X−Γ    1.23  1.35  1.47  1.43    0.40  0.35  0.41  0.36
  X−X    1.37  1.29  1.42  1.32    0.30  0.27  0.30  0.29
Rb₂A
  Γ−Γ    1.19  1.30  1.43  1.37    0.33  0.32  0.37  0.35
  X−Γ    0.84  1.13  1.32  1.29    0.32  0.32  0.39  0.36
  X−X    0.62  1.05  1.28  1.20    0.17  0.24  0.29  0.28
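The two tables are linked by simple arithmetic; a one-line check for the Li₂O direct gap (our illustrative addition, using the Table 1 values):

```python
# Recompute the Table 2 entries for the Li2O Γ−Γ gap from Table 1.
e_gga, e_gwa = 5.52, 8.46            # E_GGA and E_GWA for Li2O, Γ−Γ (eV)
delta_e = e_gwa - e_gga              # 2.94 eV, as listed in Table 2
rel = delta_e / e_gwa                # ≈ 0.35, as listed in Table 2
print(f"dE = {delta_e:.2f} eV, dE/E_GWA = {rel:.2f}")
```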
Table 3. The calculated dielectric constants for the crystals M₂A, found with and without the local field (LF) effects.

             ε∞, with LF                ε∞, without LF
           O     S     Se    Te       O     S     Se    Te
Li₂A     2.65  3.65  3.88  4.17    2.89  4.47  4.72  5.09
Na₂A     2.98  3.24  3.43  3.54    3.26  3.96  4.19  4.38
K₂A      2.98  2.94  2.93  2.96    3.49  3.72  3.72  3.81
Rb₂A     3.91  2.86  2.94  2.89    4.57  3.59  3.73  3.72
1. Alay-E-Abbas S.M., Sabir N., Saeed Y., Shaukat A., Int. J. Mod. Phys. B, 2011, 25, 3911; doi:10.1142/S021797921110093X.
2. Mikajlo E.A., Nixon K.L., Ford M.J., J. Phys.: Condens. Matter, 2003, 15, 2155; doi:10.1088/0953-8984/15/13/302.
3. Aryasetiawan F., Gunnarsson O., Rep. Prog. Phys., 1998, 61, 273; doi:10.1088/0034-4885/61/3/002.
4. Albrecht S., Onida G., Reining L., Phys. Rev. B, 1997, 55, 10278; doi:10.1103/PhysRevB.55.10278.
5. Yoshida H., Seike M., Sato K., Yanase A., US Patent Application Publication No. US 2006/0231789 A1, Pub. Date: Oct. 19, 2006.
6. Eithiraj R.D., Kalpana G., J. Phys. Chem. Solids, 2011, 72, 227; doi:10.1016/j.jpcs.2010.12.011.
7. Blochl P.E., Phys. Rev. B, 1994, 50, 17953; doi:10.1103/PhysRevB.50.17953.
8. Torrent M., Jollet F., Bottin F., Zerah G., Gonze X., Comp. Mater. Sci., 2008, 42, 337; doi:10.1016/j.commatsci.2007.07.020.
9. Arnaud B., Alouani M., Phys. Rev. B, 2000, 62, 4464; doi:10.1103/PhysRevB.62.4464.
10. Shishkin M., Kresse G., Phys. Rev. B, 2006, 74, 035101; doi:10.1103/PhysRevB.74.035101.
11. Tackett A.R., Holzwarth N.A.W., Matthews G.E., Comput. Phys. Commun., 2001, 135, 348; doi:10.1016/S0010-4655(00)00241-1.
12. Gonze X., Amadon B., Anglade P.-M., Beuken J.-M., Bottin F., Boulanger P., Bruneval F., Caliste D., Caracas R., Côté M., Deutsch T., Genovese L., Ghosez Ph., Giantomassi M., Goedecker S., Hamann D.R., Hermet P., Jollet F., Jomard G., Leroux S., Mancini M., Mazevet S., Oliveira M.J.T., Onida G., Pouillon Y., Rangel T., Rignanese G.-M., Sangalli D., Shaltaf R., Torrent M., Verstraete M.J., Zerah G., Zwanziger J.W., Comput. Phys. Commun., 2009, 180, 2582; doi:10.1016/j.cpc.2009.07.007.
13. Monkhorst H.J., Pack J.D., Phys. Rev. B, 1976, 13, 5188; doi:10.1103/PhysRevB.13.5188.
14. Syrotyuk S.V., Shved V.M., In: Proceedings of the Conference "Oxide Materials for Electronic Engineering" (Lviv, 2014), 55-56.
15. Faleev S.V., van Schilfgaarde M., Kotani T., Phys. Rev. Lett., 2004, 93, 126406; doi:10.1103/PhysRevLett.93.126406.
16. Sony P., Shukla A., Phys. Rev. B, 2006, 73, 165106; doi:10.1103/PhysRevB.73.165106.
| []
|
[
"Prospects for population synthesis in the H band: NeMo grids of stellar atmospheres compared to observations",
"Prospects for population synthesis in the H band: NeMo grids of stellar atmospheres compared to observations"
]
| [
"J Frémaux \nSection de Meudon\nLUTH\nUMR 8102\nCNRS\nUniversité Denis Diderot\nObservatoire de Paris\n92195Meudon CedexFrance\n",
"F Kupka \nMax-Planck-Institute for Astrophysics\nKarl-Schwarzschild Str. 185741GarchingGermany\n",
"C Boisson \nSection de Meudon\nLUTH\nUMR 8102\nCNRS\nUniversité Denis Diderot\nObservatoire de Paris\n92195Meudon CedexFrance\n",
"M Joly \nSection de Meudon\nLUTH\nUMR 8102\nCNRS\nUniversité Denis Diderot\nObservatoire de Paris\n92195Meudon CedexFrance\n",
"V Tsymbal \nTavrian National University\nYaltinskaya 4330000Simferopol, CrimeaUkraine\n\nInstitute for Astronomy\nUniversity of Vienna\nTürkenschanzstraße 17A-1180ViennaAustria\n"
]
| [
"Section de Meudon\nLUTH\nUMR 8102\nCNRS\nUniversité Denis Diderot\nObservatoire de Paris\n92195Meudon CedexFrance",
"Max-Planck-Institute for Astrophysics\nKarl-Schwarzschild Str. 185741GarchingGermany",
"Section de Meudon\nLUTH\nUMR 8102\nCNRS\nUniversité Denis Diderot\nObservatoire de Paris\n92195Meudon CedexFrance",
"Section de Meudon\nLUTH\nUMR 8102\nCNRS\nUniversité Denis Diderot\nObservatoire de Paris\n92195Meudon CedexFrance",
"Tavrian National University\nYaltinskaya 4330000Simferopol, CrimeaUkraine",
"Institute for Astronomy\nUniversity of Vienna\nTürkenschanzstraße 17A-1180ViennaAustria"
]
| []
| Context. For applications in population synthesis, libraries of theoretical stellar spectra are often considered an alternative to template libraries of observed spectra, because they allow a complete sampling of stellar parameters. Most attention in published theoretical spectral libraries has been devoted to the visual wavelength range. Aims. We present a detailed comparison of theoretical spectra in the range 1.57-1.67µm, for spectral types from A to early M and for giants and dwarf stars, with observed stellar spectra at resolutions around 3000, which would be sufficient to disentangle the different groups of late type stars. Methods. We have selected the NeMo grids of stellar atmospheres to perform such a comparison. Results. We first demonstrate that after combining atomic and molecular line lists, it is possible to match observed spectral flux distributions with theoretical ones very well for almost the entire parameter range covered by the NeMo grids at moderate resolution in the visual range. In the infrared range, although the overall shape of the observed flux distributions is still matched reasonably well, the individual spectral features are reproduced by the theoretical spectra only for stars earlier than mid F type. For later spectral types the differences increase and theoretical spectra of K type stars have systematically weaker line features than those found in observations. These discrepancies are traced back to stem primarily from incomplete data on neutral atomic lines, although some of them are also related to molecules. Conclusions. Libraries of theoretical spectra for A to early M type stars can be successfully used in the visual regions for population synthesis but their application in the infrared is restricted to early and intermediate type stars. Improving atomic data in the near infrared is a key element in making the construction of reliable libraries of stellar spectra in the infrared feasible. | 10.1051/0004-6361:20053699 | [
"https://export.arxiv.org/pdf/astro-ph/0511125v1.pdf"
]
| 12,081,111 | astro-ph/0511125 | 60788a5cde472105e5dcd6a13aa5acd4ae3b75b4 |
Prospects for population synthesis in the H band: NeMo grids of stellar atmospheres compared to observations
J Frémaux
Section de Meudon
LUTH
UMR 8102
CNRS
Université Denis Diderot
Observatoire de Paris
92195Meudon CedexFrance
F Kupka
Max-Planck-Institute for Astrophysics
Karl-Schwarzschild Str. 185741GarchingGermany
C Boisson
Section de Meudon
LUTH
UMR 8102
CNRS
Université Denis Diderot
Observatoire de Paris
92195Meudon CedexFrance
M Joly
Section de Meudon
LUTH
UMR 8102
CNRS
Université Denis Diderot
Observatoire de Paris
92195Meudon CedexFrance
V Tsymbal
Tavrian National University
Yaltinskaya 4330000Simferopol, CrimeaUkraine
Institute for Astronomy
University of Vienna
Türkenschanzstraße 17A-1180ViennaAustria
Prospects for population synthesis in the H band: NeMo grids of stellar atmospheres compared to observations
arXiv:astro-ph/0511125v1, 4 Nov 2005. Astronomy & Astrophysics. Received 27 June 2005 / Accepted 22 October 2005.
Key words: stars: atmospheres - infrared: stars
Context. For applications in population synthesis, libraries of theoretical stellar spectra are often considered an alternative to template libraries of observed spectra, because they allow a complete sampling of stellar parameters. Most attention in published theoretical spectral libraries has been devoted to the visual wavelength range. Aims. We present a detailed comparison of theoretical spectra in the range 1.57-1.67 µm, for spectral types from A to early M and for giant and dwarf stars, with observed stellar spectra at resolutions around 3000, which would be sufficient to disentangle the different groups of late-type stars. Methods. We have selected the NeMo grids of stellar atmospheres to perform such a comparison. Results. We first demonstrate that, after combining atomic and molecular line lists, it is possible to match observed spectral flux distributions very well with theoretical ones for almost the entire parameter range covered by the NeMo grids at moderate resolution in the visual range. In the infrared range, although the overall shape of the observed flux distributions is still matched reasonably well, the individual spectral features are reproduced by the theoretical spectra only for stars earlier than mid-F type. For later spectral types the differences increase, and theoretical spectra of K type stars have systematically weaker line features than those found in observations. These discrepancies are traced back primarily to incomplete data on neutral atomic lines, although some of them are also related to molecules. Conclusions. Libraries of theoretical spectra for A to early M type stars can be successfully used in the visual region for population synthesis, but their application in the infrared is restricted to early and intermediate type stars. Improving atomic data in the near infrared is a key element in making the construction of reliable libraries of stellar spectra in the infrared feasible.
Introduction
Due to the rapidly increasing spectral resolution of galaxy surveys (e.g. the Sloan Digital Sky Survey), modelling any galaxy with the spectral synthesis technique requires a stellar library of high spectral resolution.

Over the last years, much progress has been made in synthesis models (e.g. Pelat 1997; Leitherer et al. 1999; Bruzual & Charlot 2003; Le Borgne et al. 2004; Cid Fernandes et al. 2005). Moderate- to high-spectral-resolution observations of stars, aimed at constructing reliable template libraries, have also been performed in the visible domain (e.g. STELIB of Le Borgne et al. 2003; UVES of Bagnulo et al. 2003; CoudeFed of Valdes et al. 2004) and in the IR (e.g. Dallier et al. 1996; Meyer et al. 1998; Ivanov et al. 2004).
However, the major limitation of all these libraries is the sampling of stellar parameters such as metallicity.
One way to avoid such a difficulty is to build libraries of theoretical stellar spectra, for which the desired physical parameters can be chosen freely. In this sense, some extensive libraries of synthetic spectra have appeared recently for the visible range. Murphy & Meiksin (2004), based on Kurucz's ATLAS9 model atmospheres (Kurucz 1993a), have built a high-resolution (λ/∆λ = 250000) stellar library over an extended visible range (3000 to 10000 Å). The convection zone is treated using the Mixing Length Theory (MLT) with the overshooting treatment of Kurucz (cf. Castelli et al. 1997). This library provides spectra for 54 values of effective temperature from 5250 to 50000 K, 11 values of log surface gravity from 0.0 to 5.0 and 19 metallicities from -5.0 to 1.0. They compared their synthetic library with observed spectra (the STELIB library of Le Borgne et al. 2003) for the colours and the Lick indices and found generally good agreement.
Also based on Kurucz's models, but with enhanced molecular line lists and the overshooting option switched off, Munari et al. (2005) present a library of synthetic spectra for a similar wavelength range (2500 to 10500 Å). They use a new grid of ATLAS9 model atmospheres (Castelli & Kurucz 2003). The effective temperature of these spectra lies between 3500 and 47500 K, the log surface gravity between 0.0 and 5.0, and the metallicity between -2.5 and 0.5. These spectra are computed at a resolving power of λ/∆λ = 500000 and then Gaussian convolved to lower resolution (≤ 20000). In contrast with Murphy & Meiksin (2004), the predicted energy level lines are not included in the line lists used to build this library, as Munari et al. (2005) favour the spectroscopic rather than the photometric use of the synthetic spectra. The addition of these "predicted lines" yields a better statistical flux distribution, but individual wavelengths can be wrong by up to 5%.
We would like to point out here that usually only the lower lying energy levels of atoms have been determined in the laboratory, particularly for complex spectra such as those of neutral or singly ionized iron. If only those transitions were taken into account, the atmospheric line blanketing computed from such data would be severely incomplete. A lot of weak lines, possibly unidentified even in the solar spectrum but nevertheless present, would be missed. This would lead to an overestimation of the ultraviolet flux, which in turn would be "compensated" by a lack of flux in the visual (Kurucz 1992). Avoiding this deficiency and improving the temperature structure of the model atmospheres, the spectrophotometric flux distribution, and the photometric colors requires accounting for lines for which one or both energy levels have to be predicted from quantum mechanical calculations. This has been one of the main goals of the ATLAS9 models of Kurucz (1992, 1993a). As the theoretical predictions are accurate to only a few percent, individual wavelengths can be wrong by up to a few 100 Å in the visual. Also, the line oscillator strengths are sufficiently accurate merely in a statistical sense. This still makes it possible to improve the total flux within a wavelength band of a few dozen Å in the visual, but individual features do appear in the wrong part of the spectrum. For spectroscopy at higher resolutions, particularly if the spectrum is rectified or when working within small wavelength bands, adding the predicted energy level lines "pollutes" the theoretical spectrum with extra "noise". This has to be avoided in libraries devoted to automatic fitting procedures or if particular line features are essential to identify a certain spectral type (lines predicted at the wrong wavelength make this more difficult). Hence, with present atomic data, either choice is only a compromise solution.
By combining three different model atmospheres, the high-resolution stellar library of Martins et al. (2005) provides the largest coverage in effective temperature (from 3000 to 50000 K) and log surface gravity (from -0.5 to 5.5). This library, still in the visible wavelength range (3000 to 7000 Å), uses the non-LTE model atmosphere TLUSTY (Hubeny 1988, Hubeny & Lanz 1995, Lanz & Hubeny 2003) for T_eff ≥ 27500 K, Kurucz's ATLAS9 models for 4750 ≤ T_eff ≤ 27000 K, and Phoenix/NextGen models (Allard & Hauschildt 1995, Hauschildt et al. 1999), which use spherical symmetry, for cooler stars with low surface gravity. A comparison with observed spectra (from the STELIB library of Le Borgne et al. 2003 and the Indo-US library of Valdes et al. 2004) shows the good agreement of this theoretical library with observations. Thus, the visible range is now quite well covered by theoretical libraries, for photometric as well as spectroscopic use and for a wide range of physical parameters. All the comparisons with observations show that these theoretical spectra can mimic real stars reasonably well, at least at the spectral resolution where the comparisons were made.
The goal of the present work is to go one step further, exploring the near-infrared range, where few fully calibrated observed libraries and no theoretical libraries are available. This research takes place in a more general framework, which consists in the synthesis of the stellar population of galaxies hosting active galactic nuclei by an inverse method, described in Pelat (1997). The H band provides very good luminosity discriminators for stars later than K0 (cf. Dallier et al. 1996), and the particular region 1.57-1.64 µm of the H band is clear of strong emission lines (except the Brackett lines). It allows us to sample the stellar content of the very nucleus of Seyfert 1 galaxies, in contrast to the visible range, where the strong broad emission lines of the active nucleus contaminate the spectra of the inner galactic region so heavily that too few absorption lines from the stellar component remain to synthesize this region.
The lack of stellar observations at medium resolution in the near-infrared range, especially for super-metallic stars, drove us to work with theoretical spectra. But the behavior of model atmospheres and fluxes is not very well known in the infrared. Decin et al. (2003) have compared several observed stars with theoretical spectra computed with the MARCS models (Gustafsson et al. 1975, Plez et al. 1992) in the range 2.38 to 12 µm for the ISO-SWS calibration, at a resolving power R ≃ 1000. This study points out the difficulties of modelling due to strong molecular opacities and the limited accuracy and completeness of the atomic data in these wavelength ranges.
In this paper, we compute theoretical spectra using the NeMo (Vienna New Model) grid of atmospheres (Heiter et al. 2002, Nendwich et al. 2004), based on the model atmosphere code ATLAS9 by Kurucz (1993a, 1998) and Castelli et al. (1997), combined with the list of absorption lines VALD (Vienna Atomic Line Database, Kupka et al. 1999), supplemented where needed by molecular data collected by one of us (VT). These models are described in Sect. 2. Synthetic stellar spectra are computed with the code SynthV (built by VT), as shown in Sect. 3, using the model atmospheres described in Sect. 2 as input. Several tests on the input parameters of the spectra are done in Sect. 4.
In a first step of applying our synthesis calculations (Sect. 5), we compare a set of observed stellar spectra with their corresponding models in the visible wavelength range (5000 to 9000 Å) to check the range of validity of the NeMo grid, exploring the whole range of physical parameters (effective temperature, surface gravity and metallicity). In a second step, we generate synthetic spectra in the near-infrared range and compare them with observed ones. The results of this comparison are described in Sect. 6, as are tests which demonstrate that the particular choice of model atmospheres can be expected to be less important than the set of line lists used for the computation of spectra. Our conclusions are summarized in Sect. 7.
Description of the model atmospheres
NeMo differs from the original grids of model atmospheres based on ATLAS9 in the treatment of the convective energy transport. It also provides a higher vertical resolution of the atmospheres and a finer grid in effective temperature and surface gravity.

This grid of stellar model atmospheres uses a convection treatment without overshooting. The overshooting prescription was introduced by Kurucz (1993a, 1998) and modified by Castelli et al. (1997). It was supposed to take into account the change in the temperature gradient of the stable atmosphere layers near a convective zone due to the overshooting of gas from that zone into the stellar atmosphere. But this prescription is left aside in the present work because, even if the properties of various numerical simulations are well described and in good agreement when compared to observations of the Sun, models with overshooting are worse than models without it for other stellar types (see Heiter et al. 2002 for a detailed discussion).
The NeMo grids offer a choice among different convection models. One of them is the mixing length theory (MLT), with α = 0.5. The parameter α represents the ratio between the characteristic length (the distance traveled by an element of fluid before its dissolution) and the local pressure scale height. This parameter is subject to discussion: according to comparisons between observed and computed energy distributions for the Sun done by Castelli et al. (1997), α should be set to at least 1.25, but Van't Veer & Mégessier (1996), using the same codes and input data as Castelli et al. (1997) but different observations of the Sun, found that α = 0.5 is required to fit both the Hα and Hβ profiles. Fuhrmann et al. (1993) were the first to notice that a value of 0.5 for the parameter α is needed to reproduce the Balmer line profiles of cool dwarf stars. In addition, this parameter has to span a large domain (from 1 to 3) to reproduce the red giants (Stothers & Chin 1997). The alternative convective models available in the NeMo grids are of "Full Spectrum Turbulence" (FST) type. Introduced by Canuto & Mazzitelli (1991; hereafter CM model) and Canuto, Goldman & Mazzitelli (1996; hereafter CGM model), these models avoid the one-eddy approximation of MLT. In addition, both models were suggested to be used with a scale length different from the usual multiple α of the local pressure scale height (see Heiter et al. 2002 for further details).
The latter models were introduced in NeMo to allow a choice among different treatments of the internal structure of the stars, depending on the aim of the model computation and its underlying assumption of how to describe the convective energy transport within the limitations of a simple convection model (using only algebraic rather than differential equations).
Two levels of vertical resolution are also offered, and hence we can work with either 72 or 288 layers. The MLT models are computed with 72 layers, the CM models with 288, and the CGM ones are computed for both values.

The metallicity of the model atmospheres covers a large range, between -2.0 and +1.0 dex, with 13 different values. This range of metallicity is enough for our purpose. The super metal rich stars, in particular, are represented with five different levels of metallicity (+0.1, +0.2, +0.3, +0.5 and +1.0 dex), reaching the highest possible value for a real star.
NeMo provides model atmospheres for effective temperatures between 4000 K and 10000 K, in successive steps of 200 K; for lower temperatures, the model atmospheres computed with ATLAS9 become inadequate, mainly because of the molecular opacities, which become very important for cool stars. The MARCS6 models (Gustafsson et al. 1975, Plez et al. 1992), which are more dedicated to cool stars, handle this problem with a more complete treatment of molecular opacity.

The available values for the surface gravity (log g) of the stellar atmospheres in the NeMo grid span a range from 2.0 to 5.0 with steps of 0.2. The range is bounded at 2.0 owing to the plane-parallel approximation used in ATLAS9; for lower values of log g, spherically symmetric geometry should be used instead (cf. Hauschildt et al. 1999 and Baraffe et al. 2002).

Other models working with the appropriate geometry, like MARCS6 or Phoenix/NextGen (Allard & Hauschildt 1995, Hauschildt et al. 1999), are necessary for these small values. Indeed, MARCS6, whose main purpose is to model cool stars, also uses the approximation of spherically symmetric geometry to reproduce the supergiants and the cool giant stars, which have a low surface gravity (Plez et al. 1992). NextGen models, like ATLAS9, assume LTE and plane-parallel geometry for dwarf stars, but spherical symmetry is used for low-gravity giant and pre-main sequence stars (log g < 3.5, see Hauschildt et al. 1999).
Contrary to NeMo and MARCS6, NextGen can use a non-LTE model for high temperature stars. But using NLTE does not significantly improve the modelling of our observed stars, as NLTE effects begin to occur only above effective temperatures of 7000 K (Hauschildt et al. 1999), and remain small up to at least 10000 K. Moreover, NextGen does not reproduce the individual lines well enough, owing to the treatment of atomic and molecular lines with a direct opacity sampling method. Indeed, working with opacity distribution functions, as in ATLAS9, would require too many computational resources when using NLTE calculations (Hauschildt et al. 1999). In addition, too few layers are used in the published models to describe the bottom part of the photosphere.

However, NextGen could be an alternative to ATLAS9-type model atmospheres in a next step of our project, for generating spectra of stars with log g below 2.0 (for which spherical symmetry is needed) and/or an effective temperature above 10000 K. Bertone et al. (2004) have compared both ATLAS9 and NextGen models to observations in the visible range along the whole spectral-type sequence. The conclusions of this work are that both models reproduce the spectral energy distribution of F type stars and earlier very well, but this good agreement decreases at lower temperature, especially for K stars, owing to the lack of molecular treatment in those models. ATLAS9 provides a better fit, in general, from B to K type stars, but, as said previously, NextGen is more suitable for M stars, due to the use of spherical geometry for the giants and a more complete molecular line opacity. However, Martins et al. (2005) note that this comparison was made with a previous generation of NextGen models, using for example a mixing length parameter of 1 instead of the value of 2 preferred by hydrodynamic models. This is also true for the ATLAS9 models, as Bertone et al. (2004) did not use the latest versions of ATLAS9, which include new opacity distribution functions (Castelli & Kurucz 2003) computed with more up-to-date solar abundances and molecular contributions than the previous ones.

A new comparison with observations in the visible range is therefore worthwhile. Moreover, an extensive comparison of spectra based on the NeMo grid of model atmospheres for the entire range of A to early M stars, including both dwarfs and giants, has not been done before. We hence begin our comparison with observations in the visual before proceeding to the infrared. The implications of changing abundances or the description of convection at spectral resolutions relevant for studies of galaxies are included as part of the discussion of our comparisons.
Obtaining a theoretical spectrum
First of all, we downloaded the model atmospheres corresponding to the desired stellar types from the NeMo website (http://ams.astro.univie.ac.at/nemo/). The models are classified according to the convection model (CM, CGM or MLT) and to the number of layers representing the atmosphere. The CGM model with 72 layers is detailed enough for our purpose. Models with 288 layers are used only for specific applications, like the calculation of the convective scale length in stellar interior models (Heiter et al. 2002). Reduced to the medium resolution of our observations, computations of a model with 72 and with 288 layers give similar spectra.

The next parameter to determine is the microturbulence velocity. For cool dwarf stars, this velocity is low, about 0-1 km/s, but it increases towards higher luminosities, reaching values as high as 5 km/s (Gray 1992). A few stars do not follow this rule: hot stars, like B and O types, have a negligible microturbulence velocity, and some specific types of A stars can either have a null velocity (Ap stars, for which magnetic field effects are important instead) or a velocity of 4 km/s (Am stars). As the microturbulence velocity has only a small influence on the overall shape of the spectra and on the line profiles at our spectral resolution, we can use a common value of 2 km/s for comparison with all our stellar spectra, composed of A to early-M type dwarf and F to K type giant stars, as 2 km/s is a good compromise for these stars (Gray 1992).

Then, the three main physical parameters of the star have to be chosen. The metallicity, the effective temperature and the surface gravity of the theoretical stellar spectrum should correspond as closely as possible to the observed star to be compared. Therefore, once the stellar characteristics are determined, the nearest set of parameters (T, log g, Z) in the NeMo grid is taken.
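A minimal sketch of this nearest-node selection (our illustration; the helper and the exact metallicity values are assumptions — the text only fixes the parameter ranges and the 200 K and 0.2 dex grid steps):

```python
# Pick the nearest NeMo grid node to a star's estimated parameters (sketch).
def nearest(value, candidates):
    return min(candidates, key=lambda c: abs(c - value))

teff_grid = range(4000, 10001, 200)                       # K, steps of 200 K
logg_grid = [round(2.0 + 0.2 * i, 1) for i in range(16)]  # 2.0 ... 5.0
z_grid = [-2.0, -1.0, -0.5, -0.3, -0.2, -0.1, 0.0,        # dex; illustrative
          0.1, 0.2, 0.3, 0.5, 1.0]                        # subset of the 13 values

star = {"teff": 5777, "logg": 4.44, "z": 0.0}
node = (nearest(star["teff"], teff_grid),
        nearest(star["logg"], logg_grid),
        nearest(star["z"], z_grid))
print(node)  # (5800, 4.4, 0.0)
```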
The metallicity is taken from Nordstroem et al. (2004), Cayrel de Strobel et al. (2001) and Barbuy & Grenon (1990) when available for individual stars of the sample, and otherwise assumed to be solar. Nordstroem et al. (2004) have determined the effective temperature of most of the dwarf stars in our sample; for the other stars, the effective temperature and the surface gravity are assigned according to their spectral type, using the corresponding values given in Schmidt-Kaler (1982) and Gray (1992).

Once the most suitable model atmosphere is determined, we can generate a theoretical flux-calibrated spectrum with the code SynthV (by VT). This code requires several input parameters, such as the wavelength range for which the spectrum will be computed and the wavelength step. This step has to be small compared to 2.5 Å, since the opacity at each wavelength point includes absorption from all lines within 2.5 Å; too large a wavelength step would give wrong results. We take 0.1 Å in both the visible and the infrared range. Then, we can enter a rotation profile for the star, if needed, and indicate a list of absorption lines to be used. For our work, we take the Vienna Atomic Line Database (VALD), completed by several molecular line lists (C₂, CN, CO, H₂, CH, NH, OH, MgH, SiH, SiO, TiO, H₂O). The line profiles are approximated by a Voigt function. SynthV also provides the possibility to change individual abundances.
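For reference, the Voigt function — a Gaussian convolved with a Lorentzian — can be evaluated with standard library routines; a minimal sketch with arbitrary illustrative widths (the actual broadening parameters depend on the line and atmosphere data):

```python
import numpy as np
from scipy.special import voigt_profile

# Voigt profile with Gaussian sigma and Lorentzian half-width gamma,
# both in Angstroms here; the values are illustrative only.
x = np.linspace(-2.0, 2.0, 401)      # offset from line centre (A)
phi = voigt_profile(x, 0.05, 0.02)   # normalized profile values
```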
The final theoretical spectrum has to be reduced to the same resolution and the same sampling as the observed spectrum for further comparisons. So, the calculated spectrum is Gaussian smoothed and resampled by Fourier interpolation to the same step as the observed spectrum.
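A minimal sketch of this degradation step (our illustration, not the actual pipeline: it assumes a uniform input wavelength grid, fixes the kernel width at the central wavelength, and uses simple linear interpolation in place of the Fourier interpolation mentioned above):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def degrade(wave, flux, resolution, new_step):
    """Gaussian-smooth a high-resolution spectrum to resolving power
    R = lambda/dlambda, then resample it on a coarser uniform grid."""
    step = wave[1] - wave[0]                  # input step, assumed uniform
    fwhm = np.mean(wave) / resolution         # kernel FWHM in wavelength units
    smoothed = gaussian_filter1d(flux, fwhm / 2.3548 / step)
    new_wave = np.arange(wave[0], wave[-1], new_step)
    return new_wave, np.interp(new_wave, wave, smoothed)

# e.g. a 0.1 A-step synthetic spectrum degraded to R ~ 3000 around 1.6 um:
# wave_lr, flux_lr = degrade(wave, flux, 3000, 2.0)
```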
Testing parameters
The determination of the physical parameters of the observed stars is not as accurate as we would like. So it is necessary to investigate the nearest values of T_eff / log g / Z in the grid. The chemical abundances can also be changed; as these abundances are not very well determined, it is crucial to notice how a variation of the abundance of one element modifies the spectrum.

The most important change is caused by the temperature. Indeed, the step of 200 K chosen for the grid computation is still quite large for our purpose, and a deviation of this size can be dramatic for the slope of the spectrum. The coldest stars (M, K and even G type) are the ones most affected by a change of 200 K. Figs. 1 and 2 show the evolution of the spectra with temperature in the visible range for the dwarfs and the giants, respectively, and Fig. 3 shows this evolution for dwarf stars in the infrared range. We can see that for intermediate temperatures there is mainly a difference in the continuum. But for the extreme values, the modification of the spectrum is more dramatic, as it also affects the absorption line features.

A change in log g causes a variation of the line profiles. This parameter is not very well known for observed stars. So it is important to test different values around a first guess.

The metallicity has an influence on the slope of the continuum: increasing the metallicity of a theoretical spectrum has a similar influence on the continuum as decreasing the temperature (see e.g. Ramírez & Meléndez, 2005, for a detailed discussion). A variation of metallicity also has a clear influence on the strength of the absorption lines.

The variation of individual abundances can also cause some changes in the spectra. SynthV uses by default the solar abundances of Anders & Grevesse (1989). But more recent studies, like Kurucz (1993a) or Holweger (2001), give different values for various elements (like He, Fe, O, C, N, ...). These changes can be quite important (up to 0.2 dex for Fe). A simple test to probe the influence of different abundances was made by comparing two spectra assuming the same physical parameters, except for the solar abundances, taken from Holweger (2001) in one case and from Kurucz (1993a) in the other. The result shows only slight differences, and the comparison with our observations cannot determine whether one is better than the other. For our work, we chose the values given by Holweger (2001). In order to fit metallic stars, it is also important to check the influence of the modification of an individual element abundance on the synthetic spectrum. A change in the individual abundances of O and C leads to a modification of the OH and CO line strengths, respectively. Indeed, when the ratio C/O increases, more CO molecules will be formed; on the other hand, when it decreases, more oxygen will be left to form OH molecules (see Decin et al. 2000).

For the hottest stars, it is important to take the rotational velocity into account. At the medium resolution of our observations, a convolution with a Gaussian profile is good enough to reproduce this effect. Hence, we do not need to include a more accurate description of the change in the line profiles caused by the rotational velocity. Consequently, even though SynthV allows one to compute spectra with a rotational velocity profile for the lines, we compute the spectra without rotational velocity in order to save computation time and convolve them afterwards with a Gaussian profile.
Additional tests have been made with a different microturbulence velocity for a very cool star (4 km/s for a M0V-type spectrum in the visible range) and a different number of layers for the convection model (288 instead of 72 for the same CGM model and the same physical parameters). At the resolution of the observed samples, these modifications do not lead to any difference.
Results in the visible range
Although our goal is to explore spectra in the infrared wavelength range, a study of the behavior of the NeMo model atmospheres and spectra in the visible range provides us with a good indication of their reliability as a function of the physical parameters of the stars.
Observations in the visible
18 spectra of observed stars (A to M dwarfs and G to K giants) corresponding to the range of the parameters in the NeMo grid have been compared to theoretical stellar spectra. This sample of observed spectra at a resolving power of R ≃ 600 is taken from the stellar library used by Boisson et al. (2000). One part of these observations comes from the stellar library of Silva & Cornell (1992), made at the KPNO with the MARK III spectrograph; they cover the wavelength range 3500-9000 Å. Most of these spectra are mean values of several stars of nearby spectral type. The names of these stars and the associated mean spectral types as given by Silva & Cornell are listed in Table 1. The remainder of the library, mainly super-metallic stars, was observed by Serote Roos et al. (1996) at the CFHT with the Herzberg spectrograph and at OHP with the Aurelie spectrograph. The spectral range is limited to 5000-9000 Å. The names, spectral types (or associated mean spectral types) and parameters of these stars are listed in Table 1.

The atmospheric bands are removed from the observed spectra before comparison with the theoretical spectra.
Comparisons
When the physical parameters (see Table 1) are known, we took the model with the nearest values; otherwise we used the mean values according to the spectral type as starting points and investigated the nearby values to find the best agreement between the observed and the computed spectra. The theoretical spectra are computed with a wavelength step of 0.1 Å, corresponding to a resolution of 60000 at 6000 Å, then Gaussian smoothed to the resolution of the observed spectra. By Fourier interpolation, we reduce the computed spectra to the same wavelength step as the observations. Spectra are normalized to 1 in the range 5440-5460 Å. When the star has a rotational velocity, we convolve the corresponding computed spectrum with a Gaussian of the same velocity.

Table 1. List of observed stars in the visible, with the values of the parameters taken from the mean values listed by Gray (1992) and Schmidt-Kaler (1982) or from (1) Nordstroem et al. (2004), (2) Cayrel de Strobel et al. (2001) and (3) Barbuy & Grenon (1990). When no information on metallicity is available, a tick mark replaces it in the last column. The quantity ⟨v sin i⟩ is given in km/s.
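The band normalization described above amounts to dividing by the mean flux in the 5440-5460 Å window; a one-function sketch (our illustration, hypothetical helper name):

```python
import numpy as np

def normalize(wave, flux, lo=5440.0, hi=5460.0):
    """Scale a spectrum so its mean flux in [lo, hi] (Angstroms) equals 1."""
    band = (wave >= lo) & (wave <= hi)
    return flux / flux[band].mean()
```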
The agreement between the observed and computed spectra is satisfactory for effective temperatures ranging from 4600 to 9000 K, 9000 K being the highest temperature among the stars composing our sample. For these spectra, the main discrepancies, which consist of differences in the slope of the blue extremity of the continuum, can be explained by the difficulty of obtaining a good flux calibration at the wavelength ends of the observational data (in particular at 5000 Å, where strong MgI, MgH and FeI absorptions are present). Various examples of comparisons for these stars are shown in Figs. 4-7.

Three stars have a temperature below 4400 K, one dwarf and two giants. The M dwarf star (HD 36395) is particular, as it has a very high metallicity; it is the most metallic star of Cayrel de Strobel's catalog ([Fe/H] = +0.6 dex). Indeed, to fit the observations correctly, we need to set the metallicity of the theoretical star as high as possible (Fig. 8), but a metallicity of [M/H] = +1.0 dex is not realistic. Fig. 9 shows that [M/H] = +0.5 dex is not sufficient.
However, recently Woolf & Wallerstein (2005) have found a temperature of 3760 K (instead of 3850 K) and a metallicity of [Fe/H]=+0.2 dex for this star. So the discrepancies may be simply due to the difference in temperature, out of reach for NeMo.
The two others are K giant stars. First, it is worth noting that at this low temperature, a difference of 200 K causes a larger variation of the slope of the continuum than at higher temperatures. Thus, the continuum of the observed K3III star can be correctly fitted neither by a computed spectrum with a temperature of 4400 K nor by one with 4200 K: the first is too blue and the second too red. Another difficulty comes from the Na line at 5894 Å and the MgH band, which are computed too strong compared to the observations. These absorption features are very sensitive to a variation of temperature, and values that are too strong mean that the temperature is too low; but increasing the temperature would lead to a continuum that is too blue. This is particularly clear for the K4III star, which lies on the boundary of the NeMo grid for temperature and gravity (T=4000 K, log g=2.0), as can be seen in Fig. 10. This star comes from the stellar library of Silva & Cornell (1992), whose original spectra extend down to 3500 Å. The agreement for the slope of the continuum is satisfactory, but the computed NaI line, the MgH band, as well as other lines and molecular bands such as CaII, CN and the G band, are by far too strong. If we set the temperature at 4200 K, which is still reasonable for this kind of star, the agreement for the lines and molecular bands would be better, but the slope of the observed continuum would be too red.
The hypothesis of plane-parallel geometry of the model begins to become unrealistic (low log gravity), and the molecular opacities, which are not sufficiently taken into account in ATLAS9, become important for these cool stars.
Results in the infrared range
We can immediately say that there are, by far, more discrepancies between the computed and the observed spectra in the infrared than in the visible range, even at our medium resolution.
Observations in the infrared
The observed spectra for this wavelength range come from Meyer et al. (1998) and Boisson et al. (2002). Meyer's spectra are observations at a resolving power of R ≃ 3000 at 1.6 µm with the KPNO Mayall 4 m Fourier Transform Spectrometer; these spectra have to be calibrated in flux. The stars from Boisson et al. (2002) come from the ISAAC spectrograph, mounted on the VLT telescope, at a resolving power of R ≃ 3300 at 1.6 µm. From these samples, we selected 23 stars matching the available parameters of NeMo (A to M dwarfs and F to K giants), listed in Table 2.

Table 2. List of observed stars in the infrared, with the values of the parameters taken from the mean values listed by Gray (1992) and Schmidt-Kaler (1982) or from (1) Nordström et al. (2004), (2) Cayrel de Strobel et al. (2001) and (3) Barbuy & Grenon (1990). In the last column, when no information on metallicity is available, a tick mark replaces it.

Figure 11. The observed spectrum (in black) is from a F8.5V type star (HD 98231), the theoretical one (in grey) is computed with the following parameters: T=5800K, log g=4.6, [M/H]=-0.3dex.
Comparisons
Around 1.6 µm, hot stars are dominated by the Brackett lines, and a good determination of the rotational velocity of these stars, which broadens the lines, is very important to obtain the best possible match between the computation and the observation. In our wavelength range, the Brackett lines at 1.588, 1.611 and 1.641 µm are nearly the only features of the observed spectra. They are well fitted by the theoretical spectra. When the temperature decreases, some atomic features appear and the comparison between observed and computed spectra deteriorates. Indeed, for the F6V star, quite a large number of metallic lines, visible in the observed spectra, are absent or too weak in the computed spectra, and this trend continues with the F8.5V (Fig. 11). The continuum of these observed spectra is very well reproduced, but this is not the case for the lines: most of the metallic lines are computed too weak.
In addition, for the F8.5V star, the Brackett lines at 1.611 and 1.641 µm are computed too strong: they have almost disappeared in the observed spectrum but are still strong in the computation.
These Brackett lines are also present in all the theoretical G stars, which is not always the case for observed stars, as seen in Fig. 12. The behaviour of the computed Brackett lines with temperature is shown in Fig. 3; the Brackett lines are still present in theoretical spectra for temperatures as low as 5200K.
Then, as the temperature decreases, the Brackett lines become fainter, as expected, and atomic lines (FeI, SiI, MgI and CaI) become stronger and stronger in the observed as well as in the theoretical spectra. But the agreement between the computed and the observed stars becomes worse. From Fig. 12 to 15, the residuals between observed and theoretical spectra show that several absorption lines are missing in the theoretical stars, iron lines for the most part. The model atmosphere is not the reason, because the continuum shape is very good and several lines match perfectly; rather, the line list needs to be improved.
In the infrared range, the lack of several metallic and molecular lines causes the discrepancies, with enhanced differences at low temperature due to the greater strength of the lines for the coolest stars (Fig. 15). Fig. 13 and 14 present two similar stars (K2V and K4V, respectively); the first is from Meyer and the second is a VLT observation at higher resolution. Both comparisons show this lack of absorption lines, with more details visible in Fig. 14. The comparison for the coolest dwarf star of this sample, a M0V, is not so bad for such a low temperature. The continuum is good in spite of the limitations of the model (Fig. 16). We notice, however, that contrary to the previous spectra, the absorption lines are computed too strong in the theoretical spectrum, as seen in the residual. This is probably due to the limit of validity of the model atmospheres, as already seen in the visible range.
In order to investigate further which lines are missing in the computations, we have compared the high-resolution spectrum of the well-known K1III star Arcturus (Hinkle et al. 1995) to a theoretical spectrum computed with the parameters of this star (T_eff = 4400 K, log g = 2.0, [M/H]=-0.2, v sin i = 3.5 km/s) and point out the discrepancies: Fig. 19 shows a detail of this comparison, and Table 3 lists the missing lines in the whole range. In addition to the lines quoted in Table 3, several other features are computed too weak, in particular OH and CO molecular bands, certainly due to an inaccurate determination of the oscillator strengths, as discussed in Lyubchik et al. (2004).
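The residual used throughout the figures is (theoretical flux - observed flux)/theoretical flux; as a small illustrative sketch (the threshold and all names are ours), missing or too-weak lines such as those in Table 3 can be flagged where the residual is strongly positive:

```python
import numpy as np

def flag_missing_lines(wl, f_obs, f_model, threshold=0.05):
    """Return the wavelengths where the model is markedly brighter than the
    observation, i.e. where an absorption feature is missing or computed
    too weak in the synthetic spectrum."""
    residual = (f_model - f_obs) / f_model
    return wl[residual > threshold]
```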
This study, based on the NeMo grids of atmospheres, remains valid for the entire family of ATLAS models. Indeed, as shown in Fig. 17, two theoretical spectra computed for the same physical parameters, with the NeMo grid and with the ATLAS9 models with the overshooting prescription (Kurucz 1993a, 1998; Castelli et al. 1997, respectively), are very similar at our spectral resolution. The discrepancies between the two theoretical spectra are very faint compared to the discrepancies between the models and the observed spectra. The same ATLAS9 models, but without overshooting (NOVER models, Castelli et al. 1997), present even fewer differences with the NeMo spectra; in particular, the slight discrepancy found for the Brackett lines disappears. These lines are more sensitive than others to the fact that the Kurucz overshooting prescription changes the temperatures at Rosseland optical depths of 0.1 to 0.5.
Conclusions
In spite of some discrepancies, the comparisons between observed and theoretical spectra in the visible range suggest a reasonable agreement, even at the limits of the parameter range of NeMo. However, we have to be careful when computing a model near the lower limit in temperature.
The spectra modelled with NeMo can be used to build a theoretical spectral library for A to K dwarf and giant stars in the visible range, but this is not the case for the near-infrared range.
Indeed, in the range 1.57 to 1.67 µm, the computed spectra do not reproduce the observations very well. Despite the good agreement for the overall flux distribution shape, there are many differences in the line features when focusing on details of the spectra. The strength of the infrared absorption lines is usually underestimated in the calculations, and some lines are simply missing (Fe, OH and CO lines being the most problematic ones). As pointed out by Decin et al. (2003) for the MARCS6 models and by Lyubchik et al. (2004) for NextGen models of ultracool dwarfs and a Kurucz model for Arcturus, it was not possible to generate synthetic spectra that reproduce observed spectra in the infrared with the line lists that have been used in constructing the model atmospheres, even at medium resolution. In particular, the oscillator strengths are still not known sufficiently well. We have also performed a comparison of spectra that include the lines from iron-peak elements with predicted energy levels, as published by Kurucz (1998), with the observations of Arcturus discussed above. As is to be expected, such spectra contain more lines, but the inaccuracy of their energy levels frequently places them at the wrong wavelengths, and the overall match hardly improves (the total flux distribution over all wavelength ranges, particularly in the ultraviolet, is closer to observations when including this set of lines, but for the limited wavelength range around the H band the effects are small and sufficiently compensated when setting the zero point of the flux distribution). For the case of Arcturus we also performed a comparison with spectra computed with PHOENIX in LTE (P. Hauschildt, priv. comm. 2005). The resulting spectra for the 1.57 to 1.67 µm range were found to be rather similar to those from NeMo/VALD/SynthV when including the predicted-level lines. One important reason for this is certainly the fact that the atomic line lists for PHOENIX are essentially those of Kurucz (1998). Because the flux distribution of the PHOENIX spectra is similar to the observations as well, at least for the K giants the detailed choice of the model atmosphere code appears to be clearly less important than the choice of atomic line lists (note that PHOENIX uses its own collection of molecular line lists, different from the one we have used here). Considering the uniformity of the deterioration of the match of spectra in the 1.57 to 1.67 µm range when looking at the sequence from F to K stars, we conclude that the insufficient line lists, and in particular the lists of atomic lines, are the main obstacle to a more satisfactory match of the observed spectra of these groups of stars. The modelling in the infrared range needs some further improvements, in particular for the absorption line database, before a theoretical spectral library can be built that can be used with high benefit instead of an observed star library.

Figure 14. The observed spectrum (in black) is from a K4V type star (HD 131977), the theoretical one (in grey) is computed with the following parameters: T=4800K, log g=4.6, solar metallicity.

Figure 15. The observed spectrum (in black) is from a K3III type star (HD 3627), the theoretical one (in grey) is computed with the following parameters: T=4400K, log g=2.0, solar metallicity.
The lack of M stars in spectral libraries would be very prejudicial to the study of stellar populations, as the variations of their strong atomic lines and molecular bands along their evolution from dwarf to supergiant to giant provide very good age discriminators (from 10^6 to 10^10 yrs). M stars peak in a wavelength range which is not much absorbed even in heavily reddened regions, such as young stellar clusters, making them easily detectable. Moreover, they are known to be very important contributors to the stellar populations of galaxies, both in mass and in luminosity, depending on the age of the population.

Figure 16. The observed spectrum (in black) is from a M0V type star (GL 338), the theoretical one (in grey) is computed with the following parameters: T=4000K, log g=4.6, solar metallicity.

Figure 17. Comparison between two theoretical models (ATLAS9 and NeMo) with the following parameters: T=4400K, log g=2.0, solar metallicity. Note that the scale for the residual is enhanced compared to all other figures.
All this makes a good theoretical library of M stars very critical in order to extend incomplete observed libraries.
The prospects of constructing such a library from the upcoming generation of model atmospheres (MARCS, PHOENIX, perhaps future versions of ATLAS) are indeed improving because of the enormous efforts spent in extending the molecular line data and the equation of state. Matching the spectra of the hot end of M stars in the H band will nevertheless require more complete atomic data, although this is less crucial than for K stars. Efforts in this direction are currently being made.
Figure 1. Results of a variation of temperature in the visible range for dwarf stars. From top to bottom: T=6000K to 4600K with a step of 200K between two spectra, arbitrarily shifted by a constant value for the purpose of clarity.

Figure 2. Results of a variation of temperature in the visible range for giant stars. From top to bottom: T=5000K to 4000K with a step of 200K between two spectra, arbitrarily shifted by a constant value for the purpose of clarity. The variation caused by a gap of 0.2 in log g is not very important, but it can improve the comparison.

Figure 3. Results of a variation of temperature in the infrared range for dwarf stars. From top to bottom: T=7200K, T=6400K and T=6000K to 5200K with a step of 200K, arbitrarily shifted by a constant value for the purpose of clarity.

Figure 4. The observed spectrum (in black) is from a F2V type star (HD 88815), the theoretical one (in grey) is computed with the following parameters: T=7200K, log g=4.0, [M/H]=-0.1. Atmospheric bands are removed from the observed spectrum. The residual between the two spectra is (theoretical flux - observed flux)/theoretical flux.

Figure 5. The observed spectrum (in black) is from a G0IV type star (HD 121370), the theoretical one (in grey) is computed with the following parameters: T=6000K, log g=4.4, [M/H]=+0.3.

Figure 6. The observed spectrum (in black) is from a G8III type star (HD 163993), the theoretical one (in grey) is computed with the following parameters: T=5000K, log g=2.8, [M/H]=-0.1.

Figure 7. The observed spectrum (in black) is from a K0V type star (HD 93800), the theoretical one (in grey) is computed with the following parameters: T=5200K, log g=4.4, [M/H]=+0.5.

Figure 8. The observed spectrum (in black) is from a M1V type star (HD 36395), the theoretical one (in grey) is computed with the following parameters: T=4000K, log g=4.6, [M/H]=+1.0. The scale for the residual is twice the scale of the previous figures.

Figure 9. Same as Fig. 8, with [M/H]=+0.5 for the theoretical spectrum.

Figure 10. The observed spectrum (in black) is the mean value of two K4III type stars (HD 154733 and HD 21110), the theoretical one (in grey) is computed with the following parameters: T=4000K, log g=2.0, solar metallicity. The scale of this plot is not the same as for the previous figures.

Figure 12. The observed spectrum (in black) is from a G4V type star (HD 106116), the theoretical one (in grey) is computed with the following parameters: T=5800K, log g=4.4, [M/H]=+0.3dex.

Figure 13. The observed spectrum (in black) is from a K2V type star (HD 22049), the theoretical one (in grey) is computed with the following parameters: T=4800K, log g=4.6, [M/H]=-0.1dex.

Figure 18. Comparison between a high resolution observation (λ/∆λ ≃ 100000) of Arcturus (in black) and the corresponding theoretical star (in grey). The flux is given with the continuum normalized to 1 in order to better show the missing lines.

Figure 19. The same as Fig. 18 but zoomed in on a limited wavelength range.
Table 3. List of the missing lines, according to the comparison between Arcturus and a computed spectrum. The Ni line is not missing as such but shifted by 2 Å in the atomic database.

wavelength (µm)  element    wavelength (µm)  element
1.5764           Fe         1.6208           Fe
1.5893           Fe         1.6214           Fe
1.5895           Fe         1.6231           Fe
1.5913           Fe         1.6285           Fe
1.5939           Fe         1.6316           Fe
1.5954           Fe         1.6319           Fe
1.5968           Fe         1.6362           Ni
1.6007           Fe         1.6394           Fe
1.6008           Fe         1.6440           Fe
1.6041           Fe         1.6450           OH
1.6071           Fe         1.6517           Fe
1.6076           Fe         1.6524           Fe
1.6088           Fe         1.6532           Fe
1.6116           Fe         1.6569           Fe
1.6126           Fe
1.6175           Fe
1.6195           Fe
File VColl molec.lns built by VT from Kurucz's CD-ROMs 15, 24, 25, and 26 (see Kurucz 1993b and 1999); the file is available from V. Tsymbal upon request.
Acknowledgements. FK gratefully acknowledges the hospitality of the Observatoire de Paris-Meudon during his stays as an invited visitor. JF also thanks the nice people of the AMS group at Vienna Observatory for their hospitality and help during part of this work. VT acknowledges the Austrian Fonds zur Förderung der wissenschaftlichen Forschung FwF (P17580) and the BM:BWK (project COROT). This research has made use of the model atmosphere grid NeMo, provided by the Department of Astronomy of the University of Vienna, Austria, and funded by the Austrian FwF (P14984). We are thankful to Peter Hauschildt, who has computed comparison spectra for Arcturus for us, which helped to demonstrate the importance of the line lists used relative to the particular choice of model atmosphere codes.
References

Allard, F., & Hauschildt, P.H. 1995, ApJ, 445, 433
Anders, E., & Grevesse, N. 1989, Geochim. Cosmochim. Acta, 53, 197
Bagnulo, S., Jehin, E., Ledoux, C., Cabanac, R., Melo, C., Gilmozzi, R., & The ESO Paranal Science Operations Team 2003, The Messenger, 114, 10
Baraffe, I., Chabrier, G., Allard, F., & Hauschildt, P.H. 2002, A&A, 382, 563
Barbuy, B., & Grenon, M. 1990, in ESO/CTIO Workshop on Bulges of Galaxies, p. 83
Bertone, E., Buzzoni, A., Chavez, M., & Rodriguez-Merino, L.H. 2004, AJ, 128, 829
Boisson, C., Joly, M., Moultaka, J., Pelat, D., & Serote Roos, M. 2000, A&A, 357, 850
Boisson, C., Coupé, S., Cuby, J.G., Joly, M., & Ward, M.J. 2002, A&A, 396, 489
Bruzual, G., & Charlot, S. 2003, MNRAS, 344, 1000
Canuto, V.M., Goldman, I., & Mazzitelli, I. 1996, ApJ, 473, 550
Canuto, V.M., & Mazzitelli, I. 1991, ApJ, 370, 295
Canuto, V.M., & Mazzitelli, I. 1992, ApJ, 389, 724
Castelli, F., Gratton, R., & Kurucz, R.L. 1997, A&A, 318, 841 (erratum: 1997, A&A, 324, 432)
Castelli, F., & Kurucz, R.L. 2003, in Modelling of Stellar Atmospheres, IAU Symposium vol. 210, eds. N.E. Piskunov, W.W. Weiss & D.F. Gray, p. A20
Cayrel de Strobel, G., Soubiran, C., & Ralite, N. 2001, A&A, 373, 159
Cid Fernandes, R., Mateus, A., Sodré, L., Stasińska, G., & Gomes, J.M. 2005, MNRAS, 358, 363
Dallier, R., Boisson, C., & Joly, M. 1996, A&AS, 116, 239
Decin, L., Vandenbussche, B., Waelkens, C., Eriksson, K., Gustafsson, B., Plez, B., Sauval, A.J., & Hinkel, K. 2003, A&A, 400, 679
Decin, L., Waelkens, C., Eriksson, K., Gustafsson, B., Plez, B., Sauval, A.J., Van Assche, W., & Vandenbussche, B. 2000, A&A, 364, 137
Fuhrmann, K., Axer, M., & Gehren, T. 1993, A&A, 271, 451
Gray, D.F. 1992, The Observation and Analysis of Stellar Photospheres (Cambridge University Press)
Gustafsson, B., Bell, R.A., Eriksson, K., & Nordlund, Å. 1975, A&A, 42, 407
Hauschildt, P.H., Allard, F., Ferguson, J., Baron, E., & Alexander, D.R. 1999, ApJ, 525, 871
Heiter, U., Kupka, F., van't Veer-Menneret, C., Barban, C., Weiss, W.W., Goupil, M.-J., Schmidt, W., Katz, D., & Garrido, R. 2002, A&A, 392, 619
Hinkle, K., Wallace, L., & Livingston, W. 1995, PASP, 107, 1042
Holweger, H. 2001, in SOHO/ACE Workshop "Solar and Galactic Composition", ed. R.F. Wimmer-Schweingruber, AIP Conference Series 598 (Springer, New York), p. 23
Hubeny, I. 1988, Computer Physics Comm., 52, 103
Hubeny, I., & Lanz, T. 1995, ApJ, 439, 875
Ivanov, V.D., Rieke, M.J., Engelbracht, C.W., Alonso-Herrero, A., Rieke, G.H., & Luhman, K.L. 2004, ApJS, 151, 387
Kupka, F., Piskunov, N.E., Ryabchikova, T.A., Stempels, H.C., & Weiss, W.W. 1999, A&AS, 138, 119
Kurucz, R.L. 1992, in The Stellar Population of Galaxies, IAU Symp. 149, eds. B. Barbuy & A. Renzini (Kluwer, Dordrecht), p. 225
Kurucz, R.L. 1993a, ATLAS9 Stellar Atmosphere Programs and 2 km/s Grid, CD-ROM 13, SAO
Kurucz, R.L. 1993b, Atomic Data for Molecules, CD-ROM 15, SAO
Kurucz, R.L. 1998, http://kurucz.harvard.edu/, http://cfaku5.cfa.harvard.edu/
Kurucz, R.L. 1999, Atomic Data for TiO and H2O, CD-ROMs 24, 25 and 26, SAO
Lanz, T., & Hubeny, I. 2003, ApJS, 146, 417
Le Borgne, D., Rocca-Volmerange, B., Prugniel, P., Lançon, A., Fioc, M., & Soubiran, C. 2004, A&A, 425, 881
Le Borgne, J.-F., Bruzual, G., Pelló, R., Lançon, A., Rocca-Volmerange, B., Sanahuja, B., Schaerer, D., Soubiran, C., & Vílchez-Gómez, R. 2003, A&A, 402, 433
Leitherer, C., Schaerer, D., Goldader, J.D., González Delgado, R.M., Robert, C., Kune, D.F., de Mello, D.F., Devost, D., & Heckman, T.M. 1999, ApJS, 123, 3
Lyubchik, Y., Jones, H.R.A., Pavlenko, Y.V., Viti, S., Pickering, J.C., & Blackwell-Whitehead, R. 2004, A&A, 416, 655
Martins, L.P., González Delgado, R.M., Leitherer, C., Cerviño, M., & Hauschildt, P. 2005, MNRAS, 358, 49
Meyer, M.R., Edwards, S., Hinkle, K.H., & Strom, S.E. 1998, ApJ, 508, 397
Moultaka, J., & Pelat, D. 2000, MNRAS, 314, 409
Munari, U., Sordo, R., Castelli, F., & Zwitter, T. 2005, A&A, in press, astro-ph/0502047
Murphy, T., & Meiksin, A. 2004, MNRAS, 351, 1430
NeMo website, 2003, http://ams.astro.univie.ac.at/nemo/
Nendwich, J., Heiter, U., Kupka, F., Nesvacil, N., & Weiss, W.W. 2004, Comm. Asteroseism., 144, 43
Nordström, B., Mayor, M., Andersen, J., Holmberg, J., Pont, F., Jørgensen, B.R., Olsen, E.H., Udry, S., & Mowlavi, N. 2004, A&A, 418, 989
Pelat, D. 1997, MNRAS, 284, 365
Plez, B., Brett, J.M., & Nordlund, Å. 1992, A&A, 256, 551
Ramírez, I., & Meléndez, J. 2005, ApJ, 626, 446
Schmidt-Kaler, Th. 1982, in Landolt-Börnstein: Stars and Star Clusters. Numerical Data and Functional Relationships in Science and Technology, Group IV, Vol. 2b, eds. K. Schaifers & H.H. Voight
Serote Roos, M., Boisson, C., & Joly, M. 1996, A&AS, 117, 93
Silva, D., & Cornell, M. 1992, ApJS, 81, 865
Stothers, R.B., & Chin, C. 1997, ApJ, 478, L103
Valdes, F., Gupta, R., Rose, J.A., Singh, H.P., & Bell, D.J. 2004, ApJS, 152, 251
van't Veer-Menneret, C., & Mégessier, C. 1996, A&A, 309, 879
Woolf, V.M., & Wallerstein, G. 2005, MNRAS, 356, 963
Betting strategies with bounded splits

Tomislav Petrović

arXiv:2212.14279, 29 Dec 2022

Abstract. We show that a pair of Kolmogorov-Loveland betting strategies cannot win on every non-Martin-Löf random sequence if either of the two following conditions is true:

(I) There is an unbounded computable function g such that both betting strategies, when betting on an infinite binary sequence, almost surely, for almost all ℓ, bet on at most ℓ − g(ℓ) positions among the first ℓ positions of the sequence.

(II) There is a sublinear function g such that both betting strategies, when betting on an infinite binary sequence, almost surely, for almost all ℓ, bet on at least ℓ − g(ℓ) positions among the first ℓ positions of the sequence.
Introduction
Whether Kolmogorov-Loveland randomness is equal to Martin-Löf randomness is a well known open question in the field of algorithmic randomness [9], [10].
A Kolmogorov-Loveland betting strategy, starting with a finite amount of capital, makes bets on the values of bits at positions of an infinite binary sequence. The positions can be chosen adaptively, based on the values of bits at positions that the betting strategy has previously bet on.
Once a new position to bet on is chosen, the Kolmogorov-Loveland betting strategy then guesses the value of the bit of the sequence at the position and uses some fraction of its capital as a wager. If the guess was correct, the wager is doubled, and the capital increases by the wagered amount, and if not, the wager is lost, and the capital decreases by the wagered amount. The betting strategy wins on a sequence if, in the succession of bets, the supremum of capital is unbounded.
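As a minimal sketch (the function and names are ours, purely illustrative), a single KL bet updates the capital as follows:

```python
def kl_bet(capital, wager, guessed_bit, actual_bit):
    """One Kolmogorov-Loveland bet: the wager is doubled if the guess
    is correct and lost otherwise."""
    assert 0 <= wager <= capital
    if guessed_bit == actual_bit:
        return capital + wager  # = capital - wager + 2 * wager
    return capital - wager
```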
A sequence is Kolmogorov-Loveland random (KLR) if no partially computable Kolmogorov-Loveland betting strategy (KLBS) wins on the sequence. In fact, we can consider only computable KLBS-es, since for every partially computable KLBS there is a pair of computable KLBS-es that wins on the same sequences [2].
In this paper we'll look at Martin-Löf randomness only with respect to the uniform Lebesgue measure.
A constructive null cover is a computable sequence f_n of computable sequences of finite binary strings such that, for all n, Σ_{s∈f_n} 2^{−|s|} ≤ 2^{−n}, and for every string s in the sequence f_{n+1} there is a prefix of s in f_n.
The set of infinite binary sequences that, for all n, have a prefix in f_n is called an effective nullset. A sequence is Martin-Löf random (MLR) if it is not contained in any effective nullset.
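As an illustrative sketch (the representation is ours), a level of a constructive null cover can be modeled as an enumeration of strings, with the size condition and the failure check as below; note that with only a finite prefix of a sequence in hand, the check is one-sided: it can confirm failure but never refute it.

```python
def level_size(level_strings):
    """Total size of the n-th level; a valid null cover has it at most 2**-n."""
    return sum(2.0 ** -len(s) for s in level_strings)

def fails_level(prefix, level_strings):
    """A sequence fails level n iff some string of f_n prefixes it."""
    return any(prefix.startswith(s) for s in level_strings)
```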
There is another characterization of MLR sequences, in terms of Turing machines. We'll use the monotone Turing machine and monotone complexity as defined in the textbook [3]. A monotone Turing machine is a Turing machine with an infinite input tape, an infinite output tape and an infinite work tape. In each step, a monotone Turing machine either reads the next bit from the input tape, writes the next bit on the output tape, or does a step of computation on the work tape. The monotone complexity of a string s with respect to the monotone Turing machine T is the length of the shortest sequence of bits that needs to be read from the input tape before the string s is written on the output tape. We denote it with Km_T(s).
It can be shown [7], [8] that there is a (universal) monotone Turing machine U such that, for every other monotone Turing machine T and every string s, Km_U(s) lower-bounds Km_T(s) up to an additive constant that depends on T. We call Km_U(s) the monotone complexity of the string s. A sequence is MLR if and only if there is an additive constant such that the monotone complexity of any prefix of the sequence is within the constant of the length of the prefix.
It is easy to see that a single partially computable KLBS cannot win on all non-MLR sequences. Suppose the KLBS is not total computable. Then there is some finite sequence of bets after which, once the last bit was revealed, no further bets are made. The supremum of capital in this sequence of bets is bounded, and the set of infinite binary sequences consistent with the outcomes of the bets contains non-MLR sequences, for instance the ones that end in infinitely many zeros.
On the other hand, suppose the KLBS is total computable. We can find an infinite sequence of bets in which the betting strategy never increases its capital (to wit, the losing streak). The set of infinite binary sequences consistent with this sequence of betting outcomes is an effective null set.
However, it is still not known whether there is even a pair of partially computable KLBS-es such that on every non-MLR sequence at least one of them wins [10].
Related results
In [1], Kolmogorov-Loveland randomness was defined, however, there it is called unpredictability. They show a pair of KLBS-es that, given an unbounded computable function g, win on any sequence that, for every ℓ, has a prefix of length ℓ whose monotone complexity is upper bounded by ℓ − g(ℓ). (Theorem 9.1 in [1])
In [2] a pair of KL betting strategies is defined, that, given a computable partition of positions into two infinite sets, wins on any sequence whose subsequence on positions in the first set, and sub-sequence on positions in the second set, are both non-MLR. (12 Theorem in [2])
In [10] a more restrictive kind of betting strategy than the KLBS is proposed: the betting strategy has to bet on positions non-adaptively, in some computable order. The proofs in [4], [5], [2] on permutation randomness, and the methods used in [6], are similar to the ones we'll use to prove item (II) from the abstract (theorem 3).
Main result
We'll prove item (I) from the abstract, for a more general kind of betting strategy than KLBS.
To make a bet, a KLBS chooses a position, and places a wager on the value of the bit at that position. The outcome of the bet reveals the bit value the sequence has at the chosen position, and the capital is updated. Note that choosing a position splits a set of sequences into two clopen sets, the ones that have 0 at the chosen position, and the ones that have 1.
A general betting strategy splits a set of sequences into any two clopen sets v0, v1 and places a wager on one of those sets. The outcome of the bet reveals which of the two sets the sequence is in, and the capital is updated. Unlike for a KLBS, the sets v0, v1 might have unequal uniform Lebesgue measure (size), and in case the general betting strategy correctly guesses which set the sequence is in, the wagered amount might be more (or less) than doubled, depending on the size of the set on which the wager was placed. Namely, let v be the set of sequences consistent with the outcomes of the previous bets, and let c be the capital after the last one. We say that v is a part of the betting strategy, and c is its capital.

The betting strategy makes a new bet by splitting v into v0, v1 and placing a wager w ≤ c on one of them (say v0). If the sequence is in part v1, the wager is lost, and the capital is c − w. If the sequence is in v0, the wager is increased by dividing it by the size of v0 conditional on v. The capital of part v0 is then c − w + w/λ(v0|v).
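A sketch of this capital update (all names are ours; λ(v0|v) is passed in as a number):

```python
def general_bet(capital, wager, size_v0_given_v, sequence_in_v0):
    """Bet `wager` on the part v0, whose size conditional on v is
    `size_v0_given_v`. A correct guess multiplies the wager by
    1 / lambda(v0|v); for a half-split (1/2) this is the usual doubling."""
    assert 0 <= wager <= capital and 0 < size_v0_given_v < 1
    if sequence_in_v0:
        return capital - wager + wager / size_v0_given_v
    return capital - wager
```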
The betting strategy makes a new bet by splitting v into v 0 , v 1 and places a wager w ≤ c on one of them (say v 0 ). If the sequence is in part v 1 , the wager is lost, and its capital is c − w. If the sequence is in v 0 , the wager is increased by dividing it with the size of v 0 , conditional on v. The capital of part v 0 is then c − w + 1 λ(v0|v) w. For a sequence of bets, and positions [1, ℓ], we will look at the number of times the bets have split the set of sequences in a way that depends on the first ℓ bits of the sequence. More precisely, for a "tail" sequence ρ, we will count the number of times a bet was made that splits a part into two parts so that both contain a sequence that, after first ℓ positions, ends with ρ. Let this number be N , and let σ be an infinite binary sequence that is consistent with the sequence of betting outcomes, and, after first ℓ positions ends with the tail ρ. We'll say that σ was N times [1, ℓ]-split on by the betting strategy. Clearly, N ≤ 2 ℓ .
We'll say that a betting strategy has splits upper bounded by a function f if, almost surely, a sequence is [1, ℓ]-split on at most f(ℓ) many times, for almost all ℓ.
We show that if a pair of general betting strategies has splits upper bounded by ℓ − log ℓ − g(ℓ), where g is computable and unbounded, then there is a non-MLR sequence on which neither betting strategy wins (theorem 1).
Note that a KLBS has the following property: for a finite set of positions I and a sequence of binary values ρ, if a part v is split into two parts v0, v1 such that both contain sequences that have values ρ on the positions outside I, then the number of such sequences in v0 and in v1 is the same. We'll say that betting strategies with this property are half-splitting. Clearly, a sequence can be [1, ℓ]-split on at most ℓ many times by a half-splitting betting strategy.
We show that if a pair of half-splitting betting strategies has splits upper bounded by ℓ − g(ℓ), where g is computable and unbounded, then there is a non-MLR sequence on which neither betting strategy wins (theorem 2).
Notation
We denote the set of (finite binary) strings with {0, 1} * , the empty string with Λ, the strings of length ℓ with {0, 1} ℓ , the set of infinite binary sequences with {0, 1} ∞ . When there is no confusion, we abbreviate infinite binary sequence to sequence. We denote that s is a prefix of a string (or a sequence) s ′ with s ≺ s ′ . The length of a string s is denoted with |s|. The set of all sequences prefixed by a string s is denoted with [s], such sets are called basic sets. A union of possibly infinitely many basic sets is called an open set. The complement of an open set is called a closed set. A union of finitely many basic sets is called a clopen set.
We'll use size as a shorthand for the uniform Lebesgue measure. We'll denote with λ the size of a set of sequences (e.g. λ([s]) = 2^{−|s|}).
The set of natural numbers is denoted with N, N k denotes the set of sequences of natural numbers of length k, and N * the set of finite sequences of natural numbers. The empty sequence of natural numbers is denoted with .
To stress that we're taking a union of disjoint sets, we use ⊔ instead of ∪.
Definition 2.1. A map from a subset of N to binary values is called a restriction. For a restriction with domain I ⊆ N, we'll say that the restriction restricts I, and that I are positions restricted by the restriction. A restriction with an empty domain is called the empty restriction. A restriction with a finite domain is a finite restriction. We denote the set of all restrictions that restrict I with {0, 1} I . We'll say that a sequence is consistent with the restriction r ∈ {0, 1} I when the sequence has the same binary values at positions in I as the restriction r. We denote with [r] the set of sequences consistent with r.
If two restrictions map the same position to different binary values, we'll say they are inconsistent.
Let r1, r2 be two restrictions that restrict two disjoint sets of positions I1, I2. We denote with r1ˆr2 the restriction r that restricts the positions in I1 ⊔ I2 such that r restricts the positions in I1 to the same values as r1, and the positions in I2 to the same values as r2. We say that r is the concatenation of r1 and r2.

Let r1, r2 be two restrictions that restrict two nested sets of positions I1 ⊆ I2. If r2 has the same values as r1 on the positions in I1, we say that r2 is an extension of r1, and we write r1 ≺ r2.
Note that every sequence is consistent with the empty restriction, and that the set of sequences consistent with a finite restriction is clopen.
If two restrictions are inconsistent, then the sets of sequences consistent with the restrictions are disjoint.
A set of sequences consistent with concatenation of two restrictions is the intersection of sequences consistent with one restriction with sequences consistent with the other restriction.
If restriction r ′ is an extension of r, then the set of sequences consistent with r ′ is a subset of sequences consistent with r.
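The three operations on restrictions can be sketched as partial maps from positions to bits (an illustrative class, not from the paper):

```python
class Restriction:
    def __init__(self, mapping):
        self.map = dict(mapping)  # position -> 0 or 1

    def consistent_with(self, other):
        """Restrictions are inconsistent iff they disagree on a common position."""
        common = self.map.keys() & other.map.keys()
        return all(self.map[p] == other.map[p] for p in common)

    def concat(self, other):
        """Concatenation, defined only for disjoint domains."""
        assert not (self.map.keys() & other.map.keys())
        return Restriction({**self.map, **other.map})

    def extends(self, other):
        """self extends other iff self agrees with other on other's domain."""
        return other.map.keys() <= self.map.keys() and \
            all(self.map[p] == other.map[p] for p in other.map)
```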
Definitions
Definition 3.1. A partition refinement of a set Ω is a partial function S that maps (t, x) ∈ N × {0, 1}* to nonempty subsets of Ω. Instead of writing S(t, x) we'll write S_t(x). When defined, we call the set S_t(x) a part of the partition refinement S at time t, with the coordinate x. It has the following properties:

• The empty string is mapped to the whole set Ω, that is, S_0(Λ) = Ω.

• If S is defined on (t, x), then for all t′ > t its value remains the same, that is, S_{t′}(x) = S_t(x).

• If S is defined on (t, x0), then it is also defined on (t − 1, x) and on (t, x1). Furthermore, {S_t(x0), S_t(x1)} is a partition of S_t(x).

If for some (t, x), S_t(x) is defined and S_t(x0), S_t(x1) are undefined, we'll call both the coordinate x and the part S_t(x) terminal at t (w.r.t. the partition refinement S). If S_t(x) is a terminal part at t − 1 but not at t, we say that this part is split at t into the two parts S_t(x0), S_t(x1). For a part S_t(x), we say that the part was split |x| many times.

Let S, S′ be partition refinements of the same set Ω such that, for all coordinates on which S is still undefined at time t, S′ remains undefined at all times, and on all the remaining coordinates S′ is the same as S. We'll say that S′ is S up to time t.

Definition 3.2. A mass function µ is a map from strings to non-negative reals with the property that for any string x, µ(x) = µ(x0) + µ(x1).

Definition 3.3 (Betting Strategy). Let S be a partition refinement of the set of infinite binary sequences whose range consists of clopen subsets of {0, 1}^∞. Let µ be a mass function. The pair BS = (S, µ) is called a betting strategy. A betting strategy is computable if S is computable and, for any coordinate x, if there is some t such that S_t(x) is defined, then µ(x) halts.
For the betting strategy BS and a coordinate x, if there is some t such that the part S_t(x) is defined, the capital of the coordinate x is defined as the value

c(x) = µ(x)/λ(S_t(x)).

The maximum capital ĉ of a coordinate x is the maximum of the capital over all coordinates that prefix x, that is, ĉ(x) = max_{x′ ≼ x} c(x′).

For a sequence σ and a coordinate x such that x is terminal at t and σ ∈ S_t(x), we'll say that the betting strategy up to time t achieved capital ĉ(x) (when betting) on σ.

The limit of the achieved capital up to time t on a sequence σ, as t goes to infinity, is called the achieved capital of the betting strategy on σ. If the achieved capital is unbounded, the betting strategy wins on σ.
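A sketch of this bookkeeping (all names are ours; µ and the part sizes are supplied as callables, and every prefix of a defined coordinate is itself a defined coordinate by the properties of a partition refinement):

```python
def capital(mu, part_size, x):
    """c(x) = mu(x) / lambda(S_t(x)) for a defined coordinate x."""
    return mu(x) / part_size(x)

def max_capital(mu, part_size, x):
    """c-hat(x): the maximum of c over all prefixes of x, including
    the empty string and x itself."""
    return max(capital(mu, part_size, x[:i]) for i in range(len(x) + 1))
```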
Definition 3.4. A Kolmogorov-Loveland (KL) partition refinement is a partition refinement of sequences whose range consists of sets of sequences that are consistent with some restriction.

Definition 3.5. A Kolmogorov-Loveland betting strategy (KLBS) is a betting strategy whose partition refinement is a KL partition refinement.

Remark 3.6. This definition of a KLBS is equivalent to the more standard one given in the introduction.
For a partition refinement of sequences S and the coordinate Λ, S_0(Λ) is the set consistent with the empty restriction. Suppose at some t > 0 the part v = S_0(Λ) is split into parts v0 = S_t(0), v1 = S_t(1). If S is a KL partition refinement, then there are two restrictions r0, r1 such that v0 = [r0] and v1 = [r1]. Both r0 and r1 can restrict only one position, as otherwise λ(v0) + λ(v1) < λ(v) and {v0, v1} would not be a partition of v; and for the same reason r0, r1 must restrict the same position to different values, as otherwise the sets v0, v1 would intersect.
Thus, from the KL partition refinement we can obtain the function that chooses the next position to bet on, and vice versa.

Definition 3.7. Let f be a computable function from N × N to basic sets. Let n ∈ N and let f_n be the union over all k of the basic sets f(n, k).
If for all n the size of the (open) set f_n is less than 2^{−n} and f_{n+1} ⊆ f_n, we call f_n the n-th level of the ML test. If a sequence is in the n-th level of the ML test, we say it fails the test at the n-th level, and if it fails at all levels we say it fails the test. The intersection ⋂_n f_n is called an effective nullset; this is precisely the set of sequences that fail the ML test.
A sequence is non-Martin-Löf random (non-MLR) if it fails some Martin-Löf test (equivalently, the sequence is in some effective nullset).
Finite game
In section 5, from a pair of betting strategies, we construct an open set that has small size and contains a sequence on which neither betting strategy wins, if the betting strategies have splits bounded in a certain way.
The basic sets that get chosen into this open set depend on the bets of the strategy pair. We can view this as a game between a player that constructs an open set by choosing a small number of basic sets and a player that constructs a pair of betting strategies that must observe the bound on splits. We'll show that the first player has a way of winning this game: some of the chosen basic sets will contain sequences on which neither betting strategy achieves capital larger than some fixed threshold.
For the purpose of the analysis of this construction, we will look at the projection of this game onto the set of sequences consistent with a restriction that leaves only finitely many positions unrestricted. We'll say that a restriction r ∈ {0, 1}^J is I-granular at t w.r.t. S if for all s ∈ {0, 1}^I the restriction sˆr is elementary at t w.r.t. S (that is, the set [sˆr] is contained in a single terminal part at t w.r.t. S).
Definition 4.4. Let r be an I-granular restriction at t w.r.t. S and let v be a part of a partition refinement S that is split at t into parts v 0 , v 1 . We'll say that v is split on r at t w.r.t. S when both v 0 and v 1 intersect [r].
Let x be the coordinate of a part v that is defined at t, and let N be the number of coordinates x′ ≺ x such that the part with coordinate x′ is split on r (at some t′ < t). We say that the part v was split on r N times. Let I be some finite set of positions, s a restriction that restricts I, and ρ a restriction that restricts all positions except the ones in I (we denote the set of such restrictions with R_I). Denote the (only) sequence consistent with the restriction sˆρ with σ. Let N be the maximum, over the parts in a partition refinement of a betting strategy that contain σ, of the number of times a part was split on ρ.
We'll say that σ was I-split on N many times by the betting strategy.
Definition 4.8. Suppose that the partition refinement of the betting strategy is such that for every ρ with finitely many unrestricted positions, whenever a part v is split on ρ into parts v 0 , v 1 , both v 0 and v 1 contain the same number of elements from [ρ]. We'll call such betting strategies half-splitting.
Remark 4.9. Kolmogorov-Loveland betting strategies are half-splitting. Moreover, when a part v is split into parts v0, v1, if v is split on ρ ∈ R_I, it is also split on every ρ′ ∈ R_I that has non-empty intersection [ρ′] ∩ v. Furthermore, there is a position p ∈ I such that v0 contains the sequences in v that have 0 at position p and v1 the sequences that have 1 at position p.
Definition 4.10. Let S be a partition refinement of sequences with clopen parts, I a finite set of positions, and r a restriction that is I-granular w.r.t. S at all times. We'll recursively define a partition refinement T of restrictions in {0, 1} I from the partition refinement S.
Set T_0(Λ) = {0, 1}^I. Let u, v be terminal parts at t − 1 w.r.t. T, S such that ⋃_{s∈u} [sˆr] = v ∩ [r] (corresponding parts).

If v does not split at t, both v and u remain terminal parts at t w.r.t. S and T.

If v does split at t (into parts v0, v1), but does not split on r, the part u remains terminal at t w.r.t. T and corresponds to the part in {v0, v1} that contains v ∩ [r].

If v splits on r at t (w.r.t. S), then u splits at t (w.r.t. T) into parts u0, u1 such that u0 corresponds to v0 and u1 to v1.
We'll say that T is S projected on r.

Definition 4.11. An evaluation function is a function ν_t(x) that maps (t, x) ∈ N × {0, 1}* to non-negative reals. It is non-decreasing both in t and in continuations of x. That is, for any t ≤ t′ and x ≼ x′, ν_t(x) ≤ ν_{t′}(x′).
Definition 4.12. A partition evaluation is a pair of a partition refinement of a finite set and an evaluation function.
Definition 4.13. Let BS = (S, µ) be a betting strategy, I a finite set of positions, and r a restriction that is I-granular w.r.t. S at all times. Let T be S projected on r. Let x be a terminal coordinate at t w.r.t. T and x′ a terminal coordinate at t w.r.t. S such that the parts T_t(x) and S_t(x′) correspond. Let ν_t(x) = ĉ(x′). We'll say that the pair (T, ν) is BS projected on r.
Lemma 4.14. Let I be a finite subset of N and ρ a restriction in R_I. A betting strategy projected on ρ is a partition evaluation.
Proof. Let (T, ν) be some betting strategy projected on ρ. The partition refinement T is a partition refinement of the set {0, 1}^I. This set is finite, since it contains 2^|I| restrictions. The evaluation function ν is non-decreasing both in t and in continuations of x because the maximal capital of the betting strategy also has those properties.
In section 5, from a pair of betting strategies BS_A, BS_B and additional parameters (a finite set of positions I, an upper bound N on the number of splits, and a lower bound φ on the number of elements), we will construct a sequence of finite restrictions C, and with it the open set of infinite binary sequences consistent with those restrictions.

For a ρ ∈ R_I, C will have a subsequence of the form C′ = s1ˆr1, . . . , szˆrz, where each s_i restricts I and r1, . . . , rz ≺ ρ. Let the partition evaluations P_A = (T_A, ν_A), P_B = (T_B, ν_B) be the betting strategies BS_A, BS_B projected on ρ.

The sequence C′ will have the property that there are times t0 = 0, t1, . . . , tz such that:
• At t_{i−1}, the restriction s_i is contained in a pair of terminal parts a, b (w.r.t. T_A, T_B) such that both parts have split fewer than N times, have more than φ many elements, and their evaluations are less than 2. We say that s_i is (N, φ)-good at t_{i−1} w.r.t. P_A, P_B.
• Let t_i be the smallest t > t_{i−1} such that either part a or part b splits at t, or the evaluation of either part becomes large (larger than 2) at t. If there is such a t, we say that s_i becomes stale at t_i w.r.t. P_A, P_B. If there is no such t, t_i remains undefined and we say that s_i is forever-fresh after t_{i−1}; in this case, s_iˆr_i is the last restriction in C′.
• If s_i becomes stale at t_i and there are still some (N, φ)-good restrictions in {0, 1}^I at t_i w.r.t. P_A, P_B, one of them is s_{i+1}. If there are no (N, φ)-good restrictions, then s_iˆr_i is the last restriction in C′.
C′ cannot have too many elements, since the restrictions s_i are chosen to be in the intersection of two terminal parts of T_A, T_B that are large (have more than φ many elements).
Because of this, when s_i becomes stale because the evaluation of a part becomes large, then φ other restrictions are also contained in this part with large evaluation. This can happen at most 2^|I|/φ many times per partition evaluation, because then, at some t, all of the terminal parts have large evaluation and every restriction in {0, 1}^I is (N, φ)-bad.

Similarly, when s_i becomes stale because a part was split, then φ other restrictions are also contained in parts whose number of splits was incremented by 1. This can happen at most N·2^|I|/φ many times per partition evaluation. Therefore, for a pair of general betting strategies, C′ can have at most 2(N + 1)·2^|I|/φ elements.
For half-splitting betting strategies, we can improve the upper bound on the number of elements in C′. The size of a part in the projection of a half-splitting betting strategy on ρ ∈ R_I and the number of times the part was split are related: if a part was split n many times, its size is 2^{|I|−n}. Let M be the smaller of N and |I| − log φ. Then M is an upper bound on the number of times a part can be split before all of the restrictions in the part become (N, φ)-bad. The number of parts that were split fewer than M times is at most 2^{M+1}, so at most 2^{|I|−log φ+1} = 2·2^|I|/φ restrictions in C′ become stale because a part in the projection of the half-splitting betting strategy was split. Therefore, for a pair of half-splitting betting strategies, C′ can have at most 2(2 + 1)·2^|I|/φ elements.

Definition 4.15. Let S be a partition refinement of a finite set Ω. Let P = (S, ν) be a partition evaluation. We'll say that an element of Ω is (N, φ)-good at t if the part terminal at t w.r.t. S that contains the element was split fewer than N times, has more than φ many elements, and the evaluation of the part's coordinate is less than 2. If the element is not (N, φ)-good at t, it is (N, φ)-bad at t.

Definition 4.16. Let S be a partition refinement of a finite set Ω. Let P = (S, ν) be a partition evaluation. An element of Ω becomes stale after t at t′ w.r.t. P if t′ is the smallest number larger than t such that a part of S that contains the element is split at t′, or the evaluation of a part that contains the element becomes larger than 2 at t′.
Definition 4.17. Let S_A, S_B be partition refinements of a finite set Ω, and P_A = (S_A, ν_A), P_B = (S_B, ν_B) a pair of partition evaluations. An element of Ω is (N, φ)-good at t w.r.t. P_A, P_B if it is (N, φ)-good at t w.r.t. both P_A and P_B. It is (N, φ)-bad w.r.t. P_A, P_B if it is (N, φ)-bad w.r.t. P_A or w.r.t. P_B.

An element of Ω becomes stale after t at t′ w.r.t. P_A, P_B if it becomes stale after t at t′ w.r.t. P_A or w.r.t. P_B.
If the element never becomes stale after t, we'll say that the element is forever-fresh after t.
Let C = {s1, . . . , sk} be a sequence of elements from Ω. Let t0 = 0, and for every i ∈ [1, k] let t_i be such that s_i becomes stale after t_{i−1} at t_i (w.r.t. P_A, P_B); if s_i is forever-fresh after t_{i−1}, then t_j is undefined for all j ≥ i. We say that C is an (N, φ)-sequence w.r.t. P_A, P_B if for all i ∈ [1, k], t_{i−1} is defined and s_i is (N, φ)-good at t_{i−1}.
Proposition 4.18. Let S_A, S_B be partition refinements of a set Ω with 2^ℓ many elements, and P_A = (S_A, ν_A), P_B = (S_B, ν_B) a pair of partition evaluations. Let C be an (N, φ)-sequence w.r.t. P_A, P_B. The sequence C has fewer than 2(N + 1)·2^ℓ/φ elements.

Furthermore, if S_A, S_B are such that whenever a part is split, it is split into two parts with an equal number of elements, then |C| ≤ 6·2^ℓ/φ.

Proof. Let a_i, b_i be the parts with coordinates x_i, y_i that are terminal at t_{i−1} w.r.t. S_A, S_B and contain s_i, the i-th element of C.
If s_i becomes stale at t_i, then (a) (1) the evaluation ν_A of x_i becomes larger than 2, or (2) a_i is split at t_i; or (b) (1) the evaluation ν_B of y_i becomes larger than 2, or (2) b_i is split at t_i.
Since s_i is (N, φ)-good at t_{i−1}, both a_i and b_i contain more than φ many elements and both of them have split fewer than N times. In case (a)(1), at least φ many elements will be in a part with evaluation larger than 2. Since the evaluation function is non-decreasing both in t and in further splits, for all t ≥ t_i these elements remain (N, φ)-bad and therefore cannot show up later in the sequence C. Thus (a)(1) can happen for at most 2^ℓ/φ many elements in C before all of the elements in Ω become (N, φ)-bad and s_i is the last element in C.
In case (a)(2), for at least φ many elements the number of times the partition S_A has split increases by 1. Once a part has split more than N times, the elements in that part become (N, φ)-bad and cannot show up later in the sequence C. Therefore (a)(2) can happen for at most N·2^ℓ/φ many elements in C, in which case s_i is the last element in C.

Summing (a)(1) and (a)(2), we have that (a) happens for at most (N + 1)·2^ℓ/φ elements in C, and by the same reasoning this is also true for (b). Then, in total, C has fewer than 2(N + 1)·2^ℓ/φ many elements. When S_A, S_B are half-splitting, the analysis of case (a)(1) remains the same; however, we can improve the bound on the size of the sequence C in case (a)(2).
If S_A is such that whenever a part is split, it is split into two parts of equal size, then |a_i| = 2^{ℓ−|x_i|}. Since the element s_i ∈ a_i is (N, φ)-good at t_{i−1} (w.r.t. P_A), we have that x_i is shorter than N (otherwise the part a_i has split more than N times), but it must also be shorter than ℓ − log φ, as otherwise the part a_i has size smaller than φ.

Let M = min(N, ℓ − log φ). If s_i is (N, φ)-good at t_{i−1}, then the length of x_i is at most M. If |x_i| = M and a_i is split at t_i, it is split into two parts a′, a″ with coordinates x_i1, x_i0, and since |x_i1|, |x_i0| are both equal to M + 1, the elements in both parts a′, a″ are (N, φ)-bad at all t ≥ t_i. But the number of coordinates shorter than M is less than 2^{M+1}, and therefore case (a)(2) can happen at most 2^{M+1} times before all of the elements in Ω become (N, φ)-bad and s_i is the last element in the sequence C.

Since M ≤ ℓ − log φ, case (a)(2) happens for at most 2^{ℓ−log φ+1} = 2·2^ℓ/φ many elements in C, and case (a)(1) for at most 2^ℓ/φ. Together, (a) happens for at most 3·2^ℓ/φ elements, and the same is true for (b), which brings us to a total of at most 6·2^ℓ/φ elements in C.
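The counting argument can be checked empirically. Below is an illustrative Python simulation for a single half-splitting partition evaluation, for which the above proof gives the bound 3·2^ℓ/φ; the random event model and all names are ours, not part of the construction:

```python
import random

def simulate_bound(ell=10, N=6, phi=8, steps=3000, seed=0):
    """Greedily build an (N, phi)-sequence against one randomly evolving
    half-splitting partition evaluation of a set with 2**ell elements,
    and check the single-evaluation bound 3 * 2**ell / phi."""
    rng = random.Random(seed)
    parts = [{"elems": list(range(2 ** ell)), "splits": 0, "eval": 1.0}]

    def pick_good():
        for p in parts:
            if p["splits"] < N and len(p["elems"]) > phi and p["eval"] < 2:
                return p, p["elems"][0]
        return None, None

    part, elem = pick_good()
    C = [elem] if elem is not None else []
    for _ in range(steps):
        if elem is None:
            break
        p = rng.choice(parts)
        if rng.random() < 0.3:
            p["eval"] = 2.5          # evaluations only ever increase
            affected = p is part
        elif len(p["elems"]) >= 2:   # half-split the part
            half = len(p["elems"]) // 2
            affected = p is part
            parts.remove(p)
            for chunk in (p["elems"][:half], p["elems"][half:]):
                parts.append({"elems": chunk,
                              "splits": p["splits"] + 1, "eval": p["eval"]})
        else:
            continue
        if affected:                 # the chosen element became stale; re-pick
            part, elem = pick_good()
            if elem is not None:
                C.append(elem)
    assert len(C) <= 3 * 2 ** ell // phi
    return len(C)
```

Running simulate_bound over a few seeds stays well below the bound, as the proof predicts.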
Since M ≤ ℓ − log φ, the case (a)(2) happens for at most 2 ℓ−log φ+1 = 2 2 ℓ φ many elements in C, and the case (a)(1) for at most 2 ℓ φ . Together, (a) happens for at most 3 2 ℓ φ elements, and the same is true for (b), which brings us to a total of at most 6 2 ℓ φ elements in C. We show that, for projections of betting strategies on ρ ∈ R I , if the bound N on the number of splits is smaller than |I| − log φ by h, only a small (2 −h ) fraction of sequences in [ρ] can be contained in parts that are smaller than φ but have split less than N many times. Of course, for half-splitting betting strategies, this fraction is 0.
Lemma 4.19. For any partition refinement of a finite set Ω with 2^ℓ many elements, and any t, the total number of strings that are contained in terminal parts that have strictly less than φ many elements and have split at most ℓ − log φ − h times is less than 2^(ℓ−h).
Proof. Let k = ℓ − log φ − h. The number of terminal coordinates for a partition S t that are shorter than k is at most 2^k. Even if all of these coordinates are mapped by S t to parts with (strictly) less than φ many elements, the total number of elements in such parts is less than 2^k·φ = 2^(ℓ−h).

Lemma 4.20. Let N = ℓ − log φ − h. The number of (N, φ)-bad elements at t w.r.t. P A , P B that are contained in terminal parts with evaluation below 2 and have split less than N times is at most 2^(ℓ−h).
Proof. By lemma 4.19.

If [z] has a large (> 0) subset of sequences on which the betting strategies achieve only small (≤ 2) capital, then, under a certain condition on the splits of the betting strategies, some of the chosen restrictions will also have a large subset of sequences on which the betting strategies achieve small capital.
Basic game
In section 6 we'll iteratively use sets of chosen restrictions to construct levels of a ML-test that contains a sequence on which the betting strategy pair achieves only small capital, showing that there is a non-MLR sequence on which the betting strategy pair doesn't win.
The restrictions are chosen in such a way that for any ρ ∈ R I that extends z, the subset of C consistent with ρ will be of the form {s 1ˆr 1 , . . . , s zˆr z }, where ρ is an extension of all r 1 , . . . , r z , and s 1 , . . . , s z is an (N, φ)-sequence w.r.t. the pair of betting strategies projected on ρ.
Lemma 5.1. Let BS = (S, µ) be a betting strategy and r an I-granular restriction at t w.r.t. S. Any r ′ that extends r and leaves positions in I unrestricted is also I-granular at t w.r.t. S. Furthermore, BS up to time t projected on r is equal to BS up to time t projected on r ′ .
Proof. Since r is I-granular (at t w.r.t. the partition refinement of BS), for all s ∈ {0, 1} I , sˆr is elementary, and if sˆr is elementary so is sˆr ′ . Therefore, r ′ is also I-granular.
Furthermore, for any part v, defined by the partition refinement of the BS up to time t, if the intersection v ∩ [r] is non-empty, then, by definition 4.10, v has corresponding parts u, u′ in the projections of the partition refinement on r, r′, both having the same coordinate and defined at the same time as part v. By definition 4.13, the evaluation functions of the projections of the BS up to time t on r, r′ are the same as well, for all coordinates and times. But then, the projections of BS up to time t on r, r′ are the same.
We'll say that sˆr becomes I-stale after t at t ′ w.r.t. BS A , BS B iff s becomes stale after t at t ′ w.r.t. both projections on r of BS A and BS B up to time t ′ .
To make the construction of the set of chosen sequences exact, for a given betting strategy pair, at time t, we use a uniquely defined set of I-granular restrictions, called the common set of I-granular restrictions at time t.
Proof (of lemma 5.3). A basic set is a set of sequences that extend some string. A clopen set consists of finitely many basic sets. There are finitely many parts terminal at t w.r.t. S. These are clopen sets and therefore there is a minimal length ℓ such that each terminal part can be represented as a union of basic sets that extend strings of length ℓ.

Let K = [1, ℓ], and r a restriction in {0, 1}^K. The set [r] is a set of sequences that extend a string of length ℓ, and is therefore a subset of some terminal part (at t w.r.t. S). That is, r is an elementary restriction (at t w.r.t. S).

We'll now construct a choice function C that, as time progresses, chooses finite restrictions into the chosen set. We index the chosen restrictions with finite sequences of numbers, and for m ∈ N*, when C(m) is defined, we'll say that C(m) is the m-th chosen restriction. We denote the time when the m-th restriction was chosen with T(m).
The parameters for our construction are: a pair of betting strategies BS A , BS B , a finite set of positions I, a restriction z that doesn't restrict positions in I, a bound N on the number of splits, and a bound φ on the number of elements of a part in the finite game.
The first chosen restriction, C( ) (indexed by the empty sequence of numbers), is the concatenation C( ) = sˆr of the restriction s that restricts all positions in I to 0 and the restriction r = z. The restriction C( ) is chosen at time T( ) = 0, when the partition refinements of both betting strategies have only one part defined, the entire set of sequences. It has the following properties, which all of the chosen restrictions will have:

• The restriction C( ) is a concatenation of two restrictions: s, which restricts I, called the head of the chosen restriction, and r, called the tail.

• The tail is I-granular w.r.t. BS A , BS B at the time when the restriction was chosen.

• The head is (N, φ)-good w.r.t. BS A , BS B projected on the tail at the time when the restriction was chosen.

• For the tail, the head is uniquely defined: it is the lexicographically least (N, φ)-good restriction w.r.t. BS A , BS B projected on the tail at the time when the restriction was chosen.

In other words, the chosen restriction C(m) = s mˆr m is (I, (N, φ))-good w.r.t. BS A , BS B at the time when it was chosen, and the choice of head s m is uniquely defined for the tail r m .
If at some t > T(m), for some common I-granular restriction r′ that extends the tail r m , the restriction s mˆr′ becomes I-stale w.r.t. BS A , BS B , and there are still some (N, φ)-good heads w.r.t. BS A , BS B projected on r′, then another restriction is chosen, with the lexicographically least head s′ such that s′ˆr′ is (I, (N, φ))-good w.r.t. BS A , BS B at time t. Suppose that r′ is the n-th such extension of r m so far. Let m′ = m, n and set C(m′) = s′ˆr′ and T(m′) = t.

We'll recursively define a partial map C from finite sequences of natural numbers to restrictions, together with an auxiliary partial map T from N* to N. For m ∈ N*, we'll say that C(m) is the m-th chosen restriction, which was chosen at T(m), in the basic game on I, z against BS A , BS B with parameters N, φ.

C( ) is the extension of restriction z that restricts positions in I to zero, and T( ) = 0.
Suppose that for some argument m ∈ N * the maps C and T are already defined and C(m) = sˆr, where s restricts positions in I, and r is I-granular at T (m) w.r.t. S A , S B . For t ≥ T (m), let F t (m) denote the subset of the common set of I-granular restrictions at t w.r.t. S A , S B , such that r ′ ∈ F t (m) extends r and sˆr ′ becomes I-stale after T (m) at t. We'll say that the restrictions F t (m) fail at t in the basic game on I, z against BS A , BS B .
We'll say that, at t, an I-granular restriction r is viable if there is a restriction s ∈ {0, 1} I such that sˆr is (I, (N, φ))-good, and otherwise we say that r is choiceless. Let G t (m) be the restrictions in F t (m) that are viable, and H t (m) the restrictions in F t (m) that are choiceless.
For the failed restrictions that are still viable, we choose another extension on positions in I that is (I, (N, φ))-good. Let i = Σ_{T(m) < t′ < t} |G t′ (m)|. For n ≤ |G t (m)| we define C(m, (i + n)) to be the restriction sˆr n , where r n is the n-th element of G t (m) and s is the lexicographically least restriction in {0, 1}^I such that sˆr n is (I, (N, φ))-good at t w.r.t. BS A , BS B . We define T(m, (i + n)) = t.
We have that for every n 1 , . . . , n k ∈ N^k for which the choice function is defined, the restrictions C( ) = s 0ˆr 0 , . . . , C(n 1 , . . . , n k ) = s kˆr k are such that the tails extend each other (r 0 ≺ . . . ≺ r k ), and the heads are an (N, φ)-sequence w.r.t. the projections of the betting strategies on any ρ that extends r k and restricts all of the positions except the ones in I. By proposition 4.18, there is a bound, Q, on the number of elements in an (N, φ)-sequence. So the choice function is defined only on sequences of numbers of length at most Q.
Note that for any m ∈ N*, n, n′ ∈ N with n ≠ n′ such that both C(m, n) and C(m, n′) are defined, the tails of C(m, n), C(m, n′) are inconsistent restrictions that extend the tail of C(m). Therefore the sets of sequences consistent with the tails of the chosen restrictions with indices of length k are mutually disjoint subsets of the set of sequences consistent with z. That is, denoting the tail of the m-th chosen restriction with C r (m), ⊔_{m∈N^k ∩ dom C} [C r (m)] ⊆ [z]. Since for every chosen restriction the set of sequences consistent with the restriction has size, conditional on the tail, 2^−|I|, we have that Σ_{m∈N^k ∩ dom C} λ([C(m)]) ≤ 2^−|I|·λ([z]). But then, the sum of sizes of the sets of sequences consistent with the chosen restrictions is less than Q·2^−|I|·λ([z]).

Proof (of lemma 5.8). Let m = n 1 , . . . , n z be such that C(m) is defined. Then C( ) = s 0ˆr 0 , C(n 1 ) = s 1ˆr 1 , . . . , C(n 1 , . . . , n z ) = s zˆr z are restrictions such that r z extends all of the restrictions r 0 , . . . , r z−1 , s iˆr z is (I, (N, φ))-good at t i = T(n 1 , . . . , n i ), and for all i < z, s iˆr z becomes I-stale after t i at t i+1 = T(n 1 , . . . , n i , n i+1 ), w.r.t. BS A , BS B . By lemma 5.1, for every ρ ∈ R I that extends r z , we'll have that s 0 , . . . , s z is an (N, φ)-sequence w.r.t. the pair of betting strategies projected on ρ. By proposition 4.18 it cannot have more than Q elements, where Q = 6·(2^|I|/φ) if the betting strategies are half-splitting, and Q = 2·(2^|I|/φ)·(N + 1) otherwise.

Let F be the union over t of the granular restrictions containing sequences from the m-th chosen restriction that fail at t in the basic game on I, z against the pair of betting strategies. The restrictions in F are mutually inconsistent extensions of r.

Proof. Let J t be the common set of I-granular restrictions at t w.r.t. the pair of partition refinements of the betting strategies. We have that F = ∪_{t≥T(m)} F t (m). By definition, F t (m) is a subset of J t , and the restrictions in J t are mutually inconsistent, since they restrict the same set of positions.

All that is left to prove is that for any t < t′ and r ∈ F t (m), r′ ∈ F t′ (m), the restrictions r, r′ are inconsistent. By definition, r ∈ F t (m) implies that sˆr becomes I-stale after T(m) at t (w.r.t. the pair of betting strategies). Suppose r, r′ are consistent. Since the set of inspected positions grows with t, r restricts a subset of the positions restricted by r′ and therefore, if r, r′ are consistent, r′ is an extension of r. If sˆr becomes I-stale after T(m) at t then also sˆr′ becomes I-stale after T(m) at t. On the other hand, r′ ∈ F t′ (m) implies that sˆr′ becomes I-stale after T(m) at t′ > t, a contradiction. Therefore r, r′ are inconsistent.

Proof (of proposition 5.10). Let C denote the choice function on z, I with parameters N, φ against BS A , BS B . Let C r (m) be the restriction that leaves the positions in I unrestricted and is the same as restriction C(m) on positions outside I.
By definition C r ( ) = z, and by lemma 5.9, for any m ∈ N*,

Σ_{n∈N, (m,n)∈dom C} λ([C r (m, n)]) ≤ λ([C r (m)]).

Therefore, for any k,

Σ_{m∈N^k ∩ dom C} λ([C r (m)]) ≤ λ([z]), implying Σ_{m∈N^k ∩ dom C} λ([C(m)]) ≤ 2^−|I|·λ([z]).

By lemma 5.8, C is defined only on sequences of natural numbers shorter than Q. We have:

Σ_{m∈dom C} λ([C(m)]) = Σ_{k∈N} Σ_{m∈N^k ∩ dom C} λ([C(m)]) = Σ_{k≤Q} Σ_{m∈N^k ∩ dom C} λ([C(m)]) ≤ Q·2^−|I|·λ([z]).
For every ρ that extends z and restricts all of the positions except the ones in I, since (N, φ)-sequences are finite, there is a last chosen restriction sˆr whose tail r is consistent with ρ. The last restriction is such that either the head s is forever-fresh after the time it was chosen (w.r.t. BS A , BS B projected on ρ), or, at some t, for some tail r′ that extends r, with r′ ≺ ρ, the restriction sˆr′ becomes I-stale w.r.t. BS A , BS B and there are no more (I, (N, φ))-good restrictions with the tail r′ that can be chosen (choiceless r′).

We can divide the restrictions in R I that extend z into two subsets: H, the set of restrictions extending a choiceless restriction, and M, the set of restrictions that extend the tail of some chosen restriction whose head remains forever-fresh.
Note that the set of sequences consistent with H is open, and with M closed. The set of sequences consistent with some ρ ∈ H consists of:

• the set of sequences on which either betting strategy achieves high (larger than 2) capital, denoted with W;

• the set of sequences contained in a part of one of the partition refinements of BS A , BS B that was split more than N times on ρ, denoted with L;

• the set of sequences contained in a part v of one of the partition refinements of BS A , BS B such that |v ∩ [ρ]| < φ, denoted with U.

The size of the set of sequences consistent with restrictions in M is then larger than λ([z]) − λ(W) − λ(L) − λ(U \ (L ∪ W)). Assume that there is some θ, ǫ such that λ(W | [z]) ≤ 1 − θ and λ(L) ≤ ǫ·λ([z]). Suppose N ≤ |I| − log φ − h; then by lemma 4.20, λ(U \ (L ∪ W) | [z]) ≤ 2^−h. Then the size of the set of sequences consistent with restrictions in M is larger than λ([z])·(θ − ǫ − 2^−h). Let M(m) denote the restrictions in M consistent with the tail of the m-th chosen restriction and M̄(m) the set of sequences consistent with restrictions in M(m). For at least one k smaller than the maximum length of an (N, φ)-sequence, Q, we have λ(∪_{m∈N^k ∩ dom C} M̄(m)) ≥ λ([z])·(θ − ǫ − 2^−h)/Q.

But then for at least one m ∈ N^k we'll have that the size of M̄(m), conditional on the tail of the m-th chosen restriction, is θ′ ≥ (θ − ǫ − 2^−h)/Q. This implies that for this m, the set of sequences consistent with C(m) and some ρ ∈ M(m) has size, conditional on [C(m)], at least θ′. The betting strategies don't achieve high capital on any sequence in this set, and if θ′ > 0 we have that there is a large subset of sequences with low capital consistent with the m-th chosen restriction.
Recall definition 4.7. By lemma 5.8, for large enough Q and any m ∈ N^Q, C(m) is not defined. By definition, C r ( ) = z, and we have

[z] = ⊔_{m∈dom C} (M̄(m) ⊔ H̄(m))    (1)
Let X t be the subset of sequences consistent with a restriction that is (I, (N, φ))-bad at t w.r.t. BS A , BS B , such that a sequence is in X t only if it was I-split on less than N many times by BS A , BS B up to time t, and only if BS A , BS B , up to time t, achieve capital of less than 2 on the sequence.
Note that this implies that, for some s ∈ {0, 1} I , ρ ∈ R I , the sequence consistent with the restriction sˆρ is in X t , only if it is contained in a part of a partition refinement of one of the betting strategies up to time t, whose corresponding part in the projection on ρ has less than φ many elements.
Let r ∈ H t (m) and let ρ be a restriction in R I that extends r. By lemma 5.1 all of the sequences in [ρ] are consistent with some (I, (N, φ))-bad restriction w.r.t. BS A , BS B up to time t.
By lemma 4.20, there are at most 2^(|I|−h) sequences in [ρ] that are contained in intersections of parts u, v that have split less than N many times w.r.t. BS A , BS B up to time t projected on ρ, and have evaluations ν A (u), ν B (v) less than 2.
Summing (integrating) over ρ ∈ R I that extend r, we have that the size of X t , conditional on [r], is at most 2^−h. This implies that the size, conditional on [r], of the set of sequences contained in parts of the partition refinements of the betting strategies that were split on r more than N many times or have maximal capital larger than 2 is more than 1 − 2^−h.
For distinct m, m′ ∈ N*, if C(m), C(m′) are defined, the sets H̄(m), H̄(m′) are disjoint subsets of [z]. The size, conditional on [z], of the set of sequences on which the betting strategies achieve capital larger than 2 is at most 1 − θ, and we have

Σ_{m∈dom C} λ(H̄(m))·(1 − 2^−h) ≤ (1 − θ)·λ([z]) + ǫ·λ([z]).

This implies Σ_{m∈dom C} λ(H̄(m)) ≤ ((1 − θ) + ǫ + 2^−h)·λ([z]). Then from (1),

Σ_{m∈dom C} λ(M̄(m)) ≥ (θ − ǫ − 2^−h)·λ([z]).
Let Q be the maximal length of an (N, φ)-sequence against a pair of partition evaluations of a set with 2^|I| elements. By lemma 5.8, for any k ≥ Q, C is not defined on any m ∈ N^k. But then for at least one k < Q,

Σ_{m∈N^k ∩ dom C} λ(M̄(m)) ≥ ((θ − ǫ − 2^−h)/Q)·λ([z]).

Since M̄(m) ⊆ [C r (m)] and Σ_{m∈N^k ∩ dom C} λ([C r (m)]) ≤ λ([z]), for at least one k, m ∈ N^k we have λ(M̄(m) | [C r (m)]) ≥ (θ − ǫ − 2^−h)/Q. On the other hand,

λ(∪_{ρ∈M(m)} [C s (m)ˆρ] | [C(m)]) = λ(M̄(m) | [C r (m)]),

and since C s (m) is forever-fresh after T(m) w.r.t. BS A , BS B projected on ρ, the betting strategies achieve capital less than 2 on the sequence consistent with C s (m)ˆρ, and the result follows.
Constructing the Martin-Löf-test
We'll now construct a ML-test for a given pair of computable betting strategies BS A , BS B .
We play the basic games on a sequence of disjoint sets of positions I 1 , I 2 , . . . paired up with parameters (N 1 , φ 1 ), (N 2 , φ 2 ), . . . , called game zones. We'll call the pair (I i , (N i , φ i )) the i-th zone.
The basic game on I i , z against BS A , BS B with parameters N i , φ i we call the basic game for z on the i-th zone.
The n-th level of the ML-test will be the set of sequences consistent with restrictions in the n-th level of the chosen restrictions (to be defined). The 0-th level of the chosen restrictions has only the empty restriction. For n ∈ N, the n-th level consists of restrictions chosen in the basic games for restriction c on the i-th zone, for all c in the (n − 1)-th level of the chosen restrictions, and all i such that c does not restrict any positions in I i .
Let Q i be the upper bound on the length of an (N i , φ i )-sequence for the basic game on the i-th zone (propositions 4.18, 5.8). Suppose the zones are picked in such a way that

Q i ≤ 2^(|I i |−i−1)    (2)

Then by proposition 5.10, the sum of sizes of the sets of sequences chosen in the basic game for c on the i-th zone is at most 2^(−i−1)·λ([c]), and summing the sizes over all zones, at most (1/2)·λ([c]). This implies that the size of the set of sequences consistent with the n-th level of chosen restrictions is less than 2^−n, as it should be for the n-th level of a ML-test.
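Spelled out, the two estimates just used are (a restatement, nothing new):

\[
\sum_{i\ge 1} 2^{-i-1}\lambda([c]) \;=\; \tfrac{1}{2}\lambda([c]),
\]

and if the n-th level of chosen restrictions has total size at most 2^−n, applying this bound to each restriction c of the n-th level gives total size at most (1/2)·2^−n = 2^−(n+1) for the (n + 1)-th level.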
We'll say that the bound on the number of splits in the i-th zone was violated for a sequence σ if the restriction, that restricts all of the positions to the same bits as σ, was I i -split on more than N i times.
Suppose the zones are picked in such a way that

N i ≤ |I i | − log φ i − i    (3)
Let's call a sequence on which the betting strategies achieve only low (≤ 2) capital a sequence with low capital. By proposition 5.11, for large enough i, if a restriction z has a large (> 0) subset of sequences with low capital, and the size of the set of sequences for which the bound on the number of splits in the i-th zone was violated is small enough, then one of the restrictions chosen in the basic game for z on the i-th zone, z′, has a large subset of sequences with low capital. Let ǫ i denote the size of the set of sequences for which the bound on the number of splits in the i-th zone was violated. If

lim_{i→∞} ǫ i = 0    (4)
then for every restriction z, if [z] has a large subset of sequences with low capital, there is some i and a restriction z ′ , chosen in the basic game for z on the i-th zone, such that [z ′ ] has a large subset of sequences with low capital.
Note that the set of sequences with low capital is large and consistent with the empty restriction. By induction, there is some sequence of restrictions z 1 ≺ z 2 ≺ . . . such that z n is in the n-th level of chosen restrictions, and [z n ] has a (large) subset of sequences with low capital. By compactness, the set of sequences consistent with all of the restrictions z 1 , z 2 , . . . contains a sequence with low capital.
We have shown that if the zones satisfy conditions (2) and (3), and BS A , BS B satisfy condition (4), then the set of sequences consistent with the n-th level of chosen restrictions is an n-th level of a ML-test, and there is a sequence which fails every level of this ML-test on which neither betting strategy wins.

Definition 6.1. Let I be a finite set of positions, and (N, φ) a pair of natural numbers. We will call the pair (I, (N, φ)) a zone. A sequence of zones (I 1 , N 1 , φ 1 ), (I 2 , N 2 , φ 2 ), . . . with disjoint sets of positions is called game zones.

Definition 6.2. Let Z = (I 1 , N 1 , φ 1 ), (I 2 , N 2 , φ 2 ), . . . be some game zones. If a betting strategy I i -splits on a sequence more than N i times, we'll say that the bound on the number of splits in the i-th zone was violated by the betting strategy for this sequence.
Let BS A , BS B be a pair of betting strategies. When BS A , BS B are known, we'll say that a sequence has low capital if both BS A , BS B achieve capital on the sequence that is less than or equal to the threshold 2.
Let z be a restriction that restricts a finite set of positions P . If z does not restrict positions in I i , let C i be the choice function on I i , z against BS A , BS B with parameters N i , φ i . If z does restrict positions in I i , let C i be undefined on all inputs m ∈ N * , that is, the set of restrictions chosen by C i is empty.
We'll call the union over i of the restrictions chosen by C i the restrictions chosen against BS A , BS B on zones Z for restriction z. We'll call the set of sequences consistent with those restrictions the sequences chosen against BS A , BS B on zones Z for restriction z.

Lemma 6.4. Let BS A , BS B be a pair of betting strategies, Z = (I 1 , (N 1 , φ 1 )), (I 2 , (N 2 , φ 2 )), . . . some game zones, and z some restriction that restricts a finite set of positions.
Denote with ǫ i the size of the set of sequences for which the bound on the number of splits in the i-th zone was violated by BS A or BS B .
If the size of the set of sequences with low capital consistent with z is larger than 0, and (i) ǫ i goes to zero as i goes to infinity, and
(ii) N i ≤ |I i | − log φ i − i
then there is a restriction z ′ chosen against BS A , BS B on zones Z for restriction z such that the size of the set of sequences with low capital consistent with z ′ is larger than 0.
Proof. Let θ be the size, conditional on [z], of the set of sequences with low capital. We have that θ > 0. For large enough i, the value ǫ i /λ([z]) + 2^−i becomes arbitrarily small, and by proposition 5.11, if for some i, θ − ǫ i /λ([z]) − 2^−i is larger than 0, then the choice function on I i , z against BS A , BS B with parameters N i , φ i chooses a restriction z′ such that the size, conditional on [z′], of the set of sequences with low capital is larger than 0. Equivalently, the set of sequences with low capital consistent with z′ has size larger than 0.

Definition 6.5. Let BS A , BS B be a pair of betting strategies and let Z = (I 1 , (N 1 , φ 1 )), (I 2 , (N 2 , φ 2 )), . . . be some game zones. For n ∈ N, we recursively define sets of restrictions L n . Let L 1 be the set of restrictions chosen against BS A , BS B on zones Z for the empty restriction. Let L n+1 be the union over restrictions z ∈ L n of the set of restrictions chosen against BS A , BS B on zones Z for restriction z.
We'll call the set of sequences that are, for all n, consistent with some restriction in L n the chosen sequences against BS A , BS B on zones Z. We'll call L n the n-th level of chosen restrictions against BS A , BS B on zones Z.

Denote the set of sequences on which the betting strategies achieve capital less than 2 (sequences with low capital) with D. From µ A (Ω) + µ B (Ω) = 1, the size of D is at least 1/2. Since every sequence is consistent with the empty restriction, the size of the set of sequences with low capital consistent with the empty restriction is therefore larger than 0. By lemma 6.4 there is a restriction in the first level of chosen restrictions, z 1 , such that the subset of sequences with low capital consistent with z 1 has size larger than 0. Suppose there are some restrictions z 1 ≺ . . . ≺ z n such that z i is in the i-th level of chosen restrictions and [z n ] has a subset of sequences with low capital of size larger than 0. Again by lemma 6.4, there is an extension of z n in the (n + 1)-th level of the chosen restrictions, z n+1 , with a subset of sequences with low capital of size larger than 0. By induction, there is a sequence of restrictions z 1 ≺ z 2 ≺ . . . such that all of the sequences in ∩ n [z n ] are chosen sequences, and for every n, λ([z n ] ∩ D) > 0. Since D is a closed set, the set [z n ] ∩ D is closed, and (by compactness) the intersection of nonempty closed sets ∩ n [z n ] ∩ D is non-empty; therefore ∩ n [z n ] contains sequences with low capital.
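As a toy illustration of the level construction in definition 6.5, here is a small sketch of our own; the `choose` callback is a hypothetical stand-in for the chosen-restrictions operator, not the paper's construction, and is only assumed to return restrictions whose cylinders have total measure at most half that of the input:

```python
from typing import Callable, Dict, List

Restriction = Dict[int, int]  # position -> restricted bit

def measure(r: Restriction) -> float:
    """Uniform measure of the cylinder [r]: 2^-(number of restricted positions)."""
    return 2.0 ** (-len(r))

def levels(choose: Callable[[Restriction], List[Restriction]],
           n: int) -> List[List[Restriction]]:
    """Levels L_1, ..., L_n: L_{k+1} is the union over z in L_k of choose(z)."""
    current: List[Restriction] = [{}]  # L_0 holds only the empty restriction
    result: List[List[Restriction]] = []
    for _ in range(n):
        nxt: List[Restriction] = []
        for z in current:
            nxt.extend(choose(z))
        result.append(nxt)
        current = nxt
    return result

def toy_choose(z: Restriction) -> List[Restriction]:
    """Stand-in chooser: restrict one fresh position to 0, halving the measure."""
    p = max(z, default=-1) + 1
    return [{**z, p: 0}]

for k, level in enumerate(levels(toy_choose, 4), start=1):
    print(k, sum(measure(r) for r in level))   # 0.5, 0.25, 0.125, 0.0625
```

Any chooser respecting the half-measure bound makes the n-th level have total measure at most 2^−n, which is exactly the Martin-Löf-test property used above.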
Suppose that for a betting strategy there is a computable sequence of positions π = p 1 , p 2 , . . . and an unbounded computable function f such that, for an infinite binary sequence σ, it is almost surely true (w.r.t. λ) that for all but finitely many ℓ, the betting strategy {p 1 , . . . , p ℓ }-splits on σ at most f(ℓ) times. We'll say that the betting strategy on π has splits upper bounded by f.

Definition 6.7. Let BS = (S, µ) be a betting strategy. Let π = p 1 , p 2 , . . . be a computable sequence of distinct positions and f some computable function. Denote with ρ σ ℓ a restriction consistent with a sequence σ that restricts all except the first ℓ positions in π.
Let X be the set of sequences such that for any sequence σ ∈ X for all but finitely many ℓ, for any part of the partition refinement S that contains the sequence, the part was split on ρ σ ℓ at most f (ℓ) times. If X has size 1, we'll say that BS on π has splits upper bounded by f . Let X be the set of sequences such that for any sequence σ ∈ X for all but finitely many ℓ, for any part of the partition refinement S that contains the sequence, the part was split on ρ σ ℓ at least f (ℓ) times. If X has size 1, we'll say that BS on π has splits lower bounded by f .
Game zones for bounded splits
We show that if both betting strategies, on some computable sequence of positions π, have splits upper bounded by f(ℓ) = ℓ − log ℓ − g(ℓ), where g, called the gap function, is unbounded and computable, then we can construct game zones, called the zones on positions π with gap g, that have the properties required by proposition 6.6. This implies there is a non-MLR sequence on which neither betting strategy wins.

Definition 6.8. Let π be a computable sequence of distinct positions, and g some unbounded computable function. We partition the positions π into smallest consecutive intervals I 1 , I 2 , . . . such that

g(Σ_{i=1}^{k} |I i |) ≥ 2k + 2 + Σ_{i=1}^{k−1} |I i |.

For all k, let φ k = 2^(k+2)·|I k | and N k = |I k | − ⌊log φ k ⌋ − k. We call the sequence of zones (I 1 , N 1 , φ 1 ), (I 2 , N 2 , φ 2 ), . . . the zones on positions π with gap g.

Lemma 6.9. Let (I i , N i , φ i ) be the i-th zone on positions π with gap g. The upper bound on the length of an (N i , φ i )-sequence against a pair of partition evaluations of a set with 2^|I i | elements is less than 2^(|I i |−i−1).
Proof. By proposition 4.18, the upper bound on the length of an (N i , φ i )-sequence against a pair of partition evaluations of a set with 2^|I i | elements is

((N i + 1)/φ i )·2^(|I i |+1) = ((|I i | − ⌊log φ i ⌋ − i + 1)/φ i )·2^(|I i |+1) ≤ (|I i |/φ i )·2^(|I i |+1) = (|I i |/(2^(i+2)·|I i |))·2^(|I i |+1) = 2^(|I i |−i−1).
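The interval construction in definition 6.8 is effective; the following is a small illustrative sketch (our own, with hypothetical names, assuming base-2 logarithms) of how the interval sizes and parameters could be computed for a given computable gap function g:

```python
import math
from typing import Callable, List, Tuple

def make_zones(g: Callable[[int], int], num_zones: int
               ) -> List[Tuple[int, int, int]]:
    """Compute (|I_k|, N_k, phi_k) for the zones on positions pi with gap g
    (definition 6.8): the I_k are the smallest consecutive intervals with
    g(sum_{i<=k} |I_i|) >= 2k + 2 + sum_{i<k} |I_i|."""
    zones: List[Tuple[int, int, int]] = []
    total_before = 0                          # sum of |I_i| for i < k
    for k in range(1, num_zones + 1):
        size = 1                              # grow I_k until the condition holds
        while g(total_before + size) < 2 * k + 2 + total_before:
            size += 1
        phi = 2 ** (k + 2) * size             # phi_k = 2^(k+2) * |I_k|
        n_k = size - math.floor(math.log2(phi)) - k  # N_k = |I_k| - floor(log phi_k) - k
        zones.append((size, n_k, phi))
        total_before += size
    return zones

# e.g. with the unbounded computable gap g(l) = floor(sqrt(l)):
print(make_zones(math.isqrt, 3))
```

Since g is unbounded, the inner loop always terminates, so the zones are computable whenever g is.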
Lemma 6.10. Let π be a computable sequence of positions, and BS A , BS B be a pair of betting strategies that have splits on π upper bounded by some f . Let (I 1 , N 1 , φ 1 ), (I 2 , N 2 , φ 2 ), . . . be some game zones where I 1 , I 2 , . . . are consecutive intervals of π.
If f(Σ_{i=1}^{k} |I i |) ≤ N k for all k, then the size of the set of sequences for which the bound on the number of splits in the i-th zone was violated goes to 0 as i goes to infinity.
Proof. Let P k be the union of the intervals of positions of the first k zones and let ℓ k be the size of P k , that is, ℓ k = Σ_{i=1}^{k} |I i |. If a betting strategy P k -splits on a sequence n many times, then it I k -splits on the sequence at most n many times, since I k ⊆ P k . Then from definition 6.7, the size of the set of sequences on which either of the betting strategies P k -splits more than f(ℓ k ) times goes to zero as k goes to infinity. But then also the size of the set of sequences on which either of the betting strategies I k -splits more than N k times goes to zero as k goes to infinity. In other words, the size of the set of sequences for which the bound on the number of splits in the k-th zone was violated goes to 0 as k goes to infinity.

Lemma 6.11. Let (I 1 , N 1 , φ 1 ), (I 2 , N 2 , φ 2 ), . . . be the zones on positions π with gap g. Let BS A , BS B be a pair of betting strategies that have splits on π upper bounded by f(ℓ) = ℓ − log ℓ − g(ℓ).
The size of the set of sequences for which the bound on the number of splits in the i-th zone was violated goes to 0 when i goes to infinity.
Proof. Let ℓ k be the length of the initial segment of positions in π contained in the intervals of positions of the first k zones, that is, ℓ k = Σ_{i=1}^{k} |I i |. Then

ℓ k − log ℓ k − g(ℓ k ) ≤ Σ_{i=1}^{k} |I i | − log(Σ_{i=1}^{k} |I i |) − Σ_{i=1}^{k−1} |I i | − 2k − 2
≤ |I k | − log |I k | − 2k − 2
= |I k | − (k + 2 + log |I k |) − k
= |I k | − log φ k − k ≤ N k .
By definition 6.8, I 1 , I 2 , . . . are consecutive intervals of π and the result follows from lemma 6.10.
Theorem 1. Let BS A = (S A , µ A ), BS B = (S B , µ B ) be a pair of computable betting strategies with µ A (Ω) + µ B (Ω) = 1. Let π be a computable sequence of distinct positions, g an unbounded computable function, and let Z be the zones on positions π with gap g.
If both BS A , BS B on π have splits upper bounded by ℓ − log ℓ − g(ℓ), there is a non-Martin-Löf random sequence on which neither strategy wins.
Proof. By lemmas 6.9 and 6.11 and definition 6.8, the conditions (I), (II), (III) of proposition 6.6 are fulfilled and the result follows.
Half-splitting game zones for upper bounded splits
For a pair of half-splitting betting strategies, the length of an (N, φ)-sequence against a pair of partition evaluations of a set with 2 |I| elements depends only on φ (proposition 4.18). This allows us to get a better bound on the number of splits. Suppose that the betting strategies, on some computable sequence of positions π, have splits upper bounded by f (ℓ) = ℓ − g(ℓ), where g is unbounded and computable, then we can construct game zones, called the half-splitting zones on positions π with gap g, that have the properties required by proposition 6.6. This implies there is a non-MLR sequence on which neither betting strategy wins.
Definition 6.12. Let π be a computable sequence of distinct positions, and g some unbounded computable function. We partition the positions π into smallest consecutive intervals I 1 , I 2 , . . . such that

g(Σ_{i=1}^{k} |I i |) ≥ 2k + 4 + Σ_{i=1}^{k−1} |I i |.

For all k, let φ k = 6·2^(k+1) and N k = |I k | − 2k − 4. We call the sequence of zones (I 1 , N 1 , φ 1 ), (I 2 , N 2 , φ 2 ), . . . the half-splitting zones on positions π with gap g.

Lemma 6.13. Let (I i , N i , φ i ) be the i-th half-splitting zone on positions π with gap g. The upper bound on the length of an (N i , φ i )-sequence against a pair of partition evaluations of a set with 2^|I i | elements is less than 2^(|I i |−i−1).
Proof. By definition 6.12, φ i = 6·2^(i+1), so by proposition 4.18 the bound is 6·(2^|I i |/φ i ) = 6·2^|I i |/(6·2^(i+1)) = 2^(|I i |−i−1).

Lemma 6.14. Let (I k , N k , φ k ) be the k-th half-splitting zone on positions π with gap g. We have N k ≤ |I k | − log φ k − k.

Proof. |I k | − log φ k − k = |I k | − log(6·2^(k+1)) − k = |I k | − log 6 − 2k − 1 ≥ |I k | − 2k − 4 = N k .
Lemma 6.15. Let (I 1 , N 1 , φ 1 ), (I 2 , N 2 , φ 2 ), . . . be the half-splitting zones on positions π with gap g. Let BS A , BS B be a pair of half-splitting betting strategies that have splits on π upper bounded by f(ℓ) = ℓ − g(ℓ).

The size of the set of sequences for which the bound on the number of splits in the i-th zone was violated goes to 0 when i goes to infinity.

… less than q n−1 /2, and by induction is, for all n, less than 2^−n, that is, the size of [z n ].
On the other hand, for all n, the size, conditional on [z n ], of the set of sequences that were, for all i ≤ n, {p i }-split on by both KLBS-es up to time t i , is larger than 1/2. This implies the set [z n ], at all times t, contains a sequence on which the capital of the terminal parts that contain the sequence is below 2, as otherwise q n would be larger than λ([z n ]). This is still not enough to prove the claim that the limit restriction ζ of z 0 ≺ z 1 ≺ . . . contains a sequence on which neither KLBS wins, since we need to look at the maximum over all t of the capital of a terminal part that contains the sequence. To remedy this, we use the "savings trick", Proposition 7.1 ("slow-but-sure winnings" lemma in [6]): for any given betting strategy BS = (S, µ), we can construct BS′ = (S, µ′) that wins on every sequence on which BS wins, and for which the difference between the capital and the maximal capital of a part is bounded.

If the difference between the capital and the maximal capital of a part is bounded, then the set [z n ], at all times t, contains a sequence on which the betting strategies achieve bounded capital. That is, the KLBS pair doesn't win on some sequence in [z n ], for all n. But then [ζ] also contains a sequence on which the KLBS-es do not win. Since [ζ] is an effective nullset, all of the sequences in it are non-MLR.
Theorem 3. If a pair of computable KLBS-es has splits, on some sequence of positions π, lower bounded by ℓ − o(ℓ), then there is a non-MLR sequence on which neither strategy wins.
Proof.
Claim 7.2. There must be a sequence of positions p 1 , p 2 , . . . and times t 1 , t 2 , . . . such that the size of the set of sequences that were {p i }-split on by both KLBS-es up to time t i is at least 1 − 2^−2i.

Proof. Suppose the claim is not true: there is some bound d > 0 such that for every position p, at all times t, the size of the set of sequences that were {p}-split on by both KLBS-es up to time t is at most 1 − d.
Let x denote the sum, over the first ℓ positions p in π, of the sizes of sets of sequences on which at least one KLBS from the pair did not {p}-split on. The sum x is at least dℓ.
On the other hand, let ǫ, c be such that the size of the set of sequences on which either of the betting strategies has split less than ℓ − cℓ many times up to time t is less than ǫ. The size of the set of sequences on which the betting strategies do not {p}-split on, for every p among the first ℓ positions in π is at most ǫ, and for the remaining sequences there are at most 2cℓ positions p such that one of the two strategies did not {p}-split on the sequence. Therefore, the sum x is at most ǫℓ + (1 − ǫ)(2cℓ).
We have that dℓ ≤ ǫℓ + (1 − ǫ)(2cℓ) ≤ (ǫ + 2c)ℓ. This is in contradiction with the assumption of the lemma, that on positions π, for every pair of constants ǫ, c for large enough ℓ, t, the size of the set of sequences on which either of the betting strategies has split less than ℓ − cℓ many times up to time t is less than ǫ.
Note that the sequence of positions p 1 , p 2 , . . . and times t 1 , t 2 , . . . in claim 7.2 can be found effectively. But then, we can also effectively find a subsequence p′ 1 , p′ 2 , . . . and times t′ 1 , t′ 2 , . . . such that t′ 1 < t′ 2 < . . . and the size of the set of sequences that were {p′ i }-split on by both KLBS-es up to time t′ i is at least 1 − 2^−2i. By proposition 7.1, it is enough to consider only betting strategies with the savings property: the difference between the capital and the maximum capital of a part is bounded. Let BS A = (S A , µ A ), BS B = (S B , µ B ) denote the pair of KLBS-es with the savings property, with µ A ({0, 1}^∞) + µ B ({0, 1}^∞) = 1. We will construct a sequence of restrictions z 0 ≺ z 1 ≺ z 2 ≺ . . . such that z n extends z n−1 by restricting the position p′ n , and for each n, [z n ] contains a sequence on which both strategies achieve capital below some threshold.
Let z 0 be the empty restriction, and let V 0 A , V 0 B be the subsets of parts, terminal at t 0 = 0 w.r.t. S A , S B , that are contained in [z 0 ] (namely, both V 0 A , V 0 B contain the only part that is terminal at 0, that is, the set of all sequences). When the sets V n A , V n B are defined, denote with W n the union of the sequences contained in intersections of parts in V n A , V n B , that is, W n = (∪_{u∈V n A} u) ∩ (∪_{v∈V n B} v).

Suppose a finite restriction z n−1 that restricts the first n − 1 positions in π is defined, and λ(W n−1 ) ≥ (1/2)·(1 + 2^−(n−1))·λ([z n−1 ]). The restriction z n restricts position p′ n to the value that minimizes the sum of masses of both strategies assigned to the parts, terminal at t′ n w.r.t. S A , S B , that are contained in [z n ] and are a subset of a part in V n−1 A or V n−1 B , respectively. Denote these subsets of parts with V n A , V n B . Since the value assigned to p′ n by z n minimizes the sum, we have that µ A (V n A ) + µ B (V n B ) ≤ (1/2)·(µ A (V n−1 A ) + µ B (V n−1 B )). The size of the set of sequences that were {p′ n }-split on by both KLBS-es up to time t′ n is at least 1 − 2^−2n, implying that the size of the set of sequences on which at least one strategy, up to time t′ n , did not {p′ n }-split is at most 2^−2n. But then,

λ(W n ) ≥ (1/2)·(λ(W n−1 ) − 2^−2n) ≥ (1/2)·((1/2)·(1 + 2^−(n−1))·2^−(n−1) − 2^−2n) = (1/2)·(2^−n + 2^(−2n+1) − 2^−2n) = (1/2)·(1 + 2^−n)·λ([z n ]).

By induction, for all n, the size of W n is more than half the size of [z n ], and the sum of masses of parts that contain sequences from W n is at most 2^−n·(µ A ({0, 1}^∞) + µ B ({0, 1}^∞)) = 2^−n. Then, for every t there must be a sequence in W n contained in an intersection of parts a, b, terminal at t, with capital less than 2. Since the betting strategies have the savings property, this implies that for all n, [z n ] contains a sequence on which the betting strategies achieve capital below some threshold. The set of sequences on which the betting strategies achieve capital below the threshold is closed, and by compactness, the set ∩ n [z n ] contains a sequence on which the betting strategies achieve capital below the threshold (they do not win on the sequence). The set ∩ n [z n ] is an effective nullset and all of the sequences in it are non-MLR.
Definition 3.5. A Kolmogorov-Loveland betting strategy (KLBS) is a betting strategy that has a KL partition refinement. A sequence is Kolmogorov-Loveland random (KLR) if no computable KLBS wins on it.
Definition 4.1. Let S be a partition refinement of sequences. We say that a restriction r is elementary at t w.r.t. S if there is a terminal part v (at t w.r.t. S) that contains [r].
Remark 4.2. A restriction that restricts all positions is elementary w.r.t. any partition refinement of sequences at all times, because the set of sequences consistent with such a restriction contains a single sequence.
Definition 4.3. Let S be a partition refinement of sequences, and let I, J be two disjoint finite subsets of positions.
Definition 4.5. Let I be a finite subset of N. We denote with R I the set of restrictions that restrict all of the positions except the ones in I, that is, R I = {0, 1}^(N\I).
Remark 4.6. For any subset of positions I, a restriction that restricts all positions except the ones in I is I-granular at all times w.r.t. any partition refinement of sequences.
For a given pair of betting strategies, an interval of positions I, and a finite restriction z that restricts positions not in I, we will construct a set of finite restrictions C, the chosen restrictions. The chosen restrictions will be extensions of z, and the open set ∪_{c∈C} [c] will have a fraction of the size of [z].
Definition 5.2. Let I be some finite set of positions and let BS A = (S A , µ A ), BS B = (S B , µ B ) be a pair of betting strategies. Let s be a restriction that restricts I and r an I-granular restriction at t w.r.t. S A , S B . The restriction sˆr is (I, (N, φ))-good at t w.r.t. BS A , BS B iff s is (N, φ)-good at t w.r.t. both projections on r of BS A and BS B up to time t. The restriction sˆr is (I, (N, φ))-bad (at t w.r.t. BS A , BS B ) if it is not (I, (N, φ))-good.
Lemma 5.3. Let S be a partition refinement whose parts are clopen. For every t there is a set of positions K such that all of the restrictions in {0, 1}^K are elementary at t w.r.t. S.
Lemma 5.4. Let S be a partition refinement of sequences whose parts are clopen. For every t there is a finite set of positions K such that all of the restrictions in {0, 1}^K are elementary at t w.r.t. S, and for every other set of positions L, if {0, 1}^L is a set of elementary restrictions, then K ⊆ L.

Proof. By lemma 5.3, for all S, t there is a finite set of positions such that the restrictions that restrict positions in that set are elementary at t. To prove the lemma, it is enough to show that for two finite sets of positions K, L, if both {0, 1}^K and {0, 1}^L are sets of restrictions elementary (at t, w.r.t. S), then {0, 1}^(K∩L) is also a set of elementary restrictions. Let o ∈ {0, 1}^(K∩L), p ∈ {0, 1}^(K\L) and r ∈ {0, 1}^(L\K). The restrictions oˆp and oˆr are both elementary since they are elements of {0, 1}^K and {0, 1}^L, respectively. The restriction oˆpˆr is also elementary, since it extends the elementary restrictions oˆp and oˆr. For any p ∈ {0, 1}^(K\L), [oˆpˆr] is a subset of the terminal part that contains [oˆr], and for any r ∈ {0, 1}^(L\K), [oˆpˆr] is a subset of the terminal part that contains [oˆp]. We conclude that there is a terminal part that contains [oˆpˆr] for all p ∈ {0, 1}^(K\L), r ∈ {0, 1}^(L\K). But then, [o] is a subset of that terminal part, that is, o is an elementary restriction.

Definition 5.5. Let S be a partition refinement of sequences whose parts are clopen. Let K(S, t) denote the finite set of positions such that all of the restrictions in {0, 1}^(K(S,t)) are elementary at t w.r.t. S, and for every other set of positions L, if {0, 1}^L is a set of elementary restrictions, then K(S, t) ⊆ L. By lemma 5.4, K(S, t) is defined for any S, t. We'll call K(S, t) the positions inspected by S up to time t.

Definition 5.6. Let A, B be a pair of partition refinements of sequences whose parts are clopen. Let K(A, B, t) denote the union of the positions inspected by A and by B up to time t. We'll call K(A, B, t) the set of positions inspected by A, B up to time t.

We'll say that a restriction r ∈ {0, 1}^J is I-granular at t w.r.t. A, B if it is I-granular at t w.r.t. both A and B.

Let K be the set of positions inspected by A, B up to time t. We'll call the set of restrictions {0, 1}^(K\I) the common set of I-granular restrictions at t w.r.t. A, B.
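For intuition, here is a small self-contained sketch (our own toy model, not from the paper) that computes the minimal inspected set K(S, t) of definition 5.5 for a partition of finite strings, using the observation that a position can be left out of K exactly when every terminal part is invariant under flipping that position:

```python
from itertools import product
from typing import FrozenSet, List, Set

def inspected_positions(parts: List[FrozenSet[str]]) -> Set[int]:
    """Minimal K such that fixing the bits at K determines the terminal part.
    `parts` is a partition of {0,1}^l given as frozensets of l-bit strings."""
    l = len(next(iter(parts[0])))
    membership = {s: i for i, part in enumerate(parts) for s in part}

    def flip(s: str, p: int) -> str:
        return s[:p] + ('1' if s[p] == '0' else '0') + s[p + 1:]

    # Position p is inspected iff some part is not invariant under flipping bit p.
    return {p for p in range(l)
            for s in membership
            if membership[s] != membership[flip(s, p)]}

# Example: the parts are determined by the first bit only, so K = {0}.
strings = [''.join(bits) for bits in product('01', repeat=3)]
parts = [frozenset(s for s in strings if s[0] == '0'),
         frozenset(s for s in strings if s[0] == '1')]
print(inspected_positions(parts))  # {0}
```

Invariance under each single flip outside K implies invariance under any combination of such flips, which is why this set is exactly the minimal K of lemma 5.4.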
Definition 5.7. Let BS A = (S A , µ A ), BS B = (S B , µ B ) be a pair of betting strategies. Let I be a finite subset of positions and z a finite restriction such that none of the restricted positions are in I. Let N, φ be natural numbers.
We'll call the mapping C the choice function on z, I against the pair of betting strategies BS A , BS B . The set of restrictions {C(m) : m ∈ dom C} is called the set of restrictions chosen by the choice function C. The (open) set of sequences ∪_{m∈dom C} [C(m)] is called the set of sequences chosen by the choice function C.
Lemma 5.8. The choice function on z, I with parameters N, φ against the pair of betting strategies is undefined on all sequences of numbers longer than the maximal number of elements in an (N, φ)-good sequence against a pair of partition evaluations of a set with 2^|I| elements.
Lemma 5.9. Let C denote the choice function on z, I with parameters N, φ against a pair of betting strategies. Let m ∈ N* be such that C(m) is defined and C(m) = sˆr, where s restricts positions in I.
Proposition 5.10. For any pair of betting strategies BS A , BS B , a set of positions I, and a restriction z that restricts a finite set of positions disjoint from I, the sum of sizes of the sets of sequences consistent with restrictions chosen by the choice function on z, I with parameters N, φ against BS A , BS B is less than Q·2^−|I|·λ([z]), where Q is the bound on the size of an (N, φ)-sequence against a pair of partition evaluations of a set with 2^|I| elements.
Proposition 5.11. Let BS A , BS B be a pair of betting strategies, I a finite set of positions, and z a restriction that restricts a finite set of positions disjoint from I. Let N, φ, h be such that N ≤ |I| − log φ − h. Let θ be the size, conditional on [z], of the set of sequences on which the betting strategies achieve capital less than 2. Let ǫ be the size, conditional on [z], of the set of sequences on which either betting strategy I-splits more than N many times. There is a restriction c chosen by the choice function C on z, I with parameters N, φ against BS A , BS B such that the size, conditional on [c], of the set of sequences on which the betting strategies achieve capital less than 2 is at least (θ − ǫ − 2^−h)/Q, where Q = 2·(2^|I|/φ)·(N + 1).

Proof. Let C s (m) denote the restriction that restricts positions in I, and C r (m) the restriction that restricts positions outside I, such that C(m) = C s (m)ˆC r (m). Let F(m) denote the union over t of the granular restrictions containing sequences from the m-th chosen restriction that fail at t in the basic game on I, z against BS A , BS B ; that is, F(m) = ∪ t F t (m). Let F̄(m) be the set of sequences consistent with restrictions in F(m); that is, F̄(m) = ∪_{r∈F(m)} [r]. Let H(m) denote the choiceless restrictions in F(m), and H̄(m) the sequences consistent with them; that is, H(m) = ∪ t H t (m) and H̄(m) = ∪_{r∈H(m)} [r]. Let G(m) denote the viable restrictions in F(m), and Ḡ(m) the sequences consistent with them; that is, G(m) = ∪ t G t (m) and Ḡ(m) = ∪_{r∈G(m)} [r]. Let M̄(m) = [C r (m)] \ F̄(m); that is, M̄(m) is the set of sequences consistent with the restrictions ρ in R I that extend the tail of the m-th chosen restriction, C r (m), and are such that the head of the m-th chosen restriction, C s (m), is forever-fresh after T(m) w.r.t. BS A , BS B projected on ρ; M(m) denotes this set of restrictions.

By lemma 5.9, the restrictions in F(m) are mutually inconsistent and extend C r (m). By definition, F t (m) = G t (m) ⊔ H t (m) and therefore F(m) = G(m) ⊔ H(m). We have that for all m for which C is defined, [C r (m)] = M̄(m) ⊔ Ḡ(m) ⊔ H̄(m). We have that ∪_{n∈N, (m,n)∈dom C} [C r (m, n)] = Ḡ(m), and we can write [C r (m)] = M̄(m) ⊔ H̄(m) ⊔ ∪_{n∈N, (m,n)∈dom C} [C r (m, n)]. But then,

[C r ( )] = (⊔_{0≤k<Q} ⊔_{m∈N^k ∩ dom C} (M̄(m) ⊔ H̄(m))) ⊔ (⊔_{m∈N^Q ∩ dom C} [C r (m)]).
Lemma 6.3. Let BS A , BS B be a pair of betting strategies and Z = (I 1 , (N 1 , φ 1 )), (I 2 , (N 2 , φ 2 )), . . . be some game zones. Let Q i be the upper bound on the length of an (N i , φ i )-sequence against a pair of partition evaluations of a set with 2^|I i | elements. Let z be a restriction that restricts a finite set of positions.

If Q i ≤ 2^(|I i |−i−1) then the sum of sizes of the sets of sequences consistent with restrictions chosen against BS A , BS B on zones Z for restriction z is less than (1/2)·λ([z]).

Proof. By proposition 5.10, on the i-th zone, the size of the set of chosen sequences is at most Q i ·2^−|I i |·λ([z]) ≤ 2^(−i−1)·λ([z]). Summing over all i we get the result.
Proposition 6.6. Let BS A = (S A , µ A ), BS B = (S B , µ B ) be a pair of computable betting strategies with µ A (Ω) + µ B (Ω) = 1, where Ω = {0, 1}^∞. Let Z = (I 1 , (N 1 , φ 1 )), (I 2 , (N 2 , φ 2 )), . . . be some game zones. Denote with Q i the upper bound on the length of an (N i , φ i )-sequence against a pair of partition evaluations of a set with 2^|I i | elements. Denote with ǫ i the size of the set of sequences for which the bound on the number of splits in the i-th zone was violated. If

(I) Q i ≤ 2^(|I i |−i−1), and
(II) ǫ i goes to zero as i goes to infinity, and
(III) N i ≤ |I i | − log φ i − i,

then there is a non-MLR sequence on which neither of BS A , BS B wins.

Proof. By lemma 6.3, the sum of sizes of the sets of sequences consistent with restrictions in the first level of chosen restrictions (against BS A , BS B on zones Z) is at most 1/2. Suppose that the sum of sizes of the sets of sequences consistent with restrictions in the n-th level of the chosen restrictions is at most 2^−n. Again by lemma 6.3, the sum of sizes of the sets of sequences consistent with restrictions chosen in the (n + 1)-th level is at most 2^(−n−1). By induction, for all n, the (open) set of sequences consistent with restrictions in the n-th level of chosen restrictions has size at most 2^−n. Let this set be the n-th level of a ML-test. The chosen sequences fail every level of this ML-test, and are therefore non-MLR.
Proof (of lemma 6.15). Let ℓ k be the length of the initial segment of positions in π contained in the intervals of positions of the first k zones, that is, ℓ k = Σ_{i=1}^{k} |I i |. Then ℓ k − g(ℓ k ) ≤ ℓ k − 2k − 4 − Σ_{i=1}^{k−1} |I i | = |I k | − 2k − 4 = N k . By definition 6.12, I 1 , I 2 , . . . are consecutive intervals of π and the result follows from lemma 6.10.

Theorem 2. Let BS A , BS B be a pair of betting strategies that are half-splitting. Let π be a computable sequence of distinct positions, g an unbounded computable function, and let Z be the half-splitting zones on positions π with gap g.

If both BS A , BS B on π have splits upper bounded by ℓ − g(ℓ), there is a non-Martin-Löf random sequence on which neither strategy wins.
ML-test for a pair of KLBS-es with lower bounded splits

For a pair of Kolmogorov-Loveland betting strategies that have splits lower bounded by ℓ − g(ℓ), where g is sublinear, we have that for any two constants c, ǫ and large enough ℓ, t, the set of sequences that were [1, ℓ]-split on at least ℓ(1 − c) many times by both KLBS-es up to time t has size at least 1 − ǫ. This implies that among the first ℓ positions, there is a position p such that the set of sequences that were {p}-split on (once) by both KLBS-es up to time t has size at least (1 − ǫ)(1 − c) ≥ 1 − ǫ − c. Furthermore, for any sequence of bounds ξ 1 , ξ 2 , . . . , we can find positions π = p 1 , p 2 , . . . and times t 1 , t 2 , . . . such that the set of sequences that were {p i }-split on by both KLBS-es up to time t i has size at least 1 − ξ i . We can pick small enough bounds ξ 1 , ξ 2 , . . . so that for any n and any restriction r that restricts the first n positions in π, there is some t n such that the size, conditional on [r], of the set of sequences that were {p i }-split on by both KLBS-es up to time t i , for all i ≤ n, is larger than 1/2. We construct an infinite restriction ζ that restricts the positions in π and contains a sequence on which neither KLBS wins. The proof is somewhat similar to the proofs that permutation random sequences are the same as computably random sequences [5], [2], [4], which use tools introduced in [6]. The difference is that, instead of always betting on all positions in some prescribed order (as in the definition of permutation randomness in [10]), here we'll have that a KLBS bets on almost all positions in π almost surely, and can do so adaptively.
References

[1] Andrei A. Muchnik, Alexei L. Semenov, Vladimir A. Uspensky: Mathematical Metaphysics of Randomness. Theor. Comput. Sci. 207(2): 263-317 (1998)
[2] Wolfgang Merkle, Joseph S. Miller, André Nies, Jan Reimann, Frank Stephan: Kolmogorov-Loveland randomness and stochasticity. Ann. Pure Appl. Logic 138(1-3): 183-210 (2006)
[3] Ming Li, Paul M. B. Vitányi: An Introduction to Kolmogorov Complexity and Its Applications, Third Edition. Texts in Computer Science, Springer 2008, ISBN 978-0-387-33998-6
[4] Laurent Bienvenu, Rupert Hölzl, Thorsten Kräling, Wolfgang Merkle: Separations of non-monotonic randomness notions. J. Log. Comput. 22(4): 701-715 (2012)
[5] Bart Kastermans, Steffen Lempp: Comparing notions of randomness. Theor. Comput. Sci. 411(3): 602-616 (2010)
[6] Harry Buhrman, Dieter van Melkebeek, Kenneth W. Regan, D. Sivakumar, Martin Strauss: A generalization of resource-bounded measure, with application to the BPP vs. EXP problem. SIAM J. Comput. 30(2): 576-601 (2000)
[7] L. A. Levin: On the notion of a random sequence. Soviet Math. Dokl. 14(5): 1413-1416 (1973)
[8] C. P. Schnorr: Process Complexity and Effective Random Tests. J. Comput. Syst. Sci. 7(4): 376-388 (1973)
[9] K. Ambos-Spies, A. Kučera: Randomness in computability theory. In P. Cholak, S. Lempp, M. Lerman, and R. A. Shore, editors, Computability Theory and Its Applications: Current Trends and Open Problems, volume 257 of Contemporary Mathematics, pages 1-14. American Math. Society, 2000
[10] Joseph S. Miller, André Nies: Randomness and Computability: Open Questions. Bulletin of Symbolic Logic 12(3): 390-410 (2006)
OBBStacking: An Ensemble Method for Remote Sensing Object Detection
Index Terms—Remote sensing, ensemble, object detection, stacking, oriented bounding box
Ensemble methods are a reliable way to combine several models to achieve superior performance. However, research on the application of ensemble methods in the remote sensing object detection scenario is mostly overlooked. Two problems arise. First, one unique characteristic of remote sensing object detection is the Oriented Bounding Boxes (OBB) of the objects and the fusion of multiple OBBs requires further research attention. Second, the widely used deep learning object detectors provide a score for each detected object as an indicator of confidence, but how to use these indicators effectively in an ensemble method remains a problem. Trying to address these problems, this paper proposes OBBStacking, an ensemble method that is compatible with OBBs and combines the detection results in a learned fashion. This ensemble method helps take 1st place in the Challenge Track Fine-grained Object Recognition in High-Resolution Optical Images, which was featured in 2021 Gaofen Challenge on Automated High-Resolution Earth Observation Image Interpretation. The experiments on DOTA dataset and FAIR1M dataset demonstrate the improved performance of OBB-Stacking and the features of OBBStacking are analyzed. Code will be available at https://github.com/Haoning724/obbstacking.
I. INTRODUCTION
With deep learning, researchers can design arbitrarily structured models as they see fit for a specific problem, which in turn has led to a wide range of off-the-shelf deep learning models. Ensemble methods are a reliable way to combine these models and achieve stronger performance. However, in the remote sensing object detection scenario, the potential of ensemble methods is rarely exploited.
Non-Maximum Suppression (NMS) [1] is a widely used method to suppress redundant detection bounding boxes in a close neighborhood, by clustering the overlapped bounding boxes (BBs) and eliminating the non-confidence-maximum BBs in each cluster. Beyond its wide application in single object detectors, it can also be used as a simple ensemble method. However, NMS adopts an affirmative voting strategy and thus assumes all of the detection results are true positives, and favors the models that vote for a detected object over those that vote against it. Weighted Boxes Fusion (WBF) [2] aims to alleviate the weakness of NMS, by taking into account all the confidence scores of the to-be-fused bounding boxes and assigning an average confidence score to the resulting bounding boxes.
This method, however, leaves two problems unaddressed. First, WBF treats the confidence scores from different models equally and takes the non-weighted mean value as the fused confidence score, disregarding three facts:
1. Some models may perform better than other models, and their scores should carry more weight.
2. Some models may share a similar neural network structure and produce similar results, so the ensembled result may be biased towards a group of similarly structured models.
3. Deep learning models are poorly calibrated, and different models will be overconfident to different extents, so a simple ensemble method may favor the more overconfident models.
Second, WBF is only compatible with horizontal bounding boxes.
When deep learning was first introduced into the remote sensing object detection problem, the position of a detected object was initially encoded in the same format as in other scenarios, i.e., a non-oriented rectangular bounding box with its sides always parallel to the axes of the image coordinate grid. This format soon posed a problem. Due to the high-altitude viewpoint and the steep viewing angle of remote sensing images, the presented objects can have arbitrary orientations. Some types of objects, such as large ships, buses, buildings, and airport runways, have a large length-to-width ratio and are poorly represented by horizontal bounding boxes, especially when the objects are at a roughly ±45° angle to the image axes.
Oriented Bounding Box (OBB) was proposed to address this problem. OBB keeps the rectangular form but obtains orientation as a new degree of freedom (DoF), the other existing DoFs being the position of its center, length, and width. OBB introduces finer labels to the objects in the remote sensing images and a better data format for the detection accuracy criteria. However, the existing ensemble methods are not compatible with OBB.
In this paper, to address the first problem, a stacking ensemble method is proposed. The stacking model is trained to best combine the member models, while simultaneously considering three factors, model calibration, model redundancy, and the performance gap between the models. For the second problem, a new bounding box fusion method is proposed for the oriented bounding boxes. The bounding boxes are parameterized with orientation, position, width, and height, and each parameter is fused separately. The combined method, OBBStacking, helps take 1st place in the Challenge Track Fine-grained Object Recognition in High-Resolution Optical Images, which was featured in 2021 Gaofen Challenge on Automated High-Resolution Earth Observation Image Interpretation.
This paper is structured as follows. Related work will be discussed in Section II. The proposed ensemble method is introduced in Section III. The experiment setup and the quantitative results are described in Section IV. We also provide some analysis of OBBStacking in Section V. The conclusion is given at the end.
II. RELATED WORK
A. Remote sensing object detection
Quite a few deep neural network detectors have been proposed in recent years. Notably, Liu et al. [3] are among the earliest to utilize oriented bounding boxes (OBB) for object detection in remote sensing images. The method is built upon Faster R-CNN [4] and proposes a rotated region of interest (RROI) pooling layer for accurate feature extraction, and an OBB regression model for precise object positioning. Later methods [5]-[7] adopt oriented anchors for a formulation of the bounding box that is easier for the neural networks to learn, but at the cost of relying on a redundant number of rotated anchors. Ding et al. [8] propose the ROI Transformer to alleviate the problem by formulating RROIs as offset parameters relative to only non-oriented ROIs. Han et al. [9] build upon general rotation-equivariant CNNs [10] and the ROI Transformer to create an oriented object detection model (ReDet) with rotation-equivariant features. Xie et al. [11] further simplify the OBB inference process of the ROI Transformer with 1/3000 of the number of parameters used and propose a new model, Oriented R-CNN, which is currently state of the art on the DOTA [12] dataset.
ReDet and Oriented R-CNN are two of the models we select to generate the detection results for our ensemble method. This is due to their recognized performance on similar problems and their large backbone network difference, where ReDet uses rotation equivariant CNN and Oriented R-CNN uses the more traditional ResNet [13] architecture. The intrinsic difference in their backbone will help increase the model diversity and in turn, increase the effectiveness of the ensemble process.
B. Transformer
Transformer is another neural network structure we take interest in, due to its structural difference from CNN. It was first introduced by Vaswani et al. [14] for the natural language processing (NLP) problem. It is designed for sequential data and is effective at modeling long-distance dependencies, which is typical in language data. Its success motivated its adaptation to the computer vision domain, with the major hurdle being the difference in the structuring of data (one dimension vs. two/three dimensions) and the increased data length at each dimension.
ViT [15] by Dosovitskiy et al. was one of the notable Transformer models for computer vision problems. ViT divides one full image into several small patches to be treated as tokens, like the words in NLP, and proposes large-scale pretraining to compensate for the Transformer's lack of intrinsic properties for image data, such as translation equivariance and feature locality.
Swin Transformer [16] is one of the latest vision Transformer models. Swin Transformer proposes to boost its efficiency by utilizing the locality characteristic of the images and increasing the scale of features step-by-step through a hierarchical design. Swin Transformer will also be one of the backbones for our member neural network detectors.
C. Calibration of the neural networks
A well-calibrated model can produce the probability of correctness for each prediction. Guo et al. [17] show that while modern neural networks excel at making correct predictions, their level of calibration degrades. This hinders the attempt to effectively combine different neural networks and their application in critical scenarios. Guo et al. propose to calibrate the models in a post-processing manner and train a simple parametric model (Temperature Scaling) [18] to map the confidence scores of the models to the probabilities of correctness. Wenger et al. [19] propose a latent Gaussian process to correct the model output. Zhang et al. [20] propose an ensemble of post-processing methods that is data efficient and with high generalizability.
The above methods are post-processing calibration methods, which are the most related to our work. There are also calibration methods, such as Bayesian neural network methods [21] and neural network regularization methods [22], that change the design philosophy or the objective functions to achieve more calibrated neural networks.

D. Bounding box post-processing methods

Object detection methods, along with other vision-related algorithms, may produce redundant activations in a close spatial neighborhood. Non-Maximum Suppression (NMS) has been used in such scenarios for over half a century [23] and, to this day, is still being used in deep neural network pipelines. Specifically, modern neural network detectors generate redundant results for a single object, and NMS post-processes the results by checking the spatial overlaps of the results and keeping the ones with the highest confidence scores.
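As an illustration of the NMS procedure just described, here is a minimal Python sketch. The overlap routine `iou` is a caller-supplied placeholder and the threshold value is an assumption; neither is part of the original paper.

```python
import numpy as np

def nms(boxes, scores, iou, iou_thresh=0.5):
    """Greedy NMS: repeatedly keep the highest-scoring box and suppress
    all boxes that overlap it by more than iou_thresh.
    `iou` is a caller-supplied overlap function (an assumption here)."""
    order = np.argsort(scores)[::-1]  # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        overlaps = np.array([iou(boxes[i], boxes[j]) for j in order[1:]])
        order = order[1:][overlaps <= iou_thresh]
    return keep
```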
NMS eliminates the redundant bounding boxes completely, which may lead to false negatives when there are overlaps between the ground truth bounding boxes. Soft-NMS [24] alleviates the problem by keeping all the bounding boxes and only mapping the confidence scores of the to-be-suppressed bounding boxes to a lower value.
Weighted Boxes Fusion (WBF) [2] targets specifically at post-processing the bounding boxes from different models. Instead of selecting one best bounding box (NMS) or keeping all of the bounding boxes (Soft-NMS), WBF produces a weighted average of the bounding boxes in terms of position and size, so all of the to-be-fused bounding boxes can contribute to the final bounding box and no redundant bounding boxes are introduced.
III. METHODS
OBBStacking is a stacking ensemble method that is compatible with OBBs. In a stacking method, a new model, called a meta-learner, is trained to best combine the results of multiple existing models. OBBStacking has two stages (Fig. 2): training the meta-learner, and applying the meta-learner to the member models. First, we introduce the meta-learner proposed in our method. Then, we discuss the key processes that constitute the two stages, namely bounding box clustering, meta-learner parameter optimization, and bounding box fusion.
A. The Meta-Learner
In a stacking method, every member model makes an independent prediction based on a data sample, and the meta-learner combines the predictions to form a more accurate one. In OBBStacking, we choose a simple model, logistic regression, as the meta-learner. The model takes the form

\sigma_{WA}(z) = \sigma(zw + b) \quad (1)

where z = [z_1, z_2, \ldots, z_M] \in \mathbb{R}^{2 \times M} is the concatenation of the logit outputs from the M member models, \sigma(z) = 1 / (1 + \exp(-z)) is the logistic function, and w \in \mathbb{R}^M and b \in \mathbb{R} are the weight and intercept parameters of the meta-learner, respectively. Note that the logit z \in \mathbb{R}^2 is the non-probabilistic output of a member model, and its two dimensions correspond to the tendencies of rejecting and accepting a target, respectively. In the context of deep learning, logits are often converted to probabilistic outputs through the logistic function, but here the logits are used because of their amenability to Eq. 1. Later, in Section V, we will show how this simple form of the meta-learner can simultaneously consider model calibration, model redundancy, and the performance gap between the models.
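For concreteness, a minimal NumPy sketch of Eq. (1). As a simplifying assumption of this sketch, z is flattened to one positive-class logit per model rather than the 2-dimensional logit of the paper.

```python
import numpy as np

def sigma_wa(z, w, b):
    """Meta-learner of Eq. (1): logistic regression over the member
    models' logits. z holds one scalar logit per model (shape [M] or
    [n, M]); this flattening is an assumption of this sketch."""
    return 1.0 / (1.0 + np.exp(-(z @ w + b)))
```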
B. Bounding box clustering
Under the OBB detection setting, each member model produces a set of OBBs, but the correspondence of OBBs between different sets is unknown. Therefore the first goal is to collect output z on the same object from the different member models. We assume the OBBs are relatively accurate in terms of position and shape such that OBBs generated from the same object but different models have a significant spatial overlap. Therefore, an OBB spatial clustering method is used to assign OBBs from the same object into the same cluster.
The clustering method has the following steps:
1) Aggregate all the OBBs from the member models into a list S, sorted by their bounding box scores s in descending order.
2) Create an empty list C for the resulting clusters.
3) Pop the first OBB from S as a new cluster center, and push the cluster into C.
4) Iterate through S, find OBBs that come from other member models and have an overlap greater than iou_thresh with the cluster center, and move them from S into the new cluster.
5) Go back to Step 3 and repeat until S is empty.
A sketch of this procedure is given below, after the note on data splits.

Note that although both stages of OBBStacking include bounding box clustering, the method is applied to different sets of data. The whole scheme requires three sets of data: the training set, the validation set, and the test set. The training set is used to train the member models. The validation set is used to train the meta-learner (Stage 1 of OBBStacking). The test set is used for measuring the final performance of OBBStacking (Stage 2). The member models and the meta-learner are trained on separate data sets to prevent the meta-learner from favoring member models that overfit the training set.
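A minimal sketch of the greedy clustering steps above. It assumes each detection carries a score and a source-model id, takes a rotated-IoU routine `obb_iou` as a parameter (not defined here), and keeps at most one OBB per member model in a cluster, which is one reading of step 4.

```python
def cluster_obbs(detections, obb_iou, iou_thresh=0.5):
    """detections: list of (obb, score, model_id) tuples, where obb is
    any representation understood by the supplied obb_iou routine."""
    S = sorted(detections, key=lambda d: d[1], reverse=True)  # step 1
    clusters = []                                             # step 2
    while S:
        center = S.pop(0)                                     # step 3
        cluster = [center]
        rest = []
        for det in S:                                         # step 4
            same_model = any(m[2] == det[2] for m in cluster)
            if not same_model and obb_iou(det[0], center[0]) > iou_thresh:
                cluster.append(det)
            else:
                rest.append(det)
        S = rest
        clusters.append(cluster)                              # step 5
    return clusters
```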
C. Meta-learner Parameter Optimization
After the member models are trained on the training set and produce M sets of detection OBBs on the validation set, the bounding box clustering method is applied to acquire the clustered OBBs C_val = {c_i | i = 1, 2, \ldots, n}. Each OBB in a cluster c_i represents the prediction of one member model on one data sample x_i.
Here, the major role of the meta-learner is to fuse the bounding box scores s in the same clusters. Note that we use the logit output z in Eq. 1. In most detectors, s and z can be acquired by keeping both outputs before and after the last logistic function. Additionally, in most clusters, one or more member models will be absent when they predict the probability is lower than a threshold. We set z for these cases to a fixed negative value to keep the optimization simple.
We use Negative Log Likelihood (NLL) as the objective function, which can be formulated as:
L = -\sum_{i=1}^{n} \log\left(\sigma_{WA}(z_i)^{(y_i)}\right) \quad (2)
  = -\sum_{i=1}^{n} \log\left(\sigma(z_i w + b)^{(y_i)}\right) \quad (3)

where y_i is the ground-truth label of each cluster. To determine y_i, we calculate the IoU (Intersection over Union) between the cluster center OBB and all the ground-truth OBBs in the validation set. A cluster is marked as a true positive (y = 1) if it has an overlapping ground-truth OBB, and a false positive (y = 0) otherwise. Eq. 3 is a convex function with respect to w and b and can be easily optimized.
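One possible way to fit (w, b) by minimizing Eq. (3), again working with one scalar logit per model and using the fixed negative fill value for absent models; the function name, fill value, and optimizer choice are illustrative, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize

def fit_meta_learner(Z, y, z_absent=-10.0):
    """Z: [n, M] logits per cluster (NaN where a model is absent);
    y: {0, 1} array of cluster labels from the IoU matching above."""
    Z = np.where(np.isnan(Z), z_absent, Z)  # fixed fill; value assumed
    n, M = Z.shape

    def nll(params):  # Eq. (3)
        w, b = params[:M], params[M]
        p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))
        eps = 1e-12
        return -np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

    res = minimize(nll, x0=np.zeros(M + 1), method="L-BFGS-B")
    return res.x[:M], res.x[M]  # learned w, b
```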
D. Oriented bounding box fusion
Before this step, the trained member models produce M sets of OBBs on the test set, which are then clustered into C_test with the bounding box clustering method.

This step aims to fuse the OBBs O = {o_1, \ldots, o_K} that belong to the same cluster into one OBB. We represent an OBB with a 7-tuple:

o = (x, y, w, h, \theta, z, l) \quad (4)
where x, y, w, h, z represent the center coordinates on the x-y axes, the width, the height, and the logit score, respectively. l \in {1, 2, \ldots, M} is the index of its source model. Orientation \theta \in [0, \pi) represents the angle between the longest axis of the bounding box and the x-axis. The fusion process needs to derive the first 5 elements in o to acquire the final OBB, and these elements are fused separately. With regard to the first 4 elements, the fusion process can be formulated as
o_{fused}^{(j)} = \frac{\sum_{p=1}^{n} o_p^{(j)} s_p^*}{\sum_{p=1}^{n} s_p^*}, \quad j = 1, 2, 3, 4 \quad (5)

where j is the index of the element in o, p is the index of the OBB in the cluster, and o_{fused} is the fused OBB. s^* is the calibrated score derived from the OBB's logit score and the weight parameters in Eq. 1:

s_p^* = \sigma(z_p^{(1)} w^{(l_p)} + b) \quad (6)

s^* acts as an improved weight for each OBB that accounts for the output calibration and the redundancy among the member models.
The orientation parameter \theta receives special treatment due to its cyclic property. First, the orientation of the bounding box with the largest score s^* is designated as the major orientation \theta_{MJ} of the cluster. Then, the fused orientation is determined by averaging the relative orientations to \theta_{MJ}:
\theta_f = \frac{\sum_{p=1}^{n} r(\theta_p, \theta_{MJ}) s_p^*}{\sum_{p=1}^{n} s_p^*} + \theta_{MJ} \quad (7)
where r is a bivariate function that calculates the relative difference of two angles while considering their cyclic property:
r(\theta_1, \theta_2) =
\begin{cases}
\theta_1 - \theta_2, & \text{for } |\theta_1 - \theta_2| \le \pi/2 \\
\theta_1 - \theta_2 + \pi, & \text{for } \theta_1 - \theta_2 < -\pi/2 \\
\theta_1 - \theta_2 - \pi, & \text{for } \theta_1 - \theta_2 > \pi/2
\end{cases} \quad (8)
Note that here we assume \theta \in [0, \pi), since we do not discriminate between the head and the tail of an OBB. Lastly, the score of the fused bounding box is determined with Eq. 1, using the learned meta-learner.
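Putting Eqs. (5)-(8) together, a compact sketch of fusing one cluster; the inputs are plain arrays, and the calibrated scores s* are assumed to be precomputed via Eq. (6).

```python
import numpy as np

def fuse_cluster(obbs, s_star):
    """obbs: [K, 5] array of (x, y, w, h, theta) with theta in [0, pi);
    s_star: [K] calibrated scores from Eq. (6)."""
    obbs = np.asarray(obbs, float)
    s = np.asarray(s_star, float)
    xywh = (obbs[:, :4] * s[:, None]).sum(0) / s.sum()  # Eq. (5)
    theta_mj = obbs[np.argmax(s), 4]                    # major orientation
    d = obbs[:, 4] - theta_mj                           # Eq. (8): wrap into (-pi/2, pi/2]
    d = np.where(d > np.pi / 2, d - np.pi, d)
    d = np.where(d < -np.pi / 2, d + np.pi, d)
    theta_f = (d * s).sum() / s.sum() + theta_mj        # Eq. (7)
    return np.append(xywh, theta_f % np.pi)
```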
IV. RESULTS
A. Datasets
Two datasets are used to validate our method, the FAIR1M dataset [25] and the DOTA dataset [12]. Both datasets have an evaluation server that evaluates the detection results on a test set whose ground-truth labels are not shared publicly. Both evaluation servers adopt mean average precision (mAP) as the evaluation criterion, consistent with PASCAL VOC 2007 [26] and VOC 2012.
FAIR1M dataset: This dataset was introduced alongside the 2021 Gaofen Challenge on Automated High-Resolution Earth Observation Image Interpretation. It contains 32,912 images with widths ranging from 600 to 10,000 pixels and spatial resolutions between 0.3 and 0.8 meters. The images are collected from Gaofen satellites and Google Earth, covering over 100 civil airports, harbors, and cities. The dataset contains 1.02 million objects annotated with OBBs and assigned to 5 major categories and 37 fine-grained sub-categories. The major categories include vehicles, ships, airplanes, sports fields, and road structures. The training, validation, and testing sets contain 16,488, 8,287, and 8,137 images, respectively.
DOTA dataset: This dataset was released in 2018. It contains 2,806 images from satellites (GF-2 and JL-1), Google Earth, and aerial imagery with spatial resolutions between 0.1 and 4.5 meters. It covers similar types of objects as FAIR1M does but with fewer sub-categories: 15 categories and 0.2 million instances. The proportions of the training, validation, and testing sets are 1/2, 1/6, and 1/3, respectively.
B. Member Models
As previously mentioned in Sec. II, we select 3 types of neural network detectors as the member models in the ensemble process: Oriented R-CNN, ReDet, and a Swin detector. These 3 types of detectors have different design preferences, so diversity between the member models is assured.
The Swin detector in our experiment is a simple modification of the original one [16] for compatibility with OBB detection. Both the Swin backbone and recent CNN backbones produce a feature pyramid [27], consisting of layers of image features with different spatial resolutions and semantic depths, so their outputs have a similar structure and they can share the same types of detector heads. We keep the original backbone and replace the original detector head with the one from Oriented R-CNN, since its OBB detector structure is elegant and concise.
For Oriented R-CNN and ReDet, we follow the experiment setups in the original papers, except for those limited by the GPU specifications. We use a setting for the Swin detector similar to the one in Oriented R-CNN, since they share the same type of detector head. We use 2 RTX 3080 Ti GPUs for training and inference. The images are cropped into 1024 × 1024 patches, and the batch size is set to 2, 2, and 1 per GPU for Oriented R-CNN, ReDet, and the Swin detector, respectively, due to the limit of GPU memory. Multi-scale training and testing are also used because they are often combined with ensemble methods to achieve the highest performance possible.
C. Quantitative Comparison
First, for a fair comparison, we augment the original NMS and WBF with OBB compatibility and evaluate the performance of the member models and the selected ensemble methods on the DOTA dataset. Since most of the experiments in the literature [9], [11] combine the training and the validation sets to train their models to achieve maximum performance, while our ensemble model needs a separate validation set to learn the parameters of the meta-learner, we run two separate experiments to verify the effectiveness of our method. (1) We follow the original scheme of our method and train all the member models on the training set only, leaving the validation set for the parameter training of the meta-learner.
(2) We follow the training scheme of other methods and train the member models with data from both the training set and the validation set, and use the trained meta-learner from Experiment (1).
In the following tables on the DOTA dataset, the names of the categories are abbreviated to conserve space. The categories, in order, are plane, baseball-diamond, bridge, ground-track-field, small-vehicle, large-vehicle, ship, tennis-court, basketball-court, storage-tank, soccer-ball-field, roundabout, harbor, swimming-pool, and helicopter. The quantitative results of Experiment (1) are listed in Table I. Oriented R-CNN achieves the best performance among the member models and obtains 79.86% mAP. The ensemble methods all obtain a 1-2% mAP increase over the best member model, and our method achieves the top score with 81.50% mAP, 0.61% over WBF.
For Experiment (2), we assume the performance gap, the calibration, and the redundancy of the member models do not drift too much from Experiment (1), so we can reuse the meta-learner for the ensemble. The results are shown in Table II. The results are generally similar to the previous ones, with a slight overall performance increase of 1% mAP among the member models and a 0.1-0.4% mAP increase among the ensemble methods. Our method, with the meta-learner from Experiment (1), still outperforms WBF by 0.24% mAP. This shows that our assumption holds when the training data expands, and even though our method requires a separate validation set, it still outperforms the existing ensemble methods.
Next, we evaluate the member models and the ensemble models on the FAIR1M dataset using the Experiment (1) setup and show the results in Table III. Among the member models, Oriented R-CNN still achieves the best performance with 47.77% mAP. Compared to the individual methods, the ensemble models obtain a huge performance increase of around 4% mAP, where our method achieves the best score with 52.42% mAP, a 4.65% increase over Oriented R-CNN and a 0.57% mAP increase over WBF.
V. DISCUSSION
In this section, we demonstrate how OBBStacking addresses the three problems that arise during an ensemble process on deep learning models, namely model calibration, the performance gap between the models, and model redundancy.
A. Model Calibration
Deep learning models tend to overfit the training data and are overconfident about their predictions. When the member models are overconfident to different degrees, their predictions are on different scales and do not indicate true probability values. Therefore, the ensemble methods may not work on these models as intended, and a model calibration process is needed.
In this section, we show that one of the calibration methods, Temperature Scaling (TS) [17], can be regarded as a special form of our meta-learner, indicating that OBBStacking includes the feature of model calibration.
TS attempts to map the non-accurate predictions to the real probability of correctness by 'softening' the final logistic layer in the neural networks and introducing a temperature parameter T > 1. The 'softened' logistic layer is

\sigma_{TS}(z) = \sigma(z/T + t) \quad (9)
             = \frac{1}{1 + \exp(-(z/T + t))} \quad (10)

When T \to \infty, all results of \sigma_{TS} approach 1/2 and indicate maximum uncertainty.
The inference of parameter T also uses NLL as the objective function, since NLL is a standard measure of a probabilistic model's quality [28]. Here, the objective function can be defined as:
L = -\sum_{i=1}^{n} \log\left(\sigma(z_i/T + t)^{(y_i)}\right) \quad (11)
As can be seen, our meta-learner in Eq. 1 reduces to Eq. 10 when the number of member models is 1, and thus it can calibrate models in the same fashion.
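As a sketch, Eqs. (9)-(11) can be fit the same way as the meta-learner; parameterizing by 1/T keeps the problem unconstrained, which is a choice of this sketch, not of the paper.

```python
import numpy as np
from scipy.optimize import minimize

def fit_temperature(z, y):
    """Fit (T, t) of Eq. (9) by minimizing the NLL of Eq. (11)
    on held-out logits z and binary labels y."""
    def nll(params):
        inv_T, t = params
        p = 1.0 / (1.0 + np.exp(-(z * inv_T + t)))
        eps = 1e-12
        return -np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

    res = minimize(nll, x0=np.array([1.0, 0.0]), method="L-BFGS-B")
    inv_T, t = res.x
    return 1.0 / inv_T, t
```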
B. Performance Gap
In this section, we run an experiment to demonstrate how OBBStacking adjusts the weights when there is a performance gap between the models.
Our model tackles three problems simultaneously: model calibration, redundancy, and the performance gap. We assume these three problems can be disentangled and thus that a factorization of the parameter exists, w = p \circ r \circ g, where \circ denotes elementwise multiplication, and p, r, g are the weight vectors for model calibration, model redundancy, and the performance gap, respectively.
We want to minimize the effect of the first two factors and see how OBBStacking handles the performance gap between the models. Along with the Swin detector used in our previous experiment, 3 additional Swin detectors are added to the Swin detector family. The only difference between these Swin detectors is the total number of epochs used in training, which are 12, 9, 16, and 18 epochs, respectively. At different epochs during the training with stochastic gradient descent, the neural networks may randomly lean towards more accuracy on some categories instead of others, and rely upon different features, thus creating a sequence of different models with relatively high redundancy.
We first run OBBStacking on the Swin family and acquire w for later comparison. Then, to show the factor of redundancy among the Swin family, we apply the bounding box clustering method to the detection results and calculate Pearson's correlation between the confidence scores of the models. As can be seen in Fig. 4, compared to the other models, the correlation coefficients between the Swin models are very close to each other, so we assume r is approximately a vector of ones.
As for the weight vector p from model calibration, it can be easily derived by applying TS to the Swin detectors individually, and we get p = [1/T_1, 1/T_2, \ldots, 1/T_M]. We list the above results and the separate mAP performances of the models in Table IV. As can be seen, the weight factor g is correlated with the mAP performance of the individual models. Model Swin 9, the model trained to 9 epochs, has the best prediction mAP on the validation set and the largest value in g. Swin 16 has the worst mAP and also the smallest value in g. This is in accord with the basic ensemble idea of putting more weight on the better predictors. Swin 12 and Swin 18 have similar values in g and similar performance in mAP, which is a reasonable range considering the small performance gap between the two models and the error from the assumed r value.
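Since Table IV gives w, p, and the assumed r, the gap factor g follows directly from the factorization; a one-line check using those values:

```python
import numpy as np

w = np.array([0.1705, 0.2062, 0.1283, 0.1542])  # learned weights, Table IV
p = np.array([0.5028, 0.5690, 0.4406, 0.4504])  # 1/T from per-model TS
r = np.ones_like(w)                             # redundancy assumed ~1
g = w / (p * r)                                 # ~[0.339, 0.362, 0.291, 0.342]
```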
C. Model Redundancy
In this part, we build upon the previous experiments to show how OBBStacking handles model redundancy. Two collections of models are included. Collection 1 consists of Oriented R-CNN, ReDet, and Swin 12. Collection 2 includes all the models in Collection 1 plus the additional Swin 9, Swin 16, and Swin 18, adding up to 6 models in total.
The correlation coefficients between all the models are shown in Fig. 4, and the weight parameters w of the meta-learner are shown in Table V. We notice that in Collection 2, because of the redundancy within the Swin family, its weights decrease drastically, with a sum value of 0.36, in between the weights of Oriented R-CNN and ReDet. The weights of Oriented R-CNN and ReDet decrease slightly because the Swin family improves its performance with the increase of its members.
VI. CONCLUSION
We propose an ensemble method, OBBStacking, that is compatible with the oriented bounding boxes (OBB) widely used for object detection in the remote sensing field. OBBStacking includes a meta-learner that can address the problems in the ensemble process of deep neural network detectors, namely model calibration, the redundancy between the models, and the performance gap between the models. OBBStacking outperforms other ensemble methods on the DOTA dataset and the FAIR1M dataset and helped us take 1st place in the Challenge Track Fine-grained Object Recognition in High-Resolution Optical Images featured in the 2021 Gaofen Challenge on Automated High-Resolution Earth Observation Image Interpretation.
Fig. 1. Illustration of the bounding box fusion results of different ensemble methods. The blue rectangles are the bounding boxes fed into the methods; the red rectangles are the fused bounding boxes.

Fig. 2. Two stages of the proposed OBBStacking.

Fig. 3. Showcase of the ensemble results of OBBStacking on the DOTA dataset and the FAIR1M dataset. Only the objects with a confidence score larger than 0.2 are shown.
TABLE I
QUANTITATIVE RESULTS ON DOTA DATASET, TRAINED WITH THE TRAINING SET ONLY

Methods        | PL    BD    BR    GTF   SV    LV    SH    TC    BC    ST    SBF   RA    HA    SP    HC    | mAP
Individual:
Oriented R-CNN | 89.84 85.16 60.99 79.57 79.75 84.92 88.44 90.88 84.43 87.56 70.39 68.38 81.51 77.81 68.35 | 79.86
ReDet          | 88.20 84.25 56.05 79.95 76.97 85.82 88.39 90.90 87.39 86.24 67.27 63.32 77.68 74.89 71.12 | 78.56
Swin Det       | 88.77 81.99 57.59 76.63 65.26 84.24 87.96 90.83 84.49 87.24 63.36 66.45 80.74 67.34 65.82 | 76.58
Ensemble:
NMS            | 89.49 84.84 60.18 80.94 78.91 86.26 88.90 90.90 87.43 87.59 72.93 69.35 82.12 77.34 75.24 | 80.83
WBF            | 89.49 84.94 60.20 80.94 78.99 86.25 88.90 90.90 87.43 87.84 73.06 70.62 82.45 76.13 75.24 | 80.89
Ours           | 89.31 85.66 61.76 81.47 79.29 86.45 88.87 90.89 87.68 88.50 73.02 72.47 83.06 78.49 75.53 | 81.50
TABLE II
QUANTITATIVE RESULTS ON DOTA DATASET, TRAINED WITH THE TRAINING AND THE VALIDATION SET COMBINED

Methods        | PL    BD    BR    GTF   SV    LV    SH    TC    BC    ST    SBF   RA    HA    SP    HC    | mAP
Individual:
Oriented R-CNN | 89.95 85.05 60.50 81.06 80.10 85.69 88.59 90.90 87.09 88.03 71.53 72.18 81.41 79.37 70.72 | 80.81
ReDet          | 88.28 84.82 59.13 78.56 77.23 85.83 88.71 90.88 87.17 86.75 67.31 65.79 78.23 78.82 69.85 | 79.16
Swin Det       | 89.66 83.79 59.39 76.22 76.57 84.15 88.49 90.87 83.61 86.59 61.95 62.00 80.79 69.93 72.62 | 77.77
Ensemble:
NMS            | 89.62 85.21 61.05 78.88 79.73 86.52 89.05 90.90 86.59 87.64 72.07 68.35 82.84 79.81 76.12 | 80.96
WBF            | 89.62 85.57 60.91 78.88 79.88 86.53 89.06 90.90 86.59 87.90 72.07 72.42 83.13 80.03 75.94 | 81.30
Ours           | 89.68 85.79 62.52 80.32 80.10 86.75 89.05 90.86 87.38 88.26 72.79 72.25 83.89 79.74 76.68 | 81.74
TABLE III
QUANTITATIVE RESULTS ON FAIR1M DATASET

Methods            | Individual:                  | Ensemble:
                   | Oriented R-CNN  ReDet  Swin  | NMS    WBF    Ours
mAP                | 47.77           46.98  47.00 | 51.85  51.96  52.42
Plane:
Boeing737          | 47.95           43.54  36.60 | 51.38  51.38  51.60
Boeing747          | 86.49           88.06  84.36 | 88.23  88.23  88.88
Boeing777          | 30.61           25.89  21.46 | 34.97  34.97  34.27
Boeing787          | 53.84           49.42  55.36 | 60.77  60.77  62.13
C919               | 23.00           21.56  23.39 | 26.92  26.92  28.08
A220               | 51.45           47.35  50.23 | 54.84  54.84  55.70
A321               | 72.66           67.59  66.44 | 73.88  73.88  74.57
A330               | 71.69           71.94  71.48 | 77.48  77.48  77.92
A350               | 80.10           79.45  76.08 | 81.33  81.33  81.89
ARJ21              | 41.40           44.58  35.70 | 46.60  46.60  48.67
Ship:
Passenger Ship     | 16.62           22.42  19.34 | 23.38  23.38  24.19
Motorboat          | 68.83           74.73  71.56 | 75.81  75.91  76.48
Fishing Boat       | 12.68           15.77  10.95 | 16.19  16.27  15.83
Tugboat            | 29.67           40.03  38.01 | 41.54  41.37  42.72
Engineering Ship   | 15.72           16.28  19.41 | 20.85  20.85  21.03
Liquid Cargo Ship  | 31.14           30.24  29.90 | 35.69  35.81  35.69
Dry Cargo Ship     | 41.55           44.26  37.39 | 46.39  46.41  47.18
Warship            | 36.63           40.05  38.64 | 47.01  47.08  46.31
Vehicle:
Small Car          | 77.47           71.84  73.39 | 76.59  77.32  77.60
Bus                | 56.06           44.26  55.43 | 59.53  59.54  59.97
Cargo Truck        | 55.30           49.26  55.18 | 57.89  58.11  58.67
Dump Truck         | 61.96           57.79  59.14 | 64.40  64.52  64.60
Van                | 77.66           72.57  73.96 | 75.73  76.05  76.23
Trailer            | 22.53           20.52  20.72 | 28.84  28.90  30.30
Tractor            | 7.82            3.61   6.47  | 7.55   7.55   8.10
Excavator          | 26.08           18.01  25.40 | 29.24  29.69  30.84
Truck Tractor      | 3.72            2.05   8.31  | 6.83   6.83   6.67
Court:
Basketball Court   | 61.38           56.00  60.41 | 62.73  63.18  63.33
Tennis Court       | 88.11           87.80  86.76 | 90.21  90.56  90.36
Football Field     | 64.86           72.42  71.02 | 72.71  73.24  74.12
Baseball Field     | 89.11           90.02  88.85 | 91.45  91.45  91.40
Road:
Intersection       | 62.20           62.83  63.74 | 64.71  64.80  65.26
Roundabout         | 27.48           17.59  18.76 | 28.89  28.89  27.48
Bridge             | 30.72           47.66  44.05 | 42.40  42.49  44.06
Fig. 4. The correlation coefficient of models in Collection 2.

        | ORCN  ReDet  Swin 12  Swin 9  Swin 16  Swin 18
ORCN    | 1     0.84   0.84     0.85    0.82     0.84
ReDet   | 0.84  1      0.78     0.80    0.77     0.78
Swin 12 | 0.84  0.78   1        0.89    0.87     0.89
Swin 9  | 0.85  0.80   0.89     1       0.86     0.88
Swin 16 | 0.82  0.77   0.87     0.86    1        0.88
Swin 18 | 0.84  0.78   0.89     0.88    0.88     1
TABLE IV
WEIGHT VECTORS OF THE SWIN FAMILY

Models | Swin 12  Swin 9  Swin 16  Swin 18
w      | 0.1705   0.2062  0.1283   0.1542
r      | 1        1       1        1
p      | 0.5028   0.5690  0.4406   0.4504
g      | 0.3390   0.3625  0.2912   0.3423
mAP    | 48.21    48.80   46.63    47.99
TABLE V
WEIGHT VECTORS OF COLLECTION 1 AND COLLECTION 2

Collections | OR-CNN  ReDet  Swin 12  Swin 9  Swin 16  Swin 18
1           | 0.57    0.34   0.24     -       -        -
2           | 0.45    0.24   0.09     0.10    0.08     0.09
REFERENCES

[1] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan, "Object detection with discriminatively trained part based models."
[2] R. Solovyev, W. Wang, and T. Gabruseva, "Weighted boxes fusion: Ensembling boxes from different object detection models," Image and Vision Computing, vol. 107, p. 104117, Mar. 2021.
[3] Z. Liu, J. Hu, L. Weng, and Y. Yang, "Rotated region based CNN for ship detection," in 2017 IEEE International Conference on Image Processing (ICIP). IEEE, 2017, pp. 900-904.
[4] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1137-1149, Jun. 2017.
[5] Z. Zhang, W. Guo, S. Zhu, and W. Yu, "Toward arbitrary-oriented ship detection with rotated region proposal and discrimination networks," IEEE Geoscience and Remote Sensing Letters, vol. 15, no. 11, pp. 1745-1749, 2018.
[6] X. Yang, H. Sun, K. Fu, J. Yang, X. Sun, M. Yan, and Z. Guo, "Automatic ship detection in remote sensing images from Google Earth of complex scenes based on multiscale rotation dense feature pyramid networks," Remote Sensing, vol. 10, no. 1, p. 132, 2018.
[7] S. M. Azimi, E. Vig, R. Bahmanyar, M. Körner, and P. Reinartz, "Towards multi-class object detection in unconstrained remote sensing imagery," in Asian Conference on Computer Vision. Springer, 2018, pp. 150-165.
[8] J. Ding, N. Xue, Y. Long, G.-S. Xia, and Q. Lu, "Learning RoI Transformer for oriented object detection in aerial images," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2019, pp. 2849-2858.
[9] J. Han, J. Ding, N. Xue, and G.-S. Xia, "ReDet: A rotation-equivariant detector for aerial object detection," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 2786-2795.
[10] M. Weiler and G. Cesa, "General E(2)-equivariant steerable CNNs," Advances in Neural Information Processing Systems, vol. 32, 2019.
[11] X. Xie, G. Cheng, J. Wang, X. Yao, and J. Han, "Oriented R-CNN for object detection," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 3520-3529.
[12] G.-S. Xia, X. Bai, J. Ding, Z. Zhu, S. Belongie, J. Luo, M. Datcu, M. Pelillo, and L. Zhang, "DOTA: A large-scale dataset for object detection in aerial images," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 3974-3983.
[13] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," arXiv:1512.03385 [cs], Dec. 2015.
[14] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, "Attention is all you need," in Proceedings of the 29th International Conference on Neural Information Processing Systems. Cambridge, MA, USA: MIT Press, 2017, pp. 1982-1990.
[15] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, and S. Gelly, "An image is worth 16x16 words: Transformers for image recognition at scale," arXiv preprint arXiv:2010.11929, 2020.
[16] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo, "Swin Transformer: Hierarchical vision transformer using shifted windows," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 10012-10022.
[17] C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger, "On calibration of modern neural networks," in International Conference on Machine Learning. PMLR, 2017, pp. 1321-1330.
[18] J. Platt, "Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods," Advances in Large Margin Classifiers, vol. 10, no. 3, pp. 61-74, 1999.
[19] J. Wenger, H. Kjellström, and R. Triebel, "Non-parametric calibration for classification," in International Conference on Artificial Intelligence and Statistics. PMLR, 2020, pp. 178-190.
[20] J. Zhang, B. Kailkhura, and T. Y.-J. Han, "Mix-n-Match: Ensemble and compositional methods for uncertainty calibration in deep learning," in International Conference on Machine Learning. PMLR, 2020, pp. 11117-11128.
[21] P. Izmailov, W. J. Maddox, P. Kirichenko, T. Garipov, D. Vetrov, and A. G. Wilson, "Subspace inference for Bayesian deep learning," in Uncertainty in Artificial Intelligence. PMLR, 2020, pp. 1169-1179.
[22] G. Pereyra, G. Tucker, J. Chorowski, L. Kaiser, and G. Hinton, "Regularizing neural networks by penalizing confident output distributions," arXiv preprint arXiv:1701.06548, 2017.
[23] A. Rosenfeld and M. Thurston, "Edge and curve detection for visual scene analysis," IEEE Transactions on Computers, vol. 100, no. 5, pp. 562-569, 1971.
[24] N. Bodla, B. Singh, R. Chellappa, and L. S. Davis, "Soft-NMS: Improving object detection with one line of code," in Proceedings of the IEEE International Conference on Computer Vision. Piscataway, NJ: IEEE, 2017, pp. 5561-5569.
[25] "FAIR1M: A benchmark dataset for fine-grained object recognition in high-resolution remote sensing imagery," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 184, pp. 116-130, Feb. 2022.
[26] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman, "The PASCAL visual object classes (VOC) challenge," International Journal of Computer Vision, vol. 88, no. 2, pp. 303-338, 2010.
[27] T.-Y. Lin, P. Dollar, R. Girshick, K. He, B. Hariharan, and S. Belongie, "Feature pyramid networks for object detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2017, pp. 2117-2125.
[28] T. Hastie, R. Tibshirani, and J. H. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, 2009, vol. 2.
| [
"https://github.com/Haoning724/obbstacking."
]
|
[
"Extreme Multi-Domain, Multi-Task Learning With Unified Text-to-Text Transfer Transformers",
"Extreme Multi-Domain, Multi-Task Learning With Unified Text-to-Text Transfer Transformers"
]
| [
"Adebayo Oshingbesan \nCarnegie Mellon University Kigali\nRwanda\n",
"Courage Ekoh [email protected] \nCarnegie Mellon University Kigali\nRwanda\n",
"Germann Atakpa [email protected] \nCarnegie Mellon University Kigali\nRwanda\n",
"Yonah Byarugaba [email protected] \nCarnegie Mellon University Kigali\nRwanda\n"
]
| [
"Carnegie Mellon University Kigali\nRwanda",
"Carnegie Mellon University Kigali\nRwanda",
"Carnegie Mellon University Kigali\nRwanda",
"Carnegie Mellon University Kigali\nRwanda"
]
| []
| Text-to-text transformers have shown remarkable success in the task of multi-task transfer learning, especially in natural language processing (NLP). However, while there have been several attempts to train transformers on different domains, there is usually a clear relationship between these domains, e.g., code summarization, where the natural language summary describes the code. There have been very few attempts to study how multi-task transfer learning works on tasks in significantly different domains. In this project, we investigated the behavior of multi-domain, multi-task learning using multi-domain text-to-text transfer transformers (MD-T5) on four tasks across two domains: Python Code and Chess. We carried out extensive experiments using three popular training strategies: BERT-style joint pretraining + successive finetuning, GPT-style joint pretraining + successive finetuning, and GPT-style joint pretraining + joint finetuning. Also, we evaluate the model on four metrics: Play Score, Eval Score, BLEU Score, and Multi-Domain Learning Score (MDLS). These metrics measure performance across the various tasks and multi-domain learning. We show that while negative knowledge transfer and catastrophic forgetting are still considerable challenges for all the models, the GPT-style joint pretraining + joint finetuning strategy showed the most promise in multi-domain, multi-task learning, as it performs well across all four tasks while still keeping its multi-domain knowledge. | 10.48550/arxiv.2209.10106 | [
"https://export.arxiv.org/pdf/2209.10106v1.pdf"
]
| 252,407,491 | 2209.10106 | 939b4b1ff5a21108bb2f8c81117f1d5b230180a9 |
Extreme Multi-Domain, Multi-Task Learning With Unified Text-to-Text Transfer Transformers
Adebayo Oshingbesan
Carnegie Mellon University Kigali
Rwanda
Courage Ekoh [email protected]
Carnegie Mellon University Kigali
Rwanda
Germann Atakpa [email protected]
Carnegie Mellon University Kigali
Rwanda
Yonah Byarugaba [email protected]
Carnegie Mellon University Kigali
Rwanda
Extreme Multi-Domain, Multi-Task Learning With Unified Text-to-Text Transfer Transformers
Text-to-text transformers have shown remarkable success in the task of multi-task transfer learning, especially in natural language processing (NLP). However, while there have been several attempts to train transformers on different domains, there is usually a clear relationship between these domains, e.g., code summarization, where the natural language summary describes the code. There have been very few attempts to study how multi-task transfer learning works on tasks in significantly different domains. In this project, we investigated the behavior of multi-domain, multi-task learning using multi-domain text-to-text transfer transformers (MD-T5) on four tasks across two domains: Python Code and Chess. We carried out extensive experiments using three popular training strategies: BERT-style joint pretraining + successive finetuning, GPT-style joint pretraining + successive finetuning, and GPT-style joint pretraining + joint finetuning. Also, we evaluate the model on four metrics: Play Score, Eval Score, BLEU Score, and Multi-Domain Learning Score (MDLS). These metrics measure performance across the various tasks and multi-domain learning. We show that while negative knowledge transfer and catastrophic forgetting are still considerable challenges for all the models, the GPT-style joint pretraining + joint finetuning strategy showed the most promise in multi-domain, multi-task learning, as it performs well across all four tasks while still keeping its multi-domain knowledge.
Introduction
Teaching a machine to carry out more human-like tasks, like creative thinking, is a concept that goes as far back as the 1930s, with significant progress made over the past eighty years due to big data, increased computational power, and better architectures [1,2,3]. Chess-playing is one of the tasks that was considered a crucial measure of progress in AI. A world-champion chess-playing computer was listed as one of the AI Grand Challenges in 1995 [4]. This challenge has since been achieved, with computers consistently beating the best humans in chess, even in handicap matches where the chess engines start with fewer pieces [5,6,7]. The same push that we saw for chess-playing computers in the 1990s is now being seen for code-writing AI [8].
With significant progress made in teaching neural networks to learn how to perform a single task, there has been a new push in the past few years to teach one neural network model how to perform multi-task learning [9,10,11]. One architecture that has shown some promise in this area is the transformer architecture [12,13]. This architecture has found applications in several domains of deep learning and has been shown to be capable of zero-shot or few-shot learning on several natural language tasks [9,11,14,15].
In this project, we aim to assess how the multi-task learning paradigm with unified text-to-text transformers extends beyond natural language tasks into extremely different domains: chess and code.
Our key research question is: will unified text-to-text transformers perform as well across tasks from multiple domains as they have performed across NLP tasks? The following sections describe related work, our methods, results and discussion, and the conclusion.
Related Works
Deep Learning for Chess
Several works have attempted the use of deep learning for chess-playing. DeepChess leveraged the combination of Deep Belief and Siamese Networks to build a chess engine that had a playing style resembling that of human grandmasters [16]. Other researchers have tried to incorporate explainability into chess-playing machines through automated commentary to make the engines easier to understand [17,18].
In 2019, a novel end-to-end deep learning model for chess was proposed. It leveraged the use of a sentiment-based evaluation function obtained by training on chess commentaries using an LSTM model [19]. The first chess transformer model was built in 2020 by finetuning the GPT-2 architecture to generate chess moves [20]. Rather than predict moves, [21] evaluated the ability of language models to track chess states and showed some success.
Deep Learning for Code
Just like chess, there have been several attempts to carry out code-related tasks such as code summarization, code generation, and code-to-code translation, among others. However, unlike chess, most of the work on code has used the transformer framework. Several variants of the transformer model for code-related tasks, such as CodeBERT, CodeGPT, CodeT5, and so on, have been proposed with varying levels of success [8,21,22,23,24,25]. Of all these variants, only CodeT5 follows the unified text-to-text framework that we adopt in this project.
Multi-task Learning
Multi-task learning at scale using a text-to-text framework was popularized by the T5 architecture [14]. T5 is a transformer-based architecture that uses a text-to-text approach to model varied NLP tasks, such as translation, question answering, and classification, by feeding the model text as input and training it to generate some target text. [11] showed that the unified text-to-text framework enabled zero-shot task generalization to multiple NLP tasks.
ExT5 [15] scaled up the idea of multi-task learning for NLP to over 100 tasks and focused on multi-task pretraining rather than finetuning. Other works, such as [26], tried to understand how the relationship between tasks affects downstream learning in large language models across NLP tasks, highlighting the problems of catastrophic forgetting and negative transfer.
Multi-domain learning has been previously attempted in several works [17,18,24,27]. However, the domains are related, e.g., English chess commentary or code docstring generation. As far as we know, no work has considered how multi-task learning works across multiple domains that are significantly different where there is no relationship between the tasks' domains.
Methods
Introduction
The unified text-to-text framework [14] provides a relatively simple way to train a single model on a wide variety of tasks using the same loss function and decoding procedure. Despite not having the advantage of specialization of task-specific architectures, this framework obtains comparable performance to task-specific architectures.
In this research, we train several transformer models using the unified text-to-text framework with a multi-domain, multi-task objective. In particular, we pretrain the model on both code and chess data before finetuning the model to carry out the following tasks:
• Chess move generation.
• Chessboard state evaluation.
• Code generation from an English prompt.
• Code summarization in English.

A sketch of this text-to-text framing appears after this list. The two domains were chosen based on these criteria. First, the transformer model has been applied individually to these two domains with a reasonable level of success. Second, the two domains do not have any direct or indirect relationship with one another, except for the fact that they can both be modelled as text-to-text problems. Finally, the two domains have some creative element to them.
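To make the framing concrete, all four tasks can be cast as (input text, target text) pairs; the prefixes and examples below are purely illustrative and are not the exact prompts used in this work.

```python
# Hypothetical (input, target) pairs under the unified text-to-text framing.
examples = [
    ("generate move: 1. e4 e5 2. Nf3", "Nc6"),
    ("evaluate board: 1. e4 e5 2. Nf3 Nc6", "+0.3"),
    ("generate code: return the sum of two numbers",
     "def add(a, b):\n    return a + b"),
    ("summarize code: def add(a, b): return a + b",
     "Return the sum of two numbers."),
]
```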
Datasets
We curated chess PGN data from several open-source channels such as Lichess [28] and Kaggle [29,30,31,32]. At the end of our extensive dataset collection process, we ended up with 14.3 million chess PGN games and 12.7 million evaluated chess positions. This combined chess dataset is about 7x the dataset size typically used in the literature [19,20,21].
We use about 10.5 million of the 14.3 million chess games for pretraining and the rest for finetuning on the move prediction task. Furthermore, the 3.8 million games for finetuning were split into train and test sets in a 99/1 ratio. For the board evaluation task, we also used a 99/1 train-test ratio during finetuning on the 12.7 million evaluated chess positions.
For the coding dataset, we rely on three well-known code datasets: CodeNet [8], CodeSearchNet [25], and CodeXGLUE [27]. We extracted 1 million Python functions from the CodeSearchNet and CodeXGLUE datasets and used these during pretraining. Similarly, we extract about 350k Python functions and related docstrings from the CodeNet dataset [25] for finetuning and split them into train and test sets in a 90/10 ratio.
Models Description
AutoRegressive Language Models (GPT Family)
Autoregressive Language Models are pretrained on the classic language modeling task, in which they guess the next token having read all the previous ones. They correspond to the decoder of the original transformer model [33], and a mask is used on top of the full sentence so that the attention heads can only see what comes before in the text and not what comes after. Although these models can be finetuned and achieve great results on many tasks, the most natural application is text generation. A typical example of such models is GPT [34].
The GPT architecture comes from a Generative Pretraining of a language model on a diverse corpus of unlabeled text, followed by discriminative finetuning on each specific task [34]. It leads to large performance gains on Natural Language Understanding tasks such as textual entailment, question answering, semantic similarity assessment, and document classification. The training procedure consists of two stages. The first is learning a high-capacity language model on a large corpus of text, and the second is the finetuning stage, where the model is adapted to a discriminative task with labeled data.
Unsupervised Pretraining
Let U = {u_1, \ldots, u_n} be an unsupervised corpus of tokens. A multi-layer transformer decoder model, which is a variant of the original transformer model, is used to maximize the following likelihood:

L_1(U) = \sum_i \log P(u_i \mid u_{i-k}, \ldots, u_{i-1}; \Theta) \quad (1)
where k is the size of the context window, and the conditional probability P is modeled using a neural network with parameters Θ.
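In a PyTorch implementation (the framework used later in this work), Eq. (1) amounts to a shifted cross-entropy over next-token predictions; this is a generic sketch, not the authors' code.

```python
import torch.nn.functional as F

def lm_loss(logits, tokens):
    """Autoregressive objective of Eq. (1).
    logits: [batch, seq, vocab] from a causal decoder;
    tokens: [batch, seq] input ids."""
    return F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),  # predictions for u_i ...
        tokens[:, 1:].reshape(-1),                    # ... targets shifted by one
    )
```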
Supervised finetuning
After training the model with the objective in Eq. 1, the parameters are adapted to the supervised target task with a labeled dataset C, where each instance consists of a sequence of input tokens x^1, \ldots, x^m and a label y. The inputs are passed through the pretrained model to obtain the final transformer block's activation h_l^m, which is then fed into an added linear output layer with parameters W_y to predict y:

P(y \mid x^1, \ldots, x^m) = \mathrm{softmax}(h_l^m W_y) \quad (2)

This gives the following objective to maximize:

L_2(C) = \sum_{(x, y)} \log P(y \mid x^1, \ldots, x^m) \quad (3)
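A minimal sketch of the added output layer of Eq. (2); the hidden size of 768 matches the shared embedding dimension used later in this work, while num_labels is task-dependent and assumed here.

```python
import torch.nn as nn

class ClassificationHead(nn.Module):
    """Linear layer W_y over the final block's activation h_l^m,
    followed by (log-)softmax, as in Eq. (2)."""
    def __init__(self, hidden_size=768, num_labels=2):
        super().__init__()
        self.W_y = nn.Linear(hidden_size, num_labels)

    def forward(self, h):                       # h: [batch, hidden_size]
        return self.W_y(h).log_softmax(dim=-1)  # log-probs for Eq. (3)
```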
Masked Language Models (BERT Family)
Autoencoding models are pretrained by corrupting the input tokens in some way and trying to reconstruct the original sentence. These models rely on the encoder part of the original transformer model and allow the attention heads to look at all the tokens. Masked language models randomly mask some of the tokens from the input during pretraining, and the objective is to predict the original vocabulary ids of the masked words based only on their context. The Bidirectional Encoder Representations from Transformers [35], "BERT", is one example of such models. One of the key differences between the BERT model and the GPT model is that BERT uses a bidirectional Transformer while GPT uses a left-to-right Transformer. Also, the pretraining in BERT uses a Masked Language Model objective while GPT uses an AutoRegressive Language Model objective.
Baselines
Since no prior model spans both the chess and coding domains, we use separate baseline models across the two domains. The first baseline is a variant of the chess transformer [20] for chess move prediction. The second baseline is another variant of the chess transformer [20] for board state evaluation. The third and fourth baselines are the finetuned CodeT5 models [24] for code summarization and code generation.
The Chess Transformer
This choice of baseline was motivated by the fact that it was the most recent published work we could find and appears to be state of the art in using transformers for chess-related tasks. Rather than use the complete GPT-2 architecture as the authors did, we went with a smaller GPT architecture due to memory and training time constraints. Specifically, Table 1 shows where our architecture choice for the baseline differs from [20]. Also, we trained the GPT model from scratch and did not fine-tune the pretrained GPT-2 weights, as we believed this would yield comparable results in a shorter time. As in GPT-2, we use the Byte-Pair Encoding tokenizer. To ensure that our model only learns from high-quality move sequences, we trained the baseline model only on moves from players rated 2400 and above (rather than the entire combined dataset). The performance of this modified baseline did not differ from the reported performance (9.6% of illegal moves generated versus 10%).
Table 1: Parameter differences between our baseline GPT architecture and the original chess transformer [20].
For the chess board state evaluation, we chose to finetune the full GPT-2 architecture on about 2 million evaluated chess positions for 3,000 steps. We chose to start from pretrained weights rather than train from scratch (as in the first variant) because we wanted to leverage the pretrained knowledge of numbers in text that the original GPT-2 model has.
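A hedged sketch of this variant using HuggingFace; the "FEN -> eval" string format shown here is an assumption for illustration, not the paper's exact serialization:

```python
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")  # start from pretrained weights

# Hypothetical training example: a board position followed by its evaluation.
example = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1 eval: 0.2"
batch = tokenizer(example, return_tensors="pt")
loss = model(**batch, labels=batch["input_ids"]).loss  # LM loss for finetuning
```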
CodeT5
The CodeT5 baseline was chosen because it is the only available code language model that uses the same unified text-to-text framework as our work. Given the scope of our project and the compute time required to train this baseline, we used the pretrained weights publicly provided by [24] for their summarization and code generation tasks through the HuggingFace library.
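A hedged sketch of loading public CodeT5 weights from the HuggingFace hub; the checkpoint name is an assumption and may differ from the exact finetuned checkpoints the authors used:

```python
from transformers import RobertaTokenizer, T5ForConditionalGeneration

tokenizer = RobertaTokenizer.from_pretrained("Salesforce/codet5-base")
model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-base")

code = "def add(a, b):\n    return a + b"
ids = tokenizer("summarize: " + code, return_tensors="pt").input_ids
summary_ids = model.generate(ids, max_length=32)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```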
Experiments
Description
There are two stages in this framework: unsupervised pretraining followed by supervised finetuning. Specifically, we used two architectures: a BERT-style masked language model and a GPT-style autoregressive language model. In the finetuning step, we experimented with two formulations of the text-to-text framework: finetuning the pretrained models on each task separately, and concatenating all the subtasks into one dataset and finetuning once. We chose these two formulations based on a literature review of several multi-task learning works described in the related works section.
All our experiments were carried out using the HuggingFace library and PyTorch. We ran three sets of experiments, described in the following subsections; the models from each experiment set are named MD-T5-x, where x identifies the set. We used the following parameters across all experiments (see the configuration sketch after this list):
• vocab_size = 50,000
• max_position_embeddings = 514
• num_embeddings = 768
• num_attention_heads = 12
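Expressed as HuggingFace configs, these shared hyperparameters look roughly as follows (a sketch; any parameter not listed above is left at its library default):

```python
from transformers import GPT2Config, RobertaConfig

bert_style = RobertaConfig(
    vocab_size=50_000,
    max_position_embeddings=514,
    hidden_size=768,
    num_attention_heads=12,
)
gpt_style = GPT2Config(
    vocab_size=50_000,
    n_positions=514,   # GPT-2's name for max_position_embeddings
    n_embd=768,
    n_head=12,
)
```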
Experiment Set A
In this experiment set, we trained a byte-level Byte-Pair Encoding tokenizer (BPETokenizer) on the mixed dataset of Python functions and chess games using a vocabulary size of 50,000. We then pretrained a RoBERTa masked language model, masking 15% of the text spans, on the mixed dataset of Python functions and chess games for a total of 500,000 training steps (about 70 hours of training on NVIDIA T4 GPUs). We then finetuned this pretrained model individually, in succession, on the four tasks described in Section 3.1, using an encoder-decoder architecture with the encoder and decoder weights tied. Each finetuning task was trained for an average of 30 hours per task on NVIDIA T4 GPUs.
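A sketch of the tokenizer-training step with the `tokenizers` library; the corpus file paths are placeholders:

```python
import os
from tokenizers import ByteLevelBPETokenizer

tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["python_functions.txt", "chess_games.txt"],  # hypothetical paths
    vocab_size=50_000,
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],
)
os.makedirs("md-t5-tokenizer", exist_ok=True)
tokenizer.save_model("md-t5-tokenizer")  # writes vocab.json and merges.txt
```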
Experiment Set B
In this experiment set, we also trained a byte-level Byte-Pair Encoding tokenizer (BPETokenizer) on the mixed dataset of Python functions and chess games using a vocabulary size of 50,000. We then pretrained a GPT-2 language model using the same vocabulary size, embedding size, and number of attention heads as the RoBERTa model for a total of 500,000 training steps (about 60 hours of training on NVIDIA P100 GPUs). We then finetuned this pretrained model individually, in succession, on the four tasks described in Section 3.1, using the pretrained GPT-2 architecture. Each finetuning task was trained for an average of about 15 hours per task (120,000 steps per task) on NVIDIA P100 GPUs.
Experiment Set C
In this experiment set, we followed the same tokenizer training and model pretraining paradigm as Experiment Set B. However, rather than finetune on the four tasks individually, we finetuned once on a joint balanced dataset of the four tasks using the pretrained GPT-2 model from Experiment Set B. The joint finetuning ran for about 15 hours (120,000 steps) on NVIDIA P100 GPUs.
Evaluation Metrics
We use four evaluation metrics that measure how well the models perform on the different tasks and on multi-domain learning. The first metric is the play score, inspired by the Elo score [36], which ranks how well the chess engine plays relative to its competitors. The second metric is the chess evaluation score, which ranks how well the chess engine evaluates the board state. The third metric is the BLEU score, which measures the quality of the code-to-text and text-to-code translation tasks. Finally, we propose a new metric, the multi-domain learning score, which measures how well the model retains knowledge of the different domains. We describe these evaluation metrics below:
Play Score (PS): This is an aggregate rank score that incorporates several metrics (see appendix A) covering how accurate the chess engine play is and how well it understands the board state in comparison to its competitors. The higher the PS, the better the model. Mathematically, the PS of a chess engine is formulated as:
$$PS = \frac{1}{n} \sum_{i=1}^{n} R_i \qquad (4)$$
where n is the number of metrics and R i is the rank (from largest to smallest) of a chess engine among its competitors for metric i.
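A minimal sketch of this computation; the rank values below are hypothetical, chosen only to reproduce a score like MD-T5-B's 3.2:

```python
def play_score(ranks):
    """Average rank of a chess engine over the appendix-A sub-metrics."""
    return sum(ranks) / len(ranks)

print(play_score([3, 4, 3, 2, 4]))  # 3.2 with five illustrative sub-metric ranks
```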
Evaluation Score (ES): This is an aggregate rank score that incorporates several metrics (see appendix B) covering how well the chess engine correctly evaluates the board states in comparison to its competitors. The higher the ES, the better the model. Mathematically, the ES of a chess engine is formulated as:
$$ES = \frac{X + Y}{2} \qquad (5)$$
where X is the rank (from largest to smallest) of a chess engine among its competitors for the regression part of the task and Y is the rank (from largest to smallest) of a chess engine among its competitors for the classification part of the task.
Bilingual Evaluation Understudy Score (BLEU Score): The BLEU score compares a candidate translation of a text to one or more reference translations. The higher the BLEU score, the better the model. BLEU is computed from modified n-gram precisions as shown below:
$$\mathrm{BLEU} = BP \times \exp\left(\sum_{n=1}^{N} w_n \log p_n\right) \qquad (6)$$
where $p_n$ is the modified precision for n-grams of order n, $w_n$ is a weight between 0 and 1 for $\log p_n$, and $BP$ is the brevity penalty that penalizes short machine translations.
The BP is computed as:
$$BP = \begin{cases} 1 & \text{if } c > r \\ \exp\left(1 - \dfrac{r}{c}\right) & \text{if } c \le r \end{cases} \qquad (7)$$
where c is the number of unigrams (length) in all the candidate sentences, and r is the best match length for each candidate sentence in the corpus.
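A small sketch of equations (6)-(7); the precision values are placeholders, not computed from a real corpus:

```python
import math

def brevity_penalty(c, r):
    """BP from equation (7)."""
    return 1.0 if c > r else math.exp(1 - r / c)

def bleu(p, c, r, weights=None):
    """BLEU from equation (6), given modified n-gram precisions p."""
    weights = weights or [1 / len(p)] * len(p)   # uniform w_n
    return brevity_penalty(c, r) * math.exp(
        sum(w * math.log(pn) for w, pn in zip(weights, p))
    )

print(bleu([0.6, 0.4, 0.3, 0.2], c=18, r=20))  # illustrative values
```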
Multi-Domain Learning Score (MDLS): This is the harmonic mean of the non-token mix ratio and the cross-domain recall ratio (see appendix C), multiplied by 100. It is inspired by the F1 score. The higher the score, the better the model.

Results and Discussion

Table 2 shows the results across each metric for all tasks and all experiment sets. Appendices D to F show the raw sub-metric scores that were aggregated to obtain the overall metric scores, and appendix G shows some sample outputs from the MD-T5 models for both the chess-related and code-related tasks. At a high level, Table 2 shows that the MD-T5-B models perform better than the baseline on three out of four tasks while gaining multi-domain knowledge. Similarly, the MD-T5-C models outperform the baseline on two out of four tasks with a significantly better multi-domain learning score. From appendix A, we see a variety of behavioral differences between the MD-T5 models and the baseline on the move prediction task. First, the MD-T5-B model tends to generate much longer games than the other MD-T5 variants and the baseline. Furthermore, it tends to keep track of the board state for much longer on average (76 moves vs. 1/54/68 moves). Despite its ability to track the board state much longer, it generally plays very accurate moves, outperforming the baseline and performing similarly to the MD-T5-C model, which generates much shorter games (and thus has to play much easier moves). However, all MD-T5 models struggle with ending the game, which could be attributed to losing track of the game state as they approach the end of the game. The MD-T5-A model struggled with the task immensely.
From Table 2, we see that all MD-T5 models outperform the baseline on the board state evaluation task. However, we note that all models, including the baseline, perform poorly on this task. Appendix E shows that while MD-T5-A generates a numerical value correctly 70% of the time when one is required, its MSE and accuracy values show that it still struggles with the task. The story is similar for the MD-T5-B and MD-T5-C models, although MD-T5-B makes better predictions for both numerical and non-numerical values than the others. Since this is a joint regression and classification problem modeled as a text-to-text problem, these results are not entirely surprising. On the code summarization task, Table 2 shows that the MD-T5-B and MD-T5-C models significantly outperform the baseline. We posit that this is because these models were trained on Python code only, while the baseline model was trained on multiple programming languages, so negative transfer [26] may have occurred. Still, this result is impressive, as shown by three of the best performing outputs provided in appendix G. One could even argue that the summaries produced by the MD-T5-B and MD-T5-C models for input 1 and input 2 are as good as or better than the targets, given the functions. This is all the more impressive given that these models were never pretrained or even finetuned on English language data; yet, they were able to generate fluent, concise, and relevant summaries.
While the MD-T5-B and MD-T5-C models could not outperform the baseline on the code generation task, they generate reasonable function names and code structure given the text prompt (appendix G). This is impressive, given that even a human programmer would show similar behavior with no context other than a text prompt. Again, we posit that while training across multiple languages may have been a disadvantage for the baseline on code summarization, it was probably an advantage here, as knowledge transfer helps on the challenging task of code generation. Furthermore, the baseline model was trained for much longer (96 hours on a cluster of A100 GPUs) than our MD-T5 models, and longer training time is an additional advantage on complex tasks.
Perhaps the most vital aspect of this analysis is multi-domain knowledge, as measured by the multi-domain learning score. Once again, the MD-T5-A model does not yield strong performance, while the MD-T5-B and MD-T5-C models post impressive results with little or no token mix (Table 5). However, MD-T5-B models are susceptible to catastrophic forgetting [26], as seen from the cross-domain recall scores (Table 6). This is because their finetuning is done separately rather than jointly as in MD-T5-C. Appendix H shows typical recall results from MD-T5-B after prompting. From the prompting outputs provided, we can see that MD-T5-B suffers from catastrophic forgetting, even though it still retains some multi-domain knowledge if the prompting continues long enough. This makes the results of MD-T5-C all the more impressive, as it performs almost as well as MD-T5-B while retaining a substantially larger part of its multi-domain knowledge.
The MD-T5-A model struggled to generate any meaningful performance after finetuning despite performing reasonably well at the pretraining stage. We believe this poor performance could be a result of two things: negative knowledge transfer and inadequate training time. Negative knowledge transfer is a known phenomenon in transfer learning, typical of multi-task learning when the tasks are dissimilar [26,37]. While we see some negative knowledge transfer in the GPT-style models (MD-T5-B and MD-T5-C), we believe the denoising objective explains why the effects are much more profound in the BERT-style model. This brings us to the next point: inadequate training time.
While we trained all the models for approximately the same amount of time since they had roughly the same number of parameters, it is possible that the BERT-style model requires more training time to generate good results.
A comparison of the GPT-style models shows that they hold up well against solid baselines. The MD-T5-B model seems to outperform the other MD-T5 models on each task across the domains. However, as we saw from the prompting examples and the cross-domain recall score, this came at the cost of forgetting most of what it had learned from the other domain, i.e., losing its multi-domain knowledge. Given this context, the MD-T5-C model's performance is all the more impressive: not only does it perform each task reasonably well, it does so while keeping its multi-domain knowledge. Thus, the GPT-style joint pretraining and joint finetuning framework is the most promising direction for multi-domain, multi-task learning using a unified text-to-text transfer transformer architecture.
Conclusion
We investigated the abilities of transformers to perform well across tasks from multiple domains using a unified text-to-text framework. We curated datasets from several sources and then carried out several experiments using three different training strategies: BERT-style joint pretraining + successive finetuning, GPT-style joint pretraining + successive finetuning, and GPT-style joint pretraining + joint finetuning. We chose these strategies because they are the most common in related work.
Our experiments and analysis show that the multi-domain text-to-text transfer transformer framework we propose compares well on individual tasks across multiple domains against powerful transformer baseline models. Furthermore, the joint pretraining and joint finetuning framework of experiment set C performs well on individual tasks while still keeping its multi-domain knowledge. While these results are encouraging, they are limited by the observation that multi-domain knowledge is lost when the tasks are not finetuned jointly, and joint finetuning is less commonly adopted in real-world scenarios than the first two strategies.
This research has focused on just three training strategies, but there are many more, even if they are far less popular [14,15]. Thus, one natural extension of this project is to apply one or more of these other strategies to the task of extreme multi-domain learning. Similarly, this project was limited by the available compute resources, and it would be interesting to see how performance changes with more compute and training time. Finally, it would be interesting to scale this work up to tens of extremely different domains and hundreds of tasks, or, at a smaller scale, to increase the number of tasks significantly across two or three domains.

A Play Score Sub-Metrics Description

Average Centipawn Loss: Centipawn loss is a numerical score given by a chess engine (usually Stockfish) to the difference between the move played and the strongest move available at that time. Since conventional chess engines are far stronger than even the best humans, it is used as a benchmark of how well a chess player plays. A strong model should have an average centipawn loss close to 0.
Game Length: In general, better chess playing involves being able to play longer games as this implies that you are not losing quickly. This is particularly important for chess engines. The game length is the number of moves that the chess move prediction model can generate on average.
B Eval Score Sub-Metrics Description
The Ratio of Correct Numerical Values: The chess board state evaluation is an evaluation function used to heuristically determine the relative value of a position. It is usually a real number, which we bound between -10 and +10 and bin into 44 bins. However, it can also be a string from a finite set of strings when it is possible to force a mate or a draw within a few moves. Thus, we compute the fraction of the time the model predicts a numerical token (cast as a string, given the text-to-text framework) when the true evaluation is also a numerical token.
The Ratio of Correct Non-numerical Values: This metric is conceptually the same as for the "Ratio of correct numerical values", except that here we compute the fraction of time the model predicts a non-numerical token (cast as a string, considering the text-to-text framework) when the true evaluation is also a non-numerical token.
Mean Squared Error: The chess board state evaluation is usually a real number that we decided to bound between -10 and +10 and bin in 44 bins. The motivation was to cast the regression part of the problem into a simpler classification problem given the already complex nature of the data. Then we use the mean squared error to compute the divergence between the predictions of the models and the true evaluation of the board state.
Accuracy: Accuracy here is the fraction of non-numerical chess board state evaluations that the model predicts correctly. The evaluation is sometimes a string from a finite set of strings instead of a numerical value, so we compute the accuracy to find the fraction of the time the model gets the non-numerical values right.
C Multi-Domain Learning Sub-Metrics Description
Non-Token Mix Ratio (NMR): This measures how rarely the model generates a chess token in a code-related task, and vice versa, over all the text generated. The higher the non-token mix ratio, the better the model. Mathematically, the non-token mix ratio for a model is given as
$$NMR = 1 - \frac{|A \cap B|}{|A \cup B|} \qquad (9)$$
where A is the tokens from chess-related tasks, and B is the tokens from code-related tasks generated by that model.
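A small sketch of equation (9) over two illustrative token sets:

```python
def non_token_mix_ratio(chess_tokens, code_tokens):
    """1 minus the Jaccard overlap of the two generated-token sets."""
    a, b = set(chess_tokens), set(code_tokens)
    return 1 - len(a & b) / len(a | b)

print(non_token_mix_ratio({"e4", "Nf3"}, {"def", "return"}))  # 1.0: no mixing
```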
Cross-Domain Recall Ratio (CRR):
This is the ratio of times the model successfully returns tokens from another domain when prompted with tokens from that domain after being finetuned on a separate domain. Table 3 shows the results for the chess move prediction task; the numbers in brackets give the metric values when each game is limited to 70 moves, the 90th percentile of game length in the training dataset. Table 4 shows the results for the chess board state evaluation task. Table 5 shows how often, across each experiment set, the model did not introduce a token from another domain into a specific domain, and Table 6 shows how often the model successfully recalled cross-domain knowledge.
G Sample Outputs from MD-T5 Models
Figure 2 provides some sample positions from a chess game played between the strongest MD-T5 variant and Stockfish 14, the best chess engine currently available.
Here are three of the best performing outputs from the code summarization task:

• input 1: summarize: def get_courses_for_regid(self, regid, params={}): """ """ self._as_user = regid data = self._get_resource("/api/v1/courses", params=params) self._as_user = None courses = [] for datum in data: if "sis_course_id" in datum: courses.append(CanvasCourse(data=datum)) else: courses.append(self.get_course(datum["id"], params)) return courses
target: Return a list of courses for the passed regid.

• input 2: summarize: def save_authorization_code(self, client_id, code, request, *args, **kwargs): """""" log.debug('Persist authorization code %r for client %r', code, client_id) request.client = request.client or self._clientgetter(client_id) self._grantsetter(client_id, code, request, *args, **kwargs) return request.client.default_redirect_uri
target: Persist the authorization code.
Figure 1 provides a pictorial summary of each of the tasks as a text-to-text problem.
Figure 1: Multi-Domain Text-to-Text Transfer Transformer (MD-T5) Tasks
$$MDLS = 100 \times \frac{2 \times (\text{non-token mix ratio}) \times (\text{cross-domain recall ratio})}{(\text{non-token mix ratio}) + (\text{cross-domain recall ratio})} \qquad (8)$$
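A minimal sketch of this score, assuming both components are expressed as fractions in [0, 1]:

```python
def mdls(non_token_mix_ratio, cross_domain_recall_ratio):
    """Scaled harmonic mean of the two multi-domain components."""
    return 100 * (2 * non_token_mix_ratio * cross_domain_recall_ratio) / (
        non_token_mix_ratio + cross_domain_recall_ratio
    )

# Illustrative check: a model that never mixes tokens (ratio 1.0) but recalls
# only 7.1% of the other domain scores ~13.3, matching MD-T5-B in Table 2.
print(round(mdls(1.0, 0.071), 1))
```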
Proportion of Illegal Chess Moves Generated: An illegal move in chess is any move that violates the standard rules of chess. Using the python-chess library [32], we can check whether any chess move generated by the model is illegal. The proportion of illegal moves generated is the percentage of illegal moves among all moves generated during gameplay.

Average Move Number of Illegal Move Generation: In general, we expect the model to perform well in the opening and middle game phases (moves 1-50), since it is likely to have seen similar positions many times in the training dataset. However, we expect it to struggle in the endgame, where many positions are unique and even top human chess players struggle. The average move number of illegal move generation is the average move number at which the model makes an illegal move; it helps track how well the model plays in the endgame, which is a measure of how well the model understands the game of chess.

The Proportion of Missed End States: An end state in chess is either a white win (1-0), a black win (0-1), or a draw (1/2-1/2). Using the python-chess library [32], we can keep track of the game state and determine when one of these end states has occurred. We expect the model to track this too and to generate one of the end state tokens when the game is over. The proportion of missed end states is therefore the percentage of games in which the model keeps generating chess moves even after the game should have ended.
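A hedged sketch of the legality check with python-chess; the move list is illustrative, and this is not the authors' exact evaluation code:

```python
import chess

def illegal_move_stats(moves_san):
    """Replay SAN moves; count the first move python-chess rejects as illegal."""
    board = chess.Board()
    illegal = []
    for number, san in enumerate(moves_san, start=1):
        try:
            board.push_san(san)  # raises ValueError for an illegal/unparsable move
        except ValueError:
            illegal.append(number)
            break  # the board state is unreliable after an illegal move
    return len(illegal) / len(moves_san), illegal

rate, where = illegal_move_stats(["e4", "e5", "Nf3", "Ke4"])  # Ke4 is illegal
```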
Figure 2: Sample positions of a game between an MD-T5 model and Stockfish 14.
experiment A: for to the a an of from in_.].'].').'),'))'])()) )))))])]))]),],'],']']:]::]] ()((_(( (destpathname1'2, default
experiment B: Returns the list of courses for the given regid
experiment C: Return a list of courses for the given regid.
The non-token mix ratio is a precision-like component that measures how well the model keeps the knowledge of the two domains separate after finetuning. For example, if the model was trained on the code summarization task, it should not output chess moves (e.g., e4) in its summarization outputs. This is equivalent to a human answering a technical deep learning question with a Shakespearean quote just because they happen to minor in literature. If this happens, the precision-like component of the score penalizes the model.

Similarly, the cross-domain recall ratio is a recall-like component that measures how much of the other domain the model remembers after finetuning. Using the example of the code summarization task again, if the model is prompted with a sequence of chess moves, we expect it to still output valid chess moves. This ensures that the model has not lost its multi-domain knowledge, even if it is now better at one domain than the other. Humans do not forget all their previous knowledge of a domain when they acquire new knowledge of another domain, even if the new domain is significantly different.
Table 2: Results of the Extreme Multi-Domain, Multi-Task Learning Experiments

Metric       Baseline   MD-T5-A   MD-T5-B   MD-T5-C
Play Score   3          1         3.2       2.8
Eval Score   1.5        2.5       3.5       2.5
CS BLEU      20.36      10.79     31.64     28.9
CG BLEU      41.48      11.42     31.37     30.81
MDLS         0          5.77      13.3      95
D Raw Sub-Metrics Scores for Chess Move Prediction

Table 3: Metrics for the Chess Move Prediction Task (numbers in brackets: games limited to 70 moves)

Metric                                   Baseline        Experiment Set A   Experiment Set B   Experiment Set C
Proportion of Illegal Moves Generated    9.50% (9.48%)   100% (100%)        40.1% (16.5%)      60% (29.7%)

E Raw Sub-Metrics Scores for Chess Board State Evaluation
Table 4: Metrics for the Chess Board State Evaluation Task

Metric                                   Baseline   Experiment Set A   Experiment Set B   Experiment Set C
Ratio of correct numerical values        0.2%       70%                50.6%              51.8%
Ratio of correct non-numerical values    10.3%      2.2%               7.3%               7.1%
Mean Squared Error                       2.42       60.59              43.42              76.45
Accuracy                                 0%         0.0%               26.02%             25.35%

F Raw Sub-Metrics Scores for Multi-Domain Learning
Table 5: Average Non-Token Mix Ratio Across Experiment Sets

                       Experiment A   Experiment B   Experiment C
Non-Token Mix Ratio    96%            0%             0.001%
Table 6: Average Cross-Domain Recall Ratio Across Experiment Sets

                             Experiment A   Experiment B   Experiment C
Cross-Domain Recall Ratio    10.4%          7.1%           90.6%
H Sample Prompting Outputs from MD-T5-B

prompt 4: e4 e5 Nf3 Nc6 Bb5 f6 O-O fxe5 Bxc6 dxc6 Nxe5 Bd6 d3 Ne7 c3 Ng6 Nxg6

output 4: O4 15 failed_e2 = FAILE(failed_e2, buffer) if failed_e2.has_effective(): prb.set_effective(failed_e2, buffer) # Setting all events for a series to make sure that all_effective_effective_effective_effective_effective_effective_effective_effectives( failed_event, buffer) # Parse that the reverse times to prevent only next_eff
• input 3: summarize: ... for instruction in self.data: if isinstance(instruction, CompositeGate): instruction_list.extend(instruction.instruction_list()) else: instruction_list.append(instruction) return instruction_list
target: Return a list of instructions for this CompositeGate.
experiment A: =. thename _ _ to allsr this an of((' ( on a not, andThe0 in or is from for Retrieve None self get str model methodzes CalculateRunssummarize through format c
experiment B: Returns a list of instructions for the given instruction.
experiment C: Returns a list of "ListingCompositeGate" instructions.
Thomas Ross. Machines that think. Scientific American, 148:206-209, 04 1933.

F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65:386-408, 1958.

Wikipedia Contributors. AI winter, 10 2019.

Raj Reddy. Grand challenges in AI. ACM Computing Surveys, 27:301-303, 09 1995.

Wikipedia Contributors. Deep Blue (chess computer), 04 2019.

Wikipedia Contributors. Human-computer chess matches, 10 2019.

David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, and Demis Hassabis. A general reinforcement learning algorithm that masters chess, shogi, and go through self-play. Science, 362:1140-1144, 12 2018.

Sanghamitra Dutta, Ziqian Bai, Tze Meng Low, and Pulkit Grover. Codenet: Training large scale neural networks in presence of soft-errors. arXiv:1903.01042 [cs, math], 03 2019.

Yu Zhang and Qiang Yang. A survey on multi-task learning, 2017.

Michael Crawshaw. Multi-task learning with deep neural networks: A survey. arXiv:2009.09796 [cs, stat], 09 2020.

Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, Manan Dey, M. Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Stella Biderman, Leo Gao, Tali Bers, Thomas Wolf, and Alexander M. Rush. Multitask prompted training enables zero-shot task generalization. arXiv:2110.08207 [cs], 10 2021.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate, 2014.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need, 2017.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer, 2019.

Vamsi Aribandi, Yi Tay, Tal Schuster, Jinfeng Rao, Huaixiu Steven Zheng, Sanket Vaibhav Mehta, Honglei Zhuang, Vinh Q. Tran, Dara Bahri, Jianmo Ni, Jai Gupta, Kai Hui, Sebastian Ruder, and Donald Metzler. Ext5: Towards extreme multi-task scaling for transfer learning. arXiv:2111.10952 [cs], 11 2021.

Omid E. David, Nathan S. Netanyahu, and Lior Wolf. Deepchess: End-to-end deep neural network for automatic learning in chess. Artificial Neural Networks and Machine Learning - ICANN 2016, pages 88-96, 2016.

Harsh Jhamtani, Varun Gangal, Eduard Hovy, Graham Neubig, and Taylor Berg-Kirkpatrick. Learning to generate move-by-move commentary for chess games from large-scale social forum data, 07 2018.

Hongyu Zang, Zhiwei Yu, and Xiaojun Wan. Automated chess commentator powered by neural chess engine. arXiv:1909.10413 [cs], 09 2019.

Isaac Kamlish, Isaac Bentata Chocron, and Nicholas McCarthy. Sentimate: Learning to play chess through natural language processing. arXiv:1907.08321 [cs], 09 2019.

David Noever, Matt Ciolino, and Josh Kalin. The chess transformer: Mastering play using generative language models. arXiv:2008.04057 [cs], 09 2020.

Shubham Toshniwal, Sam Wiseman, Karen Livescu, and Kevin Gimpel. Learning chess blindfolded: Evaluating language models on state tracking. arXiv:2102.13249 [cs], 02 2021.

Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. Unified pre-training for program understanding and generation. arXiv preprint arXiv:2103.06333, 2021.

Antonio Mastropaolo, Simone Scalabrino, Nathan Cooper, David Nader Palacio, Denys Poshyvanyk, Rocco Oliveto, and Gabriele Bavota. Studying the usage of text-to-text transfer transformer to support code-related tasks. In 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE), pages 336-347. IEEE, 2021.

Yue Wang, Weishi Wang, Shafiq Joty, and Steven C. H. Hoi. CodeT5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation. arXiv:2109.00859 [cs], 09 2021.

Ruchir Puri, David S. Kung, Geert Janssen, Wei Zhang, Giacomo Domeniconi, Vladimir Zolotov, Julian Dolby, Jie Chen, Mihir Choudhury, Lindsey Decker, Veronika Thost, Luca Buratti, Saurabh Pujar, Shyam Ramji, Ulrich Finkler, Susan Malaika, and Frederick Reiss. Codenet: A large-scale AI for code dataset for learning a diversity of coding tasks. arXiv:2105.12655 [cs], 08 2021.

Tu Vu, Tong Wang, Tsendsuren Munkhdalai, Alessandro Sordoni, Adam Trischler, Andrew Mattarella-Micke, Subhransu Maji, and Mohit Iyyer. Exploring and predicting transferability across nlp tasks. arXiv preprint arXiv:2005.00770, 2020.

Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin Clement, Dawn Drain, Daxin Jiang, Duyu Tang, et al. Codexglue: A machine learning benchmark dataset for code understanding and generation. arXiv preprint arXiv:2102.04664, 2021.

lichess.org open database, 2021.

Chess evaluations, 2021.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics.

Diogo R. Ferreira. The impact of the search depth on chess playing strength. ICGA Journal, 36(2):67-80, 2013.

Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code. arXiv:2107.03374 [cs], 07 2021.
Seymour's Second Neighborhood Conjecture for orientations of (pseudo)random graphs

Fábio Botler [email protected] (Programa de Engenharia de Sistemas e Computação, Instituto Alberto Luiz Coimbra de Pós-Graduação e Pesquisa em Engenharia, Universidade Federal do Rio de Janeiro, Brasil)

Phablo F. S. Moura [email protected] (Departamento de Ciência da Computação, Instituto de Ciências Exatas, Universidade Federal de Minas Gerais, Brasil)

Tássio Naia [email protected] (Departamento de Ciência da Computação, Instituto de Matemática e Estatística, Universidade de São Paulo, Brasil)

November 15, 2022. arXiv:2211.06540

Abstract. Seymour's Second Neighborhood Conjecture (SNC) states that every oriented graph contains a vertex whose second neighborhood is as large as its first neighborhood. We investigate the SNC for orientations of both binomial and pseudorandom graphs, verifying the SNC asymptotically almost surely (a.a.s.) (i) for all orientations of G(n, p) if lim sup_{n→∞} p < 1/4; and (ii) for a uniformly-random orientation of each weakly (p, A√(np))-bijumbled graph of order n and density p, where p = Ω(n^{−1/2}), 1 − p = Ω(n^{−1/6}), and A > 0 is a universal constant independent of both n and p.
Introduction
An oriented graph D is a digraph obtained from a simple graph G by assigning directions to its edges (i.e., D contains neither loops, nor parallel arcs, nor directed cycles of length 2); we also call D an orientation of G. Given i ∈ N, the i-th neighborhood of u ∈ V(D), denoted by $N^i(u)$, is the set of vertices v for which a shortest directed path from u to v has precisely i arcs. A Seymour vertex (see [14]) is a vertex u for which $|N^2(u)| \ge |N^1(u)|$. Seymour conjectured the following (see [6]).

Conjecture 1. Every oriented graph has a Seymour vertex.

Conjecture 1, known as Seymour's Second Neighborhood Conjecture (SNC), is a notorious open question (see, e.g., [3,7,9,14]). In particular, it was confirmed for tournaments (orientations of cliques) by Fisher [8] and (with a purely combinatorial argument) by Havet and Thomassé [10]; it was also studied by Cohn, Godbole, Harkness and Zhang [4] for the random digraph model in which each ordered pair of vertices is picked independently as an arc with probability p < 1/2. Throughout the paper, we denote by S the set of graphs {G : all orientations of G contain a Seymour vertex}.
Our contribution comes from considering this combinatorial problem in a random and pseudorandom setting (see, e.g., [5,13]). More precisely, we explore Conjecture 1 for orientations of the binomial random graph G(n, p), defined as the random graph with vertex set {1, . . . , n} in which every pair of vertices appears as an edge independently and with probability p.
We say that an event E holds asymptotically almost surely (a.a.s.) if Pr[E] → 1 as n → ∞. If G = G(n, p) is very sparse (say, if np ≤ (1 − ε) ln n for large n and fixed ε > 0), then a.a.s. G has an isolated vertex, which clearly is a Seymour vertex. Our first result extends this observation to much denser random graphs.

Theorem 2. If lim sup_{n→∞} p < 1/4, then a.a.s. G(n, p) ∈ S.

If we impose restrictions on the orientations, requiring, for example, somewhat large minimum outdegree, the range of p can be further increased.

Theorem 3. For every β > 0 there is a constant C = C(β) such that if p ≤ 2/3 − β, then a.a.s. every orientation of G(n, p) with minimum outdegree at least $Cn^{1/2}$ contains a Seymour vertex.
For an even larger range of p, we show that most orientations of G(n, p) contain a Seymour vertex; i.e., Conjecture 1 holds for almost every (labeled) oriented graph. In fact, we prove a version of Theorem 4 in a more general setting, namely orientations of pseudorandom graphs (see Section 4).

Theorem 5. There exists an absolute constant C > 1 such that the following holds. Let G be a weakly $(p, A\sqrt{np})$-bijumbled graph of order n, where $\varepsilon^3 np^2 \ge A^2 C$ and $p < 1 - 15\sqrt{\varepsilon}$. If D is chosen uniformly at random among the $2^{e(G)}$ possible orientations of G, then a.a.s. D has a Seymour vertex.

This paper is organized as follows. In Section 2 we prove Conjecture 1 for wheel-free graphs, which implies the particular case of Theorem 2 when $n^2p^3 \to 0$. In Section 3 we complete the proof of Theorem 2 and prove Theorems 3 and 4 using a set of standard properties of G(n, p). These properties are collected in Definition 9 and Lemma 10 (proved in Appendix A). In Section 4, we introduce bijumbled graphs and prove Theorem 5. We make a few further remarks in Section 5.
To avoid uninteresting technicalities, we omit floor and ceiling signs. If A and B are sets of vertices, we denote by e→(A, B) the number of arcs directed from A to B, by e(A, B) the number of edges or arcs with one vertex in each set, and by e(A) the number of edges or arcs with both vertices in A. The (underlying) neighborhood of a vertex u is denoted by N(u), and the codegree of vertices u, v is $\deg(u, v) = |N(u) \cap N(v)|$.
is deg(u, v) = N (u) ∩ N (v) .
We remark that Theorem 2 and a weaker version of Theorem 3 appeared in the extended abstracts [1,2].
Wheel-free graphs
A wheel is a graph obtained from a cycle C by adding a new vertex adjacent to all vertices in C. Firstly, we show that G(n, p) is wheel-free when p is small; then we prove that all wheel-free graphs satisfy Conjecture 1.

Lemma 6. If p ∈ (0, 1) and $n^4p^6 < \varepsilon/16$, then $\Pr[G(n, p) \text{ is wheel-free}] \ge 1 - \varepsilon$.
Proof. We can assume ε < 1. Since $n^4p^6 < \varepsilon/16$, we have that

$$np^2 < (\varepsilon p^2/16)^{1/4} < 1/2. \qquad (1)$$

Let $X = \sum_{k=4}^{n} X_k$, where $X_k$ denotes the number of wheels of order k in G(n, p). By the linearity of expectation,

$$\mathbb{E}X = \sum_{k=4}^{n} \mathbb{E}X_k = \sum_{k=4}^{n} \binom{n}{k} k \frac{(k-1)!}{2(k-1)} p^{2(k-1)} < n \sum_{k=4}^{n} (np^2)^{k-1} = n^4p^6 \sum_{k=0}^{n-4} (np^2)^k \overset{\text{G.S.}}{<} \frac{n^4p^6}{1 - np^2} \overset{(1)}{<} 2n^4p^6 < \frac{\varepsilon}{8} < \varepsilon, \qquad (2)$$

where in (2) we use the formula $\sum_{i=0}^{\infty} r^i = (1 - r)^{-1}$ for the geometric series (G.S.) of ratio $r = np^2 < 1$. Markov's inequality then yields $\Pr[X \ge 1] \le \mathbb{E}X < \varepsilon$.
To show that every orientation of a wheel-free graph has a Seymour vertex, we prove a slightly stronger result. A digraph is locally cornering if the outneighborhood of each vertex induces a digraph with a sink (i.e., a vertex of outdegree 0). The next proposition follows immediately by noting that, in a locally cornering digraph, each vertex of minimum outdegree is a Seymour vertex. Proposition 7. Every locally cornering digraph has a Seymour vertex. Lemma 6 and Proposition 7 immediately yield the following corollary. Corollary 8. If p ∈ (0, 1), and n 4 p 6 < ε/16, then Pr G(n, p) ∈ S ≥ 1 − ε.
Proof. Note that every orientation of a wheel-free graph is locally cornering, since the (out)neighborhood of each vertex is a forest, and every oriented forest has a vertex with outdegree 0. Hence the result follows by Lemma 6 and Proposition 7.
Typical graphs
In this section we prove that if lim sup n→∞ p < 1/4, then a.a.s. G(n, p) ∈ S. We use a number of standard properties of G(n, p), stated for convenience in Definition 9. Definition 9. Let p ∈ (0, 1). A graph G of order n is p-typical if the following hold.
(i) For every X ⊆ V(G), we have $\left| e(X) - \binom{|X|}{2}p \right| \le |X|\sqrt{3np(1-p)} + 2n$.

(ii) If $n' \ln n \le n'' \le n$ or $n'' = n' = n$, then all X, Y ⊆ V(G) with $|X|, |Y| \le n'$ satisfy $\left| e(X, Y) - |X||Y|p \right| \le \sqrt{6n''p(1-p)|X||Y|} + 2n''$.

(iii) For every v ∈ V(G), we have $\left| \deg(v) - np \right| \le \sqrt{6np(1-p)\ln n} + 2\ln n$.

(iv) For every pair of distinct u, v ∈ V(G), we have $\left| \deg(u, v) - (n-2)p^2 \right| \le \sqrt{6np^2(1-p^2)\ln n} + 2\ln n$.
It can be shown, using standard Chernoff-type concentration inequalities, that G(n, p) is p-typical with high probability (see Appendix A).
Lemma 10. For every p : N → (0, 1), a.a.s. G = G(n, p) is p-typical.
We also use the following property of graphs satisfying Definition 9 (i). Lemma 11. Let G be a graph of order n which satisfies Definition 9 (i), and fix a ∈ N.
If D is an orientation of G and $B = \{v \in V(D) : \deg^+_D(v) < a\}$, then

$$|B| \le \frac{2}{p}(a-1) + 1 + \sqrt{\frac{12n(1-p)}{p}} + \frac{4n}{|B|p}.$$

Proof. The lemma follows by multiplying all terms in the inequality below by $2/(|B|p)$:

$$|B|(a-1) \ge e(G[B]) \overset{9\text{(i)}}{\ge} \binom{|B|}{2}p - |B|\sqrt{3np(1-p)} - 2n.$$
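For completeness, the multiplication step can be spelled out as a routine expansion (this is the same inequality, just rearranged):

```latex
% Multiplying  |B|(a-1) >= binom(|B|,2)p - |B|sqrt(3np(1-p)) - 2n  by 2/(|B|p):
\frac{2(a-1)}{p}
  \;\ge\; |B| - 1 - \frac{2\sqrt{3np(1-p)}}{p} - \frac{4n}{|B|p}
  \;=\; |B| - 1 - \sqrt{\frac{12n(1-p)}{p}} - \frac{4n}{|B|p},
% and rearranging for |B| gives the bound stated in Lemma 11.
```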
Proof of Theorem 2
Let us outline the proof of Theorem 2. Firstly, we find a vertex w whose outneighborhood contains many vertices with large outdegree. Then, we note that |N 1 (w)| = O(np) and that N 1 (w) ∪ N 2 (w) cannot be too dense. Finally, since many outneighbors of w have large outdegree, we conclude that N 1 (w) ∪ N 2 (w) must contain at least 2|N 1 (w)| vertices, completing the proof. This yields the following.
Lemma 12. Fix 0 < α < 1/4 and ε > 0. There is n 1 = n 1 (α, ε) such that S contains all p-typical graphs of order n such that n ≥ n 1 and εn −2/3 ≤ p ≤ 1/4 − α.
Lemma 12 is our last ingredient for proving Theorem 2. Indeed, fix ε > 0, set α = 1/4 − lim sup n→∞ p(n) and let n 0 be large enough so that p(n) ≤ 1/4 − α and so that G(n, p) is p-typical with probability at least 1 − ε for all n ≥ n 0 (this is Lemma 10). Now either p < εn −2/3 or εn −2/3 ≤ p(n) ≤ 1/4 − α. In the former case we use Corollary 8, and in the latter case Lemma 12, concluding either way that
Pr G(n, p) ∈ S ≥ 1 − ε.
Proof of Lemma 12. We may and shall assume (choosing n 1 accordingly) that np is large enough whenever necessary. Fix an arbitrary orientation of G. For simplicity, we write G for both the oriented and underlying graphs. Let
$S = \{v \in V(G) : \deg^+(v) < (1-\alpha)np/2\}$ and $T = V(G) \setminus S$. Firstly, we show that $|T| \ge \alpha n/2$. This is clearly the case if $|S| < \alpha n$ (since $\alpha < 1/4 < 1 - \alpha$); let us show that this also holds if $|S| \ge \alpha n$. Indeed, since $p \ge \varepsilon n^{-2/3}$, from Lemma 11 with $a = (1-\alpha)np/2$ we obtain

$$|S| \le \frac{2(a-1)}{p} + 1 + \sqrt{\frac{12n(1-p)}{p}} + \frac{4n}{|S|p} < (1-\alpha)n + o(n) < \left(1 - \frac{\alpha}{2}\right)n.$$
Therefore $|T| = n - |S| \ge \alpha n/2$ as desired. Recall that np is large and $p \le 1/4$. Then $3np(1-p) \ge 4/\alpha$, and hence, from Definition 9 (i), we get

$$e(T) \ge \binom{|T|}{2}p - |T|\sqrt{3np(1-p)} - 2n > \binom{|T|}{2}p - 2|T|\sqrt{np} > \frac{|T|^2 p}{3}, \qquad (3)$$

and therefore, by averaging, there exists w ∈ T satisfying

$$\deg^+_T(w) \ge \frac{e(T)}{|T|} \overset{(3)}{\ge} \frac{\alpha np}{6}. \qquad (4)$$
We next show that w is a Seymour vertex. Let $X = N^1_G(w)$ and $Y = N^2_G(w)$, and suppose, for a contradiction, that $|Y| < |X|$. From Definition 9 (iii) and $p + \alpha \le 1/4$, we have

$$|X| \le np + \sqrt{6np\ln n} + 2\ln n < n\left(p + \frac{\alpha}{2}\right) < \frac{n}{4} \le \frac{n}{2}(1 - 2\alpha - 2p). \qquad (5)$$
Moreover,
$$|X| = \deg^+(w) \le np + \sqrt{6np\ln n} + 2\ln n < 2np. \qquad (6)$$
Recall that w ∈ T and let N = X ∩ T be the set of outneighbors of w in T . By the definition of N and (4) we have
$$|N| \ge \frac{\alpha np}{6}. \qquad (7)$$
Note that e→(N, X) counts arcs induced by N precisely once (as N ⊆ X), and if the arc u → v is counted by e→(N, X), then v is a common neighbor of w and u ∈ N. Hence, by Definition 9 (iv), we have that

$$e^{\rightarrow}(N, X) + e(N) \le |N|\left(np^2 + \sqrt{6np^2\ln n} + 2\ln n\right).$$
Since vertices in T (and hence in N ) have at least (1 − α)np/2 outneighbors, we have
$$e^{\rightarrow}(N, Y) \ge |N|\frac{(1-\alpha)np}{2} - e^{\rightarrow}(N, X) - e(N) \ge |N|\left(\frac{(1-\alpha)np}{2} - \left(np^2 + \sqrt{6np^2\ln n} + 2\ln n\right)\right). \qquad (8)$$
The following estimate will be useful.
Claim 13. It holds that $2\ln n + \sqrt{6np^2\ln n} + \sqrt{6np|Y|/|N|} = o(np)$.

Proof. We prove that each term in the sum above is o(n) when divided by p. Clearly, $\sqrt{6np^2\ln n}/p = \sqrt{6n\ln n} = o(n)$. Recall that $p \ge \varepsilon n^{-2/3}$ and thus $(2\ln n)/p = o(n)$. Also,

$$\frac{6n|Y|}{|N|p} \overset{(7)}{\le} \frac{36|Y|}{\alpha p^2} < \frac{36|X|}{\alpha p^2} \overset{(6)}{<} \frac{72n}{\alpha p} = o(n),$$

and hence $\sqrt{6np|Y|/|N|}/p = \sqrt{6n|Y|/(|N|p)} = o(n)$ as well.
We divide the remainder of the proof in two cases. Fix γ ∈ (1/2, 2/3).
Case 1. Suppose firstly that $p > n^{\gamma-1}/2$. Using Definition 9 (ii) we obtain

$$e^{\rightarrow}(N, Y) \le |N||Y|p + \sqrt{6np|N||Y|} + 2n. \qquad (9)$$

Thus, combining (8) and (9), we have

$$\frac{(1-\alpha)np}{2} - \left(np^2 + \sqrt{6np^2\ln n} + 2\ln n\right) \le |Y|p + \sqrt{\frac{6np|Y|}{|N|}} + \frac{2n}{|N|}. \qquad (10)$$
Also note that since $p > n^{\gamma-1}/2$ and $\gamma > 1/2$, we can estimate

$$\frac{2n}{|N|p} \overset{(7)}{\le} \frac{12}{\alpha p^2} < \frac{24}{\alpha}\,n^{2-2\gamma} = o(n). \qquad (11)$$
Finally, we conclude that w is a Seymour vertex, since (10) becomes

$$|Y| \ge \frac{(1-\alpha-2p)n}{2} - \sqrt{6n\ln n} - \sqrt{\frac{6n|Y|}{|N|p}} - \frac{2n}{|N|p} - \frac{2\ln n}{p} \overset{(\star)}{\ge} \frac{n}{2}(1 - 2\alpha - 2p) \overset{(5)}{>} |X|,$$

where inequality (⋆) follows from Claim 13 and (11).
Case 2.
Suppose now that $p \le n^{\gamma-1}/2$. In this case (6) implies $|X| \le n^\gamma$. Since N ⊆ X and $|Y| < |X|$, Definition 9 (ii) (with $n' = n^\gamma$ and $n'' = n^\gamma\ln n$) yields

$$e^{\rightarrow}(N, Y) \le |N||Y|p + \sqrt{6(n^\gamma\ln n)p|N||Y|} + 2n^\gamma\ln n < |N||Y|p + \sqrt{6np|N||Y|} + 2n^\gamma\ln n. \qquad (12)$$

Now, from (8) and (12), we obtain the following inequality, which is analogous to (10), but with the term $2n/|N|$ replaced by $2n^\gamma\ln n/|N|$:

$$\frac{(1-\alpha)np}{2} - \left(np^2 + \sqrt{6np^2\ln n} + 2\ln n\right) \le |Y|p + \sqrt{\frac{6np|Y|}{|N|}} + \frac{2n^\gamma\ln n}{|N|}. \qquad (13)$$

We claim that $2n^\gamma\ln n/|N| = o(np)$. Indeed, since $p \ge \varepsilon n^{-2/3}$ and $\gamma < 2/3$, we have

$$\frac{2n^\gamma\ln n}{|N|p} \overset{(7)}{\le} \frac{12n^\gamma\ln n}{\alpha np^2} = \frac{12n^{\gamma-1}\ln n}{\alpha p^2} \le \frac{12n^{\gamma+1/3}\ln n}{\alpha\varepsilon^2} = o(n). \qquad (14)$$

We complete the proof of Case 2 by solving (13) for |Y| as in Case 1 (using Claim 13 and (14) to estimate $2n^\gamma\ln n/(|N|p)$).
Proof of Theorem 4
We are now in a position to prove Theorem 4, which we restate for convenience.
Orientations with large minimum outdegree
Our last result in this section yields yet another class of orientations of p-typical graphs which must always contain a Seymour vertex. In fact, we consider a larger class of underlying graphs, showing that if a graph G satisfies items (i) and (ii) of Definition 9, then every orientation D of G with minimum outdegree δ + (D) = Ω(n 1/2 ) contains a Seymour vertex. This may be useful towards extending the range of p for which a.a.s. G(n, p) ∈ S.
Lemma 14.
Fix β > 0. There exist a constant C = C(β) and n 0 = n 0 (β) such that the following holds for all n ≥ n 0 and p ≤ 2/3 − β. If G is a graph of order n that satisfies items (i) and (ii) of Definition 9, then every orientation D of G for which δ + (D) ≥ Cn 1/2 has a Seymour vertex.
Note that Lemma 14 and Lemma 10 immediately imply Theorem 3.
Proof of Lemma 14. Since $1 - 3p/2 \ge 3\beta/2$, we may fix $C \ge 4$ so that

$$\left(1 - \frac{3p}{2}\right)C - \left(\sqrt{3p(1-p)} + \sqrt{6p(1-p)}\right) \ge \frac{3\beta C}{2} - 4 \ge 1.$$

Fix $v \in V(D)$ with $\deg^+(v) = \delta^+(D)$, and let $X = N^1(v)$ and $Y = N^2(v)$. We shall prove that $|X| \le |Y|$. Suppose to the contrary that $|Y| < |X|$. By Definition 9 (i),

$$e^{\rightarrow}(X, Y) = \sum_{a\in X} \deg^+(a) - e(X) \ge |X|^2 - \left(\binom{|X|}{2}p + |X|\sqrt{3np(1-p)} + 2n\right) \ge \left(1 - \frac{p}{2}\right)|X|^2 - \left(|X|\sqrt{3np(1-p)} + 2n\right), \qquad (15)$$

and by Definition 9 (ii) (with $n' = n'' = n$) we have

$$e^{\rightarrow}(X, Y) \le e(X, Y) \le |X||Y|p + \sqrt{6np(1-p)|X||Y|} + 2n < |X|^2p + |X|\sqrt{6np(1-p)} + 2n. \qquad (16)$$

Since $|X| \ge Cn^{1/2} \ge n^{1/2}$, combining (15) and (16) yields the following contradiction:

$$4n > \left(1 - \frac{3p}{2}\right)|X|^2 - |X|\left(\sqrt{3np(1-p)} + \sqrt{6np(1-p)}\right) \ge Cn\left(\left(1 - \frac{3p}{2}\right)C - \left(\sqrt{3p(1-p)} + \sqrt{6p(1-p)}\right)\right) \ge 4n.$$
Typical orientations of bijumbled graphs
In this section, we focus on a well-known class of pseudorandom graphs (that is, deterministic graphs which embody many properties of G(n, p) ), and argue that almost all of their orientations contain a Seymour vertex. The following results concern graphs of order n and density p, where Cn −1/2 ≤ p ≤ 1−ε, and C = C(ε) > 0 depends only on the constant ε > 0.
Definition 15 -(p, α)-bijumbled. Let p and α be given. We say that a graph G of order n is weakly
(p, α)-bijumbled if, for all U , W ⊂ V (G) with U ∩ W = ∅ and 1 ≤ |U | ≤ |W | ≤ np|U |, we have e(U, W ) − p|U ||W | ≤ α |U ||W |.(17)
If (17) holds for all disjoint U , W ⊂ V (G), then we say that G is (p, α)-bijumbled.
We note that the random graph is a.a.s. bijumbled. In what follows, A shall always denote the constant from Theorem 16. A simple double-counting argument shows the following.
Fact 17. If G is weakly (p, α)-bijumbled, then for every U ⊂ V (G) we have
e G[U ] − p |U | 2 ≤ α|U |.(18)
We also use the following result, whose simple proof we include for completeness.
Lemma 18. There exists a universal constant C > 1 such that if A ≥ 2 and ε, p ∈ (0, 1) are such that ε 3 np 2 ≥ A 2 C, then every weakly (p, A √ np)-bijumbled graph G of order n satisfies the following properties.
(i) {v ∈ V (G) : | deg(v) − np| > εnp} ≤ εn. (ii) {(u, v) ∈ V (G) 2 : deg(u, v) ≤ (1 − ε)np 2 } ≤ εn 2 .
(iii) For every orientation of G and every integer d, we have
{v ∈ V (G) : deg + (v) < d} ≤ 2 d − 1 p + 2A n p + 1
Proof. Let G be as in the statement. We may and shall assume that C is large enough so that the required inequalities hold. Throughout this proof, W denotes the set of vertices with degree strictly below (1 − 2ε/3)np. Firstly, we prove (i). We claim that |W | < εn/2. Indeed, suppose the contrary and consider a subset W ⊆ W of size precisely εn/2. By Fact 17, we have
e(W ) ≥ p (εn/2) 2 3 − A √ np(εn/2) = p (εn/2) 2 3 1 − 36A 2 ε 2 np > p (εn/2) 2 4 = √ 2 16 ε 3 np 1 − ε/2 ε 2 (1 − ε/2)n 3 p ≥ A np εn 2 (1 − ε/2)n = A np|W |(n − |W |). (19) Now, note that |V (G) \ W | < n < A 2 Cn/(ε 2 p) ≤ εn 2 p = np|W |, but e W , V (G) \ W < |W | · (1 − 2ε/3)np − 2e(W ) < |W | · (1 − ε/2)np − 2e(W ) = p|W |(n − |W |) − 2e(W ) (19) ≤ p|W |(n − |W |) − A np|W |(n − |W |),
which contradicts the weak bijumbledness of G.
Similarly, we show that the set Z of vertices having degree strictly greater than (1 + 2ε/3)pn satisfies |Z| < εn/2, which together with the argument above proves (i). More precisely, suppose |Z| ≥ εn/2, fix Z ⊆ Z with |Z | = εn/2. We claim that A √ np|Z | and A np|Z |(n − |Z |) are both small (constant) fractions of p|Z | 2 . Indeed, as |Z | 2 < |Z |(n − |Z |) < |Z |n, it follows that
A √ np|Z | p|Z | 2 < A np|Z |(n − |Z |) p|Z | 2 < A n 2 p|Z | p|Z | 2 = A 2 n 2 p|Z | 3 = A 2 n 2 p(εn/2) 3 ( ) ≤ 8p C ,
where ( ) is due to ε 3 np 2 ≥ CA 2 . Fact 17 and the previous inequalities imply
e(Z ) < p|Z | 2 2 + A √ np|Z | < p|Z | 2 1 2 + 8p C < p|Z | 2 1 2 + 32p C − A np|Z |(n − |Z |) < p|Z | 2 − A np|Z |(n − |Z |). Analogously, we have |V (G) \ Z | < np|Z |, but e(Z , V (G) \ Z ) ≥ (1 + 2ε/3)np|Z | − 2e(Z ) ≥ p|Z | n − |Z | + 1 2 + 2 3 εnp|Z | − 2e(Z ) > p|Z | n − |Z | + 2p|Z | 2 − 2e(Z ) > p|Z | n − |Z | + A np|Z |(n − |Z |),
which is again a contradiction to Definition 15. This concludes the proof of (i). We next prove (ii). For each u ∈ V (G), let B(u) be the set of vertices that have fewer than (1−ε)np 2 common neighbors with u. By definition, for any vertex u and set B ⊆ B(u) we have e N (u), B < (1 − ε)np 2 B . We shall prove that B(u) < εn/2 for all u ∈ V (G) \ W . Indeed, suppose for a contradiction, that u ∈ V (G) \ W and |B(u)| ≥ εn/2. Let N ⊂ N (u) be a set of size precisely (1 − 2ε/3)np, and let B ⊆ B(u) be a set of size precisely εn/2. Since ε 3 np 2 ≥ A 2 C, we have
εnp 2 |B | 3 = ε 2 n 2 p 2 6 > 1 6 ε 4 n 4 p 4 (1 − 2ε/3) 2 > A np|N ||B |.(20)
We claim that |B | ≤ np|N |. Indeed, |N | ≤ np ≤ εn 2 p/2 = np|B | because εn/2 > 1,
and |B | = εn/2 ≤ A 2 Cn/(3ε 3 ) ≤ n 2 p 2 /3 ≤ np|N | because ε 3 np 2 ≥ A 2 C and ε < 1.
Hence, since G is weakly bijumbled, we reach the following contradiction
p|N ||B | − A np|N ||B | ≤ e N , B < (1 − ε)np 2 B = 1 − 2ε 3 np 2 |B | − εnp 2 |B | 3 (20) < p|N ||B | − A np|N ||B |. Hence B(u) < εn/2 for all u ∈ V (G) \ W . Note that if |N (u) ∩ N (v)| < np 2 (1 − ε) for distinct u, v ∈ V (G), then either u ∈ W or v ∈ B(u).
We conclude that there are at most |W |n + n(εn/2) < εn 2 such pairs, as desired.
To prove (iii), fix an orientation D of G and put X = {v ∈ V (G) : deg + D (v) < d}. Fact 17 then yields the desired inequality:
|X|(d − 1) ≥ e(G[B]) ≥ |X| 2 p − A √ np|X|.
Almost all orientations of bijumbled graphs
In this section we show that almost every orientation of a weakly bijumbled graph contains a Seymour vertex.
Theorem 5. There exists an absolute constant C > 1 such that the following holds. Let G be a weakly (p, A √ np)-bijumbled graph of order n, where ε 3 np 2 ≥ A 2 C and p < 1 − 15 √ ε. If D is chosen uniformly at random among the 2 e(G) possible orientations of G, then a.a.s. D has a Seymour vertex.
Proof. We may and shall assume that A 2 C is larger than any given absolute constant.
Let V = V (G). For each u ∈ V , let B(u) = {v ∈ V : deg(u, v) ≤ (1 − ε)np 2 }. Also, let BAD 1 = u ∈ V : |B(u)| ≥ √ εn .
Lemma 18 (ii) guarantees that | BAD 1 | ≤ √ εn and, by definition, |B(u)| < √ εn for each u / ∈ BAD 1 . Fix an arbitrary orientation of G. For simplicity, we write G for both the oriented and underlying graphs.
Let BAD 2 = {v ∈ V (G) : deg + (v) < 2 √ εnp}. By Lemma 18 (iii), we must have | BAD 2 | ≤ 2(2 √ εnp − 1) p + 2A n p + 1 < 5 √ εn.
Let BAD = BAD 1 ∪ BAD 2 and put U = V \ BAD, and note that | BAD | ≤ 6 √ εn.
Claim 19. There exists w ∈ U such that
deg + G (w) < n/2 − √ εn.
Proof. Recall that p < 1 − 15 √ ε. Hence ε < 15 −2 < 1 and
(1 + ε)p 2 + 6 √ ε < (1 + ε)(1 − 15 √ ε) 2 + 6 √ ε < 1 − 2 √ ε 2 (21)
Note also that ε 3 np 2 ≥ A 2 C yields A ≤ ε 3 np 2 /C. Hence,
A √ np ≤ εnp εp C < εnp 2 .(22)
By Fact 17, we have
e(G[U ]) |U | ≤ p |U | |U | 2 + A √ np ≤ p|U | 2 + A √ np (22) ≤ (1 + ε) np 2 .(23)
Owing to (23), averaging the outdegrees of vertices in U yields that some w ∈ U satisfies deg +
G[U ] (w) ≤ e G[U ] /|U | < (1 + ε)np/2. Hence, deg + G (w) ≤ deg + G[U ] (w) + | BAD | < (1 + ε)np 2 + 6 √ εn (21) ≤ (1 − 2 √ ε)n 2 .
Note that since we picked an arbitrary orientation of G, the vertex w given by Claim 19 exists for any such orientation. To conclude the proof, we next show that in a random orientation of G almost surely every vertex in U is an (1 − 2 √ ε)-king, where a vertex v is said to be a λ-king if the number of vertices z for which there exists a directed path of length 2 from v to z is at least λn.
Claim 20. In a random orientation of G, a.a.s. for each
X ⊆ V (G) with |X| = 2 √ εnp we have N 1 (X) ≥ (1 − 2 √ ε)n, where N 1 (X) = x∈X N 1 (x).
Proof. Note that for all X, Y ⊆ V (G), there exist X ⊆ X and Y ⊆ Y such that X ∩ Y = ∅ and |X | = |X|/2 and
|Y | = |Y |/2. Fix X ⊆ V (G) with |X| = 2 √ εnp. If we choose Y such that |Y | = 2 √ εn, then |X | ≤ |Y | = √ εn ≤ √ εn 2 p 2 = np|X | because np 2 ≥ A 2 C/ε 3 ≥ 1. Hence, as G is weakly bijumbled, e(X, Y ) ≥ e(X , Y ) ≥ p|X||Y | 4 − A np|X||Y | 2 .(24)
Let E X denote the 'bad' event that N 1 (X) < (1 − 2 √ ε)n, so E X occurs if and only if there exists Y ⊆ V (G) with |Y | = 2 √ εn such that e (X, Y ) = 0. For any X such that |X| = 2 √ εnp, summing over all Y of size 2 √ εn yields
Pr[E X ] ≤ Y 2 −e(X,Y ) (24) ≤ n 2 √ εn exp −(ln 2) εn 2 p 2 − A εn 3 p 2 ≤ exp 2n √ ε ln e 2 √ ε − (ln 2)εn 2 p 2 1 − ε √ C ≤ exp 2n √ ε e 2 √ ε − (ln 2)εn 2 p 2 1 − ε √ C ≤ exp −2(ln 2)n(25)
using that εnp 2 ≥ A 2 Cε −2 ≥ 12 and that ε/ √ C ≤ C −1/2 < 1/2 because ε < 1 and C is a large constant. Taking a union bound over all X of size 2 √ εnp, we see that no bad event occurs is with high probability, since We conclude showing that w is a Seymour vertex. Indeed, since w / ∈ BAD 2 , we have deg + (w) ≥ 2 √ εnp. Now, Claim 20 implies that N 2 (w) ≥ (1 − 2 √ ε)n, and thus, by Claim 19, we have deg
+ (v) < (1 − 2 √ ε)n/2, which implies N 2 G (w) ≥ (1 − 2 √ ε)n − deg + G (w) > (1 − 2 √ ε)n 2 > deg + G (w).
Concluding remarks
In this paper we confirmed Seymour's Second Neighborhood Conjecture (SNC) for a large family of graphs, including almost all orientations of (pseudo)random graphs. We also prove that this conjecture holds a.a.s. for arbitrary orientations of the random graph G(n, p), where p = p(n) lies below 1/4. Interestingly, this range of p encompasses both sparse and dense random graphs. The main arguments in our proofs lie in finding a vertex w of relatively low outdegree whose outneighborhood contains many vertices of somewhat large outdegree. Since outneighbors of w cannot have small common outneighborhood, we conclude that N 2 (w) must be large.
Naturally, it would be interesting to extend further the range of densities for which arbitrary orientations of G(n, p) satisfy the SNC.
It is seems likely that other classes of graphs, such as (n, d, λ)-graphs, are susceptible to attack using this approach. Theorem 4 is also a small step towards the following weaker version of Conjecture 1. (1), taking x as ln n in both cases, and taking union bounds over n or n 2 events respectively. Hence G(n, p) satisfies properties (i), (iii) and (iv) with probability 1 − o(1).
The strategy to prove (ii) is similar to the above, but calculating the number of events in the union bound is slightly more involved. If n = n = n, then (as above) we consider e(X, Y ) in place of e(X), let x = n and take a union bound over 2 2n events. Otherwise, if 1 ≤ n ln n ≤ n ≤ n, then let Ω be the set of pairs {X, Y } with X, Y, ∈ V (G) and |X|, |Y | ≤ n , and note that |Ω| ≤ 1 + where we use that ln n ≤ n ln n ≤ n .
Theorem 2 .
2Let p : N → (0, 1). If lim sup n→∞ p < 1/4, then a.a.s. G(n, p) ∈ S.
Theorem 3 .
3For every β > 0, there exists C = C(β) such that the following holds for all p : N → (0, 1). If lim sup n→∞ p ≤ 2/3 − β, then a.a.s. every orientation of G(n, p)
Theorem 4 .
4Let p : N → (0, 1) and let G = G(n, p). If D is chosen uniformly at random among the 2 e(G) orientations of G, then a.a.s. D has a Seymour vertex.
Theorem 4 .
4Let p : N → (0, 1), and let G = G(n, p). If D is chosen uniformly at random among the 2 e(G) orientations of G, then a.a.s. D has a Seymour vertex.Proof of Theorem 4. Let G = G(n, p). If p < 1/5, then Pr[ G ∈ S ] = 1 − o(1) by Theorem 2. On the other hand, if p ≥ 1/5, then standard concentration results for binomial random variables (e.g., Chernoff-type bounds) yield that every ordered pair (u, v) of distinct vertices of G satisfies, say deg(u, v) ≥ n/50, and hence with probability 1 − o(1) every such pair is joined by a directed path of length 2. This is because building a random orientation of G(n, p) is equivalent to first choosing which edges are present and then choosing the orientation of each edge uniformly at random, with choices mutually independent for each edge. In other words, withprobability 1 − o(1), for all u ∈ V (G) we have V (G) = {u} ∪ N 1 (u) ∪ N 2 (u).Finally, by averaging outdegrees, we can find a vertex z ∈ V (D) with outdegree at most (n − 1)/2, because v∈V (D) deg + (v) = e(G) ≤ n(n − 1)/2. Such z is a Seymour vertex as desired.
Theorem 16 -
16Lemma 3.8 in[11]. For any p : N → (0, 1], the random graph G(n, p) is a.a.s. weakly (p, A √ np)-bijumbled for a certain absolute constant A ≤ e 2 √ 6.
X
Pr E X (25) ≤ 2 n exp −2(ln 2)n = o(1), and the claim holds as required.
Question 21 .
21Do most orientations of an arbitrary graph G satisfy the SNC? where Var(Z) is the variance of Z. By Lemma 22, if Z ∼ B(N, p) thenE 1(Z, x) = Pr 1(Z, x) = 1 < 2 exp(−3x). (27) Firstly, we show that a.a.s. (i) holds. For each X ⊆ V (G), let Z X = e(X) and let Z = X⊆V (G) 1 (Z X , n) , taking x = n. Note that Z X ∼ B |X| 2 , p for all X. By linearity of expectation, (−3n) < 2 n+1 exp (−3n) = o(1). Since Z ≥ 0 (it is the sum of indicator random variables), we may use Markov's inequality, obtaining Pr[Z ≥ 1] ≤ E Z = o(1). A similar calculation, considering in turn deg(v) or N (u) ∩ N (v) instead of e(X), proves that each of the items (iii) and (iv) fails to hold with probability o
i<
≤ n < n/2 for sufficiently large n, we have n i exp 2n (1 + ln n) By Lemma 22, for each {X, Y } ∈ Ω we have Pr 1(e(X, Y ), n ) < 2 exp(−3n ). Applying Markov's inequality to Z = {X,Y }∈Ω 1 e(X, Y ), n , we obtain Pr[Z ≥ 1] ≤ E Z < exp 2n (1 + ln n) · 2 exp(−3n ) ≤ 2 exp(−n /2) = o(1),
AcknowledgmentsThe authors thank Yoshiharu Kohayakawa for useful discussions, in particular for suggesting we consider bijumbled graphs.A Proof that G(n, p) is p-typical (Lemma 10)In this section, we show that G(n, p) satisfies the standard properties of Definition 9. To simplify this exposition, we make use of Lemma 22 below. Let B ∼ B(N, p) denote that B is a binomial random variable corresponding to the number of successes in N mutually independent trials, each with success probability p.Proof of Lemma 22 using Lemma 23. Let σ 2 = N p(1 − p) and t = √ x 2 + 6xσ 2 + x. Since (t − x) 2 = x 2 + 6xσ 2 , we have t 2 = 2tx + 6xσ 2 = 6x(σ 2 + t/3). By Lemma 23,Since t ≤ √ 6σ 2 x + 2x, we haveWe next show that G(n, p) is p-typical. The properties in Definition 9 follow by choosing x in Lemma 22 so as to make the appropriate a union bound small.N → (0, 1), a.a.s. G = G(n, p)is p-typical.Proof. We will show that a.a.s. (i)-(iv) of Definition 9 hold. Given a random variable Z and x > 0, let 1(Z, x) be the indicator variable of the 'bad' event |Z − E Z| > 6x Var(Z) + 2x,
Seymour's Second Neighborhood Conjecture in arbitrary orientations of a random graph. F Botler, P Moura, T Naia, Discrete Mathematics Days 2022. 26358Universidad de CantabriaF. Botler, P. Moura, and T. Naia. Seymour's Second Neighborhood Conjecture in arbitrary orientations of a random graph. In Discrete Mathematics Days 2022, volume 263, page 58. Ed. Universidad de Cantabria, 2022.
Seymour's Second Neighborhood Conjecture on sparse random graphs. F Botler, P Moura, T Naia, Anais do VII Encontro de Teoria da Computação. 2022F. Botler, P. Moura, and T. Naia. Seymour's Second Neighborhood Conjecture on sparse random graphs. In Anais do VII Encontro de Teoria da Computação, pages 37-40. SBC, 2022.
Second neighborhood via first neighborhood in digraphs. G Chen, J Shen, R Yuster, Ann. Comb. 71G. Chen, J. Shen, and R. Yuster. Second neighborhood via first neighborhood in digraphs. Ann. Comb., 7(1):15-20, 2003.
The number of Seymour vertices in random tournaments and digraphs. Z Cohn, A Godbole, E Wright Harkness, Y Zhang, Graphs Combin. 325Z. Cohn, A. Godbole, E. Wright Harkness, and Y. Zhang. The number of Seymour vertices in random tournaments and digraphs. Graphs Combin., 32(5):1805-1816, 2016.
Combinatorial theorems in sparse random sets. D Conlon, W T Gowers, Ann. of Math. 1842D. Conlon and W. T. Gowers. Combinatorial theorems in sparse random sets. Ann. of Math. (2), 184(2):367-454, 2016.
Squaring the tournament-an open problem. N Dean, B Latka, Congr. Numer. 109N. Dean and B. Latka. Squaring the tournament-an open problem. Congr. Numer., 109:73-80, 1995.
Remarks on the second neighborhood problem. D Fidler, R Yuster, J. Graph Theory. 553D. Fidler and R. Yuster. Remarks on the second neighborhood problem. J. Graph Theory, 55(3):208- 220, 2007.
Squaring a tournament: a proof of Dean's conjecture. D Fisher, J. Graph Theory. 231D. Fisher. Squaring a tournament: a proof of Dean's conjecture. J. Graph Theory, 23(1):43-48, 1996.
Seymour's second neighborhood conjecture for tournaments missing a generalized star. S , J. Graph Theory. 711S. Ghazal. Seymour's second neighborhood conjecture for tournaments missing a generalized star. J. Graph Theory, 71(1):89-94, 2012.
Median orders of tournaments: a tool for the second neighborhood problem and Sumner's conjecture. F Havet, S Thomassé, J. Graph Theory. 354F. Havet and S. Thomassé. Median orders of tournaments: a tool for the second neighborhood problem and Sumner's conjecture. J. Graph Theory, 35(4):244-256, 2000.
The induced size-Ramsey number of cycles. P E Haxell, Y Kohayakawa, T Łuczak, Combin. Probab. Comput. 43P. E. Haxell, Y. Kohayakawa, and T. Łuczak. The induced size-Ramsey number of cycles. Combin. Probab. Comput., 4(3):217-239, 1995.
Random graphs. S Janson, T Łuczak, A Ruciński, Wiley-InterscienceNew YorkS. Janson, T. Łuczak, and A. Ruciński. Random graphs. Wiley-Interscience, New York, 2000.
Extremal results for random discrete structures. M Schacht, Ann. of Math. 1842M. Schacht. Extremal results for random discrete structures. Ann. of Math. (2), 184(2):333-365, 2016.
The arc-weighted version of the second neighborhood conjecture. T Seacrest, J. Graph Theory. 783T. Seacrest. The arc-weighted version of the second neighborhood conjecture. J. Graph Theory, 78(3):219-228, 2015.
| []
|
[
"FTUAM-12-116 Flavour with a Light Dynamical \"Higgs Particle\"",
"FTUAM-12-116 Flavour with a Light Dynamical \"Higgs Particle\""
]
| [
"R Alonso \nDepartamento de Física Teórica and Instituto de Física Teórica\nIFT-UAM/CSIC\nUniversidad Autónoma de Madrid\n28049Cantoblanco, MadridSpain\n",
"M B Gavela \nDepartamento de Física Teórica and Instituto de Física Teórica\nIFT-UAM/CSIC\nUniversidad Autónoma de Madrid\n28049Cantoblanco, MadridSpain\n\nDepartment of Physics\nTheory Division\nCERN\nCH-1211Geneva 23Switzerland\n",
"L Merlo [email protected] \nDepartamento de Física Teórica and Instituto de Física Teórica\nIFT-UAM/CSIC\nUniversidad Autónoma de Madrid\n28049Cantoblanco, MadridSpain\n\nDepartment of Physics\nTheory Division\nCERN\nCH-1211Geneva 23Switzerland\n\nTUM Institute for Advanced Study\nTechnische Universität München\nLichtenbergstrasse 2aD-85748GarchingGermany\n",
"S Rigolin \nDipartimento di Fisica \"G. Galilei\"\nUniversità di Padova\nINFN\nSezione di Padova\nVia Marzolo 8I-35131PaduaItaly\n",
"J Yepes [email protected] \nDepartamento de Física Teórica and Instituto de Física Teórica\nIFT-UAM/CSIC\nUniversidad Autónoma de Madrid\n28049Cantoblanco, MadridSpain\n"
]
| [
"Departamento de Física Teórica and Instituto de Física Teórica\nIFT-UAM/CSIC\nUniversidad Autónoma de Madrid\n28049Cantoblanco, MadridSpain",
"Departamento de Física Teórica and Instituto de Física Teórica\nIFT-UAM/CSIC\nUniversidad Autónoma de Madrid\n28049Cantoblanco, MadridSpain",
"Department of Physics\nTheory Division\nCERN\nCH-1211Geneva 23Switzerland",
"Departamento de Física Teórica and Instituto de Física Teórica\nIFT-UAM/CSIC\nUniversidad Autónoma de Madrid\n28049Cantoblanco, MadridSpain",
"Department of Physics\nTheory Division\nCERN\nCH-1211Geneva 23Switzerland",
"TUM Institute for Advanced Study\nTechnische Universität München\nLichtenbergstrasse 2aD-85748GarchingGermany",
"Dipartimento di Fisica \"G. Galilei\"\nUniversità di Padova\nINFN\nSezione di Padova\nVia Marzolo 8I-35131PaduaItaly",
"Departamento de Física Teórica and Instituto de Física Teórica\nIFT-UAM/CSIC\nUniversidad Autónoma de Madrid\n28049Cantoblanco, MadridSpain"
]
| []
| The Higgs-fermion couplings are sensitive probes of possible new physics behind a stable light Higgs particle. It is then essential to identify the flavour pattern of those interactions. We consider the case in which a strong dynamics lies behind a light Higgs, and explore the implications within the Minimal Flavour Violation ansatz. The dominant effects on flavour-changing Higgs-fermion couplings stem in this context from operators with mass dimension ≤ 5, and we analyze all relevant chiral operators up to that order, including loop-corrections induced by 4-dimensional ones. Bounds on the operator coefficients are derived from a plethora of low-energy flavour transitions, providing a guideline on which flavour-changing Higgs interactions may be open to experimental scrutiny. In particular, the coefficient of a genuinely CP-odd operator is only softly constrained and therefore its impact is potentially interesting. | 10.1103/physrevd.87.055019 | [
"https://arxiv.org/pdf/1212.3307v3.pdf"
]
| 44,148,671 | 1212.3307 | 03e44418f3665a2a891648d83013dbf9aa8906ac |
FTUAM-12-116 Flavour with a Light Dynamical "Higgs Particle"
19 Sep 2013
R Alonso
Departamento de Física Teórica and Instituto de Física Teórica
IFT-UAM/CSIC
Universidad Autónoma de Madrid
28049Cantoblanco, MadridSpain
M B Gavela
Departamento de Física Teórica and Instituto de Física Teórica
IFT-UAM/CSIC
Universidad Autónoma de Madrid
28049Cantoblanco, MadridSpain
Department of Physics
Theory Division
CERN
CH-1211Geneva 23Switzerland
L Merlo [email protected]
Departamento de Física Teórica and Instituto de Física Teórica
IFT-UAM/CSIC
Universidad Autónoma de Madrid
28049Cantoblanco, MadridSpain
Department of Physics
Theory Division
CERN
CH-1211Geneva 23Switzerland
TUM Institute for Advanced Study
Technische Universität München
Lichtenbergstrasse 2aD-85748GarchingGermany
S Rigolin
Dipartimento di Fisica "G. Galilei"
Università di Padova
INFN
Sezione di Padova
Via Marzolo 8I-35131PaduaItaly
J Yepes [email protected]
Departamento de Física Teórica and Instituto de Física Teórica
IFT-UAM/CSIC
Universidad Autónoma de Madrid
28049Cantoblanco, MadridSpain
FTUAM-12-116 Flavour with a Light Dynamical "Higgs Particle"
19 Sep 2013
The Higgs-fermion couplings are sensitive probes of possible new physics behind a stable light Higgs particle. It is then essential to identify the flavour pattern of those interactions. We consider the case in which a strong dynamics lies behind a light Higgs, and explore the implications within the Minimal Flavour Violation ansatz. The dominant effects on flavour-changing Higgs-fermion couplings stem in this context from operators with mass dimension ≤ 5, and we analyze all relevant chiral operators up to that order, including loop-corrections induced by 4-dimensional ones. Bounds on the operator coefficients are derived from a plethora of low-energy flavour transitions, providing a guideline on which flavour-changing Higgs interactions may be open to experimental scrutiny. In particular, the coefficient of a genuinely CP-odd operator is only softly constrained and therefore its impact is potentially interesting.
Introduction
A new resonance at the Electroweak (EW) scale has been established at LHC. Both ATLAS and CMS collaborations have recently presented [1,2] the discovery of an excess of events above the expected Standard Model (SM) background with a local significance of 5σ consistent with the hypothesis of the SM scalar boson [3][4][5] (so-called "Higgs boson" for short) with mass around 125 GeV.
This resonance is, at the moment, compatible with the SM Higgs interpretation, even if the rate in the di-photon channel, slightly above SM expectations, leaves still open the possibility of non-standard effects, and furthermore a ∼ 2σ tension persists between the predictions and measurement of the rate R 0 b and the forward-backward asymmetry A 0,b F B , in b-quark production from e + − e − collisions [6,7].
There are essentially two main frameworks that have been proposed over the last decades in order to explain the EW symmetry breaking sector. The first possibility is that the Higgs is a fundamental particle, transforming linearly (as a doublet in the standard minimal picture) under the gauge symmetry group SU (2) L × U (1) Y . This line of thought suggests, due to the appearance of the hierarchy problem, to invoke new physics (NP) around the TeV scale in order to definitively stabilize the Higgs (and the EW) mass scale. The MSSM and its variations are the best explored options of that kind, and a plethora of SUSY partners should populate the scale unveiled by LHC experiments, unless awkward fine-tuning effects take place.
An interesting alternative is that the Higgs dynamics is non-perturbative and associated to a strong interacting force with scale Λ s , and the gauge symmetry in the scalar sector is non-linearly realized. In the original "Technicolor" formulation [8][9][10], no physical Higgs particle appears in the low-energy spectrum and only the three would-be-Goldstone bosons responsible for the weak gauge boson masses are retained. The characteristic scale f associated to the Goldstone bosons was identified with the electroweak scale f = v ≡ 246 GeV, defined from the W mass M W = gv/2, and respecting f ≥ Λ s /4π [11]. The smoking gun signature of this Technicolor ansatz is the appearance of several vector and fermion resonances at the TeV scale. The discovery of a light Higgs candidate has recently focused the attention on an interesting variant: to consider still a strong dynamics behind the electroweak scalar sector but resulting -in addition-in a composite (instead of elementary) and light Higgs particle. In this scenario, proposed long ago [12][13][14][15][16][17], the Higgs itself would be one of the Goldstone bosons associated with the strong dynamics at the scale Λ s , while its mass would result from some explicit breaking of the underlying strong dynamics. It was suggested that this breaking may be caused by the weak gauge interactions or alternatively by non-renormalizable couplings. These ideas have been revived in recent years and are opportune given the recent experimental data (see for example Ref. [18] for a recent review on the subject). In this class of scenarios, f may lie around the TeV regime, while v is linked to the electroweak symmetry breaking process and is not identified with f , v ≤ f . The degree of non-linearity is then quantified by a new parameter,
ξ ≡ v 2 f 2 ,(1.1)
and, for instance, f ∼ v characterizes the extreme non-linear constructions, while f v is typical of scenarios which mimic the linear regime. As a result, for non-negligible ξ there may be corrections to the size of the SM couplings observable at low energies due to NP contributions.
The question we address in this paper is the flavour structure of the NP operator coefficients, when a strong dynamics is assumed at the scale Λ s and in the presence of a light Higgs particle. In particular, dangerous NP contributions to flavour-changing observables could arise. Indeed, the core of the flavour problem in NP theories consists in explaining the high level of suppression that must be encoded in most of the theories beyond the SM in order to pass flavour changing neutral current (FCNC) observability tests. Minimal Flavour Violation (MFV) [19][20][21] emerged in the last years as one of the most promising working frameworks and it will be used in this work.
Following the MFV ansatz, flavour in the SM and beyond is described at low-energies uniquely in terms of the known fermion mass hierarchies and mixings. An outcome of the MFV ansatz is that the energy scale of the NP may be as low as few TeV in several distinct contexts [22][23][24][25], while in general it should be larger than hundreds of TeV [26]. MFV has been codified as a general framework built upon the flavour symmetry of the kinetic terms [27][28][29][30][31][32][33][34]. For quarks, the flavour group
G f = SU (3) Q L × SU (3) U R × SU (3) D R (1.2)
defines the non-abelian transformation properties of the SU (2) L doublet Q L and singlets U R and D R ,
Q L ∼ (3, 1, 1) , U R ∼ (1, 3, 1) , D R ∼ (1, 1, 3) . (1.3)
To introduce the Yukawa Lagrangian without explicitly breaking G f , the Yukawa matrices for up (Y U ) and down (Y D ) quarks can be promoted to be spurion fields transforming under the flavour symmetry,
Y U ∼ (3,3, 1) , Y D ∼ (3, 1,3) . (1.4)
The quark masses and mixings are correctly reproduced once these spurion fields get background values as
Y U = V † y U , Y D = y D ,(1.5)
where y U,D are diagonal matrices whose elements are the Yukawa eigenvalues, and V a unitary matrix that in good approximation coincides with the CKM matrix. These background values break the flavour group G f , providing contributions to FCNC observables suppressed by specific combinations of quark mass hierarchies and mixing angles. In Ref. [21], the complete basis of gauge-invariant 6-dimensional FCNC operators has been constructed for the case of a linearly realized SM Higgs sector, in terms of the SM fields and the Y U and Y D spurions. Operators of dimension d > 6 are usually neglected due to the additional suppression in terms of the cut-off scale. The MFV ansatz in the presence on a strong interacting dynamics has been introduced in Ref. [35], where the list of relevant d = 4 flavour-changing operators was identified, in the limit in which the Higgs degree of freedom is integrated out. In the non-linear regime a chiral expansion is pertinent, and this results in a different set of operators at leading order than in the case of the linear regime, as the leading operators in the linear and non-linear expansion do not match one-to-one (see for instance the discussion in Ref. [36]). The promotion of the Yukawa matrices to spurions follows the same lines as in the linear regime, though. Indeed, when the SM quarks Ψ L,R couple bi-linearly to the strong sector 1
Ψ L Y Ψ Ψ R Θ s ,(1.6)
with Θ s a flavour blind operator in the strong sector, then all flavour information is encoded in Y Ψ , that, in order to preserve the flavour group G f , must transform as in Eq. (1.4). Once the spurions have been defined as the only sources of flavour violation (in the SM and beyond), it is possible to build the tower of FCNC operators, invariant under both the gauge and the flavour symmetries. It is customary to define:
λ F ≡ Y U Y † U + Y D Y † D = V † y 2 U V + y 2 D ,(1.7)
1 A more complicated case in which the link among the proto-Yukawa interactions and the spurions Y U,D is less direct happens in the context of the partial compositeness [37]. In this case, quarks couple to the strong sector linearly and therefore two Yukawa couplings, Y Ψ L and Y Ψ R , for each flavour sector appear. Spurions are then identified with only one of these proto-Yukawa couplings for each flavour sector, with the other assumed flavour diagonal [38]. We will not consider here this possibility. which transforms as a (8, 1, 1) under G f . The only relevant non-diagonal entries are all proportional to the top Yukawa coupling, (λ F ) ij ≈ y 2 t V * ti V tj , for i = j. Within the spirit of MFV, the flavour structure of all Yukawa terms will be dictated only by its fermion composition; in consequence, the resulting fermion-h couplings get diagonalized together with the fermion mass matrix diagonalization. In other words, flavourchanging couplings require operators of (at least) dimension 5. This property will also apply to the non-linear analysis below.
In this work we construct the tower of d ≤ 5 h-fermion flavour-changing operators for a generic strong interacting light Higgs scenario. Which operator basis is chosen in an effective Lagrangian approach is an issue relevant to get the best bounds from a given set of observables, and a convenient basis will be used when analyzing flavour, distinct from that applied in Refs. [36,[39][40][41] to analyze the Higgs-gauge sector. A consistent approach requires to revisit as well the d = 4 flavour-changing operators presented in Ref. [35], by introducing the possibility of a light scalar Higgs, and to consider in addition their main loop-induced effects. In the theoretical discussion we will reconsider the interesting exercise performed in Refs. [40][41][42] to reach the non-linear regime from the linear one [39] in the presence of a light Higgs. We will also perform the phenomenological analysis of the strength of the NP fermionic couplings, focusing on the -often stringent-bounds on the operator coefficients that follow from present low-energy measurements on the Higgsless component of the couplings. This will provide a guideline on which type of flavoured Higgs couplings may be at reach at the LHC.
The structure of the paper is the following. Sect. 2 describes the framework and it is mainly devoted to the relation between the linear and non-linear realizations of the electroweak symmetry breaking mechanism with a light scalar Higgs particle. Sect
The framework
By "Higgs" we mean here a particle that, at some level, participates in the EW symmetry breaking mechanism, which requires an SU (2) doublet structure. When building up the hybrid situation in which a non-linear dynamics is assumed but the Higgs is light two strategies are possible: to go from a linear expansion towards a non-linear one, or conversely to start from the non-linear realization of the Goldstone boson mechanism and modify it to account for a light Higgs. In general, four (related) scales may be relevant, Λ s , f , h and v:
i) Λ s is the energy scale of the strong dynamics and the typical size of the mass of the strong scalar and fermionic resonances (in the context of QCD, it corresponds to Λ χSB , the scale of the chiral symmetry breaking [11]).
ii) f is the characteristic scale associated to the Goldstone bosons that give mass to the gauge bosons and respects Λ s ≤ 4πf (in the context of QCD, it corresponds to the pion coupling constant f π ).
iii) h refers to the order parameter of EW symmetry breaking, around which the physical scalar h oscillates.
iiii) v denotes the EW scale, defined through M W = gv/2. In a general model h = v and this leads to an h dependence in the low-energy Lagrangian through a generic functional form F(h + h ).
In non-linear realizations such as Technicolor-like models, it may happen that h = v = f . In the setup considered here with a light h they do not need to coincide, though, although typically a relation links v, h and f . Thus, a total of three scales will be useful in the analysis, for instance Λ s , f and v. Without referring to a specific model, one can attempt to describe the NP impact at low energies resorting to an effective Lagrangian approach, with operators made out of SM fields and invariant under the SM gauge symmetry. The transformation properties of the three longitudinal degrees of freedom of the weak gauge bosons can still be described at low-energy 2 by a dimensionless unitary matrix transforming as a representation of the global symmetry group:
U(x) = e iσaπ a (x)/v , U(x) → L U(x)R † ,(2.1)
with L, R denoting respectively the global transformations SU (2) L,R . The adimensionality of U(x) is the key to understand that the dimension of the leading low-energy operators describing the dynamics of the scalar sector differs for a non-linear Higgs sector [43][44][45][46][47] and a purely linear regime, as insertions of U(x) do not exhibit a scale suppression. It is becoming customary to parametrize the Lagrangian describing a light dynamical Higgs particle h by means of the following ansatz [40,41] (see also Ref. [42]):
L h = 1 2 (∂ µ h)(∂ µ h) (1 + c H ξ F H (h)) − V (h)+ − v 2 4 Tr [V µ V µ ] F C (h) + c T v 2 4 ξ Tr [TV µ ] Tr [TV µ ] F T (h)+ − v √ 2Q L U(x) Y diag(F U Y (h), F D Y (h)) Q R + h.c. + . . . . (2.2)
where dots stand for higher order terms, and V µ ≡ (D µ U) U † (T ≡ Uσ 3 U † ) is the vector (scalar) chiral field transforming in the adjoint of the gauge group SU (2) L . The covariant derivative reads
D µ U(x) ≡ ∂ µ U(x) + ig 2 W a µ (x) σ a U(x) − ig 2 B µ (x) U(x) σ 3 ,(2.3)
with W a µ (B µ ) denoting the SU (2) L (U (1) Y ) gauge bosons and g (g ) the corresponding gauge coupling. In these equations, V (h) denotes the effective scalar potential describing the breaking of the electroweak symmetry, the first term in Eq. (2.2) includes the Higgs kinetic term, while the second line describes the W and Z masses and their interactions with h as well as the usual custodial symmetry breaking term. Finally, restricting our considerations to the quark sector, the third line accounts for the Yukawa-like interactions between h and the SM quarks, grouped in doublets of the global symmetry Q L,R , with
Y ≡ diag (Y U , Y D ) ,
Y U and Y D being the usual Yukawa matrices. The parameters c H and c T are model dependent operator coefficients.
The functions F i (h) in Eq. (2.2), as well as other F(h) functions defined below, encode the generic dependence on the light h particle. Each F(h) function can be expanded in powers of ξ,
F(h) = g 0 (h, v) + ξg 1 (h, v) + ξ 2 g 2 (h, v) + . . ., where g(h, v) are model- dependent functions of h.
We will not need to enter in their precise dependence in this work; a discussion can be found in Ref. [36] and references therein. We just mention here that in previous literature [40,41] the functional dependence of some of those functions has been expressed as a power series in h/v:
F C (h) = 1 + 2a h v + b h 2 v 2 + . . . , F U,D Y (h) = 1 + c U,D h v + . . . .
The constants a, b and c are model-dependent parameters and encode the dependence on ξ. The a and c T parameters are constrained from electroweak precision tests: in particular 0.7 a 1.2 [48] and −1.7 × 10 −3 < c T ξ < 1.9 × 10 −3 [39] at 95% CL. The Lagrangian discussed above can be very useful to describe an extended class of "Higgs" models, ranging from the SM scenario (for h = v, a = b = c U,D = 1 and neglecting higher order terms in h), to the Technicolor-like ansatz (for f ∼ v and omitting all terms in h) and intermediate situations with a light scalar h (in general for f = v) as in composite/holographic Higgs models [10,[12][13][14][15][16][17][49][50][51] up to dilaton-like scalar frameworks [52][53][54][55][56][57][58]. Note that, although electroweak radiative corrections severely constraint Technicolor-like scenarios, in concrete models values of v/f as large as v/f ∼ 0.4 − 0.6 are still allowed at the price of acceptable 10% fine-tunings [18,59]. As a result, the study of higher dimension operators is strongly motivated, especially as the limits on ξ are quite model-dependent: in the effective Lagrangian approach ξ will be left free 0 < ξ < 1 while the constraints on custodial breaking effects will be translated into limits on the operator coefficients. For the case of pure gauge and h-gauge couplings, some of the couplings have been explicitly explored in Refs [39][40][41] and a complete basis of independent operators up to dimension five has been provided in Ref. [36].
The ξ parameter in Eq. (1.1) defines the degree of non-linearity of a specific model and in particular ξ → 0 refers to the linear regime, while ξ → 1 to the non-linear one. For ξ 1 the hierarchy between operators mimics that in the linear expansion, where the operators are written in terms of the Higgs doublets H: couplings with higher number of (physical) Higgs legs are suppressed compared to the SM renormalizable ones, through powers of the high NP scale or, in other words, of ξ [11]. The power of ξ keeps then track of the h-dependence of the d > 4 operators, where the insertions of h enter only through powers of ( h + h)/f ξ 1/2 (v + h)/v, and of ∂ µ h/f 2 (see Ref. [36]). In the ξ 1 limit, the F i (h) functions, appearing in Eq. (2.2) and in the following, would inherit the same universal behaviour in powers of (1 + h/v): at order ξ, that is, for couplings that would correspond to d = 6 operators of the linear expansion, it follows that
F i (h) = F (h) ≡ 1 + h v 2 , (2.4)
An obvious extrapolation applies to the case of couplings weighted by higher powers of ξ, that is with d > 6. When ξ ≈ 1 the ξ dependence does not entail a suppression of operators compared to the renormalizable SM operators and the chiral expansion should instead be adopted, although it should be clarified at which level the effective expansion on h/f should stop. Below, the F(h) functions will be considered completely general polynomial of h and h (in particular not of derivatives of h) and, when using equations of motion and integration by parts to relate operators, they would be assumed to be redefined when convenient, much as one customarily redefines the constant operator coefficients.
To analyze the passage from the linear to the non-linear regime, it is an interesting exercise to explore the transition from a SU (2) L × U (1) Y invariant effective Lagrangian in the linear realization of the EW symmetry breaking mechanism to an effective chiral Lagrangian. For instance, in the so-called SILH framework, operators may be written in either the linear [39] (i.e. using the H(x) doublet) or the non-linear [40,41] (i.e. using the U matrix and a scalar field h) formalism. We have revisited this procedure in App. A.
The flavour sector
The choice of operator basis most suitable when analyzing fermionic couplings is in general one in which fermionic fields participate in the operators.
The flavour-changing sector has not been explicitly taken into consideration in previous analysis of the effective Lagrangian for a strong interacting light Higgs. Flavour-changing terms do appear in the equations of motion for the gauge field strengths in the presence of the effective operators, and the explicit expressions can be found in Appendix B. However, to include them explicitly would translate into corrections to flavoured observables that are quadratic in the effective operator coefficients a i , and more precisely of the type O(a F C × a GH ), where a F C (a GH ) represent the generic flavour-changing (gauge-h) coefficients. Given that a GH are severely constrained by EW data (barring extreme fine-tunings), those quadratic corrections can be disregarded in the rest of the analysis and it is enough to consider the SM equations of motion:
(D µ W µν ) j = i g 4 v 2 Tr[V ν σ j ] F C (h) + g 2Q L γ ν σ j Q L , (3.1) ∂ µ B µν = − i g 4 v 2 Tr[T V ν ] F C (h) + g Q L γ ν h L Q L + g Q R γ ν h R Q R , (3.2)
with h L,R the left and right hypercharges in the 2 × 2 matrix notation. In resume, the analysis of the flavour-changing sector can be considered "independent" of that for the gauge-h and flavour-conserving sectors.
From d = 4 non-linear operators
With the aid of the (Goldstone) chiral fields T and V µ it is only possible to write d = 4 fermionic operators involving two right-handed (RH) or two left-handed (LH) fields. In the MFV framework under consideration, only operators built with two LH fermions can induce flavour-changing effects at leading order in the spurion expansion. Consequently, terms with two RH fermions will not be considered in what follows.
A total of four independent d = 4 chiral operators containing LH fermion fields can be constructed [35,[60][61][62], namely:
O 1 = i 2Q L λ F γ µ {T, V µ } Q L , O 2 = iQ L λ F γ µ V µ Q L , O 3 = iQ L λ F γ µ T V µ T Q L , O 4 = 1 2Q L λ F γ µ [T, V µ ] Q L . (3.3)
Out of these O 1 − O 3 are CP-even while O 4 is intrinsically CP-odd [35]. Following the discussion in Sec. 2, it is pertinent to extend the definition of these chiral couplings in order to include the possibility of a light scalar degree of freedom (related to the EW symmetry breaking), through the ansatz:
L f χ=4 = ξ i=1,2,3â i O i (h) + ξ 2â 4 O 4 (h) (3.4)
where a redefinition by powers of ξ of the operators coefficients defined in Ref. [35] has been implemented, a i ≡ ξâ i for i = 1, 2, 3, while a 4 ≡ ξ 2â 4 . Furthermore
O i (h) ≡ O i F i (h) ,(3.O 1 (h) ≡ O 1 F (h) (1 + α 1 ξ F (h)) , O 2 (h) ≡ O 2 F (h) (1 + α 2 ξ F (h)) , O 3 (h) ≡ O 3 F (h) (1 + α 3 ξ F (h)) , O 4 (h) ≡ O 4 F 2 (h) . (3.6)
The powers of ξ in Eqs. (3.4) and (3.6) facilitate the identification of the lowest dimension at which a "sibling" operator appears in the linear regime. By sibling we mean an operator written in terms of H, that includes the couplings O 1−4 . For instance, the lowest-dimension siblings of O 1 and O 2 arise at d = 6, while that of O 4 appears at d = 8 [35]. The case of O 3 is special: indeed, it corresponds to a combination of a d = 6 and a d = 8 operators of the linear expansion. The parametrization in Eq. (3.6) reflects this correspondence, where for all the operators the contributions from siblings up to d = 8 have been accounted for (further contributions will arise considering higher-dimension siblings). For ξ 1 it is consistent to retain just the terms linear in ξ and neglect the contributions from O 4 (h), while it can be shown [35] that O 3 (h) coincides with −O 2 (h) and finally only two linearly-independent flavoured operators remain (e.g. O 1 (h) and O 2 (h)), as previously studied in the literature. On the contrary, in the ξ ∼ 1 limit all four operators are on the same footing, higher order terms in ξ may contribute, and one recognizes the need of a QCD-like resummation. In particular any chiral operator is made up by an infinite combination of linear ones, an effect represented by the generic F i (h) functions, which admit in general an expansion in powers of ξ as discussed previously.
In Ref. [35] we
L f χ=4 = − g √ 2 W + µŪ L γ µ [a W (1 + β W h/v) + ia CP (1 + β CP h/v)] y 2 U V + V y 2 D D L + h.c. + − g 2 cos θ W Z µ a u ZŪ L γ µ y 2 U + V y 2 D V † U L (1 + β u Z h/v) +a d ZD L γ µ y 2 D + V † y 2 U V D L (1 + β u Z h/v) , (3.7) where a u Z ≡ a 1 + a 2 + a 3 , a d Z ≡ a 1 − a 2 − a 3 , a W ≡ a 2 − a 3 , a CP ≡ −a 4 . (3.8)
The arbitrary coefficients β i in Eq. (3.7) follow a similar rearrangement to that for a i in Eq. (3.8), once the F(h) functions are expanded to first order in h,
F i (h) ∼ (1 + β i h + ...);
in general each β i may receive contributions from all orders in ξ for large ξ. All limits obtained in Ref. [35] for the values of a d Z , a W and a CP resulted from treelevel contributions to observables. It is interesting -and necessary when considering d = 5 effective couplings-to analyze as well the possible bounds on the a i coefficients from their contribution (still disregarding the h insertions) at loop-level to radiative processes, such as b → sγ decay. Indeed, the modification of the CKM matrix has a non-negligible impact in the branching ratio of this observable and its precision on both the experimental determination and the theoretical prediction constrains significantly the a W − a CP parameter space, as we will show in Sec. 4.
Finally, an important difference with strongly interacting heavy Higgs scenarios is the presence at low-energies of vertices with additional h external legs, as indicated by Eq. (3.7). This implies interesting phenomenological consequences that will be illustrated later on.
From d = 5 non-linear operators
To our knowledge, no discussion of d = 5 flavour-changing chiral operators has been presented in literature. They may contribute at tree-level to relevant flavour-changing observables, as for instance the b → sγ branching ratio. In this subsection all d = 5 flavourchanging chiral operators are identified, while interesting phenomenological consequences will be discussed in Sect. 4.
Gauge invariant d = 5 operators relevant for flavour must have a bilinear structure in the quark fields of the typeQ L (· · ·) U(x) Q R , where dots stand for objects that transform in the trivial or in the adjoint representation of SU (2) L . Besides the vector and scalar chiral fields V µ and T, they can contain either the rank-2 antisymmetric tensor σ µν or the strength tensors B µν , W µν and G µν . According to their Lorentz structure, the resulting independent d = 5 chiral couplings can be classified in three main groups:
i) dipole-type operators:
X 1 = g Q L σ µν U Q R B µν , X 2 = g Q L σ µν T U Q R B µν , X 3 = gQ L σ µν σ i U Q R W i µν , X 4 = gQ L σ µν σ i T U Q R W i µν , X 5 = g sQL σ µν U Q R G µν , X 6 = g sQL σ µν T U Q R G µν , X 7 = gQ L σ µν T σ i U Q R W i µν , X 8 = gQ L σ µν T σ i T U Q R W i µν ; (3.9)
ii) operators containing the rank-2 antisymmetric tensor σ µν :
X 9 =Q L σ µν [V µ , V ν ] U Q R , X 10 =Q L σ µν [V µ , V ν ] T U Q R , X 11 =Q L σ µν [V µ T, V ν T] U Q R , X 12 =Q L σ µν [V µ T, V ν T] T U Q R ; (3.10)
iii) other operators containing the chiral vector fields V µ :
X 13 =Q L V µ V µ U Q R , X 14 =Q L V µ V µ T U Q R , X 15 =Q L V µ T V µ U Q R , X 16 =Q L V µ T V µ T U Q R , X 17 =Q L T V µ T V µ U Q R , X 18 =Q L T V µ T V µ T U Q R .
(3.11)
A fourth group of operators can be constructed from the antisymmetric rank 2 chiral tensor, that transforms in the adjoint of SU (2) L :
V µν ≡ D µ V ν − D ν V µ = i g W µν − i g 2 B µν T + [ V µ , V ν ] . (3.12)
However, the second equality in Eq. (3.12) shows that operators including V µν are not linearly independent from those listed in Eqs. (3.9)-(3.10).
The chiral Lagrangian containing the 18 fermionic flavour-changing d = 5 operators can thus be written as
L f χ=5 = 18 i=1 b i X i Λ s , (3.13)
where Λ s is the scale of the strong dynamics and b i are arbitrary O(1) coefficients. It is worth to underline that for the analysis of d = 5 operators in the non-linear regime, the relevant scale is Λ s and not f as for the analysis in the previous section. Indeed, f is associated to light Higgs insertions, while Λ s refers to the characteristic scale of the strong resonances that, once integrated out, give rise to the operators listed in Eqs. (3.9)-(3.11). A redefinition of the coefficients allows to make explicit the connection to their lowestdimension siblings in the linear expansion:
L f χ=5 = ξ 8 i=1b i X i Λ s + ξ ξ 18 i=9b i X i Λ s .
(3.14)
In the limit of small ξ, X 1−6 correspond to d = 6 operators in the linear expansion, while Because in this work the analysis will be restrained to (at most) d = 5 couplings, it is not necessary nor pertinent to discuss further the possible extensions X i → X i (h) that would include the dependence on a light Higgs through generic F i (h) functions.
The phenomenological impact of these contributions can be best identified through the low-energy Lagrangian written in the unitary gauge:
δL χ=5 = δL u χ=5 + δL d χ=5 + δL ud χ=5 ,(3.15)
where
δL d χ=5 = g 2 4 cos θ 2 W b d Z Λ sD L D R Z µ Z µ + g 2 2 b d W Λ sD L D R W + µ W −µ + g 2 c d W Λ sD L σ µν D R W + µ W − ν + + e d d F Λ sD L σ µν D R F µν + g 2 cos θ W d d Z Λ sD L σ µν D R Z µν + g s d d G Λ sD L σ µν D R G µν + h.c. ,(3.16)δL ud χ=5 = g 2 2 √ 2 cos θ W b + W Z Λ sŪ L D R W + µ Z µ + b − W Z Λ sD L U R W − µ Z µ + + g 2 2 √ 2 cos θ W c + W Z Λ sŪ L σ µν D R W + µ Z ν + c − W Z Λ sD L σ µν U R W − µ Z ν + + g √ 2 d + W Λ sŪ L σ µν D R W + µν + d − W Λ sD L σ µν U R W − µν + h.c. ,(3.17)
and analogously for δL u χ=5 as in δL d χ=5 interchanging d ↔ u and D L,R ↔ U L,R . In these equations
W ± µν = ∂ µ W ± ν − ∂ ν W ± µ ± i g W 3 µ W ± ν − W 3 ν W ± µ ,
while the photon and Z field strengths are defined as F µν = ∂ µ A ν − ∂ ν A µ and Z µν = ∂ µ Z ν − ∂ ν Z µ , respectively, and W 3 µ = cos θ W Z µ +sin θ W A µ . The relations between the coefficients appearing in Eqs.
∆F = 1 and ∆F = 2 observables
In Ref. [35], the constraints on the coefficients of the d = 4 flavour-changing operators of the non-linear expansion have been analysed. These bounds resulted from ∆F = 1 and ∆F = 2 observables and apply straightforwardly to non-linear regimes with a light h scalar. Operators O 1 , O 2 and O 3 induce tree-level contributions to ∆F = 1 processes mediated by the Z boson, as can be seen from the Z couplings in the effective Lagrangian in Eq. from K + → π +ν ν, B → X s + − and B → µ + µ − data. Furthermore, operators O 2 , O 3 and O 4 induce corrections to the fermion-W couplings, and thus to the the CKM matrix, see Eq. (3.7). This in turn induces modifications [35] on the strength of meson oscillations (at loop level), on B + → τ + ν decay and on the B semileptonic CP-asymmetry, among others; more specifically the following process have been taken into account in Ref. [35]:
-The CP-violating parameter K of the K 0 −K 0 system and the mixing-induced CP asymmetries S ψK S and S ψφ in the decays B 0 d → ψK S and B 0 s → ψφ. The corrections induced to K are proportional to y 2 t , while those to S ψK S and S ψφ are proportional to y 2 b . Consequently, possible large deviations from the values predicted by the SM are only allowed in the K system.
-The ratio among the meson mass differences in the B d and B s systems, R ∆M B ≡ ∆M B d /∆M Bs . The NP contributions on the mass differences almost cancel in this ratio and therefore deviations from the SM prediction for this observable are negligible.
-The ratio among the B + → τ + ν branching ratio and the B d mass difference, R BR/∆M ≡ BR(B + → τ + ν)/∆M B d . This observable is clean from theoretical hadronic uncertainties and the constraints on the NP parameters are therefore potentially strong.
Since only small deviations from the SM prediction for S ψK S are allowed, only values close to the exclusive determination for |V ub | are favoured (see Ref. [35] for a complete discussion). Moreover, it is possible to constrain the |V ub | − γ parameter space, with γ being one of the angles of the unitary triangle, requiring that both S ψK S and R ∆M B observables are inside the 3σ experimental determination. Once this reduced parameter space is identified, it is illustrative to choose one of its points as reference point, in order to present the features of this MFV scenario; for instance for the values (|V ub |, γ) = (3.5 × 10 −3 , 66 • ), S ψK S , R ∆M B and |V ub | are all inside their own 1σ values, and the predicted SM values for K and R BR/∆M are 3
K = 1.88 × 10 −3 , R BR/∆M = 1.62 × 10 −4 ,(4.2)
that should be compared to the corresponding experimental determinations 4 The errors on these quantities are ∼ 15% and ∼ 8%, estimated considering the uncertainties on the input parameters and the analysis performed in Ref. [65]. Fig. 1 shows the correlation between K and R BR/∆M (left panel) and the a CP − a W parameter space (right panel), requiring that K and R BR/∆M lie inside their own 3σ experimental determination. Finally, for those points in the a CP − a W parameter space that pass all the previous constraints, the predictions for S ψφ and the B semileptonic CP-asymmetry turned out to be close to the SM determination, in agreement with the recent LHCb measurements [66].
In the next subsection, new constraints on the d = 4 operator coefficients a W and a CP will be obtained from their impact at loop-level on radiative B decays. The latter data will be also used to constrain the set of d = 5 chiral operators coefficients identified in Sect. 3.2: while they are expected a priori to be all of comparable strength, the most powerful experimental constraints should result from the tree-level impact of dipole operators X 1 to X 8 , as they include vertices involving just three fields, one of them being a light gauge boson. Photonic penguins and also gluonic penguins and tree-level four-fermion diagrams (through renormalization group mixing effects) will be explored below and contrasted with radiative B decays.
4.2B → X s γ branching ratio
The current experimental value of theB → X s γ branching ratio [67] is
Br(B → X s γ) = (3.31 ± 0.16 ± 0.30 ± 0.10) × 10 −4 , (4.4) 3
The predicted SM value for K differs from that in Ref. [35] due to the new input parameters used: in particularB K = 0.7643 ± 0.0097 has sensibly increased [63]. for a photon-energy cut-off E γ > 1.6 GeV. On the other hand, its NNLO SM prediction for that same energy cut-off and in theB-meson rest frame, reads [68][69][70] Br(B → X s γ) = (3.15 ± 0.23) × 10 −4 .
(4.5)
The presence of NP can easily modify this prediction, and the precision of both the experimental measure and the SM computation allows in principle to provide severe bounds on the NP parameters. The effective Lagrangian relevant for b → sγ decay at the µ b = O(m b ) scale can be written as:
L ef f = 4G F √ 2 V * ts V tb 6 i=1 C i (µ b )Q i (µ b ) + C 7γ (µ b )Q 7γ (µ b ) + C 8G (µ b )Q 8G (µ b ) , (4.6)
where Q 1,2 , Q 3,...,6 and Q 7γ,8G denote the current-current, QCD penguin and magnetic dipole operators, respectively, as it is customary. In this effective Lagrangian, subleading terms proportional to V * us V ub have been neglected; the same applies to the contributions from the so-called primed operators, similar to those appearing in Eq. (4.6) although with opposite chirality structure, which are suppressed by the m s /m b ratio.
The value of the Wilson coefficients C i (µ b ) at the scale µ b is derived applying the QCD renormalisation group (RG) analysis to the corresponding Wilson coefficients, evaluated at the effective scale µ of the underlying theory, which is the matching scale linking the effective and full descriptions. For the SM case, this is the electroweak scale µ W = O(M W ).
The effects of the RG contributions are in general non-negligible, and indeed the rate of the b → sγ decay in the SM is enhanced by a factor of 2 − 3 [69] upon the inclusion of these corrections. They originate dominantly from the mixing of charged current-current operators with the dipole operators, and to a smaller extent from the mixing with QCDpenguin operators. These QCD contributions can be formally written as
\[
C_i(\mu_b) = U_{ij}(\mu_b, \mu)\, C_j(\mu)\,, \tag{4.7}
\]
where U_{ij}(µ_b, µ) are the elements of the RG evolution matrix from the effective scale µ down to µ_b [71]. The expression for the B̄ → X_s γ branching ratio is then given as follows:
\[
\mathrm{Br}(\bar B \to X_s \gamma) = R \left[\, |C_{7\gamma}(\mu_b)|^2 + N(E_\gamma) \right], \tag{4.8}
\]
where R = 2.47 × 10⁻³ is simply an overall factor as discussed in Refs. [68, 70] and N(E_γ) = (3.6 ± 0.6) × 10⁻³ is a non-perturbative contribution for the photon-energy cut-off E_γ > 1.6 GeV. C_{7γ}(µ_b) can be decomposed into SM and NP contributions,
\[
C_{7\gamma}(\mu_b) = C^{SM}_{7\gamma}(\mu_b) + \Delta C_{7\gamma}(\mu_b)\,, \tag{4.9}
\]
where, for µ_b = 2.5 GeV, the SM contribution at the NNLO level is given by [68-70]
\[
C^{SM}_{7\gamma}(\mu_b) = -0.3523\,. \tag{4.10}
\]
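As a quick numerical cross-check, the ingredients of Eqs. (4.8)-(4.10) can be combined directly. The following minimal Python sketch uses only the central values quoted above, with no independent input, and reproduces the central value of the NNLO prediction in Eq. (4.5):

# Minimal numerical cross-check of Eqs. (4.8)-(4.10); all inputs are the
# central values quoted in the text, nothing here is an independent calculation.
R = 2.47e-3        # overall factor in Eq. (4.8)
N = 3.6e-3         # non-perturbative contribution N(E_gamma)
C7_SM = -0.3523    # SM Wilson coefficient at mu_b = 2.5 GeV, Eq. (4.10)

br_sm = R * (C7_SM**2 + N)
print(f"Br(B -> X_s gamma)_SM = {br_sm:.3e}")  # ~3.15e-4, matching Eq. (4.5)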
In our context, the NP contributions arise from the non-unitarity of the CKM matrix and from the presence of flavour-violating Z-fermion couplings (induced by the d = 4 chiral operators O_{1−4} [35]), as well as from the direct contributions of the d = 5 chiral operators X_{1−8}. In the following, we discuss these contributions separately.
d = 4 contributions
The effective scale of the d = 4 chiral operators is f ≥ v, but no contributions to the Wilson coefficients relevant for b → sγ arise at scales above the electroweak one. As a result, the analysis of these contributions is analogous to that in the SM, except for the fact that the NP operators modify the initial conditions at µ_W. The Wilson coefficients at the scale µ_W can be written as
\[
C_i(\mu_W) = C^{SM}_i(\mu_W) + \Delta C^{d=4}_i(\mu_W)\,, \tag{4.11}
\]
where the SM coefficients at LO are given by [72]
\[
\begin{aligned}
C^{SM}_{2}(\mu_W) &= 1\,,\\
C^{SM}_{7\gamma}(\mu_W) &= \frac{7 x_t - 5 x_t^2 - 8 x_t^3}{24 (x_t - 1)^3} + \frac{3 x_t^3 - 2 x_t^2}{4 (x_t - 1)^4}\, \log x_t\,,\\
C^{SM}_{8G}(\mu_W) &= \frac{2 x_t + 5 x_t^2 - x_t^3}{8 (x_t - 1)^3} - \frac{3 x_t^2}{4 (x_t - 1)^4}\, \log x_t\,,
\end{aligned} \tag{4.12}
\]
with x_t ≡ m_t²/M_W².
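For illustration, the two LO matching functions of Eq. (4.12) are straightforward to evaluate numerically. In the sketch below the top and W masses are illustrative inputs of this example, not values taken from the paper:

import math

def C7_SM_muW(xt):
    """LO SM matching condition for C_7gamma at mu_W, Eq. (4.12)."""
    return ((7*xt - 5*xt**2 - 8*xt**3) / (24*(xt - 1)**3)
            + (3*xt**3 - 2*xt**2) / (4*(xt - 1)**4) * math.log(xt))

def C8_SM_muW(xt):
    """LO SM matching condition for C_8G at mu_W, Eq. (4.12)."""
    return ((2*xt + 5*xt**2 - xt**3) / (8*(xt - 1)**3)
            - 3*xt**2 / (4*(xt - 1)**4) * math.log(xt))

# Illustrative inputs: m_t ~ 173 GeV, M_W ~ 80.4 GeV.
xt = (173.0 / 80.4)**2
print(C7_SM_muW(xt), C8_SM_muW(xt))  # roughly -0.19 and -0.10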
The NP contributions due to the non-unitarity of the CKM matrix induce modifications in all three Wilson coefficients involved:
\[
\begin{aligned}
\Delta C^{d=4}_{2}(\mu_W) &= (a_W - i\, a_{CP})\, y_b^2 + \left(a_W^2 + a_{CP}^2\right) y_b^2\, y_c^2\,,\\
\Delta C^{d=4}_{7\gamma}(\mu_W) &= \left[\, 2 a_W\, y_t^2 + \left(a_W^2 + a_{CP}^2\right) y_t^4 \right] \left[\frac{23}{36} + C^{SM}_{7\gamma}(\mu_W)\right],\\
\Delta C^{d=4}_{8G}(\mu_W) &= \left[\, 2 a_W\, y_t^2 + \left(a_W^2 + a_{CP}^2\right) y_t^4 \right] \left[\frac{1}{3} + C^{SM}_{8G}(\mu_W)\right].
\end{aligned} \tag{4.13}
\]
These terms originate from the corresponding SM diagrams with the exchange of a W boson and are proportional to a_W and a_CP: indeed, they are due to the modified vertex couplings, both in the tree-level diagram that gives rise to Q_2 and in the one-loop penguin diagrams that give rise to Q_{7γ} and Q_{8G}. On the other hand, the new flavour-changing Z-fermion vertices participate in penguin diagrams contributing to the b → sγ decay amplitude, with a Z boson running in the loop [73]. These contributions can be safely neglected, though, because they are proportional to the a^{u,d}_Z parameters, which are already severely constrained by their tree-level impact on other FCNC processes.
Including the QCD RG corrections, the NP contributions at LO to the Wilson coefficients are given by
\[
\Delta C_{7\gamma}(\mu_b) = \eta^{\frac{16}{23}}\, \Delta C_{7\gamma}(\mu_W) + \frac{8}{3}\left(\eta^{\frac{14}{23}} - \eta^{\frac{16}{23}}\right) \Delta C_{8G}(\mu_W) + \Delta C_{2}(\mu_W) \sum_{i=1}^{8} \kappa_i\, \eta^{\sigma_i}\,, \tag{4.14}
\]
with
\[
\eta \equiv \frac{\alpha_s(\mu_W)}{\alpha_s(\mu_b)} = 0.45\,,
\]
where the κ_i's and σ_i's are the magic numbers listed in Tab. 1, while η has been calculated taking α_s(M_Z = 91.1876 GeV) = 0.118. Due to the simple additive structure of the NP contributions in Eq. (4.11), these magic numbers are the same as in the SM context.

Table 1: The magic numbers for ΔC_{7γ}(µ_b) defined in Eq. (4.14).
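A minimal sketch of the LO evolution in Eq. (4.14) is given below. Since Table 1 is not reproduced here, the kappa and sigma arrays are filled with the standard LO magic numbers from the literature (cf. Ref. [71]); the values of Table 1 should take precedence wherever they differ:

eta = 0.45  # alpha_s(mu_W) / alpha_s(mu_b), as quoted below Eq. (4.14)

# Standard LO magic numbers for the Delta C_2 admixture (cf. Ref. [71]);
# an assumption of this sketch, since Table 1 itself is not reproduced here.
kappa = [2.2996, -1.0880, -3.0/7.0, -1.0/14.0, -0.6494, -0.0380, -0.0186, -0.0057]
sigma = [14.0/23.0, 16.0/23.0, 6.0/23.0, -12.0/23.0, 0.4086, -0.4230, -0.8994, 0.1456]

def delta_c7_mub(dc7_muW, dc8_muW, dc2_muW):
    """LO QCD evolution of the NP shifts from mu_W down to mu_b, Eq. (4.14)."""
    mix = sum(k * eta**s for k, s in zip(kappa, sigma))
    return (eta**(16.0/23.0) * dc7_muW
            + (8.0/3.0) * (eta**(14.0/23.0) - eta**(16.0/23.0)) * dc8_muW
            + dc2_muW * mix)

# Example: a pure Delta C_7gamma(mu_W) = 0.1 with no C_8G or C_2 admixture.
print(delta_c7_mub(0.1, 0.0, 0.0))  # ~0.057, i.e. eta**(16/23) * 0.1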
The analysis above allows one to estimate the impact of the experimental value for BR(B̄ → X_s γ) on the NP parameter space of the O_1 ... O_4 operators: in Fig. 2 we retake the scatter plot shown in Fig. 1b, based on the analysis of ΔF = 1 and ΔF = 2 observables for the reference point (|V_ub|, γ) = (3.5 × 10⁻³, 66°), and superimpose the new constraints resulting from the present loop-level impact on BR(B̄ → X_s γ): they are depicted as shadowed (grey) exclusion regions. The figure illustrates that they reduce the available parameter space, eliminating about half of the points previously allowed in the scatter plot of Fig. 1b.

Figure 2: a_W − a_CP parameter space for the ε_K and BR(B⁺ → τ⁺ν)/ΔM_{B_d} observables inside their 3σ error ranges and a_Z^d ∈ [−0.044, 0.009] (see [35] for details). The gray areas correspond to the bounds from BR(B̄ → X_s γ) at 1σ, 2σ, and 3σ, from the lighter to the darker, respectively.

Fig. 2 shows that a_CP, the overall coefficient of the genuinely CP-odd coupling O_4, and thus of O_4(h) in Eq. (3.5), is still loosely constrained by low-energy data. This has an interesting phenomenological consequence for Higgs physics prospects, since it translates into correlated exotic Higgs-fermion couplings, which for instance at leading order in h read:
\[
\delta \mathcal{L}^{h}_{\chi,\, d=4} \supset a_{CP} \left(1 + \beta_{CP}\, \frac{h}{v}\right) \mathcal{O}_4\,.
\]
For small values of ξ (for which the linear expansion could be an acceptable guideline), the relative weight of the couplings with and without an external Higgs particle reduces to (see Eq. (3.6))
\[
\beta_{CP} \sim 4\,. \tag{4.17}
\]
These are encouraging results in the sense of allowing short-term observability. In a conservative perspective, the operator coefficients of the d = 4 non-linear expansion should be expected to be O(1). Would this be the case, the possibility of NP detection would be delayed until both low-energy flavour experiments and LHC precision on h-fermion couplings near the O(10⁻²) level, which for LHC means reaching at least its 3000 fb⁻¹ running regime. Notwithstanding this, a steady improvement of the above bounds should be sought.
d = 5 contributions
For the d = 5 chiral operators considered, the effective scale weighting their overall strength is Λ_s ≤ 4πf. In the numerical analysis that follows, we will consider for Λ_s the smallest value possible, i.e. Λ_s = 4πv. For this value, the effects due to the d = 5 chiral operators are maximized: indeed, for higher scales, the initial conditions for the Wilson coefficients are suppressed as Λ_s increases. This effect is only slightly softened, but not cancelled, by the enhancement due to the QCD running from a higher initial scale. For the analytical expressions, we will keep the discussion at a more general level and denote the high scale by µ_s, with µ_s ≫ v. At this scale the top quark and the W boson are still dynamical and therefore do not yet contribute to any Wilson coefficients. The only operators relevant for the b → sγ decay and with non-vanishing initial conditions are thus Q_{7γ} and Q_{8G}, whose contributions arise from the dipole d = 5 chiral operators. At the scale µ_s the Wilson coefficients can thus be written as
\[
C_i(\mu_s) \equiv C^{SM}_i(\mu_s) + \Delta C^{d=5}_i(\mu_s)\,, \tag{4.18}
\]
where the only non-vanishing contributions are
\[
\Delta C^{d=5}_{7\gamma}(\mu_s) = \frac{(4\pi)^2\, v\, y_t^2}{\sqrt{2}\, \mu_s}\, d^d_F\,, \qquad
\Delta C^{d=5}_{8G}(\mu_s) = \frac{(4\pi)^2\, v\, y_t^2}{\sqrt{2}\, \mu_s}\, d^d_G\,. \tag{4.19}
\]
The QCD RG evolution down to µ_b then proceeds in two steps:

i) A six-flavour RG running from µ_s down to µ_W, where the top quark and the W boson are integrated out. At the electroweak scale the Wilson coefficients can be written as
\[
C_i(\mu_W) \equiv C^{SM}_i(\mu_W) + \Delta C^{d=5}_i(\mu_W)\,, \tag{4.20}
\]
where the only non-vanishing contributions from the set of d = 5 flavour-changing fermionic operators are those given by
\[
\begin{aligned}
\Delta C^{d=5}_{7\gamma}(\mu_W) &= \frac{8}{3}\left(1 - \eta_{\mu_s}^{2/21}\right)\eta_{\mu_s}^{2/3}\, \Delta C^{d=5}_{8G}(\mu_s) + \eta_{\mu_s}^{16/21}\, \Delta C^{d=5}_{7\gamma}(\mu_s)\,,\\
\Delta C^{d=5}_{8G}(\mu_W) &= \eta_{\mu_s}^{2/3}\, \Delta C^{d=5}_{8G}(\mu_s)\,,
\end{aligned} \tag{4.21}
\]
with
\[
\eta_{\mu_s} \equiv \frac{\alpha_s(\mu_s)}{\alpha_s(\mu_W)}\,. \tag{4.22}
\]
In the numerical analysis, η_{µ_s} = 0.67 will be taken.
ii) A five-flavour RG running from µ_W down to µ_b. This analysis is analogous to that described in the previous section, replacing the initial conditions for the Wilson coefficients in Eqs. (4.11)-(4.13) with those in Eqs. (4.20)-(4.22).

It is interesting to focus on the final numerical result for BR(B̄ → X_s γ), leaving unspecified only the parameters of the d = 5 chiral operators b^d_{F,G}:
\[
\mathrm{BR}(b \to s\gamma) = 0.000315 - 0.00175\, b^d_{\rm eff} + 0.00247 \left(b^d_{\rm eff}\right)^2\,, \tag{4.23}
\]
where
\[
b^d_{\rm eff} \equiv 3.8\, b^d_F + 1.2\, b^d_G\,. \tag{4.24}
\]
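Given Eq. (4.23), the 3σ-allowed intervals for b^d_eff follow from solving a quadratic equation. The sketch below assumes, purely as an illustration, that the three uncertainties of the experimental value in Eq. (4.4) are combined in quadrature; the paper's exact treatment of the errors may differ:

import math

def br(b_eff):
    """BR(B -> X_s gamma) as a function of b_eff^d, Eq. (4.23)."""
    return 0.000315 - 0.00175 * b_eff + 0.00247 * b_eff**2

def b_eff_at(br_target):
    """The two roots of Eq. (4.23) for a given branching-ratio value."""
    a, b, c = 0.00247, -0.00175, 0.000315 - br_target
    disc = b * b - 4 * a * c
    return ((-b - math.sqrt(disc)) / (2 * a), (-b + math.sqrt(disc)) / (2 * a))

# Illustrative 3-sigma experimental window from Eq. (4.4); combining the three
# quoted uncertainties in quadrature is an assumption of this sketch.
err = math.sqrt(0.16**2 + 0.30**2 + 0.10**2) * 1e-4
lo, hi = 3.31e-4 - 3 * err, 3.31e-4 + 3 * err
print(b_eff_at(lo), b_eff_at(hi))  # endpoints of the two allowed b_eff^d bands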
The corresponding plot is shown on the left-hand side of Fig. 3, which depicts the dependence of the branching ratio on b^d_eff, together with the experimental 3σ regions. Two distinct ranges for b^d_eff are allowed. Using the expression for b^d_eff in Eq. (4.24), it is possible to translate these bounds onto the b^d_F − b^d_G parameter space, as shown on the right-hand side of Fig. 3; the two narrow bands depict the two allowed regions. Analogously to the case of the O_1(h) ... O_4(h) operators discussed in the previous subsection, a correlation would hold between a low-energy signal from these d = 5 couplings and the detection of exotic fermionic couplings at LHC, upon considering their extension to include h-dependent insertions. Nevertheless, a consistent analysis would require in this case to consider d = 6 couplings of the non-linear expansion, which are outside the scope of the present work.
5 Conclusions
The lack of indications of new resonances in the LHC data, other than a strong candidate for the SM scalar boson h, together with the alignment of the couplings of the latter with SM expectations, draws a puzzling panorama with respect to the electroweak hierarchy problem. If the experimental pattern persists, either the extended prejudice against fine-tunings of the SM parameters should be dropped, or the new physics scale is still awaiting discovery and may be associated, for instance, with a dynamical origin of the SM scalar boson. We have focused in this work on possible implications for fermionic couplings of a strongly interacting origin of the electroweak symmetry breaking dynamics with a light scalar h with mass around 125 GeV, within an effective Lagrangian approach.
The parameter describing the degree of non-linearity, ξ = (v/f)², must lie in the range 0 < ξ < 1. For small values, the effective theory converges towards the SM, as the NP contributions can be safely neglected. On the other hand, large values indicate a chiral regime for the dynamics of the Goldstone bosons, which in turn requires the use of a chiral expansion to describe them, combined with appropriate insertions of the light h field.
We identified the flavour-changing operator basis for the non-linear regime up to d = 5. Furthermore, taking into account the QCD RG evolution, the coefficients of these operators have been constrained from a plethora of low-energy transitions. In particular, we have analyzed in detail and in depth the constraints resulting from the data on the B̄ → X_s γ branching ratio. Its impact is important on the global coefficients of the four relevant d = 4 flavour-changing chiral couplings at the loop level, and on those of the d = 5 dipole operators. The limits obtained constrain in turn the possible fermion-h exotic couplings to be explored at the LHC. A particularly interesting example is that of the intrinsically CP-odd d = 4 operator O_4 of the non-linear expansion, whose coefficient is loosely constrained by data: a correlation is established between the possible signals in low-energy searches of CP violation and anomalous h-fermion couplings at the LHC. Their relative strength is explored for the case of a relatively small ξ. A similar correlation between low-energy flavour searches and LHC signals also follows for all other operators.
A Relation to the SILH basis
In this appendix we revisit the transition from an SU(2)_L × U(1)_Y invariant effective Lagrangian in the linear realization of the EW symmetry breaking mechanism to an effective chiral Lagrangian, focusing on the so-called SILH framework [39]. The d = 6 SILH Lagrangian in Eq. (2.15) of Ref. [39] can be written in terms of U, V, T and a scalar field h:
\[
\begin{aligned}
\mathcal{L}_{\rm SILH} = \xi \bigg[\, & \frac{c_H}{2}\, (\partial_\mu h)(\partial^\mu h)\, \mathcal{F}(h)
+ \frac{c_T}{2}\, \frac{v^2}{4}\, {\rm Tr}\big[\mathbf{T} \mathbf{V}_\mu\big]\, {\rm Tr}\big[\mathbf{T} \mathbf{V}^\mu\big]\, \mathcal{F}(h)^2
- c_6\, \lambda\, \frac{v^4}{8}\, \mathcal{F}(h)^3 \\
& + \Big( c_y\, \frac{v}{\sqrt{2}}\, \bar Q_L\, \mathbf{U}\, {\rm diag}(y_U, y_D)\, Q_R\, \mathcal{F}(h)^{3/2} + {\rm h.c.} \Big) \\
& - i\, c_W\, \frac{g}{2 m_\rho^2}\, \frac{f^2}{2}\, (D_\mu W^{\mu\nu})^i\, {\rm Tr}\big[\sigma_i \mathbf{V}_\nu\big]\, \mathcal{F}(h)
+ i\, c_B\, \frac{g'}{2 m_\rho^2}\, \frac{f^2}{2}\, (\partial_\mu B^{\mu\nu})\, {\rm Tr}\big[\mathbf{T} \mathbf{V}_\nu\big]\, \mathcal{F}(h) \\
& + i\, c_{HW}\, \frac{g}{16\pi^2}\, W^i_{\mu\nu} \Big( \tfrac{1}{4} {\rm Tr}\big[\sigma^i \mathbf{V}^\mu \mathbf{V}^\nu\big]\, \mathcal{F}(h) - \tfrac{1}{4} {\rm Tr}\big[\sigma^i \mathbf{V}^\mu\big]\, \partial^\nu \mathcal{F}(h) \Big) \\
& + i\, c_{HB}\, \frac{g'}{16\pi^2}\, B_{\mu\nu} \Big( \tfrac{1}{4} {\rm Tr}\big[\mathbf{T} \mathbf{V}^\mu \mathbf{V}^\nu\big]\, \mathcal{F}(h) + \tfrac{1}{4} {\rm Tr}\big[\mathbf{T} \mathbf{V}^\mu\big]\, \partial^\nu \mathcal{F}(h) \Big) \\
& + c_\gamma\, \frac{g^2}{16\pi^2}\, \frac{g'^2}{g_\rho^2}\, \tfrac{1}{2}\, B_{\mu\nu} B^{\mu\nu}\, \mathcal{F}(h)
+ c_g\, \frac{g_S^2}{16\pi^2}\, \frac{y_t^2}{g_\rho^2}\, \tfrac{1}{2}\, G^a_{\mu\nu} G^{a\mu\nu}\, \mathcal{F}(h) \bigg], \tag{A.1}
\end{aligned}
\]
where the notation for the operator coefficients is as in Ref. [39] and F(h) is the function of the light Higgs field resulting from the doublet Higgs ansatz as in Eq. (2.4); the Lagrangian above is only complete at leading order for values of ξ ≪ 1; otherwise, other operators of non-linear parenthood have to be added, as explained earlier.
B Gauge field equations of motion
When deriving the gauge field SM equations in Eqs. (3.1) and (3.2), all contributions from d = 4 operators in the δL_{d=4} effective Lagrangians have been neglected, on the assumption that their coefficients are small, a_i ≪ 1 for i = 1...4, typically a_i ≈ 1/(16π²). This allows one to trade flavour-conserving currents for gauge terms with derivatives of the gauge field strengths.
Otherwise, for a_i ∼ 1, taking into account that the gauge sector is already severely modified, and thus keeping only the flavour-changing contributions in δL^f_{d=4}, Eqs. (3.1) and (3.2) would get modified to
\[
\begin{aligned}
(D_\mu W^{\mu\nu})^j =\ & \frac{i\, g}{4}\, v^2\, {\rm Tr}\big[\mathbf{V}^\nu \sigma^j\big]\, \mathcal{F}_C(h) + \frac{g}{2}\, \bar Q_L \gamma^\nu \sigma^j Q_L \\
& - \frac{g}{2}\, \bar Q_L \gamma^\nu \lambda_F \left[(a_2 - a_3)\, \delta^{jk} - a_4\, \epsilon^{3jk}\right] \sigma^k Q_L\,,
\end{aligned} \tag{B.1}
\]
\[
\begin{aligned}
\partial_\mu B^{\mu\nu} =\ & -\frac{i\, g'}{4}\, v^2\, {\rm Tr}\big[\mathbf{T} \mathbf{V}^\nu\big]\, \mathcal{F}_C(h) + g'\, \bar Q_L \gamma^\nu h^q_L Q_L + g'\, \bar Q_R \gamma^\nu h^q_R Q_R \\
& - g'\, \bar Q_L \gamma^\nu \lambda_F \left[a_1 \mathbb{1} + (a_2 + a_3)\, \frac{\sigma^3}{2}\right] Q_L\,,
\end{aligned} \tag{B.2}
\]
with h_{L,R} the left and right hypercharges in the 2 × 2 matrix notation, and where the right-handed flavour-changing contributions have been disregarded. However, as the gauge-h coefficients are severely constrained by EW precision data (barring extremely fine-tuned regions in the parameter space), the analysis of flavour-changing couplings would only get modified by terms of O(a_i × a_GH), i = 1...4, with a_GH the coefficients in the gauge-h sector, and therefore their impact on the flavour sector is negligible.
C Linear siblings of the d = 5 operators
In this appendix we connect the operators listed in Eqs. (3.9)-(3.11) with those defined in the linear realization, X_i ↔ Σ_j C_{ij} X_{Hj}, where C is an 18 × 18 matrix.
The first set of non-linear operators listed in Eq. (3.9) corresponds to the following eight linear operators containing fermions, the Higgs doublet H, the rank-2 antisymmetric tensor σ^{µν} and the field strengths B_{µν}, W_{µν} and G_{µν}:
\[
\begin{aligned}
X_{H1} &= g'\, \bar Q_L \sigma^{\mu\nu} H\, D_R\, B_{\mu\nu}\,, &
X_{H2} &= g'\, \bar Q_L \sigma^{\mu\nu} \tilde H\, U_R\, B_{\mu\nu}\,,\\
X_{H3} &= g\, \bar Q_L \sigma^{\mu\nu} W_{\mu\nu} H\, D_R\,, &
X_{H4} &= g\, \bar Q_L \sigma^{\mu\nu} W_{\mu\nu} \tilde H\, U_R\,,\\
X_{H5} &= g_s\, \bar Q_L \sigma^{\mu\nu} H\, D_R\, G_{\mu\nu}\,, &
X_{H6} &= g_s\, \bar Q_L \sigma^{\mu\nu} \tilde H\, U_R\, G_{\mu\nu}\,,\\
X_{H7} &= g\, \bar Q_L \sigma^{\mu\nu} \sigma_i H\, D_R \big(H^\dagger W_{\mu\nu} \sigma^i H\big)\,, &
X_{H8} &= g\, \bar Q_L \sigma^{\mu\nu} \sigma_i \tilde H\, U_R \big(H^\dagger \sigma^i W_{\mu\nu} H\big)\,.
\end{aligned} \tag{C.1}
\]
The operators X_{H7}, X_{H8} have mass dimension d = 8, while all the others have (linear) mass dimension d = 6. The correspondence between these linear operators and the non-linear ones listed in Eq. (3.9) is the following: for i = 1, ..., 8,
\[
X_i \leftrightarrow \sum_{j=1}^{8} C_{ij}\, X_{Hj}\,,
\]
with C a constant matrix of overall normalization √2/f.

The second set of non-linear operators listed in Eq. (3.10) corresponds to the following four linear operators containing fermions, the Higgs doublet H and the rank-2 antisymmetric tensor σ^{µν}:
\[
\begin{aligned}
X_{H9} &= \bar Q_L \sigma^{\mu\nu} H\, D_R \left[(D_\mu H)^\dagger D_\nu H - (\mu \leftrightarrow \nu)\right],\\
X_{H10} &= \bar Q_L \sigma^{\mu\nu} \tilde H\, U_R \left[(D_\mu H)^\dagger D_\nu H - (\mu \leftrightarrow \nu)\right],\\
X_{H11} &= \bar Q_L \sigma^i \sigma^{\mu\nu} H\, D_R \left[(D_\mu H)^\dagger \sigma^i D_\nu H - (\mu \leftrightarrow \nu)\right],\\
X_{H12} &= \bar Q_L \sigma^i \sigma^{\mu\nu} \tilde H\, U_R \left[(D_\mu H)^\dagger \sigma^i D_\nu H - (\mu \leftrightarrow \nu)\right],
\end{aligned} \tag{C.3}
\]
all of them of mass dimension d = 8. The correspondence among these linear operators and those non-linear listed in Eq. (3.10) is the following: for i = 9, . . . , 12,
\[
X_i \leftrightarrow \sum_{j=9}^{12} C_{ij}\, X_{Hj} \qquad \text{with} \qquad
C = \frac{2\sqrt{2}}{f^3}
\begin{pmatrix}
0 & 0 & 1 & 1\\
0 & 0 & -1 & 1\\
1 & -1 & 0 & 0\\
-1 & -1 & 0 & 0
\end{pmatrix}. \tag{C.4}
\]
For the third set in Eq. (3.11), we consider the following six linear operators involving fermions and the Higgs doublet H:
\[
\begin{aligned}
X_{H13} &= \bar Q_L H\, D_R\, (D_\mu H)^\dagger D^\mu H\,, &
X_{H14} &= \bar Q_L \tilde H\, U_R\, (D_\mu H)^\dagger D^\mu H\,,\\
X_{H15} &= \bar Q_L \sigma^i H\, D_R\, (D_\mu H)^\dagger \sigma^i D^\mu H\,, &
X_{H16} &= \bar Q_L \sigma^i \tilde H\, U_R\, (D_\mu H)^\dagger \sigma^i D^\mu H\,,\\
X_{H17} &= \bar Q_L H\, D_R\, (D_\mu H)^\dagger H\, H^\dagger D^\mu H\,, &
X_{H18} &= \bar Q_L \tilde H\, U_R\, (D_\mu H)^\dagger H\, H^\dagger D^\mu H\,,
\end{aligned} \tag{C.5}
\]
where the first four operators have mass dimension d = 8, while the last two have mass dimension d = 10. It is then possible to establish the following correspondence between these linear operators and those non-linear listed in Eq. (3.11): for i = 13, ..., 18,
\[
X_i \leftrightarrow \sum_{j=13}^{18} C_{ij}\, X_{Hj}\,,
\]
with C a constant matrix of overall normalization 2√2/f³.

D Coefficients in the unitary basis

In this appendix, we report the relations between the coefficients appearing in the Lagrangian Eq. (3.16) and the ones defined in Eq. (3.13) for the effective Lagrangian in the unitary basis:
\[
\left(c^u_W,\, c^d_W,\, c^+_{WZ},\, c^-_{WZ},\, d^u_F,\, d^d_F,\, d^u_Z,\, d^d_Z,\, d^+_W,\, d^-_W,\, d^u_G,\, d^d_G\right)^T = A \left(b_1, \cdots, b_{12}\right)^T,
\quad
\left(b^u_Z,\, b^d_Z,\, b^u_W,\, b^d_W,\, b^+_{WZ},\, b^-_{WZ}\right)^T = B \left(b_{13}, \cdots, b_{18}\right)^T, \tag{D.1}
\]
with A and B constant matrices.
Sect. 3 identifies the d = 4 and d = 5 flavour-changing couplings. The main phenomenological impact of both the d = 4 and d = 5 operators is presented in Sect. 4. Finally, we conclude in Sect. 5. Technical details on the relation with the SILH Lagrangian [39] can be found in App. A; the gauge field equations of motion in the presence of flavour-changing contributions are described in App. B; the identification of the d ≥ 6 operators of the linear expansion which correspond to d = 5 operators of the non-linear one can be found in App. C; the relation between the d = 5 operator coefficients and the corresponding coefficients in the unitary basis is detailed in App. D.
Ref. [35] set limits on the coefficients of the operators O_1 − O_4 from the analysis of ΔF = 1 and ΔF = 2 observables. The inclusion of a light scalar h does not modify the bounds obtained there for the overall coefficients. In fact, the overall operator coefficients in Eq. (3.4) may differ from their Higgsless counterparts in Eqs. (3.3) only through a (negligible) loop contribution. With the inclusion of the light h field, the low-energy effective flavour Lagrangian induced by the SM and the O_1(h) − O_4(h) operators in Eq. (3.5) reads, in the unitary gauge (i.e. U(x) = 1) and up to d = 5 couplings,
X_7 and X_8 result from combinations of d = 6 and d = 8 siblings. Moreover, X_{9−18} have linear siblings of d = 8, except X_17 and X_18, which are combinations of d = 8 and d = 10 operators in the linear regime. The complete list of the linear siblings of the chiral d = 5 operators can be found in Appendix C.
This section first resumes and updates the bounds existing in the literature [35] on the coefficients of the flavour-changing d = 4 chiral expansion, and then discusses new bounds and other phenomenological considerations with and without a light Higgs:
- Loop-level impact of the fermionic d = 4 chiral operators (O_1 to O_4) on those same radiative decays;
- Tree-level bounds on the fermionic d = 5 chiral operators X_i, from radiative decays;
- Light Higgs to fermion couplings, from operators O_1(h) to O_4(h).
(3.7), and are severely constrained. Due to the MFV structure of the coefficients, sizeable flavour-changing effects may only be expected in the down-quark sector, with data on K and B transitions providing the strongest constraints on a^d_Z.
Figure 1: Results for the reference point (|V_ub|, γ) = (3.5 × 10⁻³, 66°). Left panel: in red the SM prediction and its 1σ theoretical error bands for ε_K and R_{BR/ΔM} for this reference point; in orange (green) the 1σ, 2σ and 3σ (from the darker to the lighter) experimental error ranges for ε_K (R_{BR/ΔM}); in blue the correlation between ε_K and R_{BR/ΔM} induced by NP contributions. Right panel: allowed values for a_W and a_CP upon the setup of the left panel. See Ref. [35] for further details.
Figure 3: Left panel: the curve depicts BR(B̄ → X_s γ) as a function of b^d_eff, while the horizontal bands are the experimentally excluded regions at 1σ, 2σ, and 3σ, from the lighter to the darker, respectively. Right panel: the corresponding 3σ-allowed b^d_F − b^d_G parameter space is depicted as two separate narrow bands.
where again the functions F_i(h) contain the dependence on (⟨h⟩ + h). In the present work, restrained to effective couplings of total dimension d ≤ 5, only terms linear in h should be retained in Eq. (3.4); for the same reason it is not pertinent to consider couplings containing ∂_µ h (that is, derivatives of F(h)). For ξ ≪ 1, the functions F_i(h) collapse into combinations of F(h) as defined in Eq. (2.4) for the linear regime.
Notice that in this low-energy expression for U(x), the scale associated to the eaten GBs is v and not f . Technically, the scale v appears through a redefinition of the GB fields so as to have canonically normalized kinetic terms.
Acknowledgements
ATLAS Collaboration, G. Aad et al., Observation of a New Particle in the Search for the Standard Model Higgs Boson with the ATLAS Detector at the LHC, Phys. Lett. B716 (2012) 1-29, [arXiv:1207.7214].
CMS Collaboration, S. Chatrchyan et al., Observation of a New Boson at a Mass of 125 GeV with the CMS Experiment at the LHC, Phys. Lett. B716 (2012) 30-61, [arXiv:1207.7235].
F. Englert and R. Brout, Broken Symmetry and the Mass of Gauge Vector Mesons, Phys. Rev. Lett. 13 (1964) 321-323.
P. W. Higgs, Broken Symmetries, Massless Particles and Gauge Fields, Phys. Lett. 12 (1964) 132-133.
P. W. Higgs, Broken Symmetries and the Masses of Gauge Bosons, Phys. Rev. Lett. 13 (1964) 508-509.
O. Eberhardt, G. Herbert, H. Lacker, A. Lenz, A. Menzel, et al., Impact of a Higgs Boson at a Mass of 126 GeV on the Standard Model with Three and Four Fermion Generations, arXiv:1209.1101.
M. Baak, M. Goebel, J. Haller, A. Hoecker, D. Kennedy, et al., The Electroweak Fit of the Standard Model after the Discovery of a New Boson at the LHC, arXiv:1209.2716.
L. Susskind, Dynamics of Spontaneous Symmetry Breaking in the Weinberg-Salam Theory, Phys. Rev. D20 (1979) 2619-2625.
S. Dimopoulos and L. Susskind, Mass without Scalars, Nucl. Phys. B155 (1979) 237-252.
S. Dimopoulos and J. Preskill, Massless Composites with Massive Constituents, Nucl. Phys. B199 (1982) 206.
A. Manohar and H. Georgi, Chiral Quarks and the Nonrelativistic Quark Model, Nucl. Phys. B234 (1984) 189.
D. B. Kaplan and H. Georgi, SU(2) × U(1) Breaking by Vacuum Misalignment, Phys. Lett. B136 (1984) 183.
D. B. Kaplan, H. Georgi, and S. Dimopoulos, Composite Higgs Scalars, Phys. Lett. B136 (1984) 187.
T. Banks, Constraints on SU(2) × U(1) Breaking by Vacuum Misalignment, Nucl. Phys. B243 (1984) 125.
H. Georgi, D. B. Kaplan, and P. Galison, Calculation of the Composite Higgs Mass, Phys. Lett. B143 (1984) 152.
H. Georgi and D. B. Kaplan, Composite Higgs and Custodial SU(2), Phys. Lett. B145 (1984) 216.
M. J. Dugan, H. Georgi, and D. B. Kaplan, Anatomy of a Composite Higgs Model, Nucl. Phys. B254 (1985) 299.
R. Contino, The Higgs as a Composite Nambu-Goldstone Boson, arXiv:1005.4269.
R. S. Chivukula and H. Georgi, Composite Technicolor Standard Model, Phys. Lett. B188 (1987) 99.
L. J. Hall and L. Randall, Weak Scale Effective Supersymmetry, Phys. Rev. Lett. 65 (1990) 2939-2942.
G. D'Ambrosio, G. F. Giudice, G. Isidori, and A. Strumia, Minimal Flavour Violation: an Effective Field Theory Approach, Nucl. Phys. B645 (2002) 155-187, [hep-ph/0207036].
Z. Lalak, S. Pokorski, and G. G. Ross, Beyond MFV in Family Symmetry Theories of Fermion Masses, JHEP 1008 (2010) 129, [arXiv:1006.2375].
A. L. Fitzpatrick, G. Perez, and L. Randall, Flavor anarchy in a Randall-Sundrum model with 5D minimal flavor violation and a low Kaluza-Klein scale, Phys. Rev. Lett. 100 (2008) 171604, [arXiv:0710.1869].
B. Grinstein, M. Redi, and G. Villadoro, Low Scale Flavor Gauge Symmetries, JHEP 11 (2010) 067, [arXiv:1009.2049].
A. J. Buras, M. V. Carlucci, L. Merlo, and E. Stamou, Phenomenology of a Gauged SU(3)^3 Flavour Model, JHEP 03 (2012) 088, [arXiv:1112.4477].
G. Isidori, Y. Nir, and G. Perez, Flavor Physics Constraints for Physics Beyond the Standard Model, Ann. Rev. Nucl. Part. Sci. 60 (2010) 355, [arXiv:1002.0900].
V. Cirigliano, B. Grinstein, G. Isidori, and M. B. Wise, Minimal flavor violation in the lepton sector, Nucl. Phys. B728 (2005) 121-134, [hep-ph/0507001].
S. Davidson and F. Palorini, Various Definitions of Minimal Flavour Violation for Leptons, Phys. Lett. B642 (2006) 72-80, [hep-ph/0607329].
A. L. Kagan, G. Perez, T. Volansky, and J. Zupan, General Minimal Flavor Violation, Phys. Rev. D80 (2009) 076002, [arXiv:0903.1794].
M. Gavela, T. Hambye, D. Hernandez, and P. Hernandez, Minimal Flavour Seesaw Models, JHEP 0909 (2009) 038, [arXiv:0906.1461].
T. Feldmann, M. Jung, and T. Mannel, Sequential Flavour Symmetry Breaking, Phys. Rev. D80 (2009) 033003, [arXiv:0906.1523].
R. Alonso, M. B. Gavela, L. Merlo, and S. Rigolin, On the Scalar Potential of Minimal Flavour Violation, JHEP 07 (2011) 012, [arXiv:1103.2915].
R. Alonso, G. Isidori, L. Merlo, L. A. Munoz, and E. Nardi, Minimal Flavour Violation Extensions of the Seesaw, JHEP 06 (2011) 037, [arXiv:1103.5461].
R. Alonso, M. Gavela, D. Hernandez, and L. Merlo, On the Potential of Leptonic Minimal Flavour Violation, Phys. Lett. B715 (2012) 194-198, [arXiv:1206.3167].
R. Alonso, M. Gavela, L. Merlo, S. Rigolin, and J. Yepes, Minimal Flavour Violation with Strong Higgs Dynamics, JHEP 1206 (2012) 076, [arXiv:1201.1511].
R. Alonso, M. Gavela, L. Merlo, S. Rigolin, and J. Yepes, The Effective Chiral Lagrangian for a Light Dynamical 'Higgs', Phys. Lett. B722 (2013) 330-335, [arXiv:1212.3305].
D. B. Kaplan, Flavor at SSC Energies: a New Mechanism for Dynamically Generated Fermion Masses, Nucl. Phys. B365 (1991) 259-278.
M. Redi and A. Weiler, Flavor and CP Invariant Composite Higgs Models, JHEP 11 (2011) 108, [arXiv:1106.6357].
G. F. Giudice, C. Grojean, A. Pomarol, and R. Rattazzi, The Strongly-Interacting Light Higgs, JHEP 06 (2007) 045, [hep-ph/0703164].
R. Contino, C. Grojean, M. Moretti, F. Piccinini, and R. Rattazzi, Strong Double Higgs Production at the LHC, JHEP 1005 (2010) 089, [arXiv:1002.1011].
A. Azatov, R. Contino, and J. Galloway, Model-Independent Bounds on a Light Higgs, JHEP 1204 (2012) 127, [arXiv:1202.3415].
B. Grinstein and M. Trott, A Higgs-Higgs Bound State Due to New Physics at a TeV, Phys. Rev. D76 (2007) 073002, [arXiv:0704.1505].
T. Appelquist and C. W. Bernard, Strongly Interacting Higgs Bosons, Phys. Rev. D22 (1980) 200.
A. C. Longhitano, Heavy Higgs Bosons in the Weinberg-Salam Model, Phys. Rev. D22 (1980) 1166.
A. C. Longhitano, Low-Energy Impact of a Heavy Higgs Boson Sector, Nucl. Phys. B188 (1981) 118.
F. Feruglio, The Chiral approach to the electroweak interactions, Int. J. Mod. Phys. A8 (1993) 4937-4972, [hep-ph/9301281].
T. Appelquist and G.-H. Wu, The Electroweak Chiral Lagrangian and New Precision Measurements, Phys. Rev. D48 (1993) 3235-3241, [hep-ph/9304240].
A. Azatov and J. Galloway, Electroweak Symmetry Breaking and the Higgs Boson: Confronting Theories at Colliders, Int. J. Mod. Phys. A28 (2013) 1330004, [arXiv:1212.1380].
K. Agashe, R. Contino, and A. Pomarol, The Minimal Composite Higgs Model, Nucl. Phys. B719 (2005) 165-187, [hep-ph/0412089].
R. Contino, L. Da Rold, and A. Pomarol, Light Custodians in Natural Composite Higgs Models, Phys. Rev. D75 (2007) 055014, [hep-ph/0612048].
B. Gripaios, A. Pomarol, F. Riva, and J. Serra, Beyond the Minimal Composite Higgs Model, JHEP 0904 (2009) 070, [arXiv:0902.1483].
E. Halyo, Technidilaton Or Higgs?, Mod. Phys. Lett. A8 (1993) 275-284.
W. D. Goldberger, B. Grinstein, and W. Skiba, Distinguishing the Higgs Boson from the Dilaton at the Large Hadron Collider, Phys. Rev. Lett. 100 (2008) 111802, [arXiv:0708.1463].
L. Vecchi, Phenomenology of a Light Scalar: the Dilaton, Phys. Rev. D82 (2010) 076009, [arXiv:1002.1721].
B. A. Campbell, J. Ellis, and K. A. Olive, Phenomenology and Cosmology of an Electroweak Pseudo-Dilaton and Electroweak Baryons, JHEP 1203 (2012) 026, [arXiv:1111.4495].
S. Matsuzaki and K. Yamawaki, Is 125 GeV Techni-Dilaton Found at LHC?, arXiv:1207.5911.
Z. Chacko, R. Franceschini, and R. K. Mishra, Resonance at 125 GeV: Higgs Or Dilaton/Radion?, arXiv:1209.3259.
B. Bellazzini, C. Csaki, J. Hubisz, J. Serra, and J. Terning, A Higgslike Dilaton, arXiv:1209.3299.
G. Panico, M. Redi, A. Tesi, and A. Wulzer, On the Tuning and the Mass of the Composite Higgs, arXiv:1210.7114.
T. Appelquist, M. J. Bowick, E. Cohler, and A. I. Hauser, The Breaking of Isospin Symmetry in Theories with a Dynamical Higgs Mechanism, Phys. Rev. D31 (1985) 1676.
G. Cvetič and R. Kogerler, Fermionic Couplings in an Electroweak Theory with Nonlinear Spontaneous Symmetry Breaking, Nucl. Phys. B328 (1989) 342.
D. Espriu and J. Manzano, CP Violation and Family Mixing in the Effective Electroweak Lagrangian, Phys. Rev. D63 (2001) 073008, [hep-ph/0011036].
J. Laiho, E. Lunghi, and R. S. Van de Water, Lattice QCD Inputs to the CKM Unitarity Triangle Analysis, Phys. Rev. D81 (2010) 034503, [arXiv:0910.2928]. Updates available on http://latticeaverages.org/.
J. Brod and M. Gorbahn, Next-to-Next-to-Leading-Order Charm-Quark Contribution to the CP Violation Parameter ε_K and ΔM_K, Phys. Rev. Lett. 108 (2012) 121801, [arXiv:1108.2036].
LHCb Collaboration, R. Aaij et al., Measurement of the CP-Violating Phase φ_s in the Decay B_s^0 → J/ψφ, Phys. Rev. Lett. 108 (2012) 101803, [arXiv:1112.3183].
BaBar Collaboration, J. Lees et al., Measurement of B(B → X_s γ), the B → X_s γ Photon Energy Spectrum, and the Direct CP Asymmetry in B → X_{s+d} γ Decays, Phys. Rev. D86 (2012) 112008, [arXiv:1207.5772].
P. Gambino and M. Misiak, Quark Mass Effects in B̄ → X_s γ, Nucl. Phys. B611 (2001) 338-366, [hep-ph/0104034].
M. Misiak et al., The First Estimate of BR(B̄ → X_s γ) at O(α_s²), Phys. Rev. Lett. 98 (2007) 022002, [hep-ph/0609232].
M. Misiak and M. Steinhauser, NNLO QCD Corrections to the B̄ → X_s γ Matrix Elements Using Interpolation in m_c, Nucl. Phys. B764 (2007) 62-82, [hep-ph/0609241].
G. Buchalla, A. J. Buras, and M. E. Lautenbacher, Weak Decays Beyond Leading Logarithms, Rev. Mod. Phys. 68 (1996) 1125-1144, [hep-ph/9512380].
T. Inami and C. S. Lim, Effects of Superheavy Quarks and Leptons in Low-Energy Weak Processes K_L → µµ̄, K⁺ → π⁺νν̄ and K⁰ ↔ K̄⁰, Prog. Theor. Phys. 65 (1981) 297.
A. J. Buras, L. Merlo, and E. Stamou, The Impact of Flavour Changing Neutral Gauge Bosons on B̄ → X_s γ, JHEP 08 (2011) 124, [arXiv:1105.5146].
| []
|
[
"Refining Vision Videos",
"Refining Vision Videos"
]
| [
"Kurt Schneider [email protected] \nSoftware Engineering Group\n\n",
"Melanie Busch [email protected] \nSoftware Engineering Group\n\n",
"Oliver Karras [email protected] \nSoftware Engineering Group\n\n",
"Maximilian Schrapel [email protected] \nHuman-Computer Interaction Group\nLeibniz Universität Hannover\nWelfengarten 130167HannoverGermany\n",
"Michael Rohs [email protected] \nHuman-Computer Interaction Group\nLeibniz Universität Hannover\nWelfengarten 130167HannoverGermany\n"
]
| [
"Software Engineering Group\n",
"Software Engineering Group\n",
"Software Engineering Group\n",
"Human-Computer Interaction Group\nLeibniz Universität Hannover\nWelfengarten 130167HannoverGermany",
"Human-Computer Interaction Group\nLeibniz Universität Hannover\nWelfengarten 130167HannoverGermany"
]
| []
| [Context and motivation] Complex software-based systems involve several stakeholders, their activities and interactions with the system. Vision videos are used during the early phases of a project to complement textual representations. They visualize previously abstract visions of the product and its use. By creating, elaborating, and discussing vision videos, stakeholders and developers gain an improved shared understanding of how those abstract visions could translate into concrete scenarios and requirements to which individuals can relate. [Question/problem] In this paper, we investigate two aspects of refining vision videos: (1) Refining the vision by providing alternative answers to previously open issues about the system to be built. (2) A refined understanding of the camera perspective in vision videos. The impact of using a subjective (or "ego") perspective is compared to the usual third-person perspective. [Methodology] We use shopping in rural areas as a real-world application domain for refining vision videos. Both aspects of refining vision videos were investigated in an experiment with 20 participants. [Contribution] Subjects made a significant number of additional contributions when they had received not only video or text but also both, even with very short text and short video clips. Subjective video elements were rated as positive. However, there was no significant preference for either subjective or non-subjective videos in general. | 10.1007/978-3-030-15538-4_10 | [
"https://arxiv.org/pdf/1901.06677v1.pdf"
]
| 58,981,491 | 1901.06677 | 3a0c1f26526036ca6980512704b1f065434c643f |
Refining Vision Videos
Kurt Schneider [email protected]
Software Engineering Group
Melanie Busch [email protected]
Software Engineering Group
Oliver Karras [email protected]
Software Engineering Group
Maximilian Schrapel [email protected]
Human-Computer Interaction Group
Leibniz Universität Hannover
Welfengarten 1, 30167 Hannover, Germany
Michael Rohs [email protected]
Human-Computer Interaction Group
Leibniz Universität Hannover
Welfengarten 1, 30167 Hannover, Germany
Refining Vision Videos
Vision Video, Refinement, Camera-Perspective, Experiment
[Context and motivation] Complex software-based systems involve several stakeholders, their activities and interactions with the system. Vision videos are used during the early phases of a project to complement textual representations. They visualize previously abstract visions of the product and its use. By creating, elaborating, and discussing vision videos, stakeholders and developers gain an improved shared understanding of how those abstract visions could translate into concrete scenarios and requirements to which individuals can relate. [Question/problem] In this paper, we investigate two aspects of refining vision videos: (1) Refining the vision by providing alternative answers to previously open issues about the system to be built. (2) A refined understanding of the camera perspective in vision videos. The impact of using a subjective (or "ego") perspective is compared to the usual third-person perspective. [Methodology] We use shopping in rural areas as a real-world application domain for refining vision videos. Both aspects of refining vision videos were investigated in an experiment with 20 participants. [Contribution] Subjects made a significant number of additional contributions when they had received not only video or text but also both, even with very short text and short video clips. Subjective video elements were rated as positive. However, there was no significant preference for either subjective or non-subjective videos in general.
1 Introduction: Shared Understanding and Vision Videos in RE

When a complex technical or socio-technical system is being conceived, overall visions are developed before software requirements can be specified. In development processes like the V-model (www.iabg.de), system requirements and system design precede software requirements. Changes in business processes, complex interactions, or societal change call for stakeholder participation and discourse. However, it is often difficult to convey the concepts and visions to diverse stakeholders [10]. Due to the large number of available options, building software prototypes for all of them is impossible. Details of their scope and impact are initially unclear.
One of the main challenges in requirements engineering (RE) is to create a shared understanding of the future system among developers and different stakeholder groups [14]. Minutes of stakeholder meetings are usually limited to only one facet of various points of view and a shared vision [12]. Several researchers [8,11,19] proposed applying videos in RE due to their communication richness and effectiveness [4]. For example, Brill et al. [6] demonstrate the benefits of using ad-hoc videos compared to textual use cases in order to clarify requirements with stakeholders.
In RE, videos of human-computer interaction were used to document system context [15], product vision [6,23], or scenarios [22,28,29]. They were used as input to a requirements workshop [7], for analyzing usability [22], or for complementing specifications [8,20]. Fricker et al. [12] proposed to record stakeholder meetings on video as a source of authentic requirements. Many approaches use videos but do not report details about how to produce them [6,8,18,28]. This lack of guidance could be a reason why videos are not yet an established RE documentation practice [17,21]. In the process of eliciting, refining, and validating requirements with video, we investigated two aspects that may contribute to the benefit of videos: (1) Refining visions by presenting alternatives and (2) refining the camera perspective for better emotional involvement. Refining vision videos can empower elicitation and validation. Creighton et al. [8] proposed a high-tech approach to videos in RE. We follow a different line of research.
Affordable Video Approach: While high-end marketing videos obviously help to convince people, we target affordable videos that assist in elicitation and validation of requirements and visions. Hence, creating and refining videos should be affordable with respect to effort, time, and resources. We envision a video-based approach for ambitious requirements engineers in ordinary software development teams. This paper is structured as follows: Section 2 introduces the example application as a background. In Section 3, we describe the concepts of vision videos in RE and of refining them in particular. Related work is presented in Section 4, before we outline the experiment design (Section 5) and report about results (Section 6). Discussion and threats to validity (Section 7) lead to the conclusion (Section 8).
2 Application Example: Shopping in Rural Areas
According to Schneider et al. [25, p. 1], "spatial planning problems are characterized by large and heterogeneous groups of stakeholders, such as municipalities, companies, interest groups, women and men, young people and children". Challenges in spatial planning include shrinking population in rural areas. Mobility options are discussed, and shopping opportunities are related to mobility: How can inhabitants of villages and peripheral areas get access to medical services; how can they buy food and daily supplies if grocery stores close down, and public transportation is missing? Traditionally, neighborhood help or a grocery bus initiative will be discussed in meetings with citizens. Such meetings are difficult to schedule and conduct, and they usually reach only a small portion of citizens and stakeholders.
CrowdRE stands for technical approaches to support participation of crowds (of stakeholders, citizens, etc.) in requirements engineering. In [25], we proposed to extend the approach beyond RE towards participating in public discourse. The example application chosen for this paper is a sub-aspect of mobility in rural areas. Shopping without shops seems to call for some kind of ordering online and requires an adequate way of delivering the ordered goods. All variants of ordering and delivery require internet access and sophisticated coordination, which must be provided by software. Long before software can be specified, however, stakeholders should get to see the vision and the variants associated with different proposals.
Shopping in rural areas is a real-world application domain of growing importance. This topic has caught public attention and is discussed in newspapers [1]. Findings from the experiment, therefore, apply to the rural context -and may be applicable to other domains with similar challenges. This is, however, beyond the scope of this paper.
3 Concepts to Improve the Use of Vision Videos
As outlined above, vision videos are a good representation for communicating what is proposed, and how it would feel to use it. Following our Affordable Video Approach, we intend to solicit feedback, questions, and even objections by affordable self-made video clips in order to start effective discourse early.

Refinement Process: Stakeholders should be able to participate in the process of comparing alternatives, selecting, and refining options. As refinement progresses, the discussion with all its proposals and questions and rationale will change its nature: from imagining a vision over defining system alternatives to finally narrowing down on software requirements. Requirements are derived by refining visions.
Emotion: Emotional reactions need to be taken seriously. For example, one variant of delivery is frequently discussed in the media: A parcel service deposits parcels in the trunk of their recipients. This asynchronous delivery to a personal space sounds attractive to many. However, when they see how someone opens a trunk with personal items in it, their emotional reaction is sometimes less positive. Video is always concrete. A video confronts stakeholders with possible scenarios that should be considered. Similar to other prototypes, validation of assumptions and elicitation of unexpected reactions merge when watching vision videos.
Definition and Investigated Scenarios: The experiment assumes there is a discussion on shopping in a rural area, as described above. At this point, "some kind of internet delivery" is proposed.
Definition: By the term "vision video refinement", we refer to the process of replacing gaps, abstract, or vague parts of a vision video by more concrete or detailed video clips (i.e. short parts of the video).
This definition of vision video refinement expands into three scenarios:
1. Open Question: As long as no proposal has been elaborated, vision videos can show the problem; stakeholders are then asked for their suggestions.
2. Closed Choice: Discussion moderators or requirements engineers identify a small number of pre-selected options. They create vision videos to visualize those options in an affordable way. Those videos are distributed and shown to stakeholders, asking them for feedback, such as advantages and disadvantages, newly arising questions, concerns, and decisions with rationale.
3. Refined Video: After all open questions have been addressed, selected refinements are embedded in the overall video. Gaps and vague parts have been replaced by selected video clips. The resulting refined vision video can be distributed, shown at town hall meetings, or further refined in social media discussions.
The experiment below covers scenarios (1) and (2): preparing open questions, and selecting from different variants. Scenario (3) was not included in this experiment since it follows the same pattern on the next refinement level. We decided to show all alternatives (A-B-C and 1-2-3) in one video, one after the other (see Fig. 2). Vision videos should not be longer than a few minutes.
Camera Perspectives
Emotional involvement and stimulation of empathy are considered strengths of video [17]. When stakeholders can literally see what an intended solution would mean for them, they are enabled to judge alternative proposals and to participate effectively in the decision-making process. Stakeholder groups face different challenges and may hold different values. Stakeholders should be represented adequately in a video to improve empathy, e.g. by actors of their age group and by subjects of an experiment. Inspired by research in the HCI community [2,13], a subjective camera perspective may also emphasize identification of stakeholders with actors while watching a video. We illustrate and define core terms for the remainder of this paper.
Definition: Subjective Camera Perspective
In the subjective (also "first-person" or "ego") perspective, a video shows the scene from the perspective of a particular actor. The video seems to be recorded through the eyes of that actor. Audio reflects what the actor hears in that situation.

Definition: Third-Person Perspective
The situation and scenario are recorded from an outside point of view. Camera and microphone do not appear (or pretend) to be close to the eyes and ears of an actor.
4 Related Work
Vision Videos for RE: A vision is a positive imagination of the future. It can refer to the capabilities, features, or quality aspects of a part of reality that does not yet exist, but can be imagined. Video is the format in which the vision (content) is presented. Thus, a vision video of a software-based system typically shows a problem, an envisioned solution, and its impact, pretending the solution already exists. According to this definition, the work by Brill et al. [6] investigated a situation in which one group of subjects created a textual use case specification while a second group prepared a vision video during the same short period of time. Complementary advantages were found. While there was intentional time pressure and inexpensive equipment used in this case, Creighton et al. [8] produced high-end vision videos in cooperation with Siemens and overlaid them visually with UML diagrams. Xu et al. [27] followed this line of research by starting with videos (pixels) and then replacing parts of them with operational software prototypes (bytes). This work demonstrated that visions could successfully be fed into software development activities. In our own work, Karras and Schneider [21] propose developing a quality model for videos that can be used by requirements engineers to produce "good-enough" vision videos. Today, smartphone cameras are of sufficient quality to produce useful vision videos [21]. Practitioners need only a few hints to produce technically sufficient and effective vision videos for eliciting requirements. Pham et al. [23] explored ways of arranging short videos on a custom-made video editor that associated the clips and their arrangement with semantic annotations. Vision videos have been created to promote a societal vision, as in the work of Darby et al. [9] on design fiction: A vision video shows a consultation session of a nurse with a patient. The visionary aspect is a tool that is able to read and interpret body sensors. This tool does not yet exist, but the video pretends it does. The video serves as a visual prototype of the software, its use and context long before even detailed specifications are known. Brill et al. [6] had used videos for the same purpose in our research group. This paper addresses the capability of videos for a discussion process of refinement and discourse rather than for promotional purposes.
Camera Perspective: Galinsky et al. [13, p. 110] show how perspective-taking, i.e. "the process of imagining the world from another's vantage point or imagining oneself in another's shoes," decreases stereotyping of others and facilitates social coordination. Aitamurto et al. [2] suspect that the sense of presence may be positively correlated with emotional engagement, empathy, and attitude change as viewers embed themselves in the perspectives of others. The authors suspect that view switching may support taking different perspectives and lead to a better understanding of the perspectives of the different characters, e.g. if the video is filmed in first-person view. Akkil and Isokoski [3] visualized the actor's gaze point in an egocentric video and show that this improves the viewers' awareness of the actor's emotions. Kallinen et al. [16] compared first-and third-person perspectives in computer games and found higher presence for first-person perspective. The concept of embodiment in VR refers to the users' experience that the virtual body is perceived as their own. It has been shown that first-person VR environments can create this illusion [26]. This paper analyzes the impact of the subjective perspective in vision videos to refine guidelines for making good vision videos.
5 Experiment Design
We used the Goal-Question-Metric Paradigm [5] to formulate goals, hypotheses, questions, and metrics of the experiment.
Goals of Refining Vision Videos
We want to apply vision videos for stimulating discussions on open questions.
Main Improvement Goals: (1) We want to support the process of making choices by refining a vision into more detailed and concrete scenarios. (2) As a separate measurement goal, we want to explore the impact of a subjective camera perspective.
Goal 1 can be rephrased into GQM format: (Purpose) Analyze and compare (Quality Aspect) number of (new) contributions (Object) in feedback (Perspective) from young adults. Various combinations of text and video are compared, as specified below.
Research Questions: In particular, we are interested in the benefit of providing a second medium. With respect to the GQM goal statement, we investigate whether new contributions can be raised ("stimulated") by video and text, respectively. The camera perspective is directly related to Goal 2 above.
RQ1:
Can adding videos stimulate discussion better than text alone?
RQ2:
Can adding text stimulate discussions better than video alone?
RQ3: Does a subjective camera perspective in refined vision videos help to empathize with the actor representing a stakeholder?
Video Set-Up and Experiment Procedure
The chosen study design leads to a simple and uniform process of conducting subject sessions. We describe here how the procedure unfolds, and explain our rationale with respect to answering the research questions while considering threats to validity.
Approach to Refining a Vision Video: In a live or online discussion on rural shopping, discussions led to identifying ordering and delivery as two crucial open issues. Each subject chooses one refinement A-B-C for ordering, and one refinement 1-2-3 for delivery. Offered options were: (A) Ordering by taking a picture, (B) using a Dash Button, and (C) a self-ordering sensitive box. Delivery was offered (1) through neighbor-pickup, (2) drones, and (3) deposit in the trunk of a parked car. We used individual sessions for each subject. They saw the videos and texts on a laptop. On the side, they completed the paper questionnaire. Q1 to Q8 are the feedback "object" of Goal 1. In the experiment, we followed the procedure depicted in Fig. 2. We provided a scenario of buying groceries with two open issues (halt points): (Issue 1) "How can groceries be ordered?" and (2) "How are they delivered?" Subjects completed a questionnaire with eight parts Q1..Q8: Triangles in Fig. 2 indicate what parts of the questionnaire were answered when. For example, Q1 asks for ideas for rural shopping after reading the intro text. Q2 was completed after an introductory video was shown.
There are two groups of subjects in the experiment (Fig. 2). Group 1 saw subjective style videos first (for ordering), and then third person videos (delivery). Group 2 started with third-person videos (ordering), and then saw subjective videos for delivery. This cross design is supposed to mitigate learning effects while at the same time exposing every subject to both camera perspectives. It is important to note that the presented alternatives (refinements A-B-C and 1-2-3) of ordering and delivery must be shown in the same order to all subjects: They are part of the stimulus that must be kept unchanged in order to produce comparable results.
Hypotheses
In the first block of hypotheses, we investigate subjective aspects of the research questions which are devoted preference. The second block of hypotheses investigates the performance in terms of the number of contributions. In particular, we investigate the following alternative hypotheses which represent our assumptions. Null hypotheses are reported in the result section (Section 6) together with the results they refer to.
Preference: What do subjects like?
Preference is measured by directly asking subjects about their opinion. In the experiment, such a rating is collected after several steps of the experiment. H11: Subjects prefer obtaining a video in addition to a text to getting only the text. H21: Subjects prefer obtaining a text in addition to a video to getting only the video.
H31: There is a difference between the Group 1 and Group 2 in how much they like the subjective perspective.
H41: Subjects' preference differs between the subjective or third-person perspective.
Performance: Added contributions to RE and shared understanding Performance is measured by counting contributions (GQM "quality aspect"). In the context of RE, we consider new ideas, new questions, requirements, and rationale as "contributions" for improving shared understanding. We did not count meaningless and repetitive contributions. The quality of contributions was not rated or evaluated. In this context, the term "idea" refers to a contribution about a new form of ordering or delivery.
When information is first represented as text and then a video is added, the benefit of that video is measured in terms of the number of new ideas and new contributions (see above) compared to the ideas respectively the contributions made after seeing only text before. In the inverse case, a video is presented first, and then a text is added: H51: Providing a video in addition to a text leads to new solution ideas. H61: Providing a video in addition to a text leads to new contributions.
H71: Providing a text in addition to a video leads to new contributions.
Emotional effect of the camera perspective: Which of the two perspectives has a greater emotional potential?
Emotional effect of the camera perspective is measured by directly asking subjects about their opinion. In the experiment, such a rating is collected after subjects saw both types of videos, i.e. in subjective and in third-person perspective.
H81: There is a difference in the subjects' perceived emotional involvement of between Group 1 and Group 2.
Selection of Actors, Subjects, and the Affordable Video Approach
There are obviously various age groups of stakeholders affected: Seniors with limited mobility, but also young people on the verge of leaving the village. Seniors and young adults will probably react differently to variants, and they will evaluate them from a different perspective. This important fact is obvious in videos. For this experiment, we focused on the group of young residents. A young actor in a room with modern furniture and big-screen TV has more appeal for empathy to young experiment subjects than a senior in a traditional living room -and vice versa. We collected data (ratings, evaluations, and contributions) from 20 subjects, aged between 20 and 33 years ( = 25.2 years). Seven were women, 13 men. We randomly asked members of the intended age group to participate, e.g. during an Open-Door day at the university. Nineteen of them use online shopping, but only eight of them had bought articles of daily use online.
According to our Affordable Video Approach, all video clips together were recorded within 3:15 hours of a single day. They were cut using ordinary video editing software within another half day. Video equipment consisted of a standard video camera (300 €) with Rode microphone (330 €) attached, since we found comprehensible audio important in earlier work [25]. Subjective video clips were recorded using a mobile phone camera mounted on a Gimbal (180 €) for the subjective video parts. Mobile phone cameras also would have been sufficient. All four lay actors and video personnel were members of the research group with no advanced video background or training.
The texts for introduction, ordering, and delivery variants are typically read by subjects in silence (32 s for intro, 29 s ordering, and 35 s delivery). Subjective videos on ordering run for 60 s (all three variants together), and 68 s in normal camera perspective. Delivery is more complex and includes interaction beyond the (first person) actor. Delivery variants run for a total of 155 s (subjective) and 150 s (third-person).
Experiment Results
For evaluating the alternative hypotheses in 5.3, we state corresponding null hypotheses. We provide additional descriptive analysis of ratings, evaluations, and subject opinions as boxplots. Results are clustered in the same three above-mentioned categories: Preference, performance, and emotional effect of the camera perspective.
Preference
H10: Subjects' preference does not differ between obtaining a video in addition to a text and getting only the text.
Subjects had first received a text describing the ordering options and then an additional video illustrating the same ordering options. After watching the video, we asked whether they preferred having the video in addition to the text, or only the text (see Fig. 2, Q4). According to a chi-square test of independence ( = 1.05, = .
3), there is no difference between the two groups. Thus, we could aggregate the data for analysis. Since we had nominal data, we performed a chi-square goodness-of-fit test with a significance level = .05. Corresponding to H10, one would expect a 0.5/0.5 distribution of the stakeholders' preference. We found significant deviation from the hypothetical distribution ( = 12.8, = .0003). We can reject H10 and accept H11. Subjects prefer obtaining a video in addition to text rather than having only the text. H20: Subjects' preference does not differ between obtaining a text in addition to a video and getting only the video.
Subjects had first received a video illustrating the delivery options and then an additional text describing the same delivery options. After reading the text, we asked whether they preferred having the text in addition to the video, or only the video (see Fig. 2, Q6). We performed a chi-square test of independence ( = 1.25, = .26), which indicates no difference between the two groups. Since there is no difference between the groups, we aggregated the nominal data. We found a significant deviation from this distribution ( = 7.2, = .007). Thus, we can reject 2 and conclude: Subjects prefer obtaining a text in addition to a video rather than having only the video.
H30: There is no difference between Group 1 and Group 2 in how much they like the subjective perspective.
At the end of the experiment, the subjects assessed the statement: "I liked the egoperspective." on a Likert-scale from 0 (totally disagree) to 5 (totally agree) (see Fig. 2, Q8). According to Kolmogorv-Smirnov ( = .19, = .07) and Shapiro-Wilk tests ( = .9, = .05), the data is normally distributed. Next, we performed a Mann-Whitney U test. The test indicated that the rating of Group 1 ( = 4) for the subjective perspective was significantly higher than for Group 2 ( = 2.5), = 2.35, = .02. Thus, we can reject H30. There is a difference between Group 1 and Group 2 in how much they like the subjective perspective.
H40: Subjects consider both subjective and third-person perspectives equally good.
We asked subjects if they preferred subjective or third-person perspective (see Fig. 2, Q8). According to the chi-square independence test ( ² = 1.14, = .56), there is no difference between the two groups and we can aggregate the nominal data. We applied a chi-square goodness-of-fit test ( = .05). According to the 4 , there would be a .5/.5 distribution. We found no significant deviation from the hypothesized distribution ( = 1.125, = .29). We cannot reject H40. There is no significant difference between the subjects' preference for one of the two perspectives.
We asked how emotionally involved subjects were after seeing the second medium (video after text / text after video). Fig. 3 (left) shows the high ratings on a 0 to 5 Likert scale. In all three cases (introduction, ordering, delivery) the emotional involvement was higher after receiving the second medium. All videos received very high ratings ( = 4); the stimulated emotional involvement. With text, values are a little lower. Performance The performance is measured in number of contributions after text or videos were provided. Fig. 3 (right) shows the three parts: introduction, ordering, and delivery. Boxplots on the left of each pair show the number of contributions made after the first element was provided; the right-hand boxplots show additional contributions solicited after the second element was provided. Light boxes stand for text, darker ones for video.
H50: Providing a video in addition to a text does not lead to new solution ideas.
Subjects had first received text and were asked to write down solution ideas (see Fig. 2, Q1). After participants had received the video, we asked if they had any additional Fig. 2, Q2). According to Kolmogorov-Smirnov ( = .34, < .001) and Shapiro-Wilk ( = .81, = .001) tests, the number of solution ideas are not normally distributed. We investigated whether the two groups differ from each other by using Mann-Whitney U test: = .53, = .60. Since we did not find a difference between the two groups, we aggregated the data and performed the non-parametric one-sample Wilcoxon Signed-Rank test. The test showed a significant number of additional, not yet mentioned, solution ideas by the stakeholders ( = −3.62, < .001). H50 is rejected: Providing a video in addition to text leads to new solution ideas.
H60: Providing a video in addition to a text does not lead to new contributions
After the subjects read the text of the ordering options we asked them to select one option and to write down their rationale, requirements, and questions (Fig. 2, Q3). Afterwards the participants received a video of the ordering options and we asked them for further requirements-related contributions (Fig. 2, Q4). We investigated the collected number of contributions for normal distribution with Kolmogorov-Smirnov test and Shapiro-Wilk test. Both tests indicated that the data is not normally distributed ( = .21, = .02, = .89, = .02). There is no difference between the groups by means of a Mann-Whitney U test: = .91, = .36. We analyzed all data together by using the one-sample Wilcoxon Signed-Rank test. This test yields a significant difference, i.e. a significant number of new contributions ( = −3.62, = .0002). H60 is rejected: Providing a video in addition to a text leads to new contributions.
H70: Providing a text in addition to a video does not lead to new contributions.
For the delivery options, subjects saw the video first and we asked them to select one option. Based on their choice, we asked them to write down their rationale, requirements, and questions (Fig. 2, Q5). Then they read the text describing the delivery options and we asked them for further requirements-related contributions (Fig. 2, Q6). The statistical analysis follows the same procedure: The Kolmogorov-Smirnov test ( = .27, < .001) and Shapiro-Wilk test ( = .74, < .001) showed that the data is not normally distributed. There was no difference between the groups in a Mann-Whitney U test: = .76, = .45. We analyzed all data together by using the nonparametric one-sample Wilcoxon Signed-Rank. The test yields a significant number of additional contributions after the subjects read the text ( = −3.06, = .001). H70 is rejected. Providing a text in addition to a video leads to new contributions.
Emotional effect of the camera perspective H80: There is no difference in the subjects' perceived emotional involvement between Group 1 and Group 2.
Subjects first received the text of the variants and then the video. Afterwards, we asked subjects to indicate their emotional involvement by assessing the statement "I was more emotionally involved in the problem due to the video." on a Likert-scale from 0 (totally disagree) to 5 (totally agree) (see Fig. 2, Q4). While the Kolmogorov-Smirnov test ( = .20, = .07) indicated that the data is normally distributed, the Shapiro-Wilk test found the data to be not normally distributed ( = .88, = .03). Due to this discrepancy, we used a non-parametric Mann-Whitney U test. It showed no difference between the group that watched a subjective video ( = 4) and the group that watched a third-person video ( = 3), = .44, = .66. We cannot reject H80 and conclude: There seems to be no difference between Group1 and Group 2 in the perceived emotional involvement of subjects.
Evaluations and Subject Opinions
Finally, we asked subjects in Q7 for more detailed feedback (Fig. 4) after all texts and videos had been presented. Most subjects also found videos provided important information (d). The ratings for "video conveys atmosphere" (e) were even higher, but there were also a few low ratings. In (f), a large majority considered video quality sufficient, despite the Affordable Video Approach. Most disagreed with "videos were obsolete" (g) -not an obvious result, given the short runtime of all videos and the fact that there were also texts available.
Interpretation and Discussion
We investigated whether adding videos to previously available texts would solicit additional contributions for discourse. In Q6, we also measured the inverse situation: Adding text to previously shown videos. In real decision situations about rural areas, most stakeholders would read brief texts (in the newspaper [1], or online) before they decide to watch a short video about it. The results confirm the usefulness of enriching early discussions about visions and requirements with both text and video. Preference and evaluation were very positive, and a number of statistically significant results confirm that adding either video or text (to the other) stimulated more contributions.
Threats to validity
The experiment design presented in Sec. 5 is rather sophisticated. It reflects the complexity of evaluating the role of vision videos in refining visions towards requirements. The real-world application domain is of substantial complexity. Hence, a number of specific threats to validity must be considered.
Internal validity: Causality. Possible influences on the outcome are numerous, due to the high complexity. For the experiment, we used texts and videos that were created independently. Neither did one build on the other, nor was there a competition to outperform each other. The mission was to explain the introduction and refinement options concisely and self-sufficiently. Some subjects might have felt pressed to provide more contributions when they were shown an extra video or text. However, we checked whether those new contributions were original or repetitive and counted only new ones. There were several cases in which a question did not solicit any additional responses.
External validity: Can results be generalized? Despite the above-mentioned precautions, our findings cannot be generalized directly to every kind of text and every type of video. Texts and videos can be useful or difficult to understand and annoyingintentionally or by accident. There are so many types and styles of video (and text) that one should take even our significant findings with a grain of salt.
Construct validity: Adequate concepts? As explained in Sec. 5, we counted new questions, new reasons to choose or reject a variant as contributions to the discourse as RE contributions. In the area of RE, a good question can be valuable [14] for clarifying visions and requirements. The results and findings should be read with this definition in mind. Conceptualizations of "contribution" that deviate substantially from this definition may not be covered by our experiment. The treatments in our experiment was adding a second medium (video/text). We analyzed the effect of getting that treatment by comparing contributions before and after receiving the second medium.
Conclusion validity: Adequate conclusions? The positive impact of adding video or text could be a case of "paraphrasing", presenting a situation from different angles. It is possible and likely that adding other media could have had similar effects. We wanted to investigate whether low-effort video with its own advantages was also qualified as a useful second medium. Our results confirm the benefit of taking the extra effort of providing a second medium. Please note that providing "both video and text at a time" may seem a similar and attractive option, but poses yet another threat to validity: Its impact will depend highly on how the media are presented: if and in which order subjects look at them. This aspect was beyond the scope of our study.
Deciding about rural shopping is an integral part of much wider concerns. There are so many parameters and influence factors that cannot -and should not -be controlled in order not to distort the phenomenon of interest. We decided to study the very basic mechanisms of refining a vague vision of shopping into several variants by video. The technical choice of a camera perspective is related. Those mechanisms are the basis for more complex interactions of vision and requirements communication and clarification.
Conclusions
We had expected to stimulate additional questions and ideas by showing videoswhere usually only a few short texts would be provided. However, we did not expect the number of additional contributions stimulated by the videos (Fig. 3), and the very positive evaluation of videos in hindsight (Fig. 4). In the introduction to the experiment, exactly the same text was provided to read -and then to hear in the video. Nevertheless, almost all subjects recommended showing the video in addition to the text in the future. We had included the inverse direction (text after video) in the study as a matter of curiosity. Given the less than 10-line texts and 20-second video clips describing an ordering refinement, we had not expected performance and preference indicators to be as clear as they were: Provide both media.
Subjective camera perspective seemed to be a matter of taste. There was no significant performance advantage over third-person videos, nor was the empathy rating higher. Some subjects preferred subjective over third-person perspective -and vice versa. According to Runeson et al. [24], case studies are appropriate for investigating phenomena that require complex subsets of reality to occur. Based on this established baseline of experimental insights, we plan to triangulate our findings in case studies.
Fig. 1 .
1Examples of subjective (top) and corresponding third-person perspective (bottom) from the experiment videos. Variant IDs are displayed temporarily (e.g. "Variante A").
Fig. 2 :
2Video structure and experiment design, with Questionnaire parts Q1..Q8
Fig. 3 .
3Emotional involvement at Q2, Q4, Q6; No. of contributions at Q1/2; Q3/4; Q5/6
Fig. 4 .
4Subjects' detailed evaluation results. (g) means: "videos were not obsolete" As Fig. 4 indicates, both text (a) and video (b) were considered important for making decisions about the refinement variants. Most subjects liked the videos (c).
Federal mail will sell bread. In rural areas, shopping gets increasing difficult -now, the postman could sell groceries on the doorstep. Zeitung Hannoversche Allgemeine, 15.9original in GermanHannoversche Allgemeine Zeitung (15.9.2018). "Federal mail will sell bread. In rural areas, shopping gets increasing difficult -now, the postman could sell groceries on the doorstep" (original in German) (2018)
Sense of Presence, Attitude Change, Perspective-taking and Usability in First-Person Split-Sphere 360° Video. T Aitamurto, S Zhou, S Sakshuwong, J Saldivar, Y Sadeghi, A Tran, Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. the 2018 CHI Conference on Human Factors in Computing SystemsAitamurto, T., Zhou, S., Sakshuwong, S., Saldivar, J., Sadeghi, Y., Tran, A.: Sense of Presence, Attitude Change, Perspective-taking and Usability in First-Person Split-Sphere 360° Video. In: Proceedings of the 2018 CHI Conference on Human Factors in Compu- ting Systems (2018)
Gaze Augmentation in Egocentric Video Improves Awareness of Intention. D Akkil, P Isokoski, Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. the 2016 CHI Conference on Human Factors in Computing SystemsAkkil, D., Isokoski, P.: Gaze Augmentation in Egocentric Video Improves Awareness of Intention. In: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (2016)
Agile Modeling. S Ambler, John WileyAmbler S.: Agile Modeling. John Wiley, (2002)
The Goal Question Metric Approach. V R Basili, G Caldiera, H D Rombach, Encyclopedia of Software Engineering. WileyBasili, V. R., Caldiera, G., Rombach, H. D.: The Goal Question Metric Approach. In: Encyclopedia of Software Engineering, Wiley (1994)
Videos vs. Use Cases: Can Videos Capture More Requirements Under Time Pressure?. O Brill, K Schneider, E Knauss, 16th International Working Conference on Requirements Engineering: Foundation for Software Quality. SpringerBrill, O., Schneider, K., Knauss, E.: Videos vs. Use Cases: Can Videos Capture More Requirements Under Time Pressure? In: 16th International Working Conference on Re- quirements Engineering: Foundation for Software Quality, Springer (2010)
Using Video Clips to Support Requirements Elicitation in Focus Groups -An Experience Report. G Broll, H Hussmann, E Rukzio, R Wimmer, SE 2007 Workshop on Multimedia Requirements Engineering. Broll, G., Hussmann, H., Rukzio, E., Wimmer, R.: Using Video Clips to Support Re- quirements Elicitation in Focus Groups -An Experience Report. In: SE 2007 Workshop on Multimedia Requirements Engineering (2007)
Software Cinema -Video-Based Requirements Engineering. O Creighton, M Ott, B Bruegge, 14th IEEE International Requirements Engineering Conference. Creighton, O., Ott, M., Bruegge, B.: Software Cinema -Video-Based Requirements En- gineering. In: 14th IEEE International Requirements Engineering Conference (2006)
Speculative Requirements: Design Fiction and RE. A Darby, E Tsekleves, P Sawyer, 26th IEEE International Requirements Engineering Conference. Darby, A., Tsekleves, E., Sawyer, P.: Speculative Requirements: Design Fiction and RE. In: 26th IEEE International Requirements Engineering Conference (2018)
A H Dutoit, R Mccall, I Mistrík, B Paech, Rationale Management in Software Engineering. SpringerDutoit, A. H., McCall, R., Mistrík, I., Paech, B.: Rationale Management in Software En- gineering. Springer (2007)
Documenting Software Using Video. W Feeney, IEEE Computer Society Workshop on Software Engineering Technology Transfer. Feeney, W.: Documenting Software Using Video. In: IEEE Computer Society Workshop on Software Engineering Technology Transfer (1983)
Workshop Videos for Requirements Communication. S A Fricker, K Schneider, F Fotrousi, C Thuemmler, Requirements Engineering. 214Fricker, S. A., Schneider, K., Fotrousi, F., Thuemmler, C.: Workshop Videos for Re- quirements Communication. Requirements Engineering 21(4) (2016)
Perspective-Taking and Self-Other overlap: Fostering Social Bonds and Facilitating Social Coordination. A D Galinsky, G Ku, C S Wang, Group Processes & Intergroup Relations. 82Galinsky, A. D., Ku, G., Wang C. S.: Perspective-Taking and Self-Other overlap: Fos- tering Social Bonds and Facilitating Social Coordination. Group Processes & Intergroup Relations 8(2) (2005)
M Glinz, S A Fricker, On Shared Understanding in Software Engineering: An Essay. In: Computer Science -Research and Development. Glinz, M., Fricker, S.A.: On Shared Understanding in Software Engineering: An Essay. In: Computer Science -Research and Development (2014)
Supporting Requirements with Video-Based Analysis. M Jirotka, P Luff, IEEE Software. 23Jirotka, M., Luff, P.: Supporting Requirements with Video-Based Analysis. In: IEEE Software Vol. 23 (2006)
Presence and Emotion in Computer Game Players During 1st person vs. K Kallinen, M Salminen, N Ravaja, R Kedzior, M Sääksjärvi, Eye-Tracking, and Facial Muscle Activity Data. In: 10th Annual International Workshop on Presence. 3rd Person Playing View: Evidence from Self-ReportKallinen, K., Salminen, M., Ravaja, N., Kedzior, R., Sääksjärvi, M.: Presence and Emo- tion in Computer Game Players During 1st person vs. 3rd Person Playing View: Evidence from Self-Report, Eye-Tracking, and Facial Muscle Activity Data. In: 10th Annual In- ternational Workshop on Presence (2007)
Software Professionals' Attitudes Towards Video as a Medium in Requirements Engineering. O Karras, Product-Focused Software Process Improvement. Karras, O.: Software Professionals' Attitudes Towards Video as a Medium in Require- ments Engineering. In: Product-Focused Software Process Improvement (2018)
Enriching Requirements Specifications with Videos -The use of Videos to Support Requirements Communication. O Karras, A Hamadeh, K Schneider, Softwaretechnik-Trends. 38Karras, O., Hamadeh, A., Schneider, K.: Enriching Requirements Specifications with Videos -The use of Videos to Support Requirements Communication. In: Softwaretech- nik-Trends 38(1) (2017)
Supporting Requirements Elicitation by Tool-Supported Video Analysis. O Karras, S Kiesling, K Schneider, 24th IEEE International Requirements Engineering Conference. Karras, O., Kiesling, S., Schneider, K.: Supporting Requirements Elicitation by Tool- Supported Video Analysis. In: 24th IEEE International Requirements Engineering Con- ference (2016)
Enrichment of Requirements Specifications with Videos -Enhancing the Comprehensibility of Textual Requirements. O Karras, J Klünder, S Schneider, ZenodoKarras, O., Klünder, J., Schneider, S.: Enrichment of Requirements Specifications with Videos -Enhancing the Comprehensibility of Textual Requirements. Zenodo (2016)
Software professionals are Not Directors: What Constitutes a Good Video?. O Karras, K Schneider, 2018 1st International Workshop on Learning from other Disciplines for Requirements Engineering (D4RE). Karras, O., Schneider, K.: Software professionals are Not Directors: What Constitutes a Good Video? In: 2018 1st International Workshop on Learning from other Disciplines for Requirements Engineering (D4RE) (2018)
Video as a By-Product of Digital Prototyping: Capturing the Dynamic Aspect of Interaction. O Karras, C Unger-Windeler, L Glauer, K Schneider, 25th IEEE International Requirements Engineering Conference Workshops. Karras, O., Unger-Windeler, C., Glauer, L., Schneider, K.: Video as a By-Product of Digital Prototyping: Capturing the Dynamic Aspect of Interaction. In: 25th IEEE Inter- national Requirements Engineering Conference Workshops (2017)
Interactive Multimedia Storyboard for Facilitating Stakeholder Interaction: Supporting Continuous Improvement in IT-Ecosystems. R Pham, S Meyer, I Kitzmann, K Schneider, 8th International Conference on the Quality of Information and Communications Technology. Pham, R., Meyer, S., Kitzmann, I., Schneider, K.: Interactive Multimedia Storyboard for Facilitating Stakeholder Interaction: Supporting Continuous Improvement in IT- Ecosystems. In: 8th International Conference on the Quality of Information and Commu- nications Technology (2012)
P Runeson, M Host, A Rainer, B Regnell, Case Study Research in Software Engineering: Guidelines and Examples. John Wiley & SonsRuneson, P., Host, M., Rainer, A., Regnell, B.: Case Study Research in Software Engi- neering: Guidelines and Examples. John Wiley & Sons (2012)
Reframing Societal Discourse as Requirements Negotiation: Vision Statement. K Schneider, O Karras, A Finger, B Zibell, 25th IEEE International Requirements Engineering Conference Workshops. Schneider, K., Karras, O., Finger, A., Zibell, B.: Reframing Societal Discourse as Re- quirements Negotiation: Vision Statement. In: 25th IEEE International Requirements En- gineering Conference Workshops (2017)
Embodiment and Presence in Virtual Worlds: A Review. U Schultze, JIT. 254Schultze, U.: Embodiment and Presence in Virtual Worlds: A Review. JIT 25(4) (2010)
From Pixels to Bytes: Evolutionary Scenario Based Design with Video. H Xu, O Creighton, N Boulila, B Bruegge, Proceedings of the ACM SIGSOFT 20th International Symposium on the Foundations of Software Engineering. the ACM SIGSOFT 20th International Symposium on the Foundations of Software EngineeringXu, H., Creighton, O., Boulila, N., Bruegge, B.: From Pixels to Bytes: Evolutionary Sce- nario Based Design with Video. Proceedings of the ACM SIGSOFT 20th International Symposium on the Foundations of Software Engineering (2012)
ART-SCENE: Enhancing Scenario Walkthroughs with Multi-Media Scenarios. K Zachos, N Maiden, 12th IEEE International Requirements Engineering Conference. Zachos, K., Maiden, N.: ART-SCENE: Enhancing Scenario Walkthroughs with Multi- Media Scenarios. In: 12th IEEE International Requirements Engineering Conference (2004)
Rich-Media Scenarios for Discovering Requirements. K Zachos, N Maiden, A Tosar, IEEE Software. 225Zachos, K., Maiden, N., Tosar, A.: Rich-Media Scenarios for Discovering Requirements. In: IEEE Software 22(5) (2005)
| []
|
[
"A DEEP LEARNING APPROACH TO DATA-DRIVEN MODEL-FREE PRICING AND TO MARTINGALE OPTIMAL TRANSPORT 1 A deep learning approach to data-driven model-free pricing and to martingale optimal transport",
"A DEEP LEARNING APPROACH TO DATA-DRIVEN MODEL-FREE PRICING AND TO MARTINGALE OPTIMAL TRANSPORT 1 A deep learning approach to data-driven model-free pricing and to martingale optimal transport"
]
| [
"Ariel Neufeld ",
"Julian Sester "
]
| []
| []
| We introduce a novel and highly tractable supervised learning approach based on neural networks that can be applied for the computation of model-free price bounds of, potentially high-dimensional, financial derivatives and for the determination of optimal hedging strategies attaining these bounds. In particular, our methodology allows to train a single neural network offline and then to use it online for the fast determination of model-free price bounds of a whole class of financial derivatives with current market data. We show the applicability of this approach and highlight its accuracy in several examples involving real market data. Further, we show how a neural network can be trained to solve martingale optimal transport problems involving fixed marginal distributions instead of financial market data. 1 Examples for sophisticated financial market models include among many others the Heston model (compare[33]) and Dupire's local volatility model (compare [22]). 2 The terms model-free and model-independent are used synonymously in the literature.3Arbitrage refers to a profit that can be realized without taking any risk. Prices of a derivative that allow for arbitrage are considered as not reasonable as the arbitrage profit would be immediately exploited by arbitrageurs.4We speak of deep neural networks if there are at least 2 hidden layers involved. 5 As a convention we refer to feed-forward simply as neural networks throughout the paper. 6 Every option that is neither a call nor a put option is called exotic. | 10.1109/tit.2022.3229845 | [
"https://export.arxiv.org/pdf/2103.11435v3.pdf"
]
| 232,307,210 | 2103.11435 | 0babd25d0382b3a8d19f1cda6706b82aa826afd9 |
A DEEP LEARNING APPROACH TO DATA-DRIVEN MODEL-FREE PRICING AND TO MARTINGALE OPTIMAL TRANSPORT 1 A deep learning approach to data-driven model-free pricing and to martingale optimal transport
Ariel Neufeld
Julian Sester
A DEEP LEARNING APPROACH TO DATA-DRIVEN MODEL-FREE PRICING AND TO MARTINGALE OPTIMAL TRANSPORT 1 A deep learning approach to data-driven model-free pricing and to martingale optimal transport
We introduce a novel and highly tractable supervised learning approach based on neural networks that can be applied for the computation of model-free price bounds of, potentially high-dimensional, financial derivatives and for the determination of optimal hedging strategies attaining these bounds. In particular, our methodology allows to train a single neural network offline and then to use it online for the fast determination of model-free price bounds of a whole class of financial derivatives with current market data. We show the applicability of this approach and highlight its accuracy in several examples involving real market data. Further, we show how a neural network can be trained to solve martingale optimal transport problems involving fixed marginal distributions instead of financial market data. 1 Examples for sophisticated financial market models include among many others the Heston model (compare[33]) and Dupire's local volatility model (compare [22]). 2 The terms model-free and model-independent are used synonymously in the literature.3Arbitrage refers to a profit that can be realized without taking any risk. Prices of a derivative that allow for arbitrage are considered as not reasonable as the arbitrage profit would be immediately exploited by arbitrageurs.4We speak of deep neural networks if there are at least 2 hidden layers involved. 5 As a convention we refer to feed-forward simply as neural networks throughout the paper. 6 Every option that is neither a call nor a put option is called exotic.
I. INTRODUCTION
F INANCIAL derivatives are financial contracts between the corresponding seller, typically a bank, and a buyer, typically another financial institution or a private person, with a future uncertain payoff depending on another (typically simpler) financial instrument, often a stock, to which we refer as the underlying security. Options are a large class of financial derivatives which allow, but do not oblige the owner of the option to buy or sell the underlying securities involved in the contract. The most common types of traded financial derivatives are call and put options which allow to buy and sell, respectively, the underlying single security at a future maturity at a predetermined price, the so called strike of the option. Due to the uncertainty involved in the future cashflow, today's price of the financial derivative is a priori unclear and subject to a high degree of ambiguity. The classical paradigm in mathematical finance, which is commonly applied to determine the fair value of some financial derivative, consists in capturing the developments of the real underlying market by a sophisticated financial market model 1 . This model is then calibrated to observable market parameters such as current spot prices, prices of liquid options, interest rates, and dividends, and is thus believed to capture the reality appropriately, see e.g. [53] for details of this procedure. However, such an approach evidently involves the uncontrollable risk of having a priori chosen the wrong type of model -this refers to the so called Knightian uncertainty ([42]).
To reduce this apparent model risk the research in the area of mathematical finance recently developed a strong interest in the computation of model-independent 2 and robust price bounds for financial derivatives (compare among many others [7], [13], [16], [17], [20], [27], [35], [36], [38], [46], and [47]). We speak of model-independent price bounds if realized prices within these bounds exclude any arbitrage opportunities 3 under usage of liquid market instruments independent of any model assumptions related to potential underlying stochastic models, whereas robust price bounds refer to the exclusion of model-dependent arbitrage within a range of models that are deemed to be admissible.
We present an approach enabling the fast and reliable computation of model-independent price bounds of financial derivatives. This approach is mainly based on supervised deep learning ( [43], [52]) and proposes how a deep 4 feed-forward neural network 5 can be trained to learn the relationship between observed financial data and associated model-independent price bounds of any potentially high-dimensional financial derivative from an entire parametric class of exotic 6 options. The great advantage of the presented methodology is that, in contrast to computational intensive and therefore potentially time-consuming pricing methods which have to be reapplied for each new set of observed financial data and each derivative one wants to valuate, it allows to use a sole pre-trained neural network for real time pricing of every financial derivative from a pre-specified class of payoff functions. Let us consider some family of financial derivatives defined through payoff functions
Φ θ : R nd + → R, θ ∈ Θ,
which determines the payoff an investor receives at time tn in case he bought the derivative Φ θ at initial time t0. The payoff depends on the values of d ∈ N underlying securities at n ∈ N future times t1 < t2 < · · · < tn, i.e., the derivative depends on each security S k = (S k t 1 , . . . , S k tn ) for k = 1, . . . , d. The goal is then to determine all possible today's prices for each Φ θ such that a potential investor cannot profit from one of these prices to exploit model-independent arbitrage. This notion refers to strategies that involve trading in underlying securities and/or in liquid options which are cost-free and lead to a profit independent of any model assumptions, i.e., for any possible future evolvement of the underlying security, see also [2]. As a canonical example we consider the class of payoffs associated to basket options, which are financial derivatives that allow (but not oblige) at a future time to buy a weighted sum of financial assets (with weights denoted by (w k i ) i,k ) at a predetermined strike L. Such an option is only executed if it is favorable for the option-holder to do so, which is the case if the difference between the weighted sum and the strike is positive and therefore the set of payoffs is given by
Φ θ (S 1 , . . . , S d ); θ ∈ Θ := max n i=1 d k=1 w k i S k t i − L, 0 where θ := (w k i ) i,k , L ∈ R nd × R .
(I.1)
To find the arbitrage-free upper price bounds of a financial derivative Φ θ , we consider model-independent super-replication strategies (also called super-hedging strategies) of Φ θ , i.e., trading strategies that lead for every possible evolvement of the underlying securities to a greater or equal outcome than the payoff of Φ θ , which is referred to as the trading strategy super-replicating Φ θ . Prices of such strategies need to be at least as high as the price of Φ θ , since otherwise the market would admit model-independent arbitrage, which can indeed be seen by buying the strategy and by selling the derivative Φ θ at initial time. Thus, the smallest price among all model-independent super-replication strategies leads to the arbitrage-free upper price bound of Φ θ . Analogue, the greatest price among sub-replication strategies yields the arbitrage-free lower price bound. Moreover, it is a consequence of (adaptions of) the fundamental theorem of asset pricing (see [2] and [19]) that there exist a dual method to approach the valuation problem: One may also consider all martingale models 7 which are consistent with bid and ask prices of liquidly traded option prices written on the underlying securities S = (S 1 , . . . , S d ) and expire at the future maturities (ti)i=1,...,n as candidate models. Then, minimizing and maximizing the expectations E Q [Φ θ (S)] among all associated martingale / risk-neutral measures Q of potential models leads to the desired price bounds, compare e.g. [2] and [15] for such results in the discrete time model-independent setting. Given a payoff function Φ θ from a (parametric) set of payoff functions {Φ θ , θ ∈ Θ}, for example from the set of basket options as in (I.1), we use the sub/super-replication method to compute the lower and upper price bound of Φ θ for various different sets of financial data and for several choices of θ ∈ Θ, i.e., we compute the bounds in dependence of different observed financial data. The observable market parameters comprised in the financial data include prices of the underlying securities as well as bid and ask prices of liquidly traded call and put options and its associated strikes. After having computed the price bounds for various different sets of financial data, we let, in accordance with the universal approximation theorem from [37], a specially designed neural network learn the relationship between observed financial data and the corresponding model-independent price bounds for a parametric family of payoff functions, compare also While there exist several numerical routines to compute model-free price bounds of financial derivatives, the only numerical routine that allows to compute model-free price bounds in a purely data-driven approach without imposing any probabilistic assumptions on the market is the approach from [47] which we therefore use to construct a training set of price bounds. Indeed, while [23], [24], [29], [31], [32] all provide methods to compute price bounds of financial derivatives, they all rely fundamentally on the assumption that the marginal distributions of each single asset are known exactly. Moreover, each established methodology so far requires for every new set of financial data to employ a potentially time-consuming valuation method to find price bounds for every financial derivative of interest. Our approach circumvents this problem as it enables to train offline a single neural network for a whole family of related payoff functions, such as e.g. 
basket options with different weights and strikes, and then to determine model-free price bounds in real time by using the already trained neural network. Thus, in practice, it suffices to train a couple of neural networks (one for each relevant family of payoffs) and then to use the pre-trained neural networks for valuation-purposes. We refer to Remark II.9 (e) for further possible examples of parametric families {Φ θ , θ ∈ Θ}, where we highlight that only a few neural networks are necessary to cover the most relevant payoff functions of financial derivatives.
In Section II, we first present our approach in a very general setting including multiple assets, multiple time steps, as well as market frictions. To justify our methodology, we show, by proving a continuous relationship between market data and resultant model-free price bounds, that the universal approximation theorem from [37] is indeed applicable, see Theorem II.2. More precisely, under some continuity assumptions on the parametric family of payoff functions θ → Φ θ which are typically satisfied for the relevant families of payoff functions in finance (see Section II-C), we prove that both upper and lower arbitrage-free price bound depend continuously on the relevant inputs: strike prices and bid-ask prices of the liquid call and put options, the current prices of the underlying stocks, as well as the parameter determining the payoff function. This, together with the universal approximation property of neural networks proves that a single neural network can indeed learn the arbitrage-free price bounds of a parametric family of payoff functions. To the best of our knowledge, Theorem II.2 (as well as Theorem III.1) is the first result which proves a continuous relationship between the respective inputs and outputs. This result justifies to learn model-free price bounds by neural networks, and hence provides an important novel contribution to the field. In particular, this means that it is possible to train a single neural network offline on past market data and then to use it online with current market data to compute price bounds of each financial derivative Φ θ , θ ∈ Θ. Additionally, we show accuracy and tractability of our presented approach in various high-dimensional relevant examples involving real market data.
In Section III, we show that the methodology can also be applied to compute two-marginal martingale optimal transport (MOT) problems, see also Figure 1b for an illustration of the approach where instead of market data entire marginal distributions are the input to the neural network, we refer to Theorem III.1 for the novel theoretical justification of that approach. The knowledge about marginal distributions can be motivated by the findings from [11] which lead to the insight that complete information about the marginal distributions is equivalent to the knowledge of prices of call options written on the underlying securities for a whole continuum of strikes.
We further show within several examples the applicability of the presented approach. Mathematical proofs of the theoretical results are provided in Section IV.
II. APPROXIMATING MODEL-FREE PRICE BOUNDS WITH NEURAL NETWORKS
In this section we present an arbitrage-free approach to determine model-free price bounds of a possibly high-dimensional financial derivative when real market data is given. In addition to prices of underlying securities we observe bid and ask prices of call and put options written on these securities , where bid and ask prices refer to the quotations for which the options can be sold and bought. Moreover, we explain how model-independent price bounds can be approximated through neural networks.
A. Model-independent valuation of derivatives
We consider at the present time t0 ∈ [0, ∞) d ∈ N underlying securities and n ∈ N future times t0 < t1 < · · · < tn < ∞, i.e., the underlying process is given by
S := S 1 , . . . , S d = (St 1 , . . . , St n ) = (S k t i ) k=1,...,d i=1,...,n with S k := (S k t i )i=1,.
..,n denoting the k-th underlying security and St i := (S k t i ) k=1,...,d denoting the values of the underlying securities at time ti. The process S is modelled as the canonical process on R nd + equipped with the Borel σ-algebra denoted by B(R nd + ), i.e., for all i = 1, . . . , n, k = 1, . . . , d we have S k t i (s) = s k i for all s = (s 1 1 , . . . , s d 1 , . . . , s 1 n , . . . , s d n ) ∈ R nd + . As we want to consider real market data, we cannot -as usual in a vast majority of the mathematical literature on model-independent finance -neglect bid-ask spreads as well as transaction costs. Thus, we assume that option prices do not necessarily coincide for buyer and seller, instead we take into account a bid price and an ask price. Let k ∈ {1, . . . , d}, i ∈ {1, . . . , n}, j ∈ {1, . . . , n opt ik }, where n opt ik denotes the amount of tradable put and call options 8 with maturity ti written on S k t i . Then a call option on S k with maturity ti for strike K call ijk ∈ R+ can be bought at price π + call,i,j,k and be sold at price π − call,i,j,k . As a call option entitles the owner of the option to buy the underlying security at price K call ijk at time ti it is only exercised if the difference between underlying security and strike K call ijk is positive, and therefore possesses the payoff max S k t i − K call ijk , 0 . Similarly, bid and ask prices for traded put options are denoted by π − put,i,j,k and π + put,i,j,k , respectively. Put options give the right to sell the underlying security at price K put ijk at time ti. Hence, put options are only exercised if the difference between strike K call ijk and underlying security is positive, leading to the payoff max K put ijk − S k t i , 0 . Moreover, we assume proportional transaction costs, similar to the approaches in [15, Section 3.1.1.] and [21]. This means, at each time ti, after having observed the values St 1 , . . . , St i , rearranging a dynamic self-financing 9 trading position in the underlying security from 10 8 We assume the same amount of traded put and call options. This simplifies the presentation, but can without difficulties be extended to a more general setting. 9 Self-financing means that at any time there is neither consumption nor any money injection. The profit of the trading strategy is purely a consequence of the trading in the underlying security. 10 For m, n ∈ N and some set K ⊆ R m , we denote by B (K, R n ) the set of all functions f : K → R n which are B(K)/B(R n )-measurable, whereas C(K, R n ) denotes the set of all continuous functions f : K → R n . 11 Here also different approaches to measure transaction costs would have been possible. Compare for example the presentations in [12] and [15].
∆ k i−1 ∈ B R (i−1)d + , R ,κ|S k t i | ∆ k i (St i , . . . , St 1 ) − ∆ k i−1 (St i−1 , . . . , St 1 )
for some fixed κ ≥ 0. We denote for each k = 1, . . . , d by S k t 0 ∈ R+ the observable and therefore deterministic current value of the k-th security, also called the spot price of S k . Then, given spot prices St 0 = (S 1 t 0 , . . . , S d t 0 ) and strikes K := (K call ijk ) i,j,k , (K put ijk ) i,j,k , we consider trading strategies with profits of the form 12
Ψ (K,S t 0 ) (a,c ijk ,p ijk ,∆ k i ) (S) := a + n i=1 d k=1 n opt ik j=1 c + ijk − c − ijk max S k t i − K call ijk , 0 + n i=1 d k=1 n opt ik j=1 p + ijk − p − ijk max K put ijk − S k t i , 0 + d k=1 n−1 i=0 ∆ k i (St i , . . . , St 1 ) S k t i+1 − S k t i − κ|S k t i | ∆ k i (St i , . . . , St 1 ) − ∆ k i−1 (St i−1 , . . . , St 1 ) (II.1)
for an amount of cash a ∈ R, non-negative long positions c + ijk , p + ijk ∈ R+ and non-negative short positions c − ijk , p − ijk ∈ R+ in call and put options, respectively, for j = 1, . . . , n opt ik , i = 1, . . . , n, k = 1, . . . , d. In equation (II.1) and for the rest of the paper we use the abbreviations cijk = (c + ijk , c − ijk ) ∈ R 2 + and pijk = (p + ijk , p − ijk ) ∈ R 2 + . Further, the strategies involve self-financing trading positions ∆ k i ∈ B(R id + , R) with the convention ∆ k 0 ∈ R, i.e., to be deterministic, as well as ∆ k −1 :≡ 0. The costs for setting up the position Ψ (K,S t 0 ) (a,c ijk ,p ijk ,∆ k i ) with respect to the bid-ask prices π := π − call,i,j,k i,j,k , π + call,i,j,k i,j,k , π − put,i,j,k i,j,k , π + put,i,j,k i,j,k are given by
C Ψ (K,S t 0 ) (a,c ijk ,p ijk ,∆ k i ) , π := a + n i=1 d k=1 n opt ik j=1 c + ijk π + call,i,j,k − c − ijk π − call,i,j,k + n i=1 d k=1 n opt ik j=1 p + ijk π + put,i,j,k − p − ijk π − put,i,j,k . (II.2) For a strategy Ψ (K,S t 0 ) (a,c ijk ,p ijk ,∆ k i ) with parameters (a, cijk, pijk, ∆ k i ) i,j,k we introduce the function Σ(cijk, pijk, ∆ k i ) := n i=1 d k=1 n opt ik j=1 (c + ijk + c − ijk + p + ijk + p − ijk ) + d k=1 |∆ k 0 | + n−1 i=1 d k=1 ∆ k i ∞,
where · ∞ denotes the supremum norm. Imposing a universal upper bound on the function Σ, i.e., Σ(·) ≤ B < ∞ for some B ∈ R+, relates to a restriction on the maximal position an investor is willing/allowed to invest. We want to valuate a derivative with payoff Φ ∈ B(R nd + , R). Hence, given strikes K, spot prices St 0 , and bid-ask prices π, our goal is to solve the following super-hedging problem
D B,B (K,π,S t 0 ) (Φ) := inf a∈R, c ijk ,p ijk ∈R 2 + , (∆ k i )∈B(R id + ,R) C Ψ (K,S t 0 ) (a,c ijk ,p ijk ,∆ k i ) , π s.t. Ψ (K,S t 0 ) (a,c ijk ,p ijk ,∆ k i ) (s) ≥ Φ(s) for all s ∈ [0, B] nd , and Σ(c ijk , p ijk , ∆ k i ) ≤ B , (II.3)
for some bounds B, B ∈ (0, ∞], where the bound B corresponds to a restriction of the form S k t i ≤ B for all i, k. It is economically reasonable to assume a large but finite B for the securities under consideration, since it imposes no severe restriction and reduces artificial high prices which were not realistic in practice. 13 A solution of (II.3) defines the largest model-independent arbitrage-free price of the derivative Φ and simultaneously comes with a strategy that enables to exploit arbitrage if prices for Φ lie above this price bound.
In analogy to (II.3), the smallest model-independent arbitrage-free price of Φ is given by the corresponding sub-hedging problem
D B,B (K,π,S t 0 ) (Φ) := sup a∈R, c ijk ,p ijk ∈R 2 + , (∆ k i )∈B(R id + ,R) C Ψ (K,S t 0 ) (a,c ijk ,p ijk ,∆ k i ) , π s.t. Ψ (K,S t 0 ) (a,c ijk ,p ijk ,∆ k i ) (s) ≤ Φ(s) for all s ∈ [0, B] nd , and Σ(c ijk , p ijk , ∆ k i ) ≤ B .
B. Training a neural network for option valuation
Next, we focus on the supervised learning approach we pursue in this paper. This approach is implemented using neural networks, thus we start this section with a short exposition on neural networks which can be found in similar form in [18], [5], [12], [23], [24], [26], or in every standard textbook on the topic (e.g. [9], [28], or [30]). 12 To simplify the presentation we assume zero interest rates and zero dividend yields. 13 We still allow a priori B, B = ∞ in case one does not want to make restrictions on the trading strategies or exclude unbounded price paths.
1) Neural networks:
In the following we consider a fully-connected neural network which is for input dimension din ∈ N, output dimension dout ∈ N, and number of layers l ∈ N defined as a function of the form
R d in → R d out x → Al • ϕ l • Al−1 • · · · • ϕ 1 • A0(x), (II.4)
where (Ai) i=0,...,l are functions of the form
A0 : R d in → R h 1 , Ai : R h i → R h i+1 for i = 1, . . . , l − 1, (if l > 1), Al : R h l → R d out , (II.5) and where for i = 1, . . . , l we have ϕ i (x1, . . . , x h i ) = (ϕ(x1), . . . , ϕ(x h i ))
, with ϕ : R → R being a non-constant function called activation function. Here h = (h1, . . . , h l ) ∈ N l denotes the dimensions (the number of neurons) of the hidden layers, also called hidden dimension. Moreover, for all i = 0, . . . , l, the function Ai is assumed to have an affine structure of the form
Ai(x) = Mix + bi for some matrix Mi ∈ R h i+1 ×h i and some vector bi ∈ R h i+1 , where h0 := din and h l+1 := dout. We then denote by N l,h d in ,d out
the set of all neural networks with input dimension din, output dimension dout, l hidden layers, and hidden dimension h. Moreover, we consider the set of all neural networks with input dimension din, output dimension dout, a fixed amount of l hidden layers, but unspecified hidden dimension
N l d in ,d out := h∈N l N l,h d in ,d out ,
as well as the set of all neural networks mapping from R d in to R d out with an unspecified amount of hidden layers
N d in ,d out := l∈N N l d in ,d out .
One fundamental result that is of major importance for the approximation of functions through neural networks is the universal approximation theorem from e.g. [37,Theorem 2], stating that, given some mild assumption on the activation function ϕ, every continuous function can be approximated arbitrarily well by neural networks on compact subsets.
Proposition II.1 (Universal approximation theorem for continuous functions [37]). Assume that $\varphi\in C(\mathbb{R},\mathbb{R})$ and that $\varphi$ is not constant. Then, for any compact $K\subset\mathbb{R}^{d_{in}}$, the set $\mathcal{N}_{d_{in},d_{out}}|_K$ is dense in $C(K,\mathbb{R}^{d_{out}})$ w.r.t. the topology of uniform convergence on $C(K,\mathbb{R}^{d_{out}})$.

Popular examples for activation functions are the ReLU function given by $\varphi(x) := \max\{x, 0\}$ or the logistic function $\varphi(x) := 1/(1+e^{-x})$, which fulfil the assumptions of Proposition II.1. Further, we remark that the original statement from [37, Theorem 2] only covers output dimension $d_{out}=1$ and $l=1$ hidden layer, but can indeed be generalized to the above statement, compare e.g. [40, Theorem 3.2.].

2) Approximation of the super-replication functional through neural networks: We consider, for $i=1,\dots,S$, where $S\in\mathbb{N}$ denotes the number of samples, input data of the form $X_i = (K,\pi,S_{t_0},\theta)$, and we aim at predicting via an appropriately trained neural network the following target
$$Y_i = \left(\underline{D}^{B,\overline{B}}_{(K,\pi,S_{t_0})}(\Phi_\theta),\ \overline{D}^{B,\overline{B}}_{(K,\pi,S_{t_0})}(\Phi_\theta)\right),$$
for a parametrized family $\{\Phi_\theta,\ \theta\in\Theta\}$. If we are additionally interested in predicting the optimal super-replication strategy, then $Y_i$ instead contains the associated parameters of the strategy, i.e.,
$$Y_i = \left(a,\ (c_{ijk})_{i,j,k},\ (p_{ijk})_{i,j,k},\ (\Delta^k_i)_{i,k}\right)$$
for the minimal super-replication strategy $\Psi^{(K,S_{t_0})}_{(a,c_{ijk},p_{ijk},\Delta^k_i)}$ (and analogously for the maximal sub-replication strategy), which implicitly also contains the minimal super-replication price obtained by computing the corresponding cost using (II.2). However, after having trained a neural network to predict $Y_i$ given market data $X_i$, due to a different training error, the implied price bounds are expected to differ to a larger extent from $Y_i$ than those from a neural network which directly predicts the prices $Y_i$; see also Example II.6, in which we compare both approaches.
According to Proposition II.1, a trained neural network can be used to predict price bounds and optimal strategies if the price bounds and strategies, respectively, are continuous functions of the input, i.e., of $X_i$. The following result, stated in Theorem II.2, ensures that, under mild assumptions which we discuss subsequently in Remark II.3, this requirement is fulfilled. For this, we denote for all $k\in\mathbb{N}$ by $\|\cdot\|_k$ some norm on $\mathbb{R}^k$. Since all norms on Euclidean spaces are equivalent, the specific choice of the norm is irrelevant for the following assertions. The induced metric for $k\in\mathbb{N}$ is denoted by $d_k(x,y) = \|x-y\|_k$. Moreover, we define for every $B\in(0,\infty)$ the norm $\|f\|_{\infty,B} := \sup_{x\in[0,B]^{nd}} |f(x)|$ and $d_{\infty,B}(f,g) := \|f-g\|_{\infty,B}$ for $f,g\in C(\mathbb{R}^{nd}_+,\mathbb{R})$. In the case $B=\infty$ we set
$$d_{\infty,\infty}(f,g) := \frac{\|f-g\|_\infty}{1+\|f-g\|_\infty}.$$
To the best of our knowledge, the following Theorem II.2 establishes for the first time a continuous relation between the market inputs and the corresponding price bounds. This novel result justifies applying neural networks to determine model-independent price bounds and hence provides a significant contribution to the literature.
Theorem II.2. Let $B\in(0,\infty]$, $\overline{B}\in(0,\infty)$, and $M := \sum_{i=1}^n \sum_{k=1}^d n^{opt}_{ik}$. Let $\{\Phi_\theta,\ \theta\in\Theta\}$, for some $\Theta\subset\mathbb{R}^p$ and $p\in\mathbb{N}$, be a (parametric) family of functions in $C(\mathbb{R}^{nd}_+,\mathbb{R})$ such that
$$(\Theta, d_p) \to \left(C(\mathbb{R}^{nd}_+,\mathbb{R}),\ d_{\infty,B}\right),\qquad \theta \mapsto \Phi_\theta \tag{II.6}$$
is continuous, and let $N_{input} := 2M + 4M + d + p$. Then, the following holds.

(a) Let $K_1 \subset \mathbb{R}^{2M}_+ \times \mathbb{R}^{4M} \times \mathbb{R}^d_+ \times \Theta$ be a compact set such that both
$$\underline{D}^{B,\overline{B}}_{(K,\pi,S_{t_0})}(\Phi_\theta),\ \overline{D}^{B,\overline{B}}_{(K,\pi,S_{t_0})}(\Phi_\theta) \in (-\infty,\infty) \tag{II.7}$$
holds for all $(K,\pi,S_{t_0},\theta)\in K_1$. Then, the map
$$\left(K_1, d_{N_{input}}\right) \to \left(\mathbb{R}^2, d_2\right),\qquad (K,\pi,S_{t_0},\theta) \mapsto \left(\underline{D}^{B,\overline{B}}_{(K,\pi,S_{t_0})}(\Phi_\theta),\ \overline{D}^{B,\overline{B}}_{(K,\pi,S_{t_0})}(\Phi_\theta)\right)$$
is continuous.

(b) Let $K_1$ be defined as in (a). Then, for all $\varepsilon>0$ there exists a neural network $\mathcal{N}_1 \in \mathcal{N}_{N_{input},2}$ such that for all $(K,\pi,S_{t_0},\theta)\in K_1$ it holds
$$\left\| \mathcal{N}_1(K,\pi,S_{t_0},\theta) - \left(\underline{D}^{B,\overline{B}}_{(K,\pi,S_{t_0})}(\Phi_\theta),\ \overline{D}^{B,\overline{B}}_{(K,\pi,S_{t_0})}(\Phi_\theta)\right) \right\|_2 < \varepsilon. \tag{II.8}$$

(c) Let $n=1$. Let $K_2 \subset \mathbb{R}^{2M}_+ \times \mathbb{R}^{4M} \times \mathbb{R}^d_+ \times \Theta$ be a compact set such that for all $(K,\pi,S_{t_0},\theta)\in K_2$ we have that (II.7) holds and $\overline{D}^{B,\overline{B}}_{(K,\pi,S_{t_0})}(\Phi_\theta)$ is attained by a unique strategy $\left(a^*, (c^*_{1jk})_{j,k}, (p^*_{1jk})_{j,k}, (\Delta^{k*}_0)_k\right)(K,\pi,S_{t_0},\theta)$ satisfying
$$\sup_{(K,\pi,S_{t_0},\theta)\in K_2} |a^*(K,\pi,S_{t_0},\theta)| < \infty. \tag{II.9}$$
Then the map
$$\left(K_2, d_{N_{input}}\right) \to \left(\mathbb{R}^{1+4M+d}, d_{1+4M+d}\right),\qquad (K,\pi,S_{t_0},\theta) \mapsto \left(a^*, (c^*_{1jk})_{j,k}, (p^*_{1jk})_{j,k}, (\Delta^{k*}_0)_k\right)(K,\pi,S_{t_0},\theta)$$
is continuous.

(d) Let $n=1$ and let $K_2$ be defined as in (c). Then, for all $\varepsilon>0$ there exists a neural network $\mathcal{N}_2 \in \mathcal{N}_{N_{input},1+4M+d}$ such that for all $(K,\pi,S_{t_0},\theta)\in K_2$ it holds
$$\left\| \mathcal{N}_2(K,\pi,S_{t_0},\theta) - \left(a^*, (c^*_{1jk})_{j,k}, (p^*_{1jk})_{j,k}, (\Delta^{k*}_0)_k\right)(K,\pi,S_{t_0},\theta) \right\|_{1+4M+d} < \varepsilon. \tag{II.10}$$
Proof. See Section IV.
Remark II.3. (a) Assumption (II.7) means that the market with its parameters $K, \pi, S_{t_0}$ is arbitrage-free; compare e.g. [47, Assumption 2.1. and Theorem 2.4.] for the case $n=1$, [10, Definition 1.1. and Theorem 5.1.] for the multi-period case with traded options, and [15, Theorem 2.1.] for the general case with market frictions. Note that assuming an arbitrage-free market is a necessity to determine arbitrage-free price bounds of financial derivatives. Indeed, if the market offers arbitrage, then we can identify a trading strategy fulfilling $\Psi^{(K,S_{t_0})}_{(a,c_{ijk},p_{ijk},\Delta^k_i)}(s) \ge 0$ for some parameters $(a,c_{ijk},p_{ijk},\Delta^k_i)_{i,j,k}$ with price $C\big(\Psi^{(K,S_{t_0})}_{(a,c_{ijk},p_{ijk},\Delta^k_i)}, \pi\big) < 0$. Now, consider a super-replication strategy $(\tilde a, \tilde c_{ijk}, \tilde p_{ijk}, \tilde\Delta^k_i)_{i,j,k}$ of some derivative $\Phi$ satisfying $\Psi^{(K,S_{t_0})}_{(\tilde a, \tilde c_{ijk}, \tilde p_{ijk}, \tilde\Delta^k_i)}(s) \ge \Phi(s)$. Then we have for all $\lambda>0$ that
$$\Psi^{(K,S_{t_0})}_{(\tilde a, \tilde c_{ijk}, \tilde p_{ijk}, \tilde\Delta^k_i)}(s) + \lambda\cdot\Psi^{(K,S_{t_0})}_{(a,c_{ijk},p_{ijk},\Delta^k_i)}(s) = \Psi^{(K,S_{t_0})}_{(\tilde a+\lambda a,\ \tilde c_{ijk}+\lambda c_{ijk},\ \tilde p_{ijk}+\lambda p_{ijk},\ \tilde\Delta^k_i+\lambda\Delta^k_i)}(s) \ge \Phi(s),$$
meaning that $(\tilde a+\lambda a,\ \tilde c_{ijk}+\lambda c_{ijk},\ \tilde p_{ijk}+\lambda p_{ijk},\ \tilde\Delta^k_i+\lambda\Delta^k_i)$
is another super-replication strategy, whose price is given by
$$C\left(\Psi^{(K,S_{t_0})}_{(\tilde a+\lambda a,\ \tilde c_{ijk}+\lambda c_{ijk},\ \tilde p_{ijk}+\lambda p_{ijk},\ \tilde\Delta^k_i+\lambda\Delta^k_i)},\ \pi\right) = C\left(\Psi^{(K,S_{t_0})}_{(\tilde a, \tilde c_{ijk}, \tilde p_{ijk}, \tilde\Delta^k_i)},\ \pi\right) + \lambda\cdot C\left(\Psi^{(K,S_{t_0})}_{(a,c_{ijk},p_{ijk},\Delta^k_i)},\ \pi\right). \tag{II.11}$$
By scaling up $\lambda>0$, we see from (II.11) that the corresponding price decreases, which in turn implies $\overline{D}^{B,\overline{B}}_{(K,\pi,S_{t_0})}(\Phi) \ll 0$ (for $\overline{B}$ large enough). With an analogous argument we conclude that if the market offers arbitrage, then we have $\underline{D}^{B,\overline{B}}_{(K,\pi,S_{t_0})}(\Phi) \gg 0$, preventing the computation of reasonable price bounds.

(b) In an arbitrage-free market, a necessary requirement for the existence of a unique optimizer, as assumed in Theorem II.2 (c), is that the considered market instruments are non-redundant, i.e., that the payoffs of the market instruments are linearly independent. In our case, this means that, to avoid ambiguity of minimal super-replication strategies, one should only consider put options that are written on other strikes than the ones for the call options under consideration.

(c) As a canonical example for a parametric family of payoff functions, we consider basket call options with payoffs
$$\Phi_\theta = \max\left( \sum_{i=1}^n \sum_{k=1}^d w^k_i S^k_{t_i} - L,\ 0 \right),\qquad \text{where } \theta\in\Theta := \{((w^k_i)_{i,k}, L)\} = \mathbb{R}^{nd}\times\mathbb{R}_+,$$
i.e., the strike $L$ and the weights $(w^k_i)_{i,k}$ are inputs to the trained neural network. For any $0<B<\infty$, we have the continuity of the map
$$(\Theta, d_{nd+1}) \to \left(C(\mathbb{R}^{nd}_+,\mathbb{R}),\ d_{\infty,B}\right),\qquad \theta \mapsto \Phi_\theta.$$
Thus, we can find a neural network which fulfils (II.8) with respect to $\underline{D}^{B,\overline{B}}$ and $\overline{D}^{B,\overline{B}}$. We remark that assuming a uniformly large bound $B$ on the possible values of $S^k_{t_i}$ imposes no severe constraint for practical applications with real market data and allows to reduce the difference between the no-arbitrage price bounds by not considering unbounded prices, which are unrealistic in practice. For further examples of parametric families of payoff functions, e.g., best-of-call options or call-on-max options, we refer to [47, Example 3.2. (i)-(vi)].
(d) Theorem II.2 is also applicable to a single pre-specified continuous payoff function $\Phi$ when setting $\Phi_\theta = \Phi$ for all $\theta\in\Theta$.

(e) Note that we restrict the assertion of Theorem II.2 (d) to $n=1$ to make sure that $\Delta^k_i$, which is the output of the neural network, is a number, not a function.

(f) An analogous result as in Theorem II.2 (c) and Theorem II.2 (d) for optimal sub-hedging strategies can be obtained in the same way.

(g) We highlight that our approach computes model-independent price bounds, i.e., no assumptions on underlying financial models are imposed, and market prices of call and put options are considered as exogenous inputs. Nevertheless, in a frictionless market, it is possible for each arbitrage-free sample $(K,\pi,S_{t_0},\theta)$ to find a stochastic model (in which call option prices are given endogenously) that is consistent with this data, compare e.g. [34]. Therefore one could understand the model-independent price bounds obtained with respect to the given sample $(K,\pi,S_{t_0},\theta)$ also as the prices of hedging strategies in such a consistent model. However, note that such a model (expressed by a probability measure) would be different for each considered sample.
Finally, Algorithm 1 describes, relying on the results from Theorem II.2, how one can train a neural network which approximates these price bounds.

Remark II.4. To compute model-independent price bounds given option prices, we can use e.g. a linear programming approach based on grid discretization, as proposed in [23], [29], or [31]. If the payoff function only depends on one future maturity and is continuous piecewise affine (CPWA, see e.g. [47, Example 3.2.]), we can also use the numerically very efficient algorithm proposed in [47]. If the payoff function fulfils a so-called martingale Spence-Mirrlees condition¹⁴, one can apply the algorithm presented in [32]. For multiple time-steps, another possibility is to apply the penalization approach presented in [24]. The minimization is then performed using a stochastic gradient descent algorithm with some penalization parameter $\gamma$ which enforces the optimizing strategy to be a super-hedge.
Algorithm 1: Training of a neural network via back-propagation for the computation of the price bounds $\big(\underline{D}^{B,\overline{B}}_{(K,\pi,S_{t_0})}(\Phi_\theta),\ \overline{D}^{B,\overline{B}}_{(K,\pi,S_{t_0})}(\Phi_\theta)\big)$ of a class of financial derivatives $\{\Phi_\theta\}_{\theta\in\Theta}$.

Data: Call and put option prices (bid and ask) on different securities and maturities; associated strikes, maturities, and spot prices;
Input: Algorithm to compute price bounds of exotic derivatives; family $\{\Phi_\theta,\ \theta\in\Theta\}$ of payoff functions $\Phi_\theta:\mathbb{R}^{nd}_+\to\mathbb{R}$ fulfilling the requirements of Theorem II.2; hyper-parameters of the neural network; number $n_{subset}$ of considered functions from $\{\Phi_\theta,\ \theta\in\Theta\}$ for each sample; transaction costs $\kappa\ge 0$; bounds $B$ and $\overline{B}$;

for each sample $(K_i,\pi_i,S_{t_0 i})$ of data considering exactly $n$ maturities and $d$ securities do
    $\tilde X_i \leftarrow (K_i,\pi_i,S_{t_0 i})$;
end
$S \leftarrow \#\{\tilde X_i\}$;  // Assign the number of samples and call it $S$.
for $i$ in $\{1,\dots,S\}$ do
    Generate a (random) subset $\{\Phi_{\theta_j},\ j=1,\dots,n_{subset}\}\subset\{\Phi_\theta,\ \theta\in\Theta\}$;
    for $j$ in $\{1,\dots,n_{subset}\}$ do
        Compute for $\tilde X_i$ the corresponding price bounds $\underline{D}^{B,\overline{B}}_{(K_i,\pi_i,S_{t_0 i})}(\Phi_{\theta_j})$, $\overline{D}^{B,\overline{B}}_{(K_i,\pi_i,S_{t_0 i})}(\Phi_{\theta_j})$;  // In this step any algorithm that can compute these bounds reliably may be used, see Remark II.4.
        $X_{(i-1)n_{subset}+j} \leftarrow (\tilde X_i,\theta_j)$;
        $Y_{(i-1)n_{subset}+j} \leftarrow \big(\underline{D}^{B,\overline{B}}_{(K_i,\pi_i,S_{t_0 i})}(\Phi_{\theta_j}),\ \overline{D}^{B,\overline{B}}_{(K_i,\pi_i,S_{t_0 i})}(\Phi_{\theta_j})\big)$;
    end
end
Train with back-propagation ([50]) a neural network $\mathcal{N}\in\mathcal{N}_{N_{input},2}$ with a sufficient number of neurons and hidden layers such that $\mathcal{N}(X_i)\approx Y_i$;
Output: Trained neural network $\mathcal{N}\in\mathcal{N}_{N_{input},2}$ with $N_{input}$ as in Theorem II.2;
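A schematic Python version of this data-generation and training loop could look as follows. The helpers `market_samples`, `sample_theta`, and `price_bounds` are placeholders for the data source and for a bound-computing oracle (such as the LSIP routine of [47]); their exact signatures are assumptions made for this sketch.

```python
import numpy as np
import tensorflow as tf

def build_training_set(market_samples, sample_theta, price_bounds, n_subset=10):
    """Generates the (X_i, Y_i) pairs of Algorithm 1.

    market_samples: iterable of (K, pi, S_t0) arrays,
    sample_theta:   callable returning random payoff parameters theta,
    price_bounds:   oracle returning (lower, upper) model-free price bounds.
    """
    X, Y = [], []
    for K, pi, S_t0 in market_samples:
        for _ in range(n_subset):
            theta = sample_theta()
            lo, up = price_bounds(K, pi, S_t0, theta)
            X.append(np.concatenate([K, pi, S_t0, np.atleast_1d(theta)]))
            Y.append([lo, up])
    return np.asarray(X), np.asarray(Y)

def make_model():
    """3 hidden layers with 512 ReLU neurons each, L2 loss, Adam optimizer,
    matching the architecture described in Section II-C3."""
    model = tf.keras.Sequential(
        [tf.keras.layers.Dense(512, activation="relu") for _ in range(3)]
        + [tf.keras.layers.Dense(2)]   # predicts (lower bound, upper bound)
    )
    model.compile(optimizer="adam", loss="mse")
    return model

# Usage (schematic):
#   X, Y = build_training_set(market_samples, sample_theta, price_bounds)
#   model = make_model()
#   model.fit(X, Y, batch_size=256, epochs=1000, validation_split=0.1,
#             callbacks=[tf.keras.callbacks.EarlyStopping(patience=20)])
```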
C. Examples
In this section we present, in selected examples, the results of our approach when applied to real market data.

1) Training data: We consider for the training of all neural networks financial market data received from Thomson Reuters Eikon that was observed on 10th June 2020. The data includes bid and ask prices of call options written on all 500 constituents of the American stock market index S&P 500. Note that, in Example II.6, we also predict the optimal super-hedging strategy and not only the optimal price bounds. Thus, to avoid ambiguity of the optimal strategy, as explained in Remark II.3, we do not consider any put options there. We consider for each constituent and each available maturity of an option the 20 most liquid strikes, i.e., the bid and ask prices of the options with the highest trading volume.
2) Test data: For testing the trained neural networks we consider, as for the training data, option prices on all constituents of the S&P 500. The data was observed on 23rd August 2020. We highlight that, in particular, the test data comes from a different dataset than the training data.

3) Implementation: The training of each of the neural networks is performed using the back-propagation algorithm ([50]) with an Adam optimizer ([41]) implemented in Python using Tensorflow ([1]). For the optimization with the Adam optimizer we use a batch size of 256. The architecture involves an L2-loss function; the neural networks comprise 3 hidden layers with 512 neurons each and ReLU activation functions. The samples are normalized before training with a min-max scaler. Moreover, we assign 10% of the training data to a validation set to be able to apply early stopping (compare [28, Chapter 7.8.]) to prevent overfitting to the training data. To reduce the internal covariate shift of the neural network and to additionally regularize it, we apply batch normalization ([39]) after each layer. All the codes related to the examples below¹⁵, as well as the trained neural networks, are provided under https://github.com/juliansester/deep model free pricing. For all examples we assume no transaction costs, i.e., we have $\kappa = 0$.

Example II.5 (Training of the valuation of call options given prices of other call options). We want to train the valuation of call options for arbitrary strikes, i.e., we consider payoff functions from the set
$$\{\Phi_\theta,\ \theta\in\Theta\} = \left\{\Phi_L(S^1_{t_1}) := \max\left(S^1_{t_1} - L,\ 0\right),\ \text{with } L\in\mathbb{R}_+\right\}.$$
Note that the assumptions of Theorem II.2 are met for any $B\in(0,\infty]$ and $\overline{B}<\infty$, which we therefore choose large enough not to impose a restriction. Thus Theorem II.2 (b) ensures that we can train a single neural network for the above-mentioned parametrized family of payoff functions.
In this example, a single sample Xi consists of 62 total entries which comprise 20 bid prices, 20 ask prices, 20 associated strikes of call options, as well as the underlying spot price and the strike L of the call option ΦL which we want to price.
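As an illustration, one could assemble such a 62-entry sample as follows; the ordering of the blocks within the feature vector is an implementation design choice and is assumed here.

```python
import numpy as np

def make_call_sample(bids, asks, strikes, spot, L):
    """62 entries: 20 bid prices, 20 ask prices, 20 strikes, the spot price,
    and the strike L of the call option to be priced (Example II.5)."""
    assert len(bids) == len(asks) == len(strikes) == 20
    return np.concatenate([bids, asks, strikes, [spot, L]])
```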
We apply Algorithm 1 to train a neural network: in particular, for each of the prices from the training data, we create several different random strikes $L$, for which we compute a corresponding $Y_i$ consisting of lower and upper model-independent price bounds according to the algorithm from [47]. With this methodology we create a training set with 100000 samples. We then train, as described in Section II-C3, a neural network using back-propagation and test it on the test data described in Section II-C2, which was observed at a later date (August 2020). The test set consists of 10000 samples.
The results of the training yield a mean absolute error of 2.2033 as well as a mean squared error of 25.8779 on the test set and are depicted in Figure 2a. To be able to compare the error independent of the size of the spot price of the underlying security, we report a mean absolute error of 0.0111 when dividing the predicted prices by the spot prices. We call this value the relative mean absolute error. The corresponding squared distance after division by the spot prices amounts to 0.0003 and is called relative mean squared error. Compare also Figure 2b, where we depict the relative error of each sample in the test set, i.e., the difference of each prediction from its target value, after division with the spot price.
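The reported relative errors can be computed with a few lines; this is a sketch in which `y_pred`, `y_true`, and `spot` denote arrays of predictions, targets, and the corresponding spot prices.

```python
import numpy as np

def relative_errors(y_pred, y_true, spot):
    """Relative MAE and MSE: deviations are divided by the spot price so that
    the error is comparable across underlyings of different price levels."""
    rel = (y_pred - y_true) / spot
    return np.mean(np.abs(rel)), np.mean(rel ** 2)
```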
Example II.6 (Optimal strategies of basket options). We consider payoff functions of basket options written on two assets, i.e., the class of payoff functions is defined through
$$\{\Phi_\theta,\ \theta\in\Theta\} = \left\{\Phi_{w_1,w_2,L}(S^1_{t_1}, S^2_{t_1}) = \max\left(w_1 S^1_{t_1} + w_2 S^2_{t_1} - L,\ 0\right),\ \text{with } w_1, w_2, L\in\mathbb{R}_+\right\}.$$
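A sketch of this payoff family and of one possible randomization of $\theta=(w_1,w_2,L)$; the sampling ranges below are illustrative assumptions, since the paper only states that weights and strikes are generated randomly.

```python
import numpy as np

def basket_call(w1, w2, L):
    """Payoff Phi_{w1,w2,L}(s1, s2) = max(w1*s1 + w2*s2 - L, 0)."""
    return lambda s1, s2: max(w1 * s1 + w2 * s2 - L, 0.0)

def sample_theta(rng, spot1, spot2):
    # Hypothetical ranges: weights in [0, 1], strike around the basket value.
    w1, w2 = rng.uniform(0.0, 1.0, size=2)
    L = rng.uniform(0.5, 1.5) * (w1 * spot1 + w2 * spot2)
    return w1, w2, L
```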
When $B, \overline{B} < \infty$, the assumptions of Theorem II.2 (b), respectively those of Remark II.3 (c), are fulfilled. For each of the considered market prices from the training set and test set, respectively, we create, in accordance with Algorithm 1, several different weights $w_1, w_2$ and some strike $L$. Thus, a sample $X_i$ consists of the spot prices $S^1_{t_0}, S^2_{t_0}$, the generated values $w_1, w_2, L$, as well as of bid and ask prices with associated strikes of both assets, i.e., in total each $X_i$ consists of 125 numbers.
We aim at simultaneously predicting the minimal super-replication strategy and the maximal sub-replication strategy. Therefore, we compute the parameters of the strategies attaining the price bounds according to the algorithm from [47]. Thus, in our case each sample $Y_i$ comprises 86 values, which constitute, for both lower and upper bound, of the initial investment $a$ (1 parameter), the buy and sell positions in call options¹⁶ $(c_{1jk})_{j,k}$, $j=1,\dots,20$, $k=1,2$ (40 parameters), and the investment positions in the underlying securities $(\Delta^k_0)_{k=1,2}$ (2 parameters). Moreover, training a neural network to learn the relationship between market parameters and optimal super-replication strategy (without the price bound) is possible due to Theorem II.2 (d), which implies that the difference in absolute values between predicted parameters $\big(a, (c_{1jk})_{j,k}, (\Delta^k_0)_k\big)$ and true parameters $\big(a^*, (c^*_{1jk})_{j,k}, (\Delta^{k*}_0)_k\big)$ of the optimal strategy should not differ significantly after training. We train the neural network on 150000 samples, test it on 10000 samples, and obtain indeed a small relative mean absolute error of 0.0015. We highlight that the training set remains the same; in particular, it does not consist of trading strategies. After having trained the neural network to predict the minimal super-hedging strategies, we are able to derive from these strategies the optimal price bounds using (II.2) and compare them with the predictions from a neural network which is trained to predict lower and upper price bounds directly instead of predicting optimal strategies. In Figure 3 we show that, as expected, the neural network that predicts prices directly performs by far better than the price bound predictions derived via (II.2) from the trained strategies, when evaluated on the test set. The relative¹⁷ mean absolute error of the direct prediction of the price bounds is 0.0319, whereas the relative mean absolute error of the prediction relying on the strategies is 0.1804. The corresponding relative mean squared errors are 0.0148 and 0.5045, respectively. The larger approximation error when first approximating the strategies by a neural network and then deriving price bounds from this approximation can be explained as follows.

¹⁴ This means that $\frac{\partial^3}{\partial x \partial y^2}\Phi(x,y)$ exists and satisfies $\frac{\partial^3}{\partial x \partial y^2}\Phi(x,y) > 0$ for all $x,y$.
¹⁵ For copyright reasons we can only provide the used code, but we cannot provide the used data.
When approximating the price bounds directly, we have, after sufficient training of a neural network, according to Theorem II.2 (b), a maximal absolute approximation error of order $\varepsilon_1$ between the upper price bound and the output of the neural network, given a tolerance level of $\varepsilon_1>0$.
In contrast, when approximating the optimal super-replication strategy $\big(a^*, (c^*_{1jk})_{j,k}, (p^*_{1jk})_{j,k}, (\Delta^{k*}_0)_k\big)(K,\pi,S_{t_0},\theta)$ by the output of a neural network, denoted by $\big(a^{NN}, (c^{NN}_{1jk})_{j,k}, (p^{NN}_{1jk})_{j,k}, (\Delta^{k\,NN}_0)_k\big)(K,\pi,S_{t_0},\theta)$, at some tolerance level $\varepsilon_2>0$, then the absolute error between the upper price bound
$$\overline{D}^{B,\overline{B}}_{(K,\pi,S_{t_0})}(\Phi_\theta) = C\left(\Psi^{(K,S_{t_0})}_{(a^*,\,c^*_{1jk},\,p^*_{1jk},\,\Delta^{k*}_0)},\ \pi\right)$$
and the price $C\big(\Psi^{(K,S_{t_0})}_{(a^{NN},\,c^{NN}_{1jk},\,p^{NN}_{1jk},\,\Delta^{k\,NN}_0)}, \pi\big)$ derived from the approximated strategy computes by (II.2) as
$$\left| a^* - a^{NN} + \sum_{k=1}^d \sum_{j=1}^{n^{opt}_{1k}} \Big( (c^{+*}_{1jk} - c^{+\,NN}_{1jk})\pi^+_{call,1,j,k} - (c^{-*}_{1jk} - c^{-\,NN}_{1jk})\pi^-_{call,1,j,k} \Big) + \sum_{k=1}^d \sum_{j=1}^{n^{opt}_{1k}} \Big( (p^{+*}_{1jk} - p^{+\,NN}_{1jk})\pi^+_{put,1,j,k} - (p^{-*}_{1jk} - p^{-\,NN}_{1jk})\pi^-_{put,1,j,k} \Big) \right|.$$
However, $C := \max\big\{ \max_{j,k}\pi^+_{call,1,j,k},\ \max_{j,k}\pi^-_{call,1,j,k},\ \max_{j,k}\pi^+_{put,1,j,k},\ \max_{j,k}\pi^-_{put,1,j,k} \big\}$ is typically significantly larger than 1 (the largest call option price in the considered test set was 349\$). Therefore, to obtain similar approximation results for the prices derived from strategies as those when approximating prices directly, one needs to consider a tolerance level of $\varepsilon_2 = \varepsilon_1/C$, which usually is significantly smaller than $\varepsilon_1$. In our case, the training set was not large enough to obtain such a high precision in the approximation of the strategies¹⁸.
The above outlined reasoning is supported by a comparatively large approximation error which is already observed on the training set. When predicting prices from strategies via (II.2), we have a relative mean absolute error of 0.1311 on the training set, which is only marginally smaller than the relative mean absolute error of 0.1804 which we reported for predictions on the test set. In comparison, the relative mean absolute error for directly predicting prices is 0.0160 on the training set, whereas we reported a relative mean absolute error of 0.0319 on the test set.
We conclude that, even though it is theoretically possible to derive price bounds with an arbitrarily high precision from approximated strategies, in practice it turns out to be more efficient (and requires a smaller training set) to train a neural network that approximates the price bounds directly if one is only interested in the prediction of these.
Example II.7 (Neural networks trained for basket options applied to call options). We reconsider the trained neural network from Example II.6 predicting the price bounds of basket options written on two assets.
• We now test this neural network on the same test set as in Example II.5, which takes into account call options instead of basket options. To be able to apply the neural network that takes inputs with 125 entries, we modify the original samples $X_i$ from Example II.5 (originally containing 62 entries) by duplicating the original entries (except for the strike of the call option) and by additionally adding weights of 0.5 and 0.5 as well as the original strike, leading to $2\cdot 61 + 2 + 1 = 125$ entries. This means that we consider a basket option with payoff of the form $\max\{0.5 S^1_{t_1} + 0.5 S^1_{t_1} - K, 0\} = \max\{S^1_{t_1} - K, 0\}$, which is a call option.
• We then observe a relative mean absolute error of 0.0230 and a relative mean squared error of 0.0023 (in comparison with 0.0111 and 0.0003, respectively, for the neural network from Example II.5 that was only trained on call options).
• This shows that the more general neural network performs reasonably well also on the more specific payoffs, but is less specifically trained and therefore is outperformed by the neural network solely trained on call options.
• To improve the performance of this neural network, we retrain it by adding, in addition to the 150000 samples containing basket options from Example II.6, also the 100000 samples containing call options from Example II.5.
• Indeed, the result is a trained neural network that performs well on both basket options and call options. On the test set for call options we obtain a relative mean absolute error of 0.0112 and a relative mean squared error of 0.0004 (compared to 0.0111 and 0.0003 obtained in Example II.5). On the test set for basket options we compute a relative mean absolute error of 0.0292 and a relative mean squared error of 0.0374 (compared to 0.0319 and 0.0148 obtained in Example II.6). See also Figure 4, where we depict the accuracy of the predictions before and after adding the additional samples.

This shows that training the neural network on an additional task (call option price bounds) notably improved the approximation quality on this specific task, while the approximation quality on the well-known task (basket option price bounds) is barely affected. This example showcases that it is possible to train a single neural network that approximates price bounds reasonably well for different types of payoff functions, given that these payoff functions are represented appropriately in the training set. For an overview of different types of payoff functions that can be covered by a single neural network we refer to Table I. Moreover, the approach pursued in Example II.7 can be considered as an instance of transfer learning, since we used an existing neural network as a starting point to improve the approximation quality on a more specific class of payoff functions.

Example II.8 (High-dimensional payoff function). We train a neural network to predict prices of a basket option depending on 30 underlying securities, i.e., we consider a family of payoff functions of the form
$$\{\Phi_\theta,\ \theta\in\Theta\} = \left\{\Phi_{w_1,\dots,w_{30},L}(S^1_{t_1},\dots,S^{30}_{t_1}) = \max\left(\sum_{k=1}^{30} w_k S^k_{t_1} - L,\ 0\right),\ \text{with } w_1,\dots,w_{30}, L\in\mathbb{R}_+\right\}.$$
We train the neural network only on 8000 different samples, where each sample consists of bid and ask prices of call options for 20 different strikes of 30 different underlying securities from data observed in June 2020, with randomly generated weights $w_k$, $k=1,\dots,30$, and randomly generated strikes $L$, i.e., one single sample $X_i$ consists of 1861 entries. These entries consist of $30\cdot 20$ strikes of call options, $30\cdot 20$ bid prices of call options, $30\cdot 20$ ask prices of call options, as well as 30 spot prices, 1 strike $L$ of the basket option, and 30 associated weights $w_k$, $k=1,\dots,30$. The corresponding prices are computed with the algorithm from [47], which enables to compute precise price bounds even in this high-dimensional setting¹⁹.
After having trained the neural network, we test it on 2000 samples from data on options on the S&P 500 that were observed in August 2020, as described in Section II-C2. We test only the lower bound of the basket option and achieve a relative²⁰ mean absolute error of 0.0101 and a relative mean squared error of 0.0002 on the test set. Compare Figure 5, where we depict the relative error, i.e., we divide predictions and prices by the weighted sum of the spot prices.

Remark II.9. (a) It turns out that our proposed approach can indeed be executed significantly faster than comparable methods for computing model-free price bounds. The computation of 100 price bounds in the setting of Example II.8 takes 225.31 seconds on a standard computer²¹ when using the LSIP approach²² from [47]. The execution of a trained neural network to predict 100 price bounds, however, only takes 0.00303 seconds. The execution of the neural network is therefore approximately 75000 times faster. This highlights the computational advantage of our proposed approach over comparable numerical methods, and further indicates that our approach indeed allows the model-free valuation of financial derivatives almost in real time.

(b) We report quite fast training times for the neural networks. The computationally most expensive setting, considered in Example II.8, involves, after applying early stopping, the training of 284 epochs on 7200 samples with a batch size of 256. This training procedure takes in total only 142.88 seconds²³.

(c) Note that even though the underlying approach is presented in great generality, it can easily be modified to meet the potentially more specific requirements of practitioners. If fewer call or put options are traded than the trained neural network expects, then one can simply set several strikes and prices to the same value and then apply the trained neural network to the smaller financial market. Moreover, if the payoff function considered in the trained neural network depends on more assets than the same-type payoff function which one wants to price, then this is possible within our approach by adjusting the parameters of the more general payoff function. For example, it is possible to determine the price bounds of call options after having trained price bounds of basket options, as was shown in Example II.7; compare also the overview provided in Table I, which clarifies which payoffs are of the same type. Moreover, if one wants to take into account asset-specific investment constraints, both universal bounds $B$ and $\overline{B}$ can be replaced by asset-specific bounds $(B_i)_{i=1,\dots,n}$ and option-specific bounds $(\overline{B}_{i,j,k})$ for $i=1,\dots,n$, $k=1,\dots,d$, $j=1,\dots,n^{opt}_{ik}$. The assertion of Theorem II.2 remains valid, as the continuity mainly relies on a compactness argument which can still be applied.

(d) One major implicit assumption of the presented approach is that the considered call and put options can be traded liquidly.
Even though this assumption is usually fulfilled in practice, one should verify it carefully. Moreover, given that other traded options are considered sufficiently liquid, the presented approach can be extended in a straightforward way by including these options in (II.1) and (II.2).

(e) If one is not interested in imposing a proper trading restriction through the bound $\overline{B}$, then setting $\overline{B}$ to a sufficiently large value does lead in practice to the same price bounds as in an unbounded setting. Therefore, also the optimal parameters of the neural networks that approximate these bounds are the same as in an unbounded setting. This holds true since both the trained parameters of the neural network and the corresponding trained trading strategy a posteriori turn out to remain bounded over the whole training period, compare e.g. [5, Fig. 4].

(f) It is noteworthy that the presented approach allows to determine model-free price bounds of the most common types of traded financial derivatives by only training a couple of neural networks (one for each type of payoff function); compare the non-exhaustive Table I for an overview, which relies partly on the presentation provided in [47]. Table I shows in particular which payoff functions are of the same type and can therefore be trained with a single neural network. Note that with Example II.7 we provide empirical evidence that this approach not only works in theory but indeed in practice. Recall that in Example II.7 we trained a neural network to predict price bounds of call options and basket options, which are both of the same type.

| Type | Financial derivative | Payoff | Parameter set |
|---|---|---|---|
| I. | Basket call option with weights $(w_k)_{k=1,\dots,d}$ and strike $L$ | $\max\{\sum_{k=1}^d w_k S^k_{t_1} - L,\ 0\}$ | $\Theta = \{w_1,\dots,w_d\in\mathbb{R},\ L\in\mathbb{R}\}$ |
| I. | Basket put option with weights $(w_k)_{k=1,\dots,d}$ and strike $L$ | $\max\{L - \sum_{k=1}^d w_k S^k_{t_1},\ 0\}$ | $\Theta = \{w_1,\dots,w_d\in\mathbb{R},\ L\in\mathbb{R}\}$ |
| I. | Call option on the $i$-th asset with strike $L$ | $\max\{S^i_{t_1} - L,\ 0\}$ | $\Theta = \{L\in\mathbb{R}\}$ |
| I. | Put option on the $i$-th asset with strike $L$ | $\max\{L - S^i_{t_1},\ 0\}$ | $\Theta = \{L\in\mathbb{R}\}$ |
| I. | Spread call option with weights $w_i, w_j$ and strike $L$ | $\max\{w_i S^i_{t_1} - w_j S^j_{t_1} - L,\ 0\}$ | $\Theta = \{w_i, w_j\in\mathbb{R},\ L\in\mathbb{R}\}$ |
| I. | Spread put option with weights $w_i, w_j$ and strike $L$ | $\max\{L - (w_i S^i_{t_1} - w_j S^j_{t_1}),\ 0\}$ | $\Theta = \{w_i, w_j\in\mathbb{R},\ L\in\mathbb{R}\}$ |
| II. | Call-on-max with strike $L$ | $\max\{\max\{S^k_{t_1},\ k=1,\dots,d\} - L,\ 0\}$ | $\Theta = \{L\in\mathbb{R}\}$ |
| II. | Put-on-max with strike $L$ | $\max\{L - \max\{S^k_{t_1},\ k=1,\dots,d\},\ 0\}$ | $\Theta = \{L\in\mathbb{R}\}$ |
| III. | Call-on-min with strike $L$ | $\max\{\min\{S^k_{t_1},\ k=1,\dots,d\} - L,\ 0\}$ | $\Theta = \{L\in\mathbb{R}\}$ |
| III. | Put-on-min with strike $L$ | $\max\{L - \min\{S^k_{t_1},\ k=1,\dots,d\},\ 0\}$ | $\Theta = \{L\in\mathbb{R}\}$ |
| IV. | Best-of-calls option with strikes $L_1,\dots,L_d$ | $\max\{\max\{S^k_{t_1} - L_k,\ 0\},\ k=1,\dots,d\}$ | $\Theta = \{L_1,\dots,L_d\in\mathbb{R}\}$ |
| IV. | Best-of-puts option with strikes $L_1,\dots,L_d$ | $\max\{\max\{L_k - S^k_{t_1},\ 0\},\ k=1,\dots,d\}$ | $\Theta = \{L_1,\dots,L_d\in\mathbb{R}\}$ |

TABLE I: The table depicts the most common types of financial derivatives that can be trained by the presented approach. The leftmost column identifies the type of the respective financial derivative, where the price bounds of payoffs of the same type can be learned from a single neural network.
III. MARTINGALE OPTIMAL TRANSPORT
The presented approach from Section II-B can easily be adjusted to modified market settings, given that it is possible to establish a continuous relationship between the prevailing market scenario and the resultant model-free price bounds of derivatives. In this section we show how the approach can be adapted to the setting used in martingale optimal transport (compare, among many other relevant articles, [7], [8], [14], and [20]), where, instead of observing a finite amount of call and put option prices, one assumes that the entire one-dimensional marginal distributions of the underlying assets at future dates are known. This situation is, according to the Breeden-Litzenberger result [11], equivalent to the case where one can observe call and put option prices for a continuum of strikes²³ on each of the associated maturities on which the financial derivative $\Phi\in B(\mathbb{R}^{nd}_+,\mathbb{R})$ depends. In the martingale optimal transport case, one wants to compute the arbitrage-free upper price bound²⁴ of $\Phi$, which leads to the maximization problem
$$\sup_{Q\in\mathcal{M}(\mu_1,\dots,\mu_n)} \mathbb{E}_Q[\Phi(S)], \tag{III.1}$$
where²⁵
$$\mathcal{M}(\mu_1,\dots,\mu_n) := \left\{ Q\in\mathcal{P}(\mathbb{R}^{nd}_+) \;\middle|\; Q\circ \big(S^k_{t_i}\big)^{-1} = \mu^k_i,\ \mathbb{E}_Q\big[S^k_{t_{i+1}} \,\big|\, S_{t_i},\dots,S_{t_1}\big] = S^k_{t_i}\ Q\text{-a.s. for all } i,k \right\} \tag{III.2}$$
describes the set of all $n$-step martingale measures with fixed one-dimensional marginals $\mu_i = (\mu^1_i,\dots,\mu^d_i)$, $i=1,\dots,n$, of all involved securities.

²³ See footnote 21.
We show that, when considering a single asset and two future maturities, (III.1) can be approximated by a properly constructed neural network. To that end, let
$$\mathcal{P}_1(\mathbb{R}_+) := \left\{ Q\in\mathcal{P}(\mathbb{R}_+) \;\middle|\; \int_{\mathbb{R}_+} x\, dQ(x) < \infty \right\}$$
denote the set of probability measures on $\mathbb{R}_+$ with existing first moment. Further, we introduce the 1-Wasserstein distance $\mathcal{W}(\cdot,\cdot)$ between two measures $\mu_1, \mu_2 \in \mathcal{P}_1(\mathbb{R}_+)$, which is defined through
$$\mathcal{W}(\mu_1,\mu_2) := \inf_{\pi\in\Pi(\mu_1,\mu_2)} \int_{\mathbb{R}^2_+} |u-v|\, d\pi(u,v),$$
with $\Pi(\mu_1,\mu_2)$ denoting the set of all couplings²⁶ of $\mu_1$ and $\mu_2$, compare e.g. [55].
We recall the construction of the U-quantization from [6]. Given some probability measure $\mu\in\mathcal{P}_1(\mathbb{R}_+)$ and some $N\in\mathbb{N}$, we set for $i=1,\dots,N$
$$x^{(N)}_i(\mu) := N \int_{(i-1)/N}^{i/N} F_\mu^{-1}(u)\, du,$$
where $F_\mu^{-1}(u) := \inf\{x\in\mathbb{R}_+ : F_\mu(x) \ge u\}$ denotes the $u$-quantile associated to the cumulative distribution function $F_\mu$ of $\mu$, and we denote $x^{(N)}(\mu) := \big(x^{(N)}_1(\mu),\dots,x^{(N)}_N(\mu)\big)$. The measure
$$U^{(N)}(\mu) := \frac{1}{N} \sum_{i=1}^N \delta_{x^{(N)}_i(\mu)} \tag{III.3}$$
converges weakly to $\mu$ for $N\to\infty$. Since the means of $U^{(N)}(\mu)$ and $\mu$ coincide for all $N\in\mathbb{N}$ due to [6, Lemma 2.4.4.] and $\mu, U^{(N)}(\mu) \in \mathcal{P}_1(\mathbb{R}_+)$, we further obtain convergence in the 1-Wasserstein distance, compare [55, Definition 6.8]. This means particularly that for all $\mu\in\mathcal{P}_1(\mathbb{R}_+)$ and for all $\delta>0$ there exists some $N\in\mathbb{N}$ such that $\mathcal{W}(U^{(N)}(\mu), \mu) < \delta$. We derive the following novel result, which asserts for the first time that two-marginal martingale optimal transport problems can be approximated arbitrarily well by neural networks.²⁷

Theorem III.1. Let $\Phi : \mathbb{R}^2_+ \to \mathbb{R}$ be continuous such that $\sup_{x_1,x_2\in\mathbb{R}_+} \frac{|\Phi(x_1,x_2)|}{1+x_1+x_2} < \infty$. Then, for all $\varepsilon>0$, $N\in\mathbb{N}$, and compact sets $K\subset\mathbb{R}^N_+$, there exists a neural network $\mathcal{N}\in\mathcal{N}_{2N,2}$ such that for all $(\mu_1,\mu_2)\in\mathcal{P}_1(\mathbb{R}_+)\times\mathcal{P}_1(\mathbb{R}_+)$ with $\mu_1 \preceq \mu_2$²⁸ there exists some $\delta>0$ such that if $\mathcal{W}(U^{(N)}(\mu_1),\mu_1)<\delta$, $\mathcal{W}(U^{(N)}(\mu_2),\mu_2)<\delta$, and $x^{(N)}(\mu_1), x^{(N)}(\mu_2)\in K$, then
$$\left\| \mathcal{N}\big(x^{(N)}(\mu_1), x^{(N)}(\mu_2)\big) - \left( \inf_{Q\in\mathcal{M}(\mu_1,\mu_2)} \mathbb{E}_Q[\Phi],\ \sup_{Q\in\mathcal{M}(\mu_1,\mu_2)} \mathbb{E}_Q[\Phi] \right) \right\|_2 < \varepsilon. \tag{III.4}$$
Proof. See Section IV.
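A small sketch of the U-quantization (III.3), approximating the averaged quantiles in each interval with a midpoint rule; the helper name and the discretization parameter `m` are illustrative.

```python
import numpy as np
from scipy import stats

def u_quantization(ppf, N, m=1000):
    """Atoms x_i^{(N)} = N * int_{(i-1)/N}^{i/N} F^{-1}(u) du of U^{(N)}(mu),
    each carrying mass 1/N; ppf is the quantile function F^{-1} of mu."""
    u = (np.arange(N * m) + 0.5) / (N * m)   # midpoints of a fine grid on (0, 1)
    return ppf(u).reshape(N, m).mean(axis=1)

# Example: discretize a uniform marginal U([8, 12]) with N = 20 atoms.
atoms = u_quantization(stats.uniform(loc=8, scale=4).ppf, N=20)
```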
Remark III.2. The assumption in Theorem III.1 stating that the atoms of the approximating U-quantizations are contained in some prespecified compact set $K$ can always be fulfilled if one only considers marginals with support in $K$.

If one does not want to restrict to compactly supported marginals, one can start with $\tilde\mu_1, \tilde\mu_2 \in \mathcal{P}_1(\mathbb{R}_+)$ and consider for every $r>0$ the sets
$$B_r(\tilde\mu_i) := \{\mu_i \in \mathcal{P}_1(\mathbb{R}_+) : \mathcal{W}(\tilde\mu_i, \mu_i) \le r\},\qquad i=1,2. \tag{III.5}$$
Then there exists some compact set $K\subset\mathbb{R}^N_+$ such that $x^{(N)}_l(\mu_i)\in K$ for all $l=1,\dots,N$, $\mu_i\in B_r(\tilde\mu_i)$, $i=1,2$. Indeed, there exists some constant $C>0$ such that $x^{(N)}_l(\tilde\mu_i) \le C$ for all $l=1,\dots,N$, $i=1,2$. Moreover, for all $\mu_i\in B_r(\tilde\mu_i)$, $i=1,2$, we have for all $l=1,\dots,N$ that
$$\left| x^{(N)}_l(\mu_i) - x^{(N)}_l(\tilde\mu_i) \right| = \left| N \int_{(l-1)/N}^{l/N} F_{\mu_i}^{-1}(u) - F_{\tilde\mu_i}^{-1}(u)\, du \right| \le N \int_{(l-1)/N}^{l/N} \left| F_{\mu_i}^{-1}(u) - F_{\tilde\mu_i}^{-1}(u) \right| du \le N \int_0^1 \left| F_{\mu_i}^{-1}(u) - F_{\tilde\mu_i}^{-1}(u) \right| du = N\cdot\mathcal{W}(\tilde\mu_i,\mu_i) \le N\cdot r, \tag{III.6}$$
compare for the last equality in (III.6) also [49, Equation 3.1.6] and [51, Equation 3.5]. This implies for all $i=1,2$ that
$$x^{(N)}(\mu_i) \in K := \left\{ (x_1,\dots,x_N)\in\mathbb{R}^N_+ \;\middle|\; |x_j| \le C + N\cdot r \text{ for all } j \right\}.$$

Remark III.3. If we are only interested in predicting the upper bound $\sup_{Q\in\mathcal{M}(\mu_1,\mu_2)} \mathbb{E}_Q[\Phi]$, then one can see, by carefully reading the proof of Theorem III.1, that it suffices to assume that $\Phi$ is upper semi-continuous (and linearly bounded) to obtain the existence of a neural network $\mathcal{N}\in\mathcal{N}_{2N,1}$ that approximates the bound as in (III.4) w.r.t. $\|\cdot\|_1$. Similarly, to derive the existence of a neural network approximating the lower bound $\inf_{Q\in\mathcal{M}(\mu_1,\mu_2)} \mathbb{E}_Q[\Phi]$, it is only necessary to assume that $\Phi$ is lower semi-continuous (and linearly bounded).

²⁴ We implicitly assume absence of a bid-ask spread and of transaction costs.
²⁵ $\mathcal{P}(\mathbb{R}^{nd}_+)$ denotes the set of all Borel probability measures on $\mathbb{R}^{nd}_+$.
²⁶ More precisely, the set $\Pi(\mu_1,\mu_2)$ is defined as $\Pi(\mu_1,\mu_2) := \big\{\pi\in\mathcal{P}_1(\mathbb{R}^2_+) : \pi\circ S_{t_i}^{-1} = \mu_i,\ i=1,2\big\}$.
²⁷ We recall that $\|\cdot\|_2$ is an arbitrary norm on $\mathbb{R}^2$.
²⁸ Here $\preceq$ denotes the convex order for measures $(\mu_1,\mu_2)\in\mathcal{P}_1(\mathbb{R}_+)\times\mathcal{P}_1(\mathbb{R}_+)$, i.e., $\mu_1\preceq\mu_2$ means $\int_{\mathbb{R}_+} f\, d\mu_1 \le \int_{\mathbb{R}_+} f\, d\mu_2$ for all convex functions $f:\mathbb{R}_+\to\mathbb{R}$.
In the case with two marginals, the training routine from Algorithm 1 modifies to the below presented Algorithm 2 which is also depicted in Figure 1b.
Algorithm 2: Training of a neural network for the computation of the price bounds of some financial derivative $\Phi$ in the MOT setting.

Input: Payoff function $\Phi:\mathbb{R}^2_+\to\mathbb{R}$ fulfilling the assumptions of Theorem III.1; $N$ describing the maximal number of supporting values of the approximating distribution; number of samples $S$; method to create samples such that the associated marginal distributions increase in convex order (cf. Remark III.4 (a));

for $i$ in $\{1,\dots,S\}$ do
    Create samples $X_i = (x^i_1,\dots,x^i_N,\ y^i_1,\dots,y^i_N) \in \mathbb{R}^{2N}_+$ such that $\frac{1}{N}\sum_{j=1}^N \delta_{x^i_j} \preceq \frac{1}{N}\sum_{j=1}^N \delta_{y^i_j}$;
    Compute, e.g. via linear programming (compare [29]), the target value
    $$Y_i = \left( \inf_{Q\in\mathcal{M}\left(\frac{1}{N}\sum_{j=1}^N \delta_{x^i_j},\ \frac{1}{N}\sum_{j=1}^N \delta_{y^i_j}\right)} \mathbb{E}_Q[\Phi],\ \sup_{Q\in\mathcal{M}\left(\frac{1}{N}\sum_{j=1}^N \delta_{x^i_j},\ \frac{1}{N}\sum_{j=1}^N \delta_{y^i_j}\right)} \mathbb{E}_Q[\Phi] \right) \in \mathbb{R}^2;$$
end
Train via back-propagation a neural network $\mathcal{N}\in\mathcal{N}_{2N,2}$ to predict $Y_i$ given $X_i$, i.e., such that $\mathcal{N}(X_i)\approx Y_i$ (which is possible due to Theorem III.1);
Output: Trained neural network $\mathcal{N}\in\mathcal{N}_{2N,2}$;
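The target values $Y_i$ can be computed, e.g., with the following linear-programming sketch using scipy; the function name is illustrative. Feasibility of the constraints requires the two discrete marginals to be in convex order.

```python
import numpy as np
from scipy.optimize import linprog

def discrete_mot_bounds(x, y, phi):
    """Computes (inf, sup) of E_Q[phi(S_t1, S_t2)] over martingale couplings Q
    of mu1 = (1/N) sum_j delta_{x_j} and mu2 = (1/N) sum_k delta_{y_k}."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    N = len(x)
    c = np.array([[phi(xj, yk) for yk in y] for xj in x]).ravel()
    A_eq, b_eq = [], []
    for j in range(N):                      # first marginal: sum_k q_{jk} = 1/N
        row = np.zeros((N, N)); row[j, :] = 1.0
        A_eq.append(row.ravel()); b_eq.append(1.0 / N)
    for k in range(N):                      # second marginal: sum_j q_{jk} = 1/N
        col = np.zeros((N, N)); col[:, k] = 1.0
        A_eq.append(col.ravel()); b_eq.append(1.0 / N)
    for j in range(N):                      # martingale: sum_k q_{jk} (y_k - x_j) = 0
        mart = np.zeros((N, N)); mart[j, :] = y - x[j]
        A_eq.append(mart.ravel()); b_eq.append(0.0)
    A_eq, b_eq = np.array(A_eq), np.array(b_eq)
    lower = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None)).fun
    upper = -linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None)).fun
    return lower, upper
```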
Remark III.4.
(a) The critical point in Algorithm 2 is the method employed to create discrete samples of marginals that are increasing in convex order. It is a priori not obvious how to create a good sample set. One working methodology is to draw samples from marginals that are similar to the marginals one wants to predict, then to apply U-quantization, and to discard measures that do not increase in convex order. Compare also Example III.5. Being similar can, for example, mean being close in Wasserstein distance or being from the same parametric family of probability distributions.
(b) Since the proof of Theorem III.1 relies on the continuity of the MOT problem, as stated in [4], [48], and [56], for which, at the moment, no extension to the case with more than two marginals or to multidimensional settings is known, we stick to the two-marginal case in the one-dimensional setting.
(c) The approach can be extended to the case with information on the variance as in [44], with Markovian assumptions as imposed in [25] and [54], or with even more general constraints on the distribution (see e.g. [3]), whenever it is possible to establish a continuous relationship between marginals and prices. Compare also Example III.6, where we consider a constrained martingale optimal transport problem.
To compute the robust bounds $\inf_{Q\in\mathcal{M}(\mu_1,\mu_2)} \mathbb{E}_Q[\Phi]$ and $\sup_{Q\in\mathcal{M}(\mu_1,\mu_2)} \mathbb{E}_Q[\Phi]$ for arbitrary (possibly continuous) marginal distributions $\mu_1\preceq\mu_2$ with the above explained approach, we approximate $\mu_1$ and $\mu_2$ in the Wasserstein distance by $U^{(N)}(\mu_1) \preceq U^{(N)}(\mu_2)$ and then compute the resultant price bounds via $\mathcal{N}\big(x^{(N)}(\mu_1), x^{(N)}(\mu_2)\big)$, where $\mathcal{N}\in\mathcal{N}_{2N,2}$ denotes the neural network which was trained according to Algorithm 2.
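Putting the pieces together, this prediction step amounts to the following sketch, assuming the `u_quantization` helper from above and a trained Keras model `net`.

```python
import numpy as np

def predict_mot_bounds(net, ppf1, ppf2, N=20):
    """Discretizes (mu1, mu2) via U-quantization and evaluates the trained
    network on the concatenated atoms (x^{(N)}(mu1), x^{(N)}(mu2))."""
    features = np.concatenate([u_quantization(ppf1, N),
                               u_quantization(ppf2, N)])[None, :]
    lower, upper = net.predict(features, verbose=0)[0]
    return float(lower), float(upper)
```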
A. Examples
We train a neural network according to Algorithm 2. Thus, the input features are, in contrast to the examples from Section II-C, not directly option prices but are given by marginal distributions which are discretized according to the U-quantization. In the following we fix a payoff function $\Phi(S_{t_1}, S_{t_2}) = |S_{t_1} - S_{t_2}|$ and present the results when predicting $\inf_{Q\in\mathcal{M}(\mu_1,\mu_2)} \mathbb{E}_Q[\Phi(S_{t_1},S_{t_2})]$ and $\sup_{Q\in\mathcal{M}(\mu_1,\mu_2)} \mathbb{E}_Q[\Phi(S_{t_1},S_{t_2})]$ via neural networks in dependence of the marginals $\mu_1$ and $\mu_2$.

1) Implementation: To train the neural network we create, according to Algorithm 2, numerous artificial marginals with a fixed number $N\in\mathbb{N}$ of supporting values. In the examples below we choose $N=20$. To discretize continuous marginal distributions we apply the U-quantization as introduced in (III.3). Below, we describe which marginal distributions $\mu^i_1, \mu^i_2$ we generate for $i=1,\dots,S$, with $S$ being the total number of samples.

1) Log-normally distributed marginals (if $i \bmod 4 = 0$): $\mu^i_1 \sim \mathcal{LN}\big(\mu^i - (\sigma^i_1)^2/2,\ (\sigma^i_1)^2\big)$ with $\mu^i \sim \mathcal{U}([-2,2])$ and $\sigma^i_1 \sim \mathcal{U}([0,0.5])$; $\mu^i_2 \sim \mathcal{LN}\big(\mu^i - (\sigma^i_2)^2/2,\ (\sigma^i_2)^2\big)$ with $\sigma^i_2 = \sigma^i_1 \cdot \bar\sigma^i$, where $\bar\sigma^i \sim \mathcal{U}([1,2])$;
2) Uniform marginals (if $i \bmod 4 = 1$): $\mu^i_1 \sim \mathcal{U}([\mu^i - a^i,\ \mu^i + a^i])$, $\mu^i_2 \sim \mathcal{U}([\mu^i - b^i,\ \mu^i + b^i])$ with $\mu^i \sim \mathcal{U}([10,20])$, $a^i \sim \mathcal{U}([0,5])$, $b^i \sim \mathcal{U}([a^i,\ a^i+5])$;
3) Continuous and discrete uniform marginals (if $i \bmod 4 = 2$): $\mu^i_1 \sim \mathcal{U}([\mu^i - a^i,\ \mu^i + a^i])$, $\mu^i_2 \sim \mathcal{U}(\{\mu^i - a^i,\ \mu^i + a^i\})$ with $\mu^i \sim \mathcal{U}([5,10])$, $a^i \sim \mathcal{U}([0,5])$;
4) Uniform and triangular marginals (if $i \bmod 4 = 3$): $\mu^i_1 \sim \mathcal{U}([m^i - l^i/2,\ m^i + l^i/2])$ for $l^i \sim \mathcal{U}([0,5])$, $m^i \sim \mathcal{U}([l^i,\ l^i+10])$, and triangular marginals $\mu^i_2$ with lower limit $l^i$, upper limit $u^i \sim \mathcal{U}([m^i,\ m^i+10])$, and mode $m^i$.

If the generated marginals are in convex order²⁹, then we add the discretized values $U^{(N)}(\mu^i_1), U^{(N)}(\mu^i_2)$ to the sample set $(X_i)_{i=1,\dots,S}$ and compute via linear programming $\inf_{Q\in\mathcal{M}(\mu^i_1,\mu^i_2)} \mathbb{E}_Q[\Phi]$ as well as $\sup_{Q\in\mathcal{M}(\mu^i_1,\mu^i_2)} \mathbb{E}_Q[\Phi]$, which we then add as corresponding target values to $(Y_i)_{i=1,\dots,S}$.

2) Architecture of the neural networks: Given a set of samples $(X_i)_{i=1,\dots,S}$ and a set of targets $(Y_i)_{i=1,\dots,S}$, we train a neural network using the back-propagation algorithm with an Adam optimizer implemented in Python using Tensorflow, similar to Section II-C. The loss function is an L2-loss function; the neural networks comprise 3 hidden layers with 512 neurons each and ReLU activation functions. For further details of the code we refer to https://github.com/juliansester/deep model free pricing.
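As an illustration of case 1) above (the other cases are analogous), the following sketch draws one pair of log-normal marginals, which are in convex order by construction since they share the same mean and the second has the larger variance; `u_quantization` and `discrete_mot_bounds` are the illustrative helpers sketched earlier.

```python
import numpy as np
from scipy import stats

def sample_lognormal_pair(rng, N=20):
    mu = rng.uniform(-2.0, 2.0)
    s1 = rng.uniform(0.0, 0.5) + 1e-6        # avoid the degenerate case s1 = 0
    s2 = s1 * rng.uniform(1.0, 2.0)
    ppf1 = stats.lognorm(s=s1, scale=np.exp(mu - s1 ** 2 / 2)).ppf
    ppf2 = stats.lognorm(s=s2, scale=np.exp(mu - s2 ** 2 / 2)).ppf
    return u_quantization(ppf1, N), u_quantization(ppf2, N)

rng = np.random.default_rng(0)
x, y = sample_lognormal_pair(rng)
lo, up = discrete_mot_bounds(x, y, phi=lambda a, b: abs(a - b))
X_i, Y_i = np.concatenate([x, y]), np.array([lo, up])   # one training sample
```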
Example III.5 (MOT without constraints). We report the results for a neural network trained on S = 100000 samples that are generated according to the procedure described above. We split the samples into training set, test set (10% of the samples), and validation set (20% of the training samples) and report a mean absolute error of 0.0082 as well as a mean squared error of 0.0001 on the test set after 1000 epochs of training with early stopping. The accuracy of the trained neural network on the test set is displayed in Figure 6. Additionally, we test the neural network in the following specific situations.
(a) $\mu_1 = \mathcal{LN}(0.5 - 0.5\cdot 0.25^2,\ 0.25^2)$, $\mu_2 = \mathcal{LN}(0.5 - 0.5\cdot 0.5^2,\ 0.5^2)$
(b) $\mu_1 = \mathcal{LN}(-0.05, 0.1)$, $\mu_2 = \mathcal{LN}(-0.1, 0.2)$
(c) $\mu_1 = \mathcal{U}([8,12])$, $\mu_2 = \mathcal{U}([5,15])$
(d) $\mu_1 = \mathcal{U}([5,10])$, $\mu_2 = \mathcal{U}(\{5,10\})$
(e) $\mu_1 = \mathcal{U}([2,4])$, $\frac{d\mu_2}{d\lambda}(x) = \frac{x-1}{3}\,1\!\!1_{[1,2]}(x) + \frac{1}{3}\,1\!\!1_{[2,4]}(x) + \frac{5-x}{3}\,1\!\!1_{[4,5]}(x)$, where $\lambda$ denotes the Lebesgue measure.
The results are displayed in Table II and indicate that the bounds can indeed be approximated very precisely.

| Marginals | Lower bound (LP) | Lower bound (NN) | Upper bound (LP) | Upper bound (NN) | Cumulative error |
|---|---|---|---|---|---|
| (a) | 0.2363 | 0.2573 | 0.4226 | 0.4210 | 0.0226 |
| (b) | 0.0814 | 0.0939 | 0.1870 | 0.1946 | 0.0202 |
| (c) | 1.7491 | 1.7503 | 2.6220 | 2.6082 | 0.0150 |
| (d) | 1.6688 | 1.6659 | 1.6687 | 1.6636 | 0.0080 |
| (e) | 0.3587 | 0.3626 | 0.7215 | 0.7151 | 0.0103 |

TABLE II: The table displays for different marginals the approximated lower bounds and upper bounds that are computed via trained neural networks (NN) and via a linear programming approach (LP).

In the next example we show how the introduced methodology can be applied to constrained martingale optimal transport problems.
Example III.6 (MOT with variance constraints). We train the neural network with the same artificial samples of marginals as in the previous Example III.5. In addition to the discretized marginals $U^{(N)}(\mu^i_1), U^{(N)}(\mu^i_2)$, we consider as an additional feature a prespecified level of the variance of the returns, as in [44]. This means we only consider martingale measures $Q$ which additionally fulfil $\mathrm{Var}_Q(S_{t_2}/S_{t_1}) = \sigma^2_{12}$. In particular, $\sigma_{12}$ is thus an additional feature of the samples $X_i$. It was already indicated in Remark III.4 (c) that the approximation through neural networks, as stated in Theorem III.1, can also be obtained in this case, due to [44, Theorem A.6.] ensuring continuity of the map
$$(\mu_1,\mu_2) \mapsto \sup\left\{ \mathbb{E}_Q[\Phi] \;\middle|\; Q\in\mathcal{M}(\mu_1,\mu_2) \text{ and } \mathrm{Var}_Q(S_{t_2}/S_{t_1}) = \sigma^2_{12} \right\}$$
when $\Phi$ is Lipschitz-continuous and when the marginals have compact support. Then, an adaption of Theorem III.1 is straightforward. The results of a neural network approximation for the payoff function $\Phi(S_{t_1},S_{t_2}) = |S_{t_2}-S_{t_1}|$ for the marginal distributions (c) and (e) from Example III.5 are displayed in Figure 7a. The marginals from (a) and (b) are omitted, as they do not satisfy the conditions from [44, Theorem A.6.], whereas the marginals from (d) are omitted, since the values of lower and upper bound coincide and therefore do not depend on a pre-specified level of the variance. The neural network was trained on 500000 samples for 1000 epochs with early stopping. We assign 10% of the samples to the test set and 20% of the remaining training samples to the validation set (which is relevant for the early stopping rule). The resulting mean absolute error on the test set is 0.0044, the mean squared error is 0.00003. In Figure 7b we show to which degree the predictions deviate from the target values on the test set, implying that the accuracy of the predictions is indeed very high on the test set.

²⁹ This can be easily checked by verifying that $\int_{\mathbb{R}_+} (x-L)^+ d\mu_1(x) \le \int_{\mathbb{R}_+} (x-L)^+ d\mu_2(x)$ for all atoms $L$ as well as $\int_{\mathbb{R}_+} x\, d\mu_1(x) = \int_{\mathbb{R}_+} x\, d\mu_2(x)$, compare e.g. [45].
IV. PROOFS
Proof of Theorem II.2 (a). We prove the continuity of $(K,\pi,S_{t_0},\theta) \mapsto \overline{D}^{B,\overline{B}}_{(K,\pi,S_{t_0})}(\Phi_\theta)$. The continuity of $(K,\pi,S_{t_0},\theta) \mapsto \underline{D}^{B,\overline{B}}_{(K,\pi,S_{t_0})}(\Phi_\theta)$ can be obtained analogously. Let $\varepsilon>0$ and let $(K,\pi,S_{t_0},\theta)\in K_1$. For any $\tilde\delta>0$, by the continuity of $(\Theta,d_p) \to \big(C(\mathbb{R}^{nd}_+,\mathbb{R}),\ d_{\infty,B}\big)$, $\theta\mapsto\Phi_\theta$, we can choose $\delta>0$ sufficiently small such that for all $\tilde\theta\in\Theta$ we have that
$$\|\theta - \tilde\theta\|_p < \delta \tag{IV.1}$$
implies that
$$\|\Phi_\theta - \Phi_{\tilde\theta}\|_{\infty,B} < \tilde\delta. \tag{IV.2}$$
Moreover, we can choose $\delta$ and $\tilde\delta$ small enough to ensure that (IV.3) holds. We pick some $\delta>0$, $\tilde\delta>0$ such that the implication from (IV.1) to (IV.2) is satisfied and such that (IV.3) holds true, and let $(\tilde K,\tilde\pi,\tilde S_{t_0},\tilde\theta)\in K_1$ satisfy (IV.4) for all $i,j,k$. In this case, consider parameters $\tilde a$, ..., which is possible due to (II.7). Then, we obtain by definition of the semi-static strategies defined in (II.1) and by (IV.4) that ... for all $s=(s^1_1,\dots,s^d_1,\dots,s^1_n,\dots,s^d_n)\in[0,B]^{nd}$. Thus it holds pointwise on $[0,B]^{nd}$ that ..., where the last inequality follows due to (IV.2), since $\|\Phi_\theta-\Phi_{\tilde\theta}\|_{\infty,B}<\tilde\delta$. This then yields, by the definition of the cost function in (II.2), by (IV.2), (IV.3), and by (IV.4), that
$$\cdots\ \big(\tilde c^+_{ijk}+\tilde c^-_{ijk}+\tilde p^+_{ijk}+\tilde p^-_{ijk}\big) \le \tilde\delta + \delta(1+\kappa)\overline{B} + \delta\overline{B} < \varepsilon/2. \tag{IV.8}$$
Hence, we obtain that
$$\begin{aligned} &\overline{D}^{B,\overline{B}}_{(\tilde K,\tilde\pi,\tilde S_{t_0})}\big(\Phi_{\tilde\theta}\big) - \overline{D}^{B,\overline{B}}_{(K,\pi,S_{t_0})}(\Phi_\theta) \\ &= \inf_{\substack{(a,c_{ijk},p_{ijk},\Delta^k_i):\ \Psi^{(\tilde K,\tilde S_{t_0})}_{(a,c_{ijk},p_{ijk},\Delta^k_i)} \ge \Phi_{\tilde\theta},\\ \Sigma(c_{ijk},p_{ijk},\Delta^k_0)\le\overline{B}}} C\Big(\Psi^{(\tilde K,\tilde S_{t_0})}_{(a,c_{ijk},p_{ijk},\Delta^k_i)},\ \tilde\pi\Big) \;-\; \inf_{\substack{(a,c_{ijk},p_{ijk},\Delta^k_i):\ \Psi^{(K,S_{t_0})}_{(a,c_{ijk},p_{ijk},\Delta^k_i)} \ge \Phi_\theta,\\ \Sigma(c_{ijk},p_{ijk},\Delta^k_0)\le\overline{B}}} C\Big(\Psi^{(K,S_{t_0})}_{(a,c_{ijk},p_{ijk},\Delta^k_i)},\ \pi\Big) \\ &\le C\Big(\Psi^{(\tilde K,\tilde S_{t_0})}_{(\tilde\delta+\delta(1+\kappa)\overline{B}+\hat a,\ \hat c_{ijk},\ \hat p_{ijk},\ \hat\Delta^k_i)},\ \tilde\pi\Big) - C\Big(\Psi^{(K,S_{t_0})}_{(\hat a,\hat c_{ijk},\hat p_{ijk},\hat\Delta^k_i)},\ \pi\Big) + \varepsilon/2 \;<\; \varepsilon, \end{aligned}$$
where the last two inequalities are consequences of (IV.3), (IV.5), (IV.6), and (IV.8).
If instead the inequality $\overline{D}^{B,\overline{B}}_{(K,\pi,S_{t_0})}(\Phi_\theta) \ge \overline{D}^{B,\overline{B}}_{(\tilde K,\tilde\pi,\tilde S_{t_0})}\big(\Phi_{\tilde\theta}\big)$ holds, then in this case we choose parameters $\hat a, (\hat c_{ijk})_{i,j,k}, (\hat p_{ijk})_{i,j,k}, (\hat\Delta^k_i)_{i,k}$ such that $\Psi^{(\tilde K,\tilde S_{t_0})}_{(\hat a,\hat c_{ijk},\hat p_{ijk},\hat\Delta^k_i)}(s) \ge \Phi_{\tilde\theta}(s)$ for all $s\in[0,B]^{nd}$, $\Sigma\big(\hat c_{ijk},\hat p_{ijk},\hat\Delta^k_i\big)\le\overline{B}$, and such that $C\big(\Psi^{(\tilde K,\tilde S_{t_0})}_{(\hat a,\hat c_{ijk},\hat p_{ijk},\hat\Delta^k_i)},\ \tilde\pi\big) \le \overline{D}^{B,\overline{B}}_{(\tilde K,\tilde\pi,\tilde S_{t_0})}\big(\Phi_{\tilde\theta}\big) + \varepsilon/2$, and then we repeat the above line of argumentation. This shows part (a).
Remark IV.1. For all $\varepsilon>0$ and $(K,\pi,S_{t_0},\theta)\in K_1$, there exists some $\delta>0$ such that if (IV.4) holds for $(\tilde K,\tilde\pi,\tilde S_{t_0},\tilde\theta)\in K_1$, then for all strategies with $\Psi^{(K,S_{t_0})}_{(a,c_{ijk},p_{ijk},\Delta^k_i)} \ge \Phi_\theta$ on $[0,B]^{nd}$ and $\Sigma\big(c_{ijk},p_{ijk},\Delta^k_i\big)\le\overline{B}$, there exists some $\tilde a\in\mathbb{R}$ with
$$|a - \tilde a| < \varepsilon/2 \tag{IV.9}$$
such that $\Psi^{(\tilde K,\tilde S_{t_0})}_{(\tilde a,\,c_{ijk},\,p_{ijk},\,\Delta^k_i)} \ge \Phi_{\tilde\theta}$ on $[0,B]^{nd}$. Indeed, let $\varepsilon>0$, $(K,\pi,S_{t_0},\theta)\in K_1$, and choose $\delta,\tilde\delta>0$ analogously as in the proof of Theorem II.2 (a), such that (IV.1), (IV.2), (IV.3), and (IV.4) hold. Then according to (IV.7) we see that the strategy $\Psi^{(\tilde K,\tilde S_{t_0})}_{(\tilde\delta+\delta(1+\kappa)\overline{B}+a,\ c_{ijk},\ p_{ijk},\ \Delta^k_i)}$ fulfils (IV.9).

Proof of Theorem II.2 (b). According to Theorem II.2 (a), the map
$$(K,\pi,S_{t_0},\theta) \mapsto \left( \underline{D}^{B,\overline{B}}_{(K,\pi,S_{t_0})}(\Phi_\theta),\ \overline{D}^{B,\overline{B}}_{(K,\pi,S_{t_0})}(\Phi_\theta) \right)$$
is an element of $C(K_1,\mathbb{R}^2)$. Hence, we find according to Proposition II.1 a neural network $\mathcal{N}_1\in\mathcal{N}_{N_{input},2}$ such that (II.8) holds on $K_1$.
Proof of Theorem II.2 (c). Consider a sequence $\big(K^{(n)},\pi^{(n)},S_{t_0}^{(n)},\theta^{(n)}\big)_{n\in\mathbb{N}} \subset K_2$ with $\lim_{n\to\infty} d_{N_{input}}\big((K^{(n)},\pi^{(n)},S_{t_0}^{(n)},\theta^{(n)}),\ (K,\pi,S_{t_0},\theta)\big) = 0$ for some $(K,\pi,S_{t_0},\theta)\in K_2$. Since by assumption $\overline{B}<\infty$ and by (II.9), the sequence
$$x^{(n)} := \left(a^*, (c^*_{1jk})_{j,k}, (p^*_{1jk})_{j,k}, (\Delta^{k*}_0)_k\right)\big(K^{(n)},\pi^{(n)},S_{t_0}^{(n)},\theta^{(n)}\big),\qquad n\in\mathbb{N},$$
is bounded. Thus, there exists at least one accumulation point $\bar x\in\mathbb{R}^{1+4M+d}$. Hence, we can find a subsequence $(x^{(n_k)})_{k\in\mathbb{N}}$ with $\lim_{k\to\infty} d_{1+4M+d}\big(x^{(n_k)}, \bar x\big) = 0$. Then, we obtain by the continuity of $\overline{D}^{B,\overline{B}}$, shown in Theorem II.2 (a), and by the continuity of the cost function $C$, defined in (II.2), w.r.t. all its arguments, that
$$\overline{D}^{B,\overline{B}}_{(K,\pi,S_{t_0})}(\Phi_\theta) = \lim_{k\to\infty} \overline{D}^{B,\overline{B}}_{\big(K^{(n_k)},\pi^{(n_k)},S_{t_0}^{(n_k)}\big)}\big(\Phi_{\theta^{(n_k)}}\big) = \lim_{k\to\infty} C\Big(\Psi^{\big(K^{(n_k)},S_{t_0}^{(n_k)}\big)}_{x^{(n_k)}},\ \pi^{(n_k)}\Big) = C\Big(\Psi^{(K,S_{t_0})}_{\bar x},\ \pi\Big).$$
Using the continuity of $\Psi$ w.r.t. its parameters and the continuity of $\theta\mapsto\Phi_\theta$ as in (II.6), we also obtain that $\Psi^{(K,S_{t_0})}_{\bar x} \ge \Phi_\theta$ on $[0,B]^{nd}$. Moreover, $\bar x := \big(\bar a, (\bar c_{1jk})_{j,k}, (\bar p_{1jk})_{j,k}, (\bar\Delta^k_0)_k\big) \in \mathbb{R}^{1+4M+d}$ satisfies $\Sigma\big(\bar c_{1jk}, \bar p_{1jk}, \bar\Delta^k_0\big) \le \overline{B}$. Thus, $\bar x$ is indeed a minimal super-replication strategy of $\Phi_\theta$ for parameters $(K,\pi,S_{t_0},\theta)$. So we have shown that any accumulation point $\bar x$ is a minimal super-replication strategy of $\Phi_\theta$ for parameters $(K,\pi,S_{t_0},\theta)$. Since we assumed that the minimizer is unique, the accumulation point $\bar x$ is unique and is necessarily the limit of the sequence $(x^{(n)})_{n\in\mathbb{N}}$. Therefore, we have shown that
$$\lim_{n\to\infty} \left(a^*, (c^*_{1jk})_{j,k}, (p^*_{1jk})_{j,k}, (\Delta^{k*}_0)_k\right)\big(K^{(n)},\pi^{(n)},S_{t_0}^{(n)},\theta^{(n)}\big) = \left(a^*, (c^*_{1jk})_{j,k}, (p^*_{1jk})_{j,k}, (\Delta^{k*}_0)_k\right)(K,\pi,S_{t_0},\theta).$$
Proof of Theorem II.2 (d). According to Theorem II.2 (c), the map $(K,\pi,S_{t_0},\theta) \mapsto \big(a^*, (c^*_{1jk})_{j,k}, (p^*_{1jk})_{j,k}, (\Delta^{k*}_0)_k\big)(K,\pi,S_{t_0},\theta)$ is an element of $C(K_2, \mathbb{R}^{1+4M+d})$. Thus, by Proposition II.1, there exists some $\mathcal{N}_2\in\mathcal{N}_{N_{input},1+4M+d}$ such that (II.10) holds on $K_2$.
Proof of Theorem III.1. Let $\varepsilon>0$, $N\in\mathbb{N}$, and pick some compact set $K\subset\mathbb{R}^N_+$. We observe that the map
$$D^{(N)}: \big(\mathbb{R}^N_+,\ d_N\big) \to \big(\mathcal{P}_1(\mathbb{R}_+),\ \mathcal{W}\big),\qquad (x_1,\dots,x_N) \mapsto \frac{1}{N}\sum_{i=1}^N \delta_{x_i} \tag{IV.10}$$
is continuous: indeed, if $x^m \to x = (x_1,\dots,x_N)\in\mathbb{R}^N_+$ for $m\to\infty$, then we can consider for all $m\in\mathbb{N}$ the coupling $\pi^m$ which pairs the $i$-th atoms of $D^{(N)}(x^m)$ and $D^{(N)}(x)$, showing $\mathcal{W}\big(D^{(N)}(x^m), D^{(N)}(x)\big) \to 0$. Since by assumption $\Phi$ is upper semi-continuous with $\sup_{x_1,x_2\in\mathbb{R}_+} \frac{|\Phi(x_1,x_2)|}{1+x_1+x_2}<\infty$, we can apply [56, Theorem 2.9.], which ensures that for any $(\mu_1,\mu_2)\in\mathcal{P}_1(\mathbb{R}_+)\times\mathcal{P}_1(\mathbb{R}_+)$ with $\mu_1\preceq\mu_2$ and $(\mu^m_1,\mu^m_2)\in\mathcal{P}_1(\mathbb{R}_+)\times\mathcal{P}_1(\mathbb{R}_+)$ with $\mu^m_1\preceq\mu^m_2$, $m\in\mathbb{N}$, satisfying $\mathcal{W}(\mu^m_1,\mu_1)\to 0$, $\mathcal{W}(\mu^m_2,\mu_2)\to 0$ for $m\to\infty$, the corresponding suprema converge, i.e.,
$$\sup_{Q\in\mathcal{M}(\mu^m_1,\mu^m_2)} \mathbb{E}_Q[\Phi] \to \sup_{Q\in\mathcal{M}(\mu_1,\mu_2)} \mathbb{E}_Q[\Phi] \quad\text{for } m\to\infty. \tag{IV.11}$$
Moreover, for any set of measures $\mathcal{M}\subset\mathcal{P}(\mathbb{R}^2_+)$ we have $\inf_{Q\in\mathcal{M}} \mathbb{E}_Q[\Phi] = -\sup_{Q\in\mathcal{M}} \mathbb{E}_Q[-\Phi]$, and since $-\Phi$ is also upper semi-continuous with $\sup_{x_1,x_2\in\mathbb{R}_+}\frac{|-\Phi(x_1,x_2)|}{1+x_1+x_2}<\infty$, we can apply the same arguments to see that the corresponding infima converge as well. $\tag{IV.12}$
Consider the closed set³⁰
$$C^{(N)} := \left\{ (x,y)\in\mathbb{R}^N_+\times\mathbb{R}^N_+ : D^{(N)}(x) \preceq D^{(N)}(y) \right\}.$$
Then, we obtain by (IV.10), (IV.11), and (IV.12) the continuity of
$$(x,y) \mapsto \left( \inf_{Q\in\mathcal{M}\big(D^{(N)}(x),\, D^{(N)}(y)\big)} \mathbb{E}_Q[\Phi],\ \sup_{Q\in\mathcal{M}\big(D^{(N)}(x),\, D^{(N)}(y)\big)} \mathbb{E}_Q[\Phi] \right) \text{ on } C^{(N)}. \tag{IV.13}$$
Hence, the universal approximation theorem from Proposition II.1 guarantees the existence of a neural network $\mathcal{N}\in\mathcal{N}_{2N,2}$ such that
$$\sup_{x,y\in K\cap C^{(N)}} \left\| \mathcal{N}(x,y) - \left( \inf_{Q\in\mathcal{M}\big(D^{(N)}(x),\, D^{(N)}(y)\big)} \mathbb{E}_Q[\Phi],\ \sup_{Q\in\mathcal{M}\big(D^{(N)}(x),\, D^{(N)}(y)\big)} \mathbb{E}_Q[\Phi] \right) \right\|_2 < \varepsilon/2. \tag{IV.14}$$
Now let $(\mu_1,\mu_2)\in\mathcal{P}_1(\mathbb{R}_+)\times\mathcal{P}_1(\mathbb{R}_+)$ with $\mu_1\preceq\mu_2$. Then [6, Theorem 2.4.11.] ensures that
$$U^{(N)}(\mu_1) \preceq U^{(N)}(\mu_2). \tag{IV.15}$$
This and the continuity of the two-marginal MOT problem with respect to its marginals, as stated in (IV.11) and (IV.12), imply that there exists some $\delta>0$ such that if $\mathcal{W}(U^{(N)}(\mu_1),\mu_1)<\delta$ and $\mathcal{W}(U^{(N)}(\mu_2),\mu_2)<\delta$, then
$$\left\| \left( \inf_{Q\in\mathcal{M}(\mu_1,\mu_2)} \mathbb{E}_Q[\Phi],\ \sup_{Q\in\mathcal{M}(\mu_1,\mu_2)} \mathbb{E}_Q[\Phi] \right) - \left( \inf_{Q\in\mathcal{M}\big(U^{(N)}(\mu_1),\,U^{(N)}(\mu_2)\big)} \mathbb{E}_Q[\Phi],\ \sup_{Q\in\mathcal{M}\big(U^{(N)}(\mu_1),\,U^{(N)}(\mu_2)\big)} \mathbb{E}_Q[\Phi] \right) \right\|_2 < \varepsilon/2. \tag{IV.16}$$
Moreover, by definition of the map $D^{(N)}$ it holds that $D^{(N)}\big(x^{(N)}(\mu_i)\big) = U^{(N)}(\mu_i)$ for $i=1,2$. In particular, we have by (IV.15) that $\big(x^{(N)}(\mu_1), x^{(N)}(\mu_2)\big) \in C^{(N)}$. Thus, the triangle inequality and (IV.14) combined with (IV.16) imply that if $\mathcal{W}(U^{(N)}(\mu_1),\mu_1)<\delta$, $\mathcal{W}(U^{(N)}(\mu_2),\mu_2)<\delta$, and $x^{(N)}(\mu_1), x^{(N)}(\mu_2)\in K$, then
$$\begin{aligned} &\left\| \mathcal{N}\big(x^{(N)}(\mu_1),x^{(N)}(\mu_2)\big) - \left( \inf_{Q\in\mathcal{M}(\mu_1,\mu_2)}\mathbb{E}_Q[\Phi],\ \sup_{Q\in\mathcal{M}(\mu_1,\mu_2)}\mathbb{E}_Q[\Phi] \right) \right\|_2 \\ &\le \left\| \mathcal{N}\big(x^{(N)}(\mu_1),x^{(N)}(\mu_2)\big) - \left( \inf_{Q\in\mathcal{M}\big(U^{(N)}(\mu_1),\,U^{(N)}(\mu_2)\big)}\mathbb{E}_Q[\Phi],\ \sup_{Q\in\mathcal{M}\big(U^{(N)}(\mu_1),\,U^{(N)}(\mu_2)\big)}\mathbb{E}_Q[\Phi] \right) \right\|_2 \\ &\quad + \left\| \left( \inf_{Q\in\mathcal{M}\big(U^{(N)}(\mu_1),\,U^{(N)}(\mu_2)\big)}\mathbb{E}_Q[\Phi],\ \sup_{Q\in\mathcal{M}\big(U^{(N)}(\mu_1),\,U^{(N)}(\mu_2)\big)}\mathbb{E}_Q[\Phi] \right) - \left( \inf_{Q\in\mathcal{M}(\mu_1,\mu_2)}\mathbb{E}_Q[\Phi],\ \sup_{Q\in\mathcal{M}(\mu_1,\mu_2)}\mathbb{E}_Q[\Phi] \right) \right\|_2 \\ &< \varepsilon, \end{aligned}$$
which shows the assertion.
ACKNOWLEDGMENT
Financial support by the Nanyang Assistant Professorship Grant (NAP Grant) Machine Learning based Algorithms in Finance and Insurance is gratefully acknowledged.

³⁰ We refer to [6, Definition 2.45] and [6, Lemma 2.49] for a characterization of $x,y\in\mathbb{R}^N_+$ satisfying $D^{(N)}(x) \preceq D^{(N)}(y)$.
Fig. 2. (a): This figure illustrates the accuracy of the predictions on the test set. The left panel shows a plot of all target values (x-values) and their predictions (y-values); the right panel depicts a histogram of the prediction error, i.e., the error between target values and predicted values. (b): This figure shows the accuracy of the predictions of call option prices on the test set when considering the relative error, i.e., when dividing the predicted prices $Y_i$ by the corresponding spot prices.
Fig. 3. This figure compares the accuracy of predictions of price bounds derived from the trained strategies using (II.2) (left) with predictions from neural networks that are trained to predict the prices directly (right). We depict the relative error of the predictions by dividing the prediction error by the weighted sum of spot prices, where the weights are those in the payoff of the basket option.
Fig. 4. This figure compares the accuracy of predictions of price bounds of call options using the neural network trained solely on basket options (left) with predictions from a neural network that is additionally trained on call options (right). We depict the relative error of the predictions by dividing the prediction error by the spot prices.
Fig. 5. This figure shows the relative error when predicting the lower price bound of a basket option that depends on 30 underlying securities. We divide the target prices (and predicted prices) by the weighted spot prices, where the weights are the ones in the payoff function under consideration.
Fig. 6. This figure illustrates the accuracy of the predictions of the trained neural network in the MOT setting, evaluated on the test set.
Fig. 7. (a): This figure shows the accuracy of a neural network that was trained with 500000 samples. The accuracy is displayed for the test marginals from (c) and (e) of Example III.5. The points indicate the upper and lower bounds for the prices under the influence of variance information obtained from the trained neural network (NN) in comparison with the precise bounds, computed with a linear programming (LP) approach, indicated by the solid lines. (b): This figure illustrates the accuracy of the predictions of the trained neural network in the MOT setting with variance constraints, evaluated on the test set.
Fig. 1. (a): Illustration of the presented approach, described in detail in Algorithm 1, to train a neural network (NN) to learn the model-independent price bounds of a derivative $\Phi_\theta$ from a family $\{\Phi_\theta,\ \theta\in\Theta\}$ in dependence of given market prices. (b): Illustration of Algorithm 2, which is applied to learn price bounds of MOT problems from marginals. The price bounds contained in $Y_i$ correspond to the solutions of MOT problems, i.e., to $\inf_{Q\in\mathcal{M}(\mu_1,\mu_2)} \mathbb{E}_Q[\Phi(S_{t_1},S_{t_2})]$ and $\sup_{Q\in\mathcal{M}(\mu_1,\mu_2)} \mathbb{E}_Q[\Phi(S_{t_1},S_{t_2})]$, where $\mathcal{M}(\mu_1,\mu_2)$ denotes the set of martingale measures with fixed marginal distributions $\mu_1$ and $\mu_2$, compare also equation (III.2).
In this step it would be possible to use any algorithm that can compute these bounds reliably, see Remark II.4.
end
end
Train with back-propagation ([50]) a neural network $\mathcal{N} \in \mathfrak{N}_{d_{in}, 2}$ with a sufficient number of neurons and hidden layers such that $\mathcal{N}(X_i) \approx Y_i$;
Output: Trained neural network $\mathcal{N} \in \mathfrak{N}_{N_{input}, 2}$ with $N_{input}$ as in Theorem II.2;
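As a concrete illustration of this offline training step, the following is a minimal, hypothetical PyTorch sketch, not the authors' implementation: it assumes a precomputed training set of market-parameter vectors $X_i$ and price-bound pairs $Y_i$ produced by a precise algorithm as above; the layer sizes, epoch count, and synthetic data are placeholders.

```python
import torch
import torch.nn as nn

# Sketch of Algorithm 1's offline phase: a feed-forward network with two
# outputs learns the map from market parameters X_i to the pair of lower
# and upper price bounds Y_i computed by a precise (possibly slow) solver.
class BoundNet(nn.Module):
    def __init__(self, n_input, n_hidden=128, n_layers=3):
        super().__init__()
        layers, d = [], n_input
        for _ in range(n_layers):
            layers += [nn.Linear(d, n_hidden), nn.ReLU()]
            d = n_hidden
        layers.append(nn.Linear(d, 2))  # two outputs: (lower bound, upper bound)
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

def train_bound_net(X, Y, epochs=200, lr=1e-3):
    """X: (n, d_in) market parameters (incl. theta); Y: (n, 2) price bounds."""
    model = BoundNet(X.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)  # Adam optimizer, cf. [41]
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), Y)
        loss.backward()  # back-propagation, cf. [50]
        opt.step()
    return model

# Synthetic placeholder data standing in for the precomputed training set:
X = torch.rand(1000, 10)                                  # market parameters and theta
Y = torch.stack([X.sum(1) * 0.1, X.sum(1) * 0.2], dim=1)  # dummy bound pairs
model = train_bound_net(X, Y)
```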
[Table caption, partially recovered:] … financial derivative, where the price bounds of payoffs of the same type can be learned from a single neural network. Price bounds computed via neural networks (NN) and via a linear programming approach (LP):

Case  Lower bound (LP)  Lower bound (NN)  Upper bound (LP)  Upper bound (NN)  Cumulative Error
(a)   0.2363            0.2573            0.4226            0.4210            0.0226
(b)   0.0814            0.0939            0.1870            0.1946            0.0202
(c)   1.7491            1.7503            2.6220            2.6082            0.0150
(d)   1.6688            1.6659            1.6687            1.6636            0.0080
(e)   0.3587            0.3626            0.7215            0.7151            0.0103
One often refers to martingale models as risk-neutral models, in which an investor is indifferent between investing in the underlying security and keeping her money in a bank account with constant interest rate (where typically one assumes the interest rate to be zero for simplicity).
Here, by slight abuse of notation, we denote by $c_{ljk} \in \mathbb{R}$ the net position invested in the option, i.e., $c_{ljk}$ is also allowed to attain negative values.
17 Note that here the relative error refers to the error after division by the weighted sum of the spot prices, where the weights are determined by the weights in the payoff of the basket option.
One certainly could enlarge the training set to improve the approximation results. However, the goal of this example is to showcase that, on a fixed training set, predicting price bounds directly is more efficient than predicting price bounds via (II.2) after having approximated the optimal strategies.
With this algorithm it is even possible to compute price bounds of basket options that depend on 60 securities.
20 Note that here, as in Example II.6, the relative error refers to the error after division by the weighted sum of the spot prices, where the weights are determined by the weights in the payoff of the basket option.
21 We used for the computations an 11th Gen Intel(R) Core(TM) i7-1165G7, 2.80 GHz processor with 40 GB RAM.
22 The Matlab code for the execution of the LSIP approach is provided under https://github.com/qikunxiang/ModelFreePriceBounds.
We pick some $\delta > 0$, $\bar\delta > 0$ such that the implication from (IV.1) to (IV.2) is satisfied, and such that (IV.3) holds true, and let $(\bar K, \bar\pi, \bar S_{t_0}, \bar\theta) \in \mathcal{K}_1$ satisfy for all $i, j, k$ that […]. In this case, consider parameters $a, […]$, which is possible due to (II.7). Then, we obtain by the definition of the semi-static strategies defined in (II.1) and by (IV.4) that […] for all $s = (s_1^1, \dots, s_1^d, \dots, s_n^1, \dots, s_n^d) \in [0, B]^{nd}$. Thus it holds pointwise on $[0, B]^{nd}$ that […], where the last inequality follows due to (IV.2), since $\|\Phi_\theta - \Phi_{\bar\theta}\|_{\infty, B} < \delta$. This then yields, by the definition of the cost function in (II.2), by (IV.2), (IV.3), and by (IV.4), that […]
Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. Tensorflow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pages 265-283, 2016.
Beatrice Acciaio, Mathias Beiglböck, Friedrich Penkner, and Walter Schachermayer. A model-free version of the fundamental theorem of asset pricing and the super-replication theorem. Mathematical Finance, 26(2):233-251, 2016.
Jonathan Ansari, Eva Lütkebohmert, Ariel Neufeld, and Julian Sester. Improved robust price bounds for multi-asset derivatives under market-implied dependence information. arXiv preprint arXiv:2204.01071, 2022.
Julio Backhoff-Veraguas and Gudmund Pammer. Stability of martingale optimal transport and weak optimal transport. The Annals of Applied Probability, 32(1):721-752, 2022.
Michel Baes, Calypso Herrera, Ariel Neufeld, and Pierre Ruyssen. Low-rank plus sparse decomposition of covariance matrices using neural network parametrization. IEEE Transactions on Neural Networks and Learning Systems, 2021.
David Baker. Martingales with specified marginals. PhD thesis, Université Pierre et Marie Curie-Paris VI, 2012.
Mathias Beiglböck, Pierre Henry-Labordère, and Friedrich Penkner. Model-independent bounds for option prices: a mass transport approach. Finance and Stochastics, 17(3):477-501, 2013.
Mathias Beiglböck and Nicolas Juillet. On a problem of optimal transport under marginal martingale constraints. The Annals of Probability, 44(1):42-106, 2016.
Yoshua Bengio. Learning Deep Architectures for AI. Now Publishers Inc, 2009.
Bruno Bouchard and Marcel Nutz. Arbitrage and duality in nondominated discrete-time models. The Annals of Applied Probability, 25(2):823-859, 2015.
Douglas T Breeden and Robert H Litzenberger. Prices of state-contingent claims implicit in option prices. Journal of Business, pages 621-651, 1978.
Hans Buehler, Lukas Gonon, Josef Teichmann, and Ben Wood. Deep hedging. Quantitative Finance, 19(8):1271-1291, 2019.
Matteo Burzoni, Marco Frittelli, and Marco Maggis. Model-free superhedging duality. The Annals of Applied Probability, 27(3):1452-1477, 2017.
Patrick Cheridito, Matti Kiiski, David J Prömel, and H Mete Soner. Martingale optimal transport duality. Mathematische Annalen, pages 1-28, 2020.
Patrick Cheridito, Michael Kupper, and Ludovic Tangpi. Duality formulas for robust pricing and hedging in discrete time. SIAM Journal on Financial Mathematics, 8(1):738-765, 2017.
Alexander MG Cox and Jan Obłój. Robust pricing and hedging of double no-touch options. Finance and Stochastics, 15(3):573-605, 2011.
Mark Davis, Jan Obłój, and Vimal Raval. Arbitrage bounds for prices of weighted variance swaps. Mathematical Finance, 24(4):821-854, 2014.
Luca De Gennara Aquino and Carole Bernard. Bounds on multi-asset derivatives via neural networks. International Journal of Theoretical and Applied Finance, 23(08):2050050, 2020.
Freddy Delbaen and Walter Schachermayer. A general version of the fundamental theorem of asset pricing. Mathematische Annalen, 300(1):463-520, 1994.
Yan Dolinsky and H Mete Soner. Martingale optimal transport and robust hedging in continuous time. Probability Theory and Related Fields, 160(1-2):391-427, 2014.
Yan Dolinsky and H Mete Soner. Robust hedging with proportional transaction costs. Finance and Stochastics, 18(2):327-347, 2014.
Bruno Dupire. Pricing with a smile. Risk, 7(1):18-20, 1994.
Stephan Eckstein, Gaoyue Guo, Tongseok Lim, and Jan Obłój. Robust pricing and hedging of options on multiple assets and its numerics. SIAM Journal on Financial Mathematics, 12(1):158-188, 2021.
Stephan Eckstein and Michael Kupper. Computation of optimal transport and related hedging problems via penalization and neural networks. Applied Mathematics & Optimization, 83:639-667, 2019.
Stephan Eckstein and Michael Kupper. Martingale transport with homogeneous stock movements. Quantitative Finance, 21(2):271-280, 2021.
Stephan Eckstein, Michael Kupper, and Mathias Pohl. Robust risk aggregation with neural networks. Mathematical Finance, 30(4):1229-1272, 2020.
Alfred Galichon, Pierre Henry-Labordère, and Nizar Touzi. A stochastic control approach to no-arbitrage bounds given marginals, with an application to lookback options. The Annals of Applied Probability, 24(1):312-336, 2014.
Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning, volume 1. MIT Press, Cambridge, 2016.
Gaoyue Guo and Jan Obłój. Computational methods for martingale optimal transport problems. The Annals of Applied Probability, 29(6):3311-3347, 2019.
Mohamad H Hassoun. Fundamentals of Artificial Neural Networks. MIT Press, 1995.
Pierre Henry-Labordère. Automated option pricing: Numerical methods. International Journal of Theoretical and Applied Finance, 16(08):1350042, 2013.
Pierre Henry-Labordère. (Martingale) optimal transport and anomaly detection with neural networks: A primal-dual algorithm. Available at SSRN 3370910, 2019.
Steven L Heston. A closed-form solution for options with stochastic volatility with applications to bond and currency options. The Review of Financial Studies, 6(2):327-343, 1993.
Mark HA Davis and David Hobson. The range of traded option prices. Mathematical Finance, 17(1):1-14, 2007.
David Hobson. The Skorokhod embedding problem and model-independent bounds for option prices. In Paris-Princeton Lectures on Mathematical Finance 2010, pages 267-318. Springer, 2011.
David Hobson and Anthony Neuberger. Robust bounds for forward start options. Mathematical Finance: An International Journal of Mathematics, Statistics and Financial Economics, 22(1):31-56, 2012.
Kurt Hornik. Approximation capabilities of multilayer feedforward networks. Neural Networks, 4(2):251-257, 1991.
Zhaoxu Hou and Jan Obłój. Robust pricing-hedging dualities in continuous time. Finance and Stochastics, 22(3):511-567, 2018.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pages 448-456. PMLR, 2015.
Patrick Kidger and Terry Lyons. Universal approximation with deep narrow networks. In Conference on Learning Theory, pages 2306-2327. PMLR, 2020.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436-444, 2015.
Eva Lütkebohmert and Julian Sester. Tightening robust price bounds for exotic derivatives. Quantitative Finance, 19(11):1797-1815, 2019.
Chunsheng Ma. Convex orders for linear combinations of random variables. Journal of Statistical Planning and Inference, 84(1-2):11-25, 2000.
Ariel Neufeld and Marcel Nutz. Superreplication under volatility uncertainty for measurable claims. Electronic Journal of Probability, 18, 2013.
Ariel Neufeld, Antonis Papapantoleon, and Qikun Xiang. Model-free bounds for multi-asset options using option-implied information and their exact computation. Management Science, 2022.
Ariel Neufeld and Julian Sester. On the stability of the martingale optimal transport problem: A set-valued map approach. Statistics & Probability Letters, 176:109-131, 2021.
Svetlozar T Rachev and Ludger Rüschendorf. Mass Transportation Problems: Volume I: Theory, volume 1. Springer Science & Business Media, 1998.
David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning representations by back-propagating errors. Nature, 323(6088):533-536, 1986.
Ludger Rüschendorf. Monge-Kantorovich transportation problem and optimal couplings. Jahresbericht der DMV, 3:113-137, 2007.
Jürgen Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85-117, 2015.
Wim Schoutens, Erwin Simons, and Jurgen Tistaert. A perfect calibration! Now what? The Best of Wilmott, page 281, 2003.
Julian Sester. Robust bounds for derivative prices in Markovian models. International Journal of Theoretical and Applied Finance, 23(3):2050015, 2020.
Cédric Villani. Optimal Transport: Old and New, volume 338. Springer Science & Business Media, 2008.
Johannes Wiesel. Continuity of the martingale optimal transport problem on the real line. arXiv preprint arXiv:1905.04574, 2019.
| [
"https://github.com/juliansester/deep",
"https://github.com/juliansester/deep",
"https://github.com/qikunxiang/ModelFreePriceBounds."
]
|
[
"Learning to Dynamically Select Cost Optimal Schedulers in Cloud Computing Environments",
"Learning to Dynamically Select Cost Optimal Schedulers in Cloud Computing Environments"
]
| [
"Shreshth Tuli \nImperial College London\nUK\n",
"Giuliano Casale \nImperial College London\nUK\n",
"Nicholas R Jennings \nImperial College London\nUK\n\nLoughborough University\nUK\n"
]
| [
"Imperial College London\nUK",
"Imperial College London\nUK",
"Imperial College London\nUK",
"Loughborough University\nUK"
]
| []
| The operational cost of a cloud computing platform is one of the most significant Quality of Service (QoS) criteria for schedulers, crucial to keep up with the growing computational demands. Several data-driven deep neural network (DNN)-based schedulers have been proposed in recent years that outperform alternative approaches by providing scalable and effective resource management for dynamic workloads. However, state-of-the-art schedulers rely on advanced DNNs with high computational requirements, implying high scheduling costs. In non-stationary contexts, the most sophisticated schedulers may not always be required, and it may be sufficient to rely on low-cost schedulers to temporarily save operational costs. In this work, we propose MetaNet, a surrogate model that predicts the operational costs and scheduling overheads of a large number of DNNbased schedulers and chooses one on-the-fly to jointly optimize job scheduling and execution costs. This facilitates improvements in execution costs, energy usage and service level agreement violations of up to 11%, 43% and 13% compared to the state-of-the-art methods. | 10.1145/3595244.3595255 | [
"https://arxiv.org/pdf/2205.10640v1.pdf"
]
| 248,986,713 | 2205.10640 | b7b9fadcf01dd1e56d1963151154a8011a9d3088 |
Learning to Dynamically Select Cost Optimal Schedulers in Cloud Computing Environments
Shreshth Tuli
Imperial College London
UK
Giuliano Casale
Imperial College London
UK
Nicholas R Jennings
Imperial College London
UK
Loughborough University
UK
Learning to Dynamically Select Cost Optimal Schedulers in Cloud Computing Environments
The operational cost of a cloud computing platform is one of the most significant Quality of Service (QoS) criteria for schedulers, crucial to keep up with the growing computational demands. Several data-driven deep neural network (DNN)-based schedulers have been proposed in recent years that outperform alternative approaches by providing scalable and effective resource management for dynamic workloads. However, state-of-the-art schedulers rely on advanced DNNs with high computational requirements, implying high scheduling costs. In non-stationary contexts, the most sophisticated schedulers may not always be required, and it may be sufficient to rely on low-cost schedulers to temporarily save operational costs. In this work, we propose MetaNet, a surrogate model that predicts the operational costs and scheduling overheads of a large number of DNNbased schedulers and chooses one on-the-fly to jointly optimize job scheduling and execution costs. This facilitates improvements in execution costs, energy usage and service level agreement violations of up to 11%, 43% and 13% compared to the state-of-the-art methods.
INTRODUCTION
The onset of the Artificial Intelligence (AI) and Deep Learning (DL) era has led to a recent shift in computation from hand-encoded algorithms to data-driven solutions for resource management in cloud systems [2]. This has made it possible to efficiently harness the data processing capacities of multiple devices and provide services at scale with high Quality of Service (QoS). However, the increasing computational demands of modern Internet of Things (IoT) applications make it crucial to curtail the operational costs of cloud machines. This calls for efficient resource management schemes, such as task scheduling policies, to execute workloads on cloud systems within tight cost budgets. AI and DL offer promising solutions that rely on accurate surrogate models that are inexpensive to evaluate at run-time.
Background and Motivation. In recent years, state-of-the-art resource management solutions, which particularly focus on optimal placement of tasks on cloud virtual machines (VMs), have increasingly explored data-driven DL methods [7,9,10,11,12]. Such methods typically rely on a trace of resource utilization characteristics and QoS metrics of tasks and cloud hosts, and are referred to as trace-driven schedulers. They utilize such traces to train a deep neural network (DNN) to estimate a set of QoS parameters and run optimization strategies to find the best scheduling decision for each incoming task. However, most prior works run inference using such trace-driven DNN models on a broker node to generate scheduling decisions, while the incoming tasks are executed on worker nodes [11]. In such cases, having a dedicated broker is often cost inefficient due to the idle times between decisions in discrete-time control settings, wherein the task placement decisions are taken at fixed scheduling intervals [11]. To tackle this, we resort to paradigms such as Function as a Service (FaaS) that allow execution of DL schedulers as serverless functions, only incurring costs for the inference run time of each DNN model.
Contributions. In this work, to leverage the recent advances brought by DL in task scheduling and build a policy-agnostic solution, we choose a set of state-of-the-art schedulers. This work aims to solve the meta-problem of on-the-fly selection of scheduling policies for a cloud computing environment wherein the incoming tasks are executed on worker nodes and scheduling decisions are run as serverless functions. To solve this meta-problem, we develop the proposed solution that we call MetaNet. It uses a DNN as a surrogate model to predict the task execution costs and scheduling time for each policy. We then select the most cost-efficient policy at each scheduling interval using the online estimates generated by this surrogate. As we dynamically update the policy to trade off between simple and sophisticated DL schedulers, we reduce overall operational costs, energy consumption and Service Level Agreement (SLA) deadline violation rates by up to 11%, 43% and 13% respectively compared to state-of-the-art schedulers.
BACKGROUND AND RELATED WORK
Recent work in scheduling for cloud computing environments has demonstrated that AI-based solutions are not only faster, but can also scale efficiently compared to traditional heuristic and optimization techniques [9,10,11,12].
Evolutionary Optimization. This class of methods forecasts QoS using non-DL solutions such as ARIMA [8] or DL-based models such as Long-Short-Term-Memory (LSTM) neural networks [6]. It then applies evolutionary search strategies, such as Ant Colony Optimization (ACO) [1], to converge to a locally optimal scheduling decision.
Surrogate Optimization. This class of methods uses differentiable function approximators, particularly neural networks, to act as surrogates of the QoS for a future state of the system. For instance, GRAF [7] uses a graph neural network (GNN) as a surrogate model to predict service latencies and operational costs for a given scheduling decision and uses gradient-based optimization to minimize the service latencies or execution costs. To do this, it uses the concept of neural network inversion [13], wherein the method evaluates gradients of the objective function with respect to inputs and runs optimization in the input space. Other methods, such as Decision-NN [12], GOBI [11] and its second-order generalization GOSH [10], combine the prediction and optimization steps by modifying the loss function to train and optimize in tandem.
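To make the surrogate-optimization idea concrete, the following hypothetical PyTorch sketch illustrates neural-network inversion in the spirit of [13]: a trained QoS surrogate is held fixed, and gradients are taken with respect to its input (a relaxed scheduling decision) to minimize the predicted cost. Names, shapes, and the final discretization are illustrative; real systems such as GOBI add further machinery around this core loop.

```python
import torch

def optimize_decision(surrogate, state, decision_init, steps=50, lr=0.1):
    """Minimize the surrogate-predicted cost over a (relaxed) scheduling decision.

    surrogate:     trained network mapping (state, decision) -> scalar cost.
    state:         fixed tensor of workload/host utilizations.
    decision_init: initial continuous task-to-host assignment scores.
    """
    decision = decision_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([decision], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Softmax keeps each task's assignment scores on the simplex.
        cost = surrogate(state, torch.softmax(decision, dim=-1))
        cost.backward()   # gradients w.r.t. the *input*, not the weights
        opt.step()
    # Discretize: each task goes to its highest-scoring host.
    return decision.argmax(dim=-1)
```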
PROPOSED METHOD
System Model. In this work, we target a standard heterogeneous cloud computing environment where all nodes are in the same Wide Area Network (see Figure 1 for an overview). Tasks are container instances that need to be processed, generated from an IoT layer and transferred to the compute nodes via gateway devices. We do not have any cloud broker; instead, we run the scheduling policies as FaaS functions on a serverless platform such as AWS Lambda or Azure Serverless. As is common in prior work [1,9,12], we consider a discrete-time control problem, i.e., we divide the timeline into fixed-size execution intervals (of $\Delta$ seconds). We denote the overall cost of the system, amortized by the number of completed tasks in the $t$-th interval, as $\phi_t$. This includes the operational costs of the worker nodes as well as that of running the schedulers as serverless functions.
MetaNet Methodology. We create a surrogate model $f_\theta$ that is a DNN with parameters $\theta$. We consider a set of scheduling policies $\mathcal{P}$ of size $q$. Given a system state at the start of the $t$-th interval, each scheduler produces a scheduling decision for this interval. The state is denoted by $C_{t-1}$ and includes the resource (CPU, RAM and disk) utilization characteristics of the active workloads in the system $W_{t-1}$, the resource utilization of cloud hosts $H_{t-1}$, and the scheduling decision of the previous interval $S_{t-1}$. Thus, $C_{t-1} = [W_{t-1}, H_{t-1}, S_{t-1}]$. Now, $f_\theta$ estimates the average execution cost (sum of the cost of task execution and serverless run) for each scheduler $p_k \in \mathcal{P}$ in the $t$-th interval as $\hat{\phi}_t^k$. The collection of $\hat{\phi}_t^k$ for all $k$ in interval $I_t$ is denoted by $\hat{\phi}_t$. In such a case, our problem can be formulated as
$$\begin{aligned}
\underset{\theta}{\text{minimize}} \quad & \sum_{t=1}^{T} \phi_t^{\pi} \\
\text{subject to} \quad & S_t = p_{\pi}(t), \ \forall\, t \\
& \pi = \arg\min_k \hat{\phi}_t^k, \ \forall\, t \\
& \hat{\phi}_t = f_\theta(W_{t-1}, H_{t-1}, S_{t-1}), \ \forall\, t,
\end{aligned} \tag{1}$$
where $f_\theta$ is a surrogate of the average execution cost.
Surrogate Model and Training. We use a feed-forward neural model that takes as input the state of the cloud system $C_{t-1} = [W_{t-1}, H_{t-1}, S_{t-1}]$ and outputs a single scalar. We use a fully connected neural network with 3 layers, each with 64 nodes and ReLU activation, except the last layer, where we use the Sigmoid activation to bring the output into the range (0, 1). Thus, for any input $(W_{t-1}, H_{t-1}, S_{t-1})$, the complete neural network can be described as
$$\hat{\phi} = f_\theta(W_{t-1}, H_{t-1}, S_{t-1}). \tag{2}$$
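The architecture in (2) translates directly into code. Below is a minimal, hypothetical PyTorch rendering of $f_\theta$ as described: three fully connected layers of 64 units with ReLU activations and a Sigmoid output in (0, 1). The flattening of the state into a single vector, and any conditioning on the scheduler index $k$, are our assumptions rather than stated implementation details.

```python
import torch
import torch.nn as nn

class MetaNetSurrogate(nn.Module):
    """f_theta: state (W, H, S) -> normalized cost estimate in (0, 1)."""
    def __init__(self, state_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),  # output in (0, 1), cf. eq. (2)
        )

    def forward(self, W, H, S):
        # Assumption: the three state components are flattened and concatenated.
        # In practice the scheduler index k could be appended as an extra feature,
        # matching the dataset Lambda in (3).
        x = torch.cat([W.flatten(), H.flatten(), S.flatten()])
        return self.net(x)
```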
To train the above-described MetaNet neural network $f_\theta$, we collect traces from a cloud computing environment. To collect data for training, we execute the scheduling policies $\mathcal{P}$, each for $\Gamma$ scheduling intervals, and collect a dataset as
$$\Lambda = \big\{ (k, W_{t-1}, H_{t-1}, S_{t-1}, \phi_t^k, \omega_t^k) \big\}_{t=1}^{\Gamma \cdot q}, \tag{3}$$
where $q$ is the number of scheduling policies in $\mathcal{P}$. We initialize $W_0$ and $H_0$ as zero-matrices and $S_0$ as an empty graph. We find the maximum cost from the dataset as $\phi_{max}^k$ for each scheduler $p_k$. This allows us to denormalize the neural network output and bring it to the same range as the one in the dataset. We then train the model using the loss function
$$L = \big( \phi^k - \phi_{max}^k \cdot \hat{\phi}^k \big)^2. \tag{4}$$
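A hypothetical training step under loss (4) is sketched below: the Sigmoid output $\hat{\phi}^k$ is denormalized by the per-scheduler maximum cost $\phi_{max}^k$ before being compared with the observed cost $\phi^k$. Tensor names mirror the dataset $\Lambda$ in (3); the batching and model signature are our simplifications.

```python
import torch

def training_step(model, optimizer, batch, phi_max):
    """One gradient step on loss (4): L = (phi_k - phi_max_k * phi_hat_k)^2.

    batch:   iterable of tuples (k, W, H, S, phi_k) drawn from Lambda in (3).
    phi_max: dict mapping scheduler index k -> maximum observed cost phi_max_k.
    """
    optimizer.zero_grad()
    loss = 0.0
    for k, W, H, S, phi_k in batch:
        phi_hat = model(W, H, S)                           # normalized estimate in (0, 1)
        loss = loss + (phi_k - phi_max[k] * phi_hat) ** 2  # denormalized squared error
    loss = loss / len(batch)
    loss.backward()
    optimizer.step()
    return loss.item()
```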
Dynamic Policy Selection Using MetaNet. We first store the saved surrogate model $f_\theta$ on a central network-attached storage (NAS) that is accessible to all worker nodes. At the start of the scheduling interval $I_t$, we get the workload and host characteristics $W_{t-1}$, $H_{t-1}$ together with the scheduling decision $S_{t-1}$. As we do not have any broker node in our setup, we run the MetaNet model on the worker node with the least CPU utilization. On this worker node we use the surrogate model to predict costs and decide the scheduling policy as $p_\pi$, s.t. $\pi = \arg\min_k \hat{\phi}_t^k$.
Overall, MetaNet selects a scheduling policy on-the-fly as per the system states. To do this, at the start of each scheduling interval, it predicts the task execution cost and scheduling time of each policy. It then selects the one with the minimum cost estimate. This policy optimizes the scheduling decision, initialized as $S_{t-1}$, to generate $S_t$. The complete pipeline is summarized in Figure 2. We also tune $f_\theta$ with each new datapoint to adapt to non-stationary settings.
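Putting the pieces together, scheduler selection at the start of interval $I_t$ reduces to an argmin over denormalized predicted costs. The following is a hedged sketch, with the assumption (flagged in the comment) that the surrogate is conditioned on the scheduler index $k$:

```python
def select_policy(model, policies, W_prev, H_prev, S_prev, phi_max):
    """Return the policy p_pi with pi = argmin_k phi_hat_t^k."""
    costs = []
    for k, _ in enumerate(policies):
        # Assumption: the surrogate accepts the scheduler index k, e.g. as an
        # extra input feature; this mirrors the dataset Lambda in eq. (3).
        phi_hat = model(W_prev, H_prev, S_prev, k)
        costs.append(phi_max[k] * float(phi_hat))  # denormalized cost estimate
    pi = min(range(len(costs)), key=costs.__getitem__)
    return policies[pi]

# The chosen policy then optimizes the decision, initialized at S_{t-1}:
# S_t = select_policy(...).schedule(S_prev)
```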
PERFORMANCE EVALUATION
Setup. We consider the complete set of hosts H to be static with time, as is common in prior work for a fixed cloud platform [9]. We use different VM types from the Microsoft Azure Platform, i.e., 60 Azure B2s with a dual-core CPU and 4GB RAM (in UK-South), 20 Azure B4ms with a quad-core CPU and 16GB RAM, and 20 Azure B8ms with an octa-core CPU and 32 GB RAM (in East-US). To save on costs, we define a host to be active when its CPU utilization is > 0%. We utilize the Azure Automation (https://azure.microsoft.com/services/automation/#overview) service to hibernate and resume VMs based on their CPU utilization.
Workloads. To generate the tasks in our system, we use the AIoTBench applications [5]. AIoTBench is an AI-based cloud computing benchmark suite that consists of various real-world computer vision application instances. The seven specific application types correspond to the neural networks they utilize. These include three typical heavyweight networks: ResNet18, ResNet34, ResNext32x4d, as well as four lightweight networks: SqueezeNet, GoogleNet, MobileNetV2, MnasNet. At the start of each scheduling interval, we create new tasks, sampled uniformly from the seven applications, where the number of new tasks comes from a Poisson distribution with rate λ = 1.2.
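For reproducibility, the workload generator described above can be sketched in a few lines: at each interval, the number of new tasks is Poisson with rate λ = 1.2, and application types are drawn uniformly from the seven AIoTBench networks. A minimal NumPy version (not the benchmark's own code):

```python
import numpy as np

APPS = ["ResNet18", "ResNet34", "ResNext32x4d",
        "SqueezeNet", "GoogleNet", "MobileNetV2", "MnasNet"]

def new_tasks(rng, lam=1.2):
    """Sample tasks for one scheduling interval: Poisson count, uniform types."""
    n = rng.poisson(lam)
    return list(rng.choice(APPS, size=n))

rng = np.random.default_rng(0)
for t in range(5):
    print(t, new_tasks(rng))
```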
Baselines. We compare MetaNet against seven baselines, which also form our policy set P. We integrate the ACO algorithm with two demand forecasting methods, AutoARIMA and LSTM, and call these ARIMA+ACO and LSTM+ACO. We also include Decision-NN, Semi-Direct, GRAF, GOBI, and GOSH. This makes the size of the policy set q = 7. We also use a Multi-Armed Bandit (MAB) model using Upper-Confidence-Bound exploration [3] and a Deep-Q-Network (DQN) [4] that choose a policy based on pre-trained models using the data Λ.
Implementation and Training. We implement MetaNet on the COSCO framework [11]. We collect the dataset Λ by executing all baselines for Γ = 100 intervals. Similarly, we execute all approaches for T = 1000 scheduling intervals to generate QoS scores, with each interval being ∆ = 10 seconds long, giving a total experiment time of nearly 2 hours 46 minutes for each method.
Visualization of MetaNet. Figure 3 visualizes the MetaNet approach running on the setup and workloads described above. The x-axis denotes the scheduling interval and the y-axis denotes the cost estimate $\hat{\phi}_t^k$ for the three most frequently selected policies in P (shown for readability): ARIMA+ACO, GOBI and GOSH. The highlighted bands indicate the selected scheduling policy. The selected policy corresponds to the one with the least estimated cost. The cost estimates are non-stationary, further corroborating the need for dynamic selection of the scheduling policies in volatile workload and host setups.
Comparison with Baselines. Table 1 shows the results of our experiments (cost in USD, energy in KW-hr, resp. time in seconds). Across all metrics, MetaNet outperforms the baselines. MetaNet improves energy consumption by consolidating tasks, i.e., allocating them to the same hosts to minimize execution costs and, consequently, the number of active hosts in the system. This is shown by the highest CPU utilization of MetaNet, i.e., 87.4%. MetaNet also gives the lowest response time and consequently the lowest SLA violation rates. Dynamic optimization baselines perform poorly due to the stateless assumption in MAB that ignores environment dynamism, and due to DQN being slow to adapt in volatile settings [11].
Conclusions
This work presents MetaNet, which dynamically selects scheduling policies for cost-efficient task processing in cloud systems. Future work would explore other DNNs as surrogates and extend MetaNet to other types of resource management decisions such as dynamic VM provisioning and autoscaling.
Figure 1: MetaNet System Model.
Figure 2: MetaNet Pipeline.
Figure 3: Predicted costs (line plots) and dynamic scheduler selection (background color) in MetaNet.
Table 1: Comparison with the baselines.

Model         Cost   Energy  Resp. T.  SLA V.  CPU %
ARIMA+ACO     0.794  5.432   20.391    0.241   62.4
LSTM+ACO      0.862  5.217   16.921    0.212   76.5
Decision-NN   1.072  4.224   12.255    0.211   80.2
Semi-Direct   1.021  4.095   17.092    0.131   72.2
GRAF          0.993  4.228   14.722    0.127   76.1
GOBI          0.752  3.827   14.293    0.122   80.2
GOSH          0.911  3.267   13.292    0.118   84.3
MAB           0.874  2.921   13.921    0.121   79.9
DQN           0.907  3.121   13.877    0.119   80.4
MetaNet       0.667  2.029   11.273    0.102   87.4
Muhammad Aliyu et al. "Efficient metaheuristic population-based and deterministic algorithm for resource provisioning using ant colony optimization and spanning tree". In: International Journal of Cloud Applications and Computing (IJCAC) 10.2 (2020), pp. 1-21.
Hongming Cai et al. "IoT-based big data storage systems in cloud computing: perspectives and challenges". In: IEEE Internet of Things Journal 4.1 (2016).
Volodymyr Kuleshov and Doina Precup. "Algorithms for multi-armed bandit problems". In: arXiv:1402.6028 (2014).
Yuxi Li. "Deep reinforcement learning: An overview". In: arXiv:1701.07274 (2017).
Chunjie Luo et al. "AIoT bench: towards comprehensive benchmarking mobile and embedded device intelligence". In: International Symposium on Benchmarking, Measuring and Optimization. Springer, 2018, pp. 31-35.
Soukaina Ouhame, Youssef Hadi, and Arif Ullah. "An efficient forecasting approach for resource utilization in cloud data center using CNN-LSTM model". In: Neural Computing and Applications (2021), pp. 1-13.
Jinwoo Park et al. "GRAF: a graph neural network based proactive resource allocation framework for SLO-oriented microservices". In: Proceedings of the 17th International Conference on emerging Networking EXperiments and Technologies. 2021, pp. 154-167.
Parminder Singh, Pooja Gupta, and Kiran Jyoti. "TASM: Technocrat ARIMA and SVR model for workload prediction of web applications in cloud". In: Cluster Computing 22.2 (2019), pp. 619-633.
Peter J Stuckey et al. "Dynamic programming for predict+optimise". In: AAAI. Vol. 34. 02. 2020.
Shreshth Tuli, Giuliano Casale, and Nicholas R. Jennings. "GOSH: Task Scheduling using Deep Surrogate Models in Fog Computing Environments". In: IEEE Transactions on Parallel and Distributed Systems (2022).
Shreshth Tuli et al. "COSCO: Container Orchestration Using Co-Simulation and Gradient Based Optimization for Fog Computing Environments". In: IEEE Transactions on Parallel and Distributed Systems 33.1 (2021), pp. 101-116.
Bryan Wilder, Bistra Dilkina, and Milind Tambe. "Melding the data-decisions pipeline: Decision-focused learning for combinatorial optimization". In: AAAI. Vol. 33. 01. 2019.
Eric Wong and J Zico Kolter. "Neural network inversion beyond gradient descent". In: Advances in Neural Information Processing Systems, Workshop on Optimization for Machine Learning (2017).
| []
|
[
"Heterogeneous causal effects of neighborhood policing in New York City with staggered adoption of the policy",
"Heterogeneous causal effects of neighborhood policing in New York City with staggered adoption of the policy"
]
| [
"Joseph Antonelli ",
"Brenden Beck "
]
| []
| []
| In New York City, neighborhood policing was adopted at the police precinct level over the years 2015-2018, and it is of interest to both (1) evaluate the impact of the policy, and (2) understand what types of communities are most impacted by the policy, raising questions of heterogeneous treatment effects. We develop novel statistical approaches that are robust to unmeasured confounding bias to study the causal effect of policies implemented at the community level. We find that neighborhood policing decreases discretionary arrests in certain areas of the city, but has little effect on crime or racial disparities in arrest rates. | 10.1093/jrsssa/qnad058 | [
"https://export.arxiv.org/pdf/2006.07681v4.pdf"
]
| 249,018,199 | 2006.07681 | 878327dc870fa3118f7e720787d431fc67dcc113 |
Heterogeneous causal effects of neighborhood policing in New York City with staggered adoption of the policy
Joseph Antonelli
Brenden Beck
Heterogeneous causal effects of neighborhood policing in New York City with staggered adoption of the policy
In New York City, neighborhood policing was adopted at the police precinct level over the years 2015-2018, and it is of interest to both (1) evaluate the impact of the policy, and(2)understand what types of communities are most impacted by the policy, raising questions of heterogeneous treatment effects. We develop novel statistical approaches that are robust to unmeasured confounding bias to study the causal effect of policies implemented at the community level. We find that neighborhood policing decreases discretionary arrests in certain areas of the city, but has little effect on crime or racial disparities in arrest rates. arXiv:2006.07681v4 [stat.ME] 23 Jan 2023 Abadie, A., Diamond, A., and Hainmueller, J. (2010). Synthetic control methods for comparative case studies: Estimating the effect of california's tobacco control program.
Introduction
In this paper, we consider the problem of estimating the effect of neighborhood policing on arrest rates in New York City (NYC) over the years 2006-2018. Neighborhood policing is a policy implemented at the police precinct level in which the New York Police Department (NYPD) restructured its precincts, hired specialized community engagement officers, and gave all patrol officers time away from responding to 911 calls to devote to preemptive problem solving. The policy is meant to encourage officers to build community relationships and has them patrol a small sector with the twin goals of reducing crime and promoting trust between the police and residents (Bratton, 2015). The City began implementation in May 2015 with two of the 76 police precincts, and all precincts had adopted neighborhood policing by October 2018. In the statistics and economics literature this type of policy implementation is referred to as "staggered adoption" (Athey and Imbens, 2018;Shaikh and Toulis, 2019;Ben-Michael et al., 2019). Our overarching goal is two-fold: 1) to understand the causal effect of this policy on crime and arrest levels, and 2) to estimate how the effect varies over time and how communities with different characteristics might respond differently.
Review of neighborhood policing
Neighborhood policing updates a policing approach common since the 1980s: community policing. Community policing de-emphasizes traditional police actions like arrests and prioritizes problem solving with community members, though research on its consequences is mixed. One reason for the unclear findings is that "many of the [past] studies were characterized by weak evaluation designs" (National Academies of Sciences, Engineering, and Medicine, 2018, p. 7). In Gill et al. (2014), a meta-analysis found that many studies compared only one treated and one control neighborhood. When looking at studies with rigorous pre- and post-intervention measures, they found that community policing did not have a significant effect on crime, and concluded there is "a need for further research around community policing" (Gill et al., 2014, p. 399). Another review of studies found that high levels of police-community contact had the promise to reduce crime, but such findings did not hold up when examining only RCTs (Sherman and Eck, 2003).
Despite these null findings of community policing's effect on crime, the NYPD made crime reduction the first goal of its new community-oriented policy (O'Neill, 2018). When announcing the policy, Mayor Bill de Blasio suggested several mechanisms that might link the policy to decreased crime, including the "constant vigilance" of neighborhood policing deterring crime, officers being more inclined to help would-be offenders desist from crime, and improved relationships between police and community members leading to more cooperation in crime solving (de Blasio, 2015).
Previous research has focused on crime rates, but there might be additional, unintended consequences of neighborhood policing. The new practice might decrease arrests and the racial disparities of arrests by reorienting the goals of officers away from aggressive enforcement. A large body of evidence finds Black people are more likely to be arrested than white people, even controlling for differences in offending, so any racial equity impacts of neighborhood policing would be important (Lytle, 2014;Kochel et al., 2011). Alternatively, the increased police-community contact the policy generates could expose police to more potentially criminalizable behavior and therefore increase arrests and racial disparities. Furthermore, if police are deployed in higher numbers to predominantly black or Latino neighborhoods and/or if officers' bias makes them more likely to arrest black or Latino residents, neighborhood policing might increase the racial disparities in low-level, discretionary arrests through this increased contact.
Statistical literature on causal inference and panel data
There is an extensive literature on the estimation of causal effects from observational, time-series data. Interrupted time series (ITS) methods are one of the most common methods for estimating causal effects in such situations and have been used for decades in economics and epidemiology, e.g. (Campbell and Cook, 1979; Gillings et al., 1981). See Bernal et al. (2017) for both a review of ITS and practical details on their implementation. Intuitively, ITS methods fit a time-series model that allows the outcome to depend on time and an indicator of whether the policy has been initiated. Under certain assumptions, such as no unmeasured time-varying confounding, the effect of the policy can be obtained from the parameters of the model. Similar ideas have been extended to more complex problems, such as estimating spillover effects in time series studies (Bojinov et al., 2019) or estimating heterogeneous treatment effects from Bayesian state-space models (Li and Bühlmann, 2018).
Another commonly used method for identifying causal effects from observational time series data is the difference in differences (DiD) design (Ashenfelter, 1978;Angrist and Pischke, 2008). Traditionally these methods have been employed in settings with two groups and two time points. In the first time point, neither group has received treatment, while in the second time point only one group has received treatment. These designs rely on the parallel trends assumption that states that the counterfactual outcome under no treatment has the same time trend in both groups. For a review of DiD methods and their extensions to more complex settings, see Lechner et al. (2011) and references within. Of most relevance to the problem at hand are methods that account for the staggered adoption of treatment over time, which has seen a spike in interest within DiD methodology (Athey and Imbens, 2018;Goodman-Bacon, 2018;Callaway and Sant'Anna, 2021b). These methods focus on first defining relevant causal estimands in the presence of multiple time periods where treatment is initiated. Correspondences between these estimators and traditional DiD estimators are highlighted, and multiple different inferential strategies are used to acquire uncertainty measures for treatment effects. Athey and Imbens (2018) take a design-based approach to causal inference where uncertainty stems from the treatment assignment. Callaway and Sant'Anna (2021b) highlight new causal estimands unique to the multiple time point setting, and derive a novel bootstrap approach to inference that is able to account for correlation and clustering within their data.
Synthetic controls present a different approach to causal inference in this setting (Abadie et al., 2010(Abadie et al., , 2015. These were initially developed for the setting where only one unit receives treatment, and their potential outcome under control is estimated using a weighted average of control units with weights estimated using data prior to treatment initiation. This method has been extended to multiple treated units with staggered adoption by using the synthetic control method separately on each treated unit (Dube and Zipperer, 2015;Donohue et al., 2019). This approach has been shown to not be optimal if interest lies in average treatment effects, and was extended to the staggered adoption regime in (Ben-Michael et al., 2019). Synthetic controls, along with other estimators in the panel data setting, were placed in a broader framework of matrix completion methods by Athey et al. (2018). This treats the matrix of potential outcomes over time as a partially observed matrix and uses matrix completion methods to impute the missing values of the potential outcomes. Further, synthetic controls and DiD estimators have been combined to provide doubly robust estimates such that only one of the synthetic control weights or fixed effects regression model needs to be correctly specified in order to obtain consistent estimates of treatment effects (Arkhangelsky et al., 2019).
More recently, time series methods have been used to estimate causal effects by forecasting what would happen in the absence of treatment (Brodersen et al., 2015;Papadogeorgou et al., 2018;Miratrix et al., 2019). The original approach in Brodersen et al. (2015), is developed for the setting when one time series is measured over many time points both in the pre-and post-treatment time periods. The pre-treatment data is used to estimate a Bayesian state-space model that is then used to predict the counterfactual outcome under control during the post-treatment period.
Review of our contribution
We develop a framework for causal inference with multiple time series in the presence of staggered adoption that allows for estimation of causal quantities and heterogeneous treatment effects that vary over time and across precinct characteristics, and that is robust to unmeasured confounding bias. We use multivariate Bayesian time series models that allow for a high-dimensional set of observed time series to produce posterior predictive distributions of subject-specific treatment effects, which account for temporal and spatial correlation in the data. We couple the posterior predictive distribution with regression models to find conditional average treatment effect functions in a straightforward manner. Our paper extends the existing literature in a number of ways. Synthetic controls require there to be units without the treatment, while in our study every precinct adopts the policy by the end of the study. Many of the estimators proposed in the literature with staggered treatment target average treatment effects of a policy, while our goal is to estimate conditional treatment effects that are functions of observed precinct characteristics. Interrupted time series type approaches estimate heterogeneous treatment effects in time-series settings (Li and Bühlmann, 2018), but rely on an assumption of no unmeasured confounding, while we show our approach is robust to certain types of confounding bias from unmeasured covariates. Another key challenge we address is that our data are highly correlated across space, while existing approaches do not account for this spatial dependence in the observations. We show using pre-treatment data in NYC that our approach to inference is able to provide valid inferences for treatment effects in this setting, and then apply our approach to answer important, unanswered questions of the effects of neighborhood policing on crime and arrests. Lastly, we provide an R package to implement the proposed methodology that is available at https://github.com/jantonelli111/HeterogeneousTEpanel

2 Crimes, arrests, and policing in New York City
We gathered data on crimes, arrests, and community characteristics from three sources. Data on crimes reported to the police come from the NYPD's Historic Complaint Database, while arrest data are from the NYPD's Arrest Database. Crime and arrest data are publicly available on New York City's Open Data Portal. Data on precincts' demographic, economic, and housing characteristics come from the Census Bureau's American Community Survey (ACS) five-year estimates. Data were spatially linked and acquired at the precinct level. Address-level crime and arrest data were placed into precincts using a precinct shapefile map provided by the City and Stata's geoinpoly command (Picard, 2015). ACS data at the census tract-level, a smaller geography than the precinct, were placed into the precinct that hosted its centroid. In total, the data contains 156 months of data for 76 precincts.
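The spatial linkage just described was performed with Stata's geoinpoly command; purely for illustration, an equivalent point-in-polygon join can be sketched in Python with geopandas. All file and column names below are hypothetical placeholders, not the actual NYC Open Data files:

```python
import geopandas as gpd
import pandas as pd

# Hypothetical inputs: address-level arrests with lon/lat, precinct shapefile.
arrests = pd.read_csv("arrests.csv")  # assumed columns: lon, lat, month
arrests = gpd.GeoDataFrame(
    arrests,
    geometry=gpd.points_from_xy(arrests.lon, arrests.lat),
    crs="EPSG:4326",
)
precincts = gpd.read_file("nyc_precincts.shp").to_crs("EPSG:4326")

# Point-in-polygon join: each arrest inherits the precinct that contains it.
joined = gpd.sjoin(arrests, precincts[["precinct", "geometry"]],
                   how="left", predicate="within")

# Precinct-by-month counts, matching the unit of analysis in the paper.
panel = (joined.groupby(["precinct", "month"])
               .size().rename("arrests").reset_index())
```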
We are interested in studying how the adoption of neighborhood policing affected (1) overall crime, (2) arrest levels, and (3) racial disparity in enforcement. To study the effect on overall crime, we use the number of violent crimes, defined as the number of murders, manslaughters, robberies, and felony assaults reported to the police in each precinct. We use violent crime rather than total crime because violent crimes are the most likely to be reported to police. Misdemeanor crimes are less frequently reported and are therefore more reliant on police action to be recorded, reflecting police enforcement priorities more than actual crime levels. To analyze arrest levels, we focus on both the number of misdemeanor arrests and proactive arrests. Misdemeanor arrests is a count of arrests for 133 misdemeanor crimes, the most common of which were marijuana possession, misdemeanor assault, theft of services (transit fare evasion), possession of stolen property, and trespassing. Proactive arrests are a subset of misdemeanor arrests that reflect the 55 crimes most often identified by police activity rather than victim complaints. We focus on these two arrest types, rather than an aggregate measure of all arrests, because their discretionary nature makes them the most likely to fluctuate as a result of policy changes. An illustration of proactive arrests over time can be found in Figure 1. We see that proactive arrests were generally increasing in the early years of the study, though they declined in recent years. Finally, we are interested in the impact on racial disparities in arrest rates. To measure this, we define a measure of racial disparity in proactive arrests as the difference in the number of proactive arrests a precinct makes of black people and the number of proactive arrests of white people in the precinct.

Figure 1: Preliminary look at the NYC policing data showing the time series for proactive arrests for each precinct and the city average (legend: city-wide average; precinct specific). The points marked by an x are the times at which each precinct adopted neighborhood policing.

We obtained the implementation date of neighborhood policing for each precinct from the NYPD's Commissioner Report in 2018 (O'Neill, 2018). Treatment adoption times are shown in Figure 1, which highlights how precincts steadily began implementation in 2015 and shows there was no period of time when the majority of precincts began neighborhood policing. To understand heterogeneity of the effects of neighborhood policing, we measure a vector of precinct-specific demographic, economic, and housing variables that have been shown to relate to crime and arrest rates. Understanding how the effect of neighborhood policing varies by these characteristics will inform the types of communities for which the policy is most useful.
Let A and Y denote the treatment and outcome of interest, respectively, from a population of n units. We observe each of these n units at T time periods and therefore our data consists of A it and Y it for i = 1, . . . , n units, and t = 1, . . . , T time periods. Each unit in our population eventually adopts treatment and we let T 0 = (T 10 , T 20 , . . . , T n0 ) be the vector of initiation times for each unit. We will be working under the framework that once a unit initiates treatment, it cannot revert back to the control condition, i.e A it = 1 ∀ t ≥ T i0 . Lastly, we denote the unit-specific vector of covariates by X i .
Potential outcomes and estimands
We denote potential outcomes by $Y_{it}(t_0)$, where $t_0$ is the time at which treatment is initiated. This represents the potential outcome we would observe for subject $i$ at time $t$ had they initiated treatment at time $t_0$. Similar to Athey and Imbens (2018), we let $Y_{it}(\infty)$ denote the potential outcome for a subject if they never receive treatment. To link potential outcomes to the observed data, we make a standard consistency assumption that $Y_{it}(T_{i0}) = Y_{it}$. This consistency assumption implicitly assumes there is no interference between units, i.e., that the treatment status of one unit cannot affect the outcomes of other units. As we discuss in greater detail in Section 6.1, this is reasonable in the policing data, as it is unlikely for neighborhood policing in one precinct to affect crime or arrest rates in neighboring precincts. We also relax this assumption in Appendix B and find that results in the NYC policing example remain unchanged. Lastly, we assume that there are no anticipatory effects, i.e., that $Y_{it}(t_0) = Y_{it}(\infty)$ for all $i$ and $t < t_0$. This states that units do not respond to treatment before it is initiated. This could be violated if units are aware of the impending treatment change and change their behavior due to the upcoming change in policy. We first define unit-level treatment effects at specific time points, and then extend them to sample-level treatment effects, as well as estimands that target heterogeneity of the treatment effect. Let us first define the unit-level causal effect for unit $i$ at time $t$ if they initiated treatment at time $t_0$ as
$$\Delta_{i,t,t_0} = Y_{it}(t_0) - Y_{it}(\infty).$$
This contrast compares what would have happened if a unit initiated treatment at time $t_0$ versus what would have happened if they never initiated the treatment. While unit-level treatment effects are of interest in themselves, in many settings average treatment effects are of greater interest, as they highlight the impact of a policy over an entire region, such as New York City. We define the time-specific sample average treatment effect as
$$\Delta(q) = \frac{1}{n} \sum_{i=1}^{n} \Delta_{i, T_{i0}+q, T_{i0}} = \frac{1}{n} \sum_{i=1}^{n} \left\{ Y_{i, T_{i0}+q}(T_{i0}) - Y_{i, T_{i0}+q}(\infty) \right\}. \tag{1}$$
This is the average impact of the treatment $q$ time periods after treatment initiation, averaged over all units. For different values of $q$, $\Delta(q)$ illuminates how the treatment effect varies over time after treatment initiation. In the policing example, this represents the effectiveness of neighborhood policing $q$ time points after the observed initiation times, which measures the overall impact that the policy had on crime and arrest levels.
Identification of population estimands
The estimand $\Delta(q)$ is a sample-specific estimand and is therefore not strictly identifiable from the observed data, in the sense that it cannot be written as a function of the observed data distribution in finite samples (Balzer et al., 2016). Nonetheless, we can examine a population-level counterpart to $\Delta(q)$ and study the assumptions under which it is identified from the observed data. This will provide an improved understanding of the assumptions required for estimation. Define a population analog of $\Delta(q)$ as $E(Y_{t_0+q}(t_0) - Y_{t_0+q}(\infty) \mid T_0 = t_0)$. Similarly to $\Delta(q)$, this estimand looks at the average effect of the policy $q$ time points after the realized start time, which is denoted by $t_0$ here. The first term is immediately identifiable by the consistency assumption, which implies that $E(Y_{t_0+q}(t_0) \mid T_0 = t_0) = E(Y_{t_0+q} \mid T_0 = t_0)$, which is a function of the observed data distribution only. We show in Appendix A that under the consistency and no anticipatory effects assumptions, we can write the second term, $E(Y_{t_0+q}(\infty) \mid T_0 = t_0)$, as a function of
$$P(Y_{t_0+1}(\infty) \mid Y_{t_0}(\infty), T_0 = t_0), \;\ldots,\; P(Y_{t_0+q}(\infty) \mid Y_{t_0+q-1}(\infty), T_0 = t_0), \tag{2}$$
where $P(Y_{t_0+1}(\infty) \mid Y_{t_0}(\infty), T_0 = t_0)$ is the distribution of $Y_{t_0+1}(\infty)$ given $Y_{t_0}(\infty)$ and $T_0 = t_0$. These are not identifiable from the observed data without additional assumptions because they are functions of unobserved counterfactual values. We prove in Appendix A that these terms, and hence the causal effect, are identified under the following assumption:
Assumption 1:
$$P(Y_{t_0+l}(\infty) \mid Y_{t_0+l-1}(\infty), T_0 = t_0) = P(Y_{t_0+m}(\infty) \mid Y_{t_0+m-1}(\infty), T_0 = t_0) \quad \forall \; l, m \in \{-t_0 + 2, -t_0 + 3, \ldots, q - 1, q\}. \tag{3}$$
This states that the distribution of the potential outcome time series, given the past value of the time series, is stationary up to $q$ time periods post-treatment initiation. Note that this assumption concerns only the potential outcome in the absence of treatment and does not assume anything about the potential outcome under treatment. This differs from certain estimators, such as DiD estimators, that make assumptions about potential outcomes across treatment and control groups. For a more in-depth comparison of this stationarity assumption with existing assumptions used in the literature, see Appendix C, where we additionally highlight a situation in which stationarity holds but existing assumptions are violated. A crucial point regarding our strategy for identifying causal effects is that we only require Assumption 1 and do not rely on an assumption that there are no unmeasured confounders, a common assumption in causal inference with observational data. This means that there can be unmeasured variables $U_i(t)$ that affect both the time of treatment initiation and the outcome, yet we are still able to identify and estimate causal effects. The only assumption we rely on is stationarity as defined in (3). Note that Assumption 1 can be loosened to condition on time-varying covariates or on additional lags beyond the single-time-point lag used currently, which we describe further in Appendix A. In Appendix C, we highlight scenarios where there exists a time-varying or seasonal unmeasured confounder and our approach is still able to obtain unbiased estimates of causal effects. It is important to note that our approach is not robust to all types of unmeasured confounding, as some may violate the stationarity assumption. If the effect of the unmeasured variable on the outcome changes after treatment initiation, or if treatment initiation affects the distribution of the unmeasured variable, then stationarity would be violated. A key feature of this assumption is that one can assess whether stationarity holds in the pre-treatment time periods to gauge its plausibility. If stationarity does not hold in the pre-treatment period, we would not believe that it holds in the post-treatment period; conversely, if it holds in the pre-treatment period, this provides increased confidence in the stationarity assumption. We assess the ability of our approach to estimate treatment effects in the pre-treatment period for the motivating NYC study in Section 5, where we find that our approach obtains accurate estimates of treatment effects with valid measures of uncertainty.
Heterogeneous treatment effects
In the policing example, we are interested in understanding how the treatment effect varies as a function of precinct-level characteristics, such as the racial or socioeconomic composition of a precinct. We can study how $\Delta_{i,t,t_0}$ varies with $X_i$ using $\Delta_{i,t,t_0} = f(X_i, t - t_0)$, which conveys how treatment effects are expected to change as a function of precinct-level characteristics. The $f(\cdot)$ function can be a complex, nonlinear function of covariates, but it is often of interest to study heterogeneity by a specific covariate. Let $X_{-j}$ be the matrix of observed covariates in our data excluding covariate $j$, and let $x_j$ and $x_j'$ denote two distinct values for covariate $j$. We define
$$\Psi(j, q) = \frac{1}{n} \sum_{l=0}^{q} \sum_{i=1}^{n} \left\{ f([X_{-j}, x_j], l) - f([X_{-j}, x_j'], l) \right\}, \tag{4}$$
which highlights the difference in causal effects after $q$ time points for units with the same $X_{-j}$ but different values of covariate $j$. We focus on setting $x_j$ and $x_j'$ to the 25th and 75th quantiles of $X_j$, though other values would work analogously. In some instances, however, this may not be a realistic comparison. For example, it is unlikely that a precinct could have increasing unemployment levels without also having increasing poverty levels. In situations such as these, we can compare $f(X_i, t - t_0)$ for distinct values of the entire covariate vector that correspond to feasible levels of the covariates. In Section 6 we investigate the effects of neighborhood policing on distinct types of precincts that are found via a clustering algorithm.
Estimation and inference
Based on Assumption 1 and the identifiability results in Section 3.2, we require a model that predicts future values of $Y_{it}(\infty)$ given previous time periods. Therefore, we develop a Bayesian multivariate time series model that accounts for spatial correlation across precincts. Our goal is to find the posterior distribution of the potential outcome time series in the absence of the policy. Letting $\widetilde{Y}(\infty)$ represent all unknown values we are interested in predicting, interest lies in $P(\widetilde{Y}(\infty) \mid Y, X)$, the posterior predictive distribution of these predictions given the observed data. Both temporal and spatial correlation in the data must be accounted for properly if we want our estimators to have good inferential properties, such as frequentist interval coverage. We describe one such model here, and explore an additional vector autoregressive model in Appendix E, but the ideas that follow hold for any model for $P(\widetilde{Y}(\infty) \mid Y, X)$.
Bayesian multivariate structural time series model
To account for both spatial and temporal dependencies, we specify a Bayesian structural time series model of the form
$$\begin{aligned}
Y_t &= \mu_t + \epsilon_t, & \epsilon_t &\sim N(0, \Sigma) \\
\mu_t &= \mu_{t-1} + \delta_{t-1} + \eta^{\mu}_t, & \eta^{\mu}_t &\sim N(0, D_{\mu}) \\
\delta_t &= \delta_{t-1} + \eta^{\delta}_t, & \eta^{\delta}_t &\sim N(0, D_{\delta}),
\end{aligned} \tag{5}$$
where $D_{\mu}$ is a diagonal matrix with elements given by $\sigma^2_{\mu,i}$ for $i = 1, \ldots, n$, and $D_{\delta}$ is defined analogously, with variance parameters given by $\sigma^2_{\delta,i}$ for $i = 1, \ldots, n$. Independent inverse-gamma prior distributions are assigned to all variance parameters $(\sigma^2_{\mu,i}, \sigma^2_{\delta,i})$ for $i = 1, \ldots, n$. This is one example of a more general class of structural time series models, and it is straightforward to add additional complexities such as terms capturing seasonality; for a more general discussion of these models, see Scott and Varian (2014). The trend terms $\mu_t$ capture the underlying trend of the multivariate time series at time $t$, while $\delta_t$ represents the slope of the trend at time $t$. Each of these follows a random walk that induces dependence over time, the extent of which is governed by $D_{\mu}$ and $D_{\delta}$. Lastly, the error term $\epsilon_t$ allows for spatial dependence across units at a particular time period through the covariance matrix $\Sigma$.
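To make the model concrete, the following sketch simulates a small multivariate series from the local linear trend specification above; the dimensions, innovation scales, and toy spatial covariance are illustrative values, not those used in the analysis.

```python
# A minimal simulation from the local linear trend model with spatially
# correlated errors; all numeric inputs are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, T = 5, 100                                        # toy numbers of units and times

idx = np.arange(n)
Sigma = 0.5 ** np.abs(idx[:, None] - idx[None, :])   # toy spatial covariance

sig_mu, sig_delta = 0.1, 0.01                        # sd of trend and slope shocks
mu, delta = np.zeros(n), np.zeros(n)
Y = np.empty((T, n))
for t in range(T):
    mu = mu + delta + rng.normal(0.0, sig_mu, size=n)    # mu_t = mu_{t-1} + delta_{t-1} + eta
    delta = delta + rng.normal(0.0, sig_delta, size=n)   # delta_t = delta_{t-1} + eta
    Y[t] = mu + rng.multivariate_normal(np.zeros(n), Sigma)
```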
Reducing parameter space of Σ
In high-dimensional time series settings, we do not have sufficient data to estimate all $n(n-1)/2$ parameters of the covariance matrix $\Sigma$. Existing dimension-reduction approaches include imposing sparsity on the inverse of the covariance matrix, assuming it has a low-rank structure (Fox and Dunson, 2015), or decomposing the covariance matrix into the product of upper triangular matrices (George and Sun, 2008). We utilize the first of these approaches, but use geographic information in the data to inform the sparsity. Our approach finds an initial estimator of model (5) using smoothing approaches, such as natural cubic splines or smoothing splines, for each unit separately, then uses the model residuals and an optimization algorithm to acquire an estimate of $\Sigma$. From the initial fitted model, we acquire predicted values $\widehat{Y}_t$ for all $t < t_{\min}$, where $t_{\min} = \min(T_{10}, \ldots, T_{n0})$ is the earliest time point at which treatment is initiated, and calculate
$$\widehat{S} = \frac{1}{t_{\min} - K - 1} \sum_{t=1}^{t_{\min}} (Y_t - \widehat{Y}_t)(Y_t - \widehat{Y}_t)^{\top},$$
where $K$ is the degrees of freedom used in the individual models for each unit. For high-dimensional data sets such as the NYC policing data, this estimator will be very unstable, so we regularize the estimated covariance matrix by imposing sparsity on $\Sigma^{-1}$. It is known in Gaussian graphical models that if the $(i, j)$ element of $\Sigma^{-1} \equiv \Omega$ is zero, then $Y_{it}(\infty)$ and $Y_{jt}(\infty)$ are conditionally independent given the remaining observations. In our policing example, a reasonable assumption is that precincts that are more than one neighbor apart are conditionally independent. We enforce this in the estimation of $\Sigma$ by solving the following constrained optimization:
$$\widehat{\Omega} = \operatorname*{arg\,min}_{\Omega} \; \operatorname{tr}(\Omega \widehat{S}) - \log \det \Omega, \quad \text{such that } \Omega \in Q,$$
where $Q$ is the space of all positive semi-definite matrices whose $(i, j)$ element is zero for any $i$ and $j$ that are not neighbors. This finds the value of $\Omega$ that is closest to $\widehat{S}^{-1}$ while enforcing the desired sparsity. We first estimate $\Sigma$ and run the remaining procedure conditional on this estimate. Note that this is not a fully Bayesian procedure and ignores uncertainty due to the estimation of $\Sigma$. We empirically evaluate the quality of model (5) in Section 5 and find that it performs very well on the New York City policing data and leads to credible intervals with nominal coverage rates, despite conditioning on $\widehat{\Sigma}$.
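One classical way to solve this zero-constrained maximum likelihood problem is iterative proportional fitting for covariance selection (Dempster, 1972; Speed and Kiiveri, 1986). The sketch below illustrates that algorithm; it is one possible solver for the optimization above, not necessarily the one used by the authors, and the toy chain graph stands in for the precinct neighbor structure.

```python
# A minimal sketch of covariance selection via iterative proportional fitting.
import numpy as np

def ipf_covariance_selection(S, cliques, n_sweeps=50):
    """MLE of Omega = Sigma^{-1} with zeros outside the given cliques.
    `cliques` must cover every diagonal entry (e.g. neighbor pairs plus
    singletons). Entries of Omega outside the cliques stay exactly zero."""
    Omega = np.diag(1.0 / np.diag(S))            # valid diagonal starting point
    for _ in range(n_sweeps):
        for a in cliques:
            a = np.asarray(a)
            Sigma_aa = np.linalg.inv(Omega)[np.ix_(a, a)]
            # After this update, the implied marginal covariance on clique `a`
            # matches the sample covariance S_aa exactly.
            Omega[np.ix_(a, a)] += (np.linalg.inv(S[np.ix_(a, a)])
                                    - np.linalg.inv(Sigma_aa))
    return Omega

rng = np.random.default_rng(0)
S = np.cov(rng.normal(size=(200, 4)), rowvar=False) + np.eye(4)
neighbor_pairs = [(0, 1), (1, 2), (2, 3)]        # toy chain of neighbors
cliques = [np.array(e) for e in neighbor_pairs] + [np.array([i]) for i in range(4)]
Omega_hat = ipf_covariance_selection(S, cliques)  # Omega_hat[0, 2] remains 0
```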
Posterior distribution of treatment effects
Sampling from the posterior distribution of all parameters in model (5) allows us to obtain the posterior predictive distribution of future time points. This characterizes our uncertainty about what would have happened in the absence of treatment for all units in the sample. The unit-level treatment effects of interest are $\Delta_{i,t,T_{i0}} = Y_{it}(T_{i0}) - Y_{it}(\infty)$ for $t \ge T_{i0}$.
The first of these two values is observed, as it is simply the observed outcome after treatment is initiated, while the second is the unknown quantity for which we have a posterior distribution. We automatically obtain the posterior distribution of the unit-level treatment effects, denoted by
$$P(\Delta \mid Y, X) = P(Y^{\mathrm{obs}} - \widetilde{Y}(\infty) \mid Y, X),$$
where $Y^{\mathrm{obs}}$ is the corresponding vector of observed outcomes after treatment initiation. Now that we have posterior distributions for $\Delta_{i,t,T_{i0}}$ for all $i$ and all $t \ge T_{i0}$, we can proceed with obtaining estimates and credible intervals for the estimands of interest from Section 3. $\Delta(q)$ is obtained directly by averaging the relevant unit-level treatment effects over the correct time periods, and inference is straightforward using the posterior distribution of these quantities. For treatment effect heterogeneity, we focus on inference for $f(\cdot)$. To estimate $f(\cdot)$, we first draw a sample from the posterior distribution of unit-level treatment effects, $\Delta^{(b)}$. We then regress these values on $X_i$, the observed characteristics for each unit, as well as on $t - T_{i0}$. This can be done using linear models, nonlinear models, or more complicated machine learning approaches. We repeat this process for $b = 1, \ldots, B$ posterior draws, each time keeping track of the heterogeneity estimates of interest, such as $\Psi(j, q)$, and inference proceeds from the posterior distribution of these quantities. We can additionally improve estimation of $\Delta(q)$ by assuming $f(\cdot)$ is a smooth function of $t - T_{i0}$. The predicted values from $\widehat{f}(\cdot)$ can then be used to estimate $\Delta(q)$ with $\frac{1}{n} \sum_{i=1}^{n} \widehat{f}(X_i, q)$, and we show in Appendix H that smoothness can lead to more efficient estimates when the true treatment effect is smooth in time.
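The post-processing just described is mechanically simple once posterior predictive draws are available. The sketch below shows the bookkeeping with simulated placeholder arrays; the shapes mirror the application (76 precincts, 10 post-treatment months, 11 covariates), but the random inputs stand in for actual model output.

```python
# A minimal sketch of turning posterior predictive draws of Y(infinity) into
# draws of Delta(q) and of heterogeneity coefficients; inputs are placeholders.
import numpy as np

B, n, Q, p = 1000, 76, 10, 11
rng = np.random.default_rng(2)
Y0_draws = rng.normal(size=(B, n, Q))    # draws of Y_{i,T_i0+q}(infinity)
Y_obs = rng.normal(size=(n, Q))          # observed post-treatment outcomes
X = rng.normal(size=(n, p))              # precinct covariates

delta_draws = Y_obs[None, :, :] - Y0_draws    # unit-level effect draws
Delta_q = delta_draws.mean(axis=1)            # (B, Q) draws of Delta(q)

# Heterogeneity: regress each draw's unit-level effects on (X, time since adoption).
q_grid = np.tile(np.arange(Q), n)                       # unit-major stacking
design = np.column_stack([np.ones(n * Q),
                          np.repeat(X, Q, axis=0),
                          q_grid])
beta_draws = np.empty((B, design.shape[1]))
for b in range(B):
    y = delta_draws[b].reshape(-1)                      # stack units x time
    beta_draws[b], *_ = np.linalg.lstsq(design, y, rcond=None)

ci = np.percentile(Delta_q[:, 0], [2.5, 97.5])          # 95% CI for Delta(0)
```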
Distinguishing spatial correlation and interference
One of the key assumptions necessary for the estimation of causal effects in our setting is the no interference assumption, which states that the potential outcome for a precinct does not depend on the treatment status of other precincts. In spatio-temporal settings it is important to distinguish between spatial dependence across units and spillover of treatment effects into neighboring units. First, we stress that model (5) is for the potential outcomes in the absence of treatment and does not imply anything regarding the nature of the treatment effect. Importantly, this means that any spatial dependence in model (5) does not imply spillover of treatment effects, i.e., interference. An example of interference without spatial correlation would be a setting where the mean of the potential outcome for unit $i$ depends on the treatment status of neighboring units, yet the correlation across units is zero. Alternatively, spatial dependence can occur without interference if there is an underlying predictor with spatial structure that induces dependence of the outcomes. We expect there to be spatial dependence in the NYC data, as crime levels tend to have spatial structure. Additionally, in Section 6.1 and Appendix B, we discuss how the no interference assumption is expected to hold within the context of our study.
Simulations using observed NYC precinct data
Here we present simulation studies using the observed data in New York City. We focus on the observed data for the misdemeanor arrest outcome, which is one of the four outcomes we analyze in Section 6. We ran similar simulations for the other three outcomes to ensure that our method works in all four situations; those results can be found in Appendix G. These simulations allow us to test a number of features of our approach, such as (1) how plausible our identification assumptions are in the NYC data, and (2) how well our model performs in estimating effects for the NYC policing data. We follow an approach similar to Schell et al. (2018) for generating simulated data sets.
We first generate a new time of treatment initiation for each precinct between times 71 and 100, which we denote by $T^*_{i0}$. Note that $\min_i T_{i0} = 112$, which ensures that there are more than 10 time periods between the actual and simulated start times of treatment. We only estimate treatment effects 10 time points into the future, which ensures that our simulation results are not impacted by the actual introduction of neighborhood policing. We let $T^*_{i0}$ depend on the outcome prior to treatment initiation. Specifically, we randomly sample $n$ numbers between 71 and 100 with replacement and sort them in increasing order. We refer to this ordered vector of potential times as $\widetilde{T}_1, \ldots, \widetilde{T}_n$, where $\widetilde{T}_1 \le \widetilde{T}_2 \le \cdots \le \widetilde{T}_n$. For $j = 1, \ldots, n$, we assign the $j$th time $\widetilde{T}_j$ to be the start time for unit $i$ with probability proportional to $\overline{Y}_{i,50} = \frac{1}{50} \sum_{t=1}^{50} Y_{it}$. Once a unit has been assigned a start time, it is removed from the pool of units moving forward. This process ensures that units with higher values of the outcome are more likely to initiate treatment earlier, which is a realistic situation in practice. On average, the correlation between $T^*_{i0}$ and $\overline{Y}_{i,50}$ was $-0.39$ across simulated data sets. Our simulated data set is therefore the observed data from the NYC policing example, but with start times given by $T^*_{i0}$. For all time periods after $T^*_{i0}$ for unit $i$, we shift the observed time series, and the amount by which we shift the outcome is the magnitude of the causal effect for that unit and time combination. We repeat this process 1000 times and average results over all simulated data sets.
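The start-time assignment can be written compactly; the sketch below implements the sampling scheme just described, with a simulated stand-in for the pre-period mean outcomes.

```python
# A minimal sketch of the simulated start-time assignment: units with larger
# pre-period mean outcomes are more likely to receive earlier start times.
import numpy as np

rng = np.random.default_rng(3)
n = 76
Y_bar = rng.gamma(shape=4.0, scale=60.0, size=n)   # stand-in for pre-period means

times = np.sort(rng.integers(71, 101, size=n))     # sampled times, increasing
T_star = np.empty(n, dtype=int)
remaining = list(range(n))
for t in times:
    w = Y_bar[remaining]
    i = rng.choice(remaining, p=w / w.sum())       # high Y_bar -> earlier time
    T_star[i] = int(t)
    remaining.remove(i)
```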
This simulation framework is extremely informative about the performance of our approach on the application of interest, because we only control the magnitude and form of the treatment effect and have no control over the data-generating process for the outcome. This is a far more realistic simulation than simulations based entirely on user-specified data-generating models, and it provides more insight into the performance of our approach for the data set at hand. One can think of this simulation as a form of model checking or model validation, where we empirically evaluate the performance of our model on the observed data. If our model performed poorly, that would indicate either that stationarity does not hold in the data or that our model does not capture all sources of uncertainty. Note that approaches based on a no unmeasured confounding assumption cannot be evaluated in this same manner, as this simulation requires randomly assigning treatment times, which would alter the no unmeasured confounding assumption.
We consider two distinct estimands for the simulation study: the time-specific treatment effect $\Delta(q)$ for $q = 0, \ldots, 9$, and $\Psi(j, 9)$ for each covariate in our study. To estimate $f(\cdot)$, we specify a linear regression model in the covariates $X_i$ and the time since treatment adoption, $t - T_{i0}$. We focus on bias, interval coverage, and efficiency for estimating each estimand. Interval coverage is the proportion of simulations in which the 95% credible interval covers the true parameter. For brevity, we explore one simulation design here; additional extensive simulations can be found in Appendices C, E, G, and H, and a summary is given in Section 5.2. All prior distributions and model specifications are as described in Section 4.
Results with homogeneous treatment effects
We first simulate treatment effects such that there is no heterogeneity by covariates $X_i$ and the unit-specific treatment effects for each precinct across the 10 time points are given by $\Delta_i = 0.1\, \overline{Y}_{i,50} + (1, 2, 2, 1, 0.5, 0, 0, 0, 0, 0)$. This ensures that the treatment effect is larger in areas with larger values of the outcome and that the treatment effect is approximately 10% of $\overline{Y}_{i,50}$. The range of $\overline{Y}_{i,50}$ is 78.26 to 767.02 with a mean of 256.9, leading to time-specific treatment effects $\Delta(q)$ between 25.7 and 27.7. The results from this simulation study can be seen in Figure 2. The left panel shows a boxplot of estimates of $\Delta(q)$ for ten time points post-treatment, where the estimates are shifted by the true mean so that unbiased estimates would be centered around zero. Estimates are essentially unbiased for all values of $q$ considered. Additionally, in the middle panel, we see that interval coverages are close to 95% for all time points considered. A similar story emerges for heterogeneous treatment effects, shown in the right panel of Figure 2: we obtain interval coverages at or near the nominal rate, showing the ability of our approach to account for all sources of uncertainty in the NYC policing data.
[Figure 2, right panel: coverage of heterogeneous estimands for covariates $X_1, \ldots, X_{11}$.]
Summary of additional simulations
We have run a number of additional simulation studies, found in Appendices C, E, G, and H, that cover a wide range of situations, including the presence of an unmeasured variable affecting both treatment times and the outcome, smoothed estimates of $\Delta(q)$, heterogeneous treatment effects, and simulations based on the other outcomes in the NYC data. One key takeaway from these simulations is that the presence of an unmeasured covariate affecting treatment times and the outcome does not necessarily bias our results or affect the validity of our inferential procedure. We simulate situations where an unmeasured confounder follows an AR(1) or seasonal process, and we are still able to obtain unbiased results while existing methods are biased. This confirms the theoretical results in Section 3 suggesting that our approach is robust to unmeasured confounders as long as stationarity holds. Time-varying, unmeasured confounders could still negatively impact inference if they are non-stationary themselves or if their effect on the outcome changes over time. We also explore scenarios where the true $\Delta(q)$ values are smooth in $q$, and show that if this smoothness is incorporated into the estimation procedure, then more efficient estimates of marginal treatment effects can be obtained. We further evaluated model performance in a similar manner on the remaining three outcomes of interest in our data analysis: violent crimes, proactive arrests, and the racial disparity in proactive arrests. The results are mostly identical across the four outcomes, with some minor differences. For all three outcomes our approach obtains credible interval coverages that are at, or near, the nominal level for both marginal and heterogeneous estimands. We see a small amount of bias in the estimates of $\Delta(q)$ for violent crimes and proactive arrests that could be due to mild amounts of model misspecification or non-stationarity; however, it is not substantial enough to drastically impact coverage rates for either outcome.
Overall, the simulations showed that our approach is well-suited to estimating treatment effects in the policing data in NYC. Additionally, we have seen that our approach is robust to certain types of unmeasured confounding as long as stationarity continues to hold.
The effects of neighborhood policing in NYC
We estimate the impact of neighborhood policing on four outcomes: the number of proactive arrests, the number of misdemeanor arrests, the number of violent crimes, and the difference between the numbers of black and white proactive arrests. We choose these outcomes because they are plausibly affected by the initiation of neighborhood policing and are of interest for understanding the impact of the policy throughout New York City. We include eleven covariates as potential effect modifiers: population size, percentage of the population that is Black, percentage of the population that is Latino, percentage of housing units that are vacant, percentage unemployed, percentage living in poverty, percentage young men, percentage foreign-born, percentage of owner-occupied housing units, percentage of people with a bachelor's degree, and the average outcome in the first 50 time periods of the study.
Plausibility of causal assumptions
Before estimating the effects of neighborhood policing on crime and arrest rates, it is important to discuss the assumptions required to identify them from the data. The three key assumptions are the no interference assumption, the no anticipatory effects assumption, and the stationarity assumption. In evaluating the effects of policing interventions, spillover effects are often of interest. For example, hot spots policing places a large number of officers in a small area with very high crime rates, and such targeted enforcement might push crime to adjacent areas (Puelz et al., 2019; Collazos et al., 2020). However, in our setting, neighborhood policing is unlikely to have spillover effects that cross precinct lines. Neighborhood policing involves hiring new officers, not to target enforcement, but to engage with community members. The theorized mechanism linking the policy to crime reduction is not the incapacitation of offenders through increased arrests, but the preemptive reduction of crime through improved community trust, which is unlikely to push crime to nearby areas. Neighborhood policing also reduces the time existing patrol officers spend responding to 911 calls. Given that officers typically stay within their precinct (in our data, 99% of arrests in any precinct are made by officers of that precinct), it is unlikely that this additional time devoted to community engagement will have impacts on nearby precincts. Nonetheless, this is a key assumption, and therefore we have provided two additional approaches in Appendix B that relax the no interference assumption and allow for spillover of the treatment effect. We find very little evidence of spillover of the treatment effect, which increases our belief in the no interference assumption, as well as in the findings presented here.
The no anticipatory effects assumption ensures that the potential outcome if neighborhood policing is never adopted is the same as the potential outcome if neighborhood policing has not been adopted yet. This would fail if the police officers in a precinct changed their behavior in preparation for the change to neighborhood policing. Given that the new community engagement officers would not be working yet and the traditional officers would not have the additional free time allotted for community engagement, this assumption is expected to hold. Regardless, if this assumption were to be violated, it would likely bias results towards null effects. This is because the pre-treatment data directly before initiation of the policy would reflect the impact of neighborhood policing and our predictions for the post-treatment period in the absence of the policy would be shifted in the direction of the treatment effect, thereby making the estimated effect smaller in magnitude. As a sensitivity analysis we ran all analyses using earlier treatment initiation times and do not find that estimates differ.
The assumption of stationarity is arguably a strong assumption. Fortunately, this is the one assumption that we were able to partially assess in the simulation study of Section 5. While we can never formally test this assumption, because it involves unobserved counterfactuals in the post-treatment period, we are able to assess whether the stationarity assumption holds in earlier time periods. If stationarity were to be violated in the earlier time points, then our simulations based on the observed New York City arrest data would show biased estimates of the treatment effects and coverage rates below 95%. The fact that our estimates remained relatively unbiased and led to nominal coverage rates gives us increased confidence that this assumption holds in the data example.
Marginal effects
First we focus on the time-specific effects $\Delta(q)$ for $q = 0, \ldots, 9$ for each of the four outcomes. Throughout, we assume that $\Delta(q)$ is smooth in $q$ by using 3-degree-of-freedom splines for the time component of $f(X_i, t - T_{i0})$. The estimates and pointwise 95% credible intervals are depicted in Figure 3. Estimates are negative for both misdemeanor and proactive arrests, indicating that neighborhood policing leads to a reduction in low-level, discretionary arrests. The effect remains relatively constant over time for proactive arrests, and the credible interval contains zero only in the final three time points. The effect on misdemeanor arrests is negative at all time points but decreases in magnitude at later time points, with the credible interval containing zero beginning at the fifth time point. The average numbers of misdemeanor and proactive arrests in the month before neighborhood policing implementation were 183 and 55, respectively. This indicates that the estimated reductions of 25 misdemeanor and 11.5 proactive arrests in the first month of implementation are substantial (13.6% and 20.9%) reductions in arrest levels. The estimates of the effects on both violent crime and the difference in black and white arrest levels show essentially no effect of the policy, as the estimates for these two outcomes are very close to zero at all time points considered. These results indicate that the policy does not reduce crime levels but does reduce arrests for low-level offenses. The increased community contact does not appear to lead to increased arrests. Instead, the policy's de-emphasis of arrests as a primary goal and its increased emphasis on community trust have led to fewer discretionary arrests. As a sensitivity analysis to confirm these results, we utilized both a difference in differences and a synthetic control estimator in Appendix G. These rely on different assumptions and different model choices, but the overall findings remain relatively similar. In Appendix E, we evaluate the sensitivity of our results to model specification by implementing a vector autoregressive model for the outcome time series, and again find very similar results.
Heterogeneous effects
Understanding the impact of the policy on different communities is potentially more useful than marginal effects alone. It is possible for marginal effects to show no effect of the policy only because certain communities have a positive treatment effect while others have a negative one. Ignoring such differences could lead to an incomplete assessment of the success (or lack thereof) of the policy. We estimate $\Psi(j, 9)$ for each covariate using the 25th and 75th quantiles of the observed covariate distribution as $x_j$ and $x_j'$. We use a linear regression to model $f(X_i, t - t_0) = \beta_0 + g(t - t_0) + \sum_{j=1}^{p} \beta_j X_{ij}$. We tried more flexible approaches, such as random forests and super learners, but did not find substantively different conclusions, and therefore we restrict attention to the simpler case here. Figure 4 shows the coefficient estimates $\widehat{\beta}_j$ for each of the eleven covariates considered in our study. Precincts with higher prior values of the outcome have more negative treatment effects on misdemeanor arrests and proactive arrests, indicating that the reduction in arrests is more prominent in areas with higher arrest rates.
While these estimates of heterogeneity are interesting in their own right, they represent the effect of changing one covariate while fixing the remaining covariates, which may not be plausible in certain situations. For instance, we would expect precincts with a higher percentage of residents with a bachelor's degree to have lower unemployment rates. In light of this, we grouped precincts based on their characteristics and estimated treatment effect heterogeneity across precincts with different profiles. To do this, we used k-means clustering with $k = 5$ to partition our precincts into five distinct clusters. We chose $k = 5$ because this led to clusters that were very diverse socioeconomically, and because the within-cluster sum of squares did not decrease substantially beyond 5 clusters. These clusters can be visualized in Figure 5, which shows a clear spatial structure to the clusters of precincts, as neighboring precincts and certain boroughs of NYC tend to have similar covariate profiles. We then estimate the treatment effect within each of these clusters separately; the results for proactive arrests are shown in Figure 5. It is clear that the treatment effect varies greatly across the different areas of NYC. There is no treatment effect in wealthier, predominantly white areas of Manhattan (cluster 2), while the treatment effect is significantly negative in working-class neighborhoods with higher proportions of Black or Latino people (clusters 1, 3, and 5), indicating that the number of proactive arrests dropped dramatically in those precincts. This heterogeneity in the treatment effect for proactive arrests is missed in Figure 4, which looks at only one covariate at a time, highlighting the importance of comparing distinct, plausible covariate values.
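The clustering step itself is standard; a minimal sketch is below, with random placeholder covariates in place of the actual precinct data. Cluster-specific effects can then be obtained by averaging the posterior draws of the unit-level effects within each cluster.

```python
# A minimal sketch of the precinct clustering step; X is a placeholder for
# the (76 x 11) matrix of precinct covariates.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
X = rng.normal(size=(76, 11))                      # placeholder covariates
X_std = StandardScaler().fit_transform(X)

# Elbow check: within-cluster sum of squares for k = 1, ..., 10.
wcss = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_std).inertia_
        for k in range(1, 11)]

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X_std)
cluster = kmeans.labels_                           # cluster label per precinct
```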
Discussion
In this paper, we estimated the effect of neighborhood policing on crime and arrests in New York City using a novel approach to estimating causal effects of policies with staggered adoption, one that additionally estimates heterogeneity of the effects by observed covariates. We showed through realistic simulations based on the observed data that our approach is able to estimate the causal effects of a precinct-level treatment with good finite-sample properties. We found that neighborhood policing reduces low-level arrests and that this effect is more pronounced in working-class neighborhoods of NYC with larger proportions of Black or Latino people. Another crucial takeaway from our analysis is that neighborhood policing does not reduce violent crime, in alignment with previous criminological research showing community policing has null to minimal impacts on crime. This suggests that neighborhood policing, and possibly other policies that reduce arrests, can be implemented without increasing violent crime.
The city officials that launched neighborhood policing hoped it would promote racial equity. A large body of research reveals that police arrest Black people at starkly disproportionate rates, underscoring the importance of this goal. We found, however, that neighborhood policing had no impact on racial disparities in discretionary arrests. Even as police made fewer arrests, the racial balance remained the same. Changing the durable disparities in criminal justice outcomes will likely require more dramatic interventions. The policy was not without its benefits, however. Mayors and city councilmembers might want to reduce the number of low-level arrests their police departments make. Research suggests this is a worthwhile goal, as low-level arrests have negative consequences for both police and the people arrested while having minimal crime control benefits (Natapoff, 2018; National Academies of Sciences, Engineering, and Medicine, 2018). We found that adopting neighborhood policing would be an effective way to achieve fewer misdemeanor arrests without increasing crime.
One feature of the proposed work is that it is better suited for estimating short-term causal effects rather than long-term downstream effects of a policy. Our approach relies on forecasting the potential outcomes in the absence of the policy, and these forecasts become more uncertain over time. For instance, it is possible that the policy has an effect on violent crime, but that this effect takes longer to propagate than the 10 months considered here. Another feature of the proposed approach is the generalizability of results to other populations of interest, such as other cities that may adopt this policy. While our approach focuses on sample-level estimands unique to the population being studied, by looking at heterogeneity of the causal effect we may provide better insights into how this policy would affect other populations with different covariate distributions.
As with any causal analysis of observational data, the validity of our approach depends on certain assumptions. One of these is that there is no spillover of the treatment effect into neighboring precincts. While we believe this is a reasonable assumption in our study of neighborhood policing (Section 6.1), in other contexts it may be less likely to hold. In Appendix B, we extended our approach to allow the potential outcomes to depend on the treatment status of neighboring units (Verbitsky-Savitz and Raudenbush, 2012; Papadogeorgou et al., 2019), and found little evidence of spillover of the treatment effect, which increases our confidence in the results of the NYC policing study. Further research could improve these extensions or allow for other interference mechanisms, such as letting the potential outcomes depend on the proportion of treated units (Miles et al., 2019). Importantly, our approach does not rely on an unconfoundedness assumption. This is critical because we can never know whether we have measured all relevant covariates in observational studies, and unmeasured confounders are always a primary concern. We showed that our approach is instead based on a time series stationarity assumption. While this is an assumption in its own right, it is one whose plausibility can be evaluated in the pre-treatment time periods. We did this in the simulation study based on the observed New York City policing data and found that our approach attained credible interval coverage at or near the nominal rate for all estimands. This shows that our proposed procedure is well-suited to the problem at hand, lending credibility to the results in the New York City data.
A Identification of population estimands
As discussed in the manuscript, sample treatment effects are generally not identified solely as a function of the observed data distribution, though we will show that the population counterpart of our estimands is identified under the stationarity, consistency, and no anticipatory effects assumptions. Our sample average treatment effect is the average difference between the potential outcome at the observed treatment time and the potential outcome assuming the treatment is never initiated. The population version of this estimand that we will target is
$$E\left( Y_{t_0+q}(t_0) - Y_{t_0+q}(\infty) \mid T_0 = t_0 \right).$$
Note that we condition on $T_0$ in the population estimand because the sample estimand looks only at the observed $T_0$, and does not examine what would happen had everyone received treatment at a particular time point, which would be analogous to a marginal estimand that does not condition on $T_0$. The first of these terms is immediately identifiable by the consistency assumption, which implies that $E(Y_{t_0+q}(t_0) \mid T_0 = t_0) = E(Y_{t_0+q} \mid T_0 = t_0)$, which is a function of the observed data distribution. Now, we can identify the second term:
$$\begin{aligned}
E(Y_{t_0+q}(\infty) \mid T_0 = t_0) &= \int_{y_{t_0+q}} y_{t_0+q}\, f_{Y_{t_0+q}(\infty) \mid T_0 = t_0}(y_{t_0+q})\, dy_{t_0+q} \\
&= \int_{y_{t_0+q}} \int_{y_{t_0+q-1}} y_{t_0+q}\, f_{Y_{t_0+q}(\infty) \mid Y_{t_0+q-1}(\infty), T_0 = t_0}(y_{t_0+q}) \, f_{Y_{t_0+q-1}(\infty) \mid T_0 = t_0}(y_{t_0+q-1})\, dy_{t_0+q-1}\, dy_{t_0+q} \\
&= \int_{y_{t_0+q}} \int_{y_{t_0+q-1}} \cdots \int_{y_{t_0}} y_{t_0+q} \left\{ \prod_{j=0}^{q} f_{Y_{t_0+q-j}(\infty) \mid Y_{t_0+q-j-1}(\infty), T_0 = t_0}(y_{t_0+q-j}) \right\} f_{Y_{t_0-1}(\infty) \mid T_0 = t_0}\, dy_{t_0}\, dy_{t_0+1} \cdots dy_{t_0+q} \\
&= \int_{y_{t_0+q}} \int_{y_{t_0+q-1}} \cdots \int_{y_{t_0}} y_{t_0+q} \left\{ \prod_{j=0}^{q} f_{Y_{t_0+q-j}(\infty) \mid Y_{t_0+q-j-1}(\infty), T_0 = t_0}(y_{t_0+q-j}) \right\} f_{Y_{t_0-1} \mid T_0 = t_0}\, dy_{t_0}\, dy_{t_0+1} \cdots dy_{t_0+q}.
\end{aligned}$$
The last equality holds because $Y_{t_0-1}(\infty) = Y_{t_0-1}$ by the consistency assumption. The only remaining component of this expression that is not a function of the observed data is the density of $Y_{t_0+q-j}(\infty)$ given both $T_0 = t_0$ and $Y_{t_0+q-j-1}(\infty)$, denoted by $f_{Y_{t_0+q-j}(\infty) \mid Y_{t_0+q-j-1}(\infty), T_0 = t_0}(y_{t_0+q-j})$. The stationarity assumption, however, makes this a function of the observed data, as we can assume that this conditional distribution after time $t_0$ is the same as in the periods before $t_0$, where the potential outcome is fully observed under the consistency and no anticipatory effects assumptions. Note that we have used densities throughout, but if the outcome is a discrete random variable, then analogous expressions hold. Throughout we have conditioned on only one time point in the past; however, the identification formula holds in the same manner if additional time points are considered. While not necessary for identification, conditioning on additional time points can (1) improve the efficiency of results, and (2) make it more likely that the aforementioned stationarity assumption holds. Additionally, we could adjust for time-varying covariates, though this requires the additional assumption that the covariates are unaffected by treatment.
B Alleviating the no interference assumption
In this section, we describe two distinct extensions of our approach that remove the no interference assumption and estimate spillover effects of neighboring precincts adopting the policy. To the best of our knowledge, interference in panel-data settings has not been rigorously studied. In a simplified setting, interference was addressed in Menchetti and Bojinov (2020), who have pairs of data points that can interfere with each other, but where the nature of the interference does not change over time. In our data example, we have a large number of data points that could in principle interfere with each other, and whether or not a unit has nearby precincts with the policy changes constantly over time as units adopt treatment at vastly different times. When interference is present, potential outcomes should be indexed by the times of treatment initiation of all precincts. Specifically, we can extend our potential outcomes to $Y_{it}(t_0, t_{-i})$, where $t_0$ is the treatment start time for unit $i$ and $t_{-i}$ represents the starting times of the remaining units. There are far too many possibilities for the treatment times of the remaining units, and therefore we must simplify the interference structure so that the potential outcome does not depend on the full treatment status of all other units. Let $g_t(t_{-i})$ be a function of the treatment times of all remaining units. We make the following assumption, which is less restrictive than the no interference assumption adopted in the original manuscript:
Neighboring interference assumption: For two sets of treatment times for the remaining precincts given by $t_{-i}$ and $t'_{-i}$, we have that $Y_{it}(t_0, t_{-i}) = Y_{it}(t_0, t'_{-i})$ if $g_t(t_{-i}) = g_t(t'_{-i})$.
This assumption is closely related to the exposure mappings found in recent proposals for estimating spillover effects in network settings (Aronow and Samii, 2017; Forastiere et al., 2021). For both proposals, we make the simplifying assumption that interference is restricted to the presence of a neighbor that has implemented the policy. Intuitively, this means that the treatment status of neighboring units can affect the outcomes of a particular unit, but that the treatment status of far-away precincts cannot affect a unit's outcome time series. Additionally, we assume that the number of treated neighbors does not matter, and that it is only the presence or absence of a treated neighbor that affects the potential outcome. In our setting, we let $g_t(t_{-i})$ be an indicator of whether any of the neighbors of unit $i$ have already begun treatment at time $t$. Formally, we can write this function as $g_t(t_{-i}) = 1(\exists\, j \in N_i : t_{j0} \le t)$, where $N_i$ represents the set of precincts that are geographic neighbors of precinct $i$. With this simplification, we can now define potential outcomes as $Y_{it}(t_0, g)$, where $t_0$ is the time at which unit $i$ adopts treatment, and $g$ is a binary indicator of whether any of the neighbors of unit $i$ have begun treatment by time $t$. Further, let $G_{it}$ be the random variable denoting whether $g_t(T_{-i}) = 1$, i.e., whether unit $i$ has a neighbor that has already begun treatment by time $t$. We now discuss two different estimands and estimation strategies for estimating the effects of having a neighbor with the policy.
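The exposure mapping is easy to compute from an adjacency matrix and the vector of adoption times; the sketch below builds the full matrix of indicators $G_{it}$ with tiny illustrative inputs.

```python
# A minimal sketch of the exposure mapping: G[i, t-1] = 1 if any neighbor of
# precinct i has adopted the policy by time t; all inputs are illustrative.
import numpy as np

n, T = 4, 6
T0 = np.array([3, 5, 2, 6])                  # illustrative adoption times
adj = np.array([[0, 1, 0, 0],                # symmetric neighbor matrix
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]])

# treated[i, t-1] = 1 once unit i has initiated treatment, i.e. t >= T0[i].
treated = (np.arange(1, T + 1)[None, :] >= T0[:, None]).astype(int)
G = (adj @ treated > 0).astype(int)          # any treated neighbor by time t
```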
B.1 Assessing spillover of the realized policy on not-yet treated precincts
One way in which interference can manifest is that untreated precincts can experience an effect of neighboring precincts receiving the policy before they themselves adopt it. In terms of our potential outcomes, this can be denoted by
$$Y_{it}(\infty, 1) - Y_{it}(\infty, 0).$$
The first of these two quantities is fully observed if at time $t$ unit $i$ does not yet have the policy implemented but one of its neighbors does. On the other hand, $Y_{it}(\infty, 0)$ is fully observed if at time $t$ neither unit $i$ nor its neighbors have adopted the policy. We take a similar estimation strategy to the main manuscript, which is to build a model for $Y_t(\infty, 0)$. This amounts to fitting a model to all of the observed data prior to either a unit initiating treatment or a neighbor initiating treatment. We then use our model to forecast $Y_{it}(\infty, 0)$ in the time periods after either a unit or its neighbor becomes treated, and compare this forecast to the fully observed value of $Y_{it}(\infty, 1)$ to see what the effect of the neighboring treatment is on unit $i$. Note that this quantity will not be observed for all $n$ precincts in the sample: some precincts are treated before their neighbors, and we will never observe $Y_{it}(\infty, 1)$ for them. For this reason, we restrict attention to the subset of the sample that is treated after one of their neighbors receives treatment. Let $F_q$ be the set of indices in $1, \ldots, n$ for units that adopt the policy at least $q$ time points after their first neighbor adopts the policy. For an illustration of such a situation, see Figure A.1. We see that in August of 2015 most units are not yet treated, but in September 2015 two precincts on the right of the figure become treated. The green precincts are those without the policy that also have a neighboring precinct that has already adopted the policy. Each of these green precincts is included in $F_0$ because they have a neighbor with treatment and are not yet treated at this time. If these precincts remain untreated in the following month, they will be included in $F_1$, and this process continues until they are treated, at which point they are no longer included in this set.
Before we can introduce our full estimand, we require one more piece of notation. Again let $T_{i0}$ be the time at which unit $i$ begins treatment, but now let $N_{i0}$ be the first time period at which a neighbor of precinct $i$ has adopted treatment. Our estimand can be written as
$$I(q) = \frac{1}{|F_q|} \sum_{i \in F_q} \left\{ Y_{i, N_{i0}+q}(\infty, 1) - Y_{i, N_{i0}+q}(\infty, 0) \right\}.$$
This can be interpreted as the average effect on untreated precincts of having a neighbor adopt the policy, where the effect is allowed to vary by the time since the neighbor adopted the policy. This has the added complication that the sample over which we are averaging our effects changes at each time period $q$; note that the set $F_q$ is decreasing in size with $q$, i.e., $|F_q| \ge |F_{q+1}|$. We find that 75% of our sample is included in $F_0$, and this steadily decreases as we look further forward in time, because more units become treated. For this reason, we only examine $I(q)$ for $q = 0, 1, \ldots, 4$ so that the sample being averaged over remains relatively stable. The estimates and corresponding 95% credible intervals can be found in Figure A.2. We see that there do not appear to be any strong spillover effects on precincts that have not yet adopted the policy. Most of the estimates are close to zero with credible intervals that contain zero, the one exception being $I(4)$ for misdemeanor arrests. The outcome with the largest estimated treatment effect in the manuscript when interference was ignored, proactive arrests, shows no spillover effect of the policy. This increases our belief in the no interference assumption of the manuscript and strengthens the findings presented there. To further assess the plausibility of the no interference assumption, we explore a separate approach in the following section.
B.2 Simple parameterization of interference effects
Here, we aim to separate the overall effect of the policy into two distinct effects: one that targets spillover effects of the policy, and one that targets the direct effect of the policy on the precinct that it is applied to. Specifically, we target the estimands defined as:
$$\begin{aligned}
\Delta_{\mathrm{sp}}(q) &= \frac{1}{n} \sum_{i=1}^{n} \left\{ Y_{i, T_{i0}+q}(\infty, G_{i, T_{i0}+q}) - Y_{i, T_{i0}+q}(\infty, 0) \right\} \\
\Delta_{\mathrm{dir}}(q) &= \frac{1}{n} \sum_{i=1}^{n} \left\{ Y_{i, T_{i0}+q}(T_{i0}, G_{i, T_{i0}+q}) - Y_{i, T_{i0}+q}(\infty, G_{i, T_{i0}+q}) \right\}.
\end{aligned}$$
The first of these two quantities targets a spillover effect: precincts that have already adopted the policy influencing the outcomes of precincts that have not yet adopted it. One can interpret $\Delta_{\mathrm{sp}}(q)$ as the average effect of neighboring precincts' policy decisions when a precinct does not yet itself have the policy implemented. Note that it looks at the realized treatment of neighbors given by $G_{i, T_{i0}+q}$, which is zero for some of the precincts in the study and one for others. This means that it represents the impact of the policy on untreated neighbors under the rollout of the policy that was observed in the study. On the other hand, $\Delta_{\mathrm{dir}}(q)$ can be seen as the average impact of having the policy on precincts, given the observed status of the neighbors of those precincts. Interestingly, if we sum these two quantities we obtain
$$\Delta_{\mathrm{tot}}(q) = \Delta_{\mathrm{sp}}(q) + \Delta_{\mathrm{dir}}(q) = \frac{1}{n} \sum_{i=1}^{n} \left\{ Y_{i, T_{i0}+q}(T_{i0}, G_{i, T_{i0}+q}) - Y_{i, T_{i0}+q}(\infty, 0) \right\},$$
the overall impact of the policy as it was adopted in New York City, which is closely related to $\Delta(q)$ of the main manuscript. Under a consistency assumption, $Y_{i, T_{i0}+q}(T_{i0}, G_{i, T_{i0}+q})$ is a fully observed quantity. The remaining quantities are unobserved, and estimating them effectively amounts to estimating $Y_{i, T_{i0}+q}(\infty, g)$ for $g = 0$ and $g = G_{i, T_{i0}+q}$. To estimate these missing counterfactuals, we fit the following model to the precincts and time periods prior to their adopting the intervention themselves:
$$\begin{aligned}
Y_t &= \mu_t + \beta G_t + \epsilon_t \\
\mu_t &= \mu_{t-1} + \delta_{t-1} + \eta^{\mu}_t, & \eta^{\mu}_t &\sim N(0, D_{\mu}) \\
\delta_t &= \delta_{t-1} + \eta^{\delta}_t, & \eta^{\delta}_t &\sim N(0, D_{\delta}).
\end{aligned}$$
Here $\beta$ is a scalar quantity that captures the effect of neighboring precincts' treatment status on a particular unit. If $\beta = 0$, then $\Delta_{\mathrm{sp}}(q) = 0$ and there is no spillover of the policy. Also note that we use the same $\beta$ parameter for each unit's time series, and therefore we standardize each unit's time series before fitting this model to ensure that the magnitude of the effect of neighboring treatment status is shared across units. We fit this model to each of the four outcomes of interest in our study, and Figure A.3 shows the resulting posterior distributions of $\beta$ under each outcome. For each of the four outcomes considered, zero clearly lies within the posterior distribution of $\beta$, suggesting that interference does not play a large role in any of our analyses. We can also examine estimates of $\Delta_{\mathrm{dir}}(q)$, which corresponds to the average direct effect of treatment on the precincts. The results can be found in Figure A.4, and they closely mirror the effects of neighborhood policing seen in the main manuscript. There is a negative and significant effect of the policy on both proactive arrests and misdemeanor arrests, and the magnitudes of these effects closely mirror the estimates of $\Delta(q)$ from the manuscript. We do not see large direct effects of the policy on either violent crimes or the racial disparity measure, which again agrees with the findings of the manuscript.
C Example highlighting differences between stationarity and existing assumptions
Here we highlight a simple example illustrating how the approach taken in the current paper can still provide unbiased estimates of treatment effects even in the presence of an unmeasured, time-varying confounder, while approaches based on other assumptions can be negatively impacted. To allow for the use of existing approaches to panel-data causal inference problems, we generate data with both treated units and units that never receive treatment. Specifically, we simulate $n = 30$ independent units measured over $T = 80$ time points. We generate an unmeasured, time-varying covariate $U$ for each unit from an AR(1) process with autocorrelation $\alpha$ between successive time points. It will help to write this as
$$U_{it} = \alpha U_{i,t-1} + \eta_{it},$$
where $\eta_{it}$ is random noise and $\alpha$ dictates the degree of temporal correlation. We generate the treatment times $T_{i0}$ as a function of the unmeasured covariate. Specifically, all units are untreated until time 76, when certain units are treated. We set $A_{i,76} = 1(U_{i,75} > 0)$, which means that all units with a positive value of the unmeasured confounder at time 75 are treated and remain treated for the remaining time points. This can also be written as
$$T_{i0} = \begin{cases} 76 & U_{i,75} > 0 \\ \infty & \text{otherwise.} \end{cases}$$
In this setting, we have an unmeasured variable that perfectly determines the treatment time of individuals. The potential outcomes in the absence of treatment are given by
$$Y_{it}(0) = \beta t + \gamma U_{it} + \epsilon_{it}, \quad \epsilon_{it} \sim N(0, 1).$$
The potential outcomes are therefore increasing with time, but are also a function of the unmeasured confounder. Note that this situation breaks the assumptions of a number of existing panel-data methods, which assume that the distribution of the residuals for the potential outcome is the same in the treated and control groups. Clearly this is violated here, as the residuals are a function of both $\epsilon_{it}$ and $U_{it}$, and the distribution of $U_{it}$ differs between the treated and control groups: treated units have positive values of this unmeasured variable at time 75, while control units have negative values. We will show for the difference in differences (DiD) and synthetic control (SC) estimators how this can lead to biased estimation, and how the stationarity assumption still holds in this setting, so that our approach can provide valid inference. For simplicity, we assume that $Y_{it}(1) = Y_{it}(0) + \tau$, so that there is a constant treatment effect over time and across different populations.
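For concreteness, the sketch below generates one data set from this design; the parameter values are illustrative choices, not those used in the reported simulations.

```python
# A minimal sketch of the Appendix C data-generating process: an unmeasured
# AR(1) confounder drives both treatment timing and the outcome.
import numpy as np

rng = np.random.default_rng(5)
n, T = 30, 80
alpha, beta, gamma, tau = 0.8, 0.5, 2.0, 1.0        # illustrative parameters

# Stationary AR(1) unmeasured confounder U with unit innovation variance.
U = np.empty((n, T))
U[:, 0] = rng.normal(0.0, 1.0 / np.sqrt(1 - alpha**2), size=n)
for t in range(1, T):
    U[:, t] = alpha * U[:, t - 1] + rng.normal(size=n)

# Treatment begins at time 76 iff U at time 75 is positive (1-indexed times).
treated = U[:, 74] > 0                               # column 74 is time 75
A = np.zeros((n, T), dtype=int)
A[treated, 75:] = 1                                  # columns 75.. are times 76..

t_grid = np.arange(1, T + 1)
Y0 = beta * t_grid[None, :] + gamma * U + rng.normal(size=(n, T))
Y = Y0 + tau * A                                     # constant additive effect
```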
C.1 Impact on existing estimators
Here we show that the difference in differences and synthetic control estimators will not be able to provide unbiased estimates of causal effects in this setting. First we can look at the DiD estimator, which relies on a parallel trends assumption. In our setting, this assumption is that
$$E(Y_{i,76}(0) - Y_{i,75}(0) \mid T_{i0} = 76) = E(Y_{i,76}(0) - Y_{i,75}(0) \mid T_{i0} = \infty).$$
Under the simple model described above, we can easily show that this equality does not hold, since
$$\begin{aligned}
E(Y_{i,76}(0) - Y_{i,75}(0) \mid T_{i0} = 76) &= E(Y_{i,76}(0) - Y_{i,75}(0) \mid U_{i,75} > 0) \\
&= \beta + \gamma E(U_{i,76} - U_{i,75} \mid U_{i,75} > 0) \\
&= \beta + \gamma E((\alpha - 1) U_{i,75} + \eta_{i,76} \mid U_{i,75} > 0) \\
&= \beta - \gamma (1 - \alpha) E(U_{i,75} \mid U_{i,75} > 0) \\
&\neq \beta - \gamma (1 - \alpha) E(U_{i,75} \mid U_{i,75} \le 0) \\
&= E(Y_{i,76}(0) - Y_{i,75}(0) \mid T_{i0} = \infty).
\end{aligned}$$
We see that the parallel trends assumption does not hold, and the degree to which it fails depends on both $\gamma$ and $\alpha$. For the synthetic control approach it is more difficult to derive analytically what happens in this context, but we can provide intuition for its performance. Synthetic controls aim to find weights $w_{ij}$ for treated unit $i$ such that $Y_{it}(0) \approx \sum_{j: T_{j0} = \infty} w_{ij} Y_{jt}(0)$. This means that for each treated unit, we try to find a weighted linear combination of control units that well approximates the potential outcome of the treated unit, which we can use to impute the missing potential outcome in the absence of the treatment. These weights are given a sum-to-one constraint, $\sum_{j: T_{j0} = \infty} w_{ij} = 1$. We point readers to Ben-Michael et al. (2019) for more details on the estimation of these weights in more complex settings with multiple treated units. We can see, however, that the expected outcome for a treated unit in this case is given by
E(Y i,76 (0)|U i,75 > 0) = 76β + γE(U i,76 |U i,75 > 0).
The synthetic controls will try to approximate this with a weighted combination of control units, which have expectation
E j:T j0 =∞ w ij Y jt (0) U j,75 ≤ 0 ∀ j = 76β + γ j:T j0 =∞ w ij E(U j,76 |U j,75 ≤ 0) < 76β + γE(U i,76 |U i,75 > 0) = E(Y i,76 (0)|U i,75 > 0)
and therefore the synthetic controls will tend to underestimate the missing potential outcome in this setting, which will lead to overestimating the treatment effect.
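A quick Monte Carlo check of the parallel-trends violation, reusing the simulate_panel sketch above (the constant $\tau = 2$ matches its default; the seed and number of replications are our own choices):

```python
import numpy as np

# Verify that the pre-to-post trend in Y(0) differs between the treated and
# never-treated groups, i.e., that parallel trends fails in this DGP.
rng = np.random.default_rng(1)
trend_treated, trend_control = [], []
for _ in range(2000):
    Y, A, treated = simulate_panel(rng=rng)
    d = Y[:, 75] - Y[:, 74]        # change from time 75 to time 76
    d[treated] -= 2.0              # subtract tau to recover the Y(0) trend for treated units
    trend_treated.append(d[treated].mean())
    trend_control.append(d[~treated].mean())
print(np.mean(trend_treated), np.mean(trend_control))  # the two averages differ
```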
C.2 Showing stationarity holds
An alternative approach is to rely on a stationarity assumption on the control potential outcomes for the treated units. As in the manuscript, this approach utilizes the pre-treatment data (up to time period 75) to build a model for $Y_{it}(0)$ in the treated group and then uses this model to forecast times $t \geq T_{i0}$. Mathematically, we will be estimating $E[Y_{it}(0) \mid U_{i,75} > 0]$ in the pre-treatment period, and need this model to continue to hold in the post-treatment period to obtain accurate predictions of the missing potential outcome. One can write this expectation as

$$E[Y_{it}(0) \mid U_{i,75} > 0] = E(\beta t + \gamma U_{it} + \epsilon_{it} \mid U_{i,75} > 0) = \beta t + \gamma E[U_{it} \mid U_{i,75} > 0].$$

Now we can show that this expectation is the same in the pre- and post-treatment periods. First we can look at the case where $t > 75$:

$$\begin{aligned}
\beta t + \gamma E[U_{it} \mid U_{i,75} > 0] &= \beta t + \gamma E\big[ E[U_{it} \mid U_{i,t-1} = u_{i,t-1}, U_{i,75} > 0] \big] \\
&= \beta t + \gamma \alpha E[U_{i,t-1} \mid U_{i,75} > 0] \\
&\ \ \vdots \\
&= \beta t + \gamma \alpha^{t-75} E[U_{i,75} \mid U_{i,75} > 0].
\end{aligned}$$

Now, we can perform similar operations for $t \leq 75$:

$$\begin{aligned}
\beta t + \gamma E[U_{it} \mid U_{i,75} > 0] &= \beta t + \gamma E\big[ E[U_{it} \mid U_{i,t+1} = u_{i,t+1}, U_{i,75} > 0] \big] \\
&= \beta t + \frac{\gamma}{\alpha} E[U_{i,t+1} \mid U_{i,75} > 0] \\
&\ \ \vdots \\
&= \beta t + \frac{\gamma}{\alpha^{75-t}} E[U_{i,75} \mid U_{i,75} > 0] = \beta t + \gamma \alpha^{t-75} E[U_{i,75} \mid U_{i,75} > 0].
\end{aligned}$$
We see that these mean functions are the same in the pre- and post-treatment periods, and therefore we can use the pre-treatment outcomes to estimate this model and then forecast into time periods after treatment initiation. Note that even though the true outcome model is a linear function of time and the unmeasured confounder, after integrating over possible values of the unmeasured confounder we are left with a nonlinear function of time $t$.
C.3 Simulation results
Now, we repeat this simulation study 100 times to evaluate the bias of a variety of estimators in this situation. We focus on the following estimators:

1. A difference-in-differences estimator using the R package did (Callaway and Sant'Anna, 2021a,b).

2. A pooled synthetic control estimator from Ben-Michael et al. (2019).

3. A two-way fixed effects estimator of the form $E(Y_{it}) = \beta_0 + \beta_i + \beta_t + \tau A_{it}$.

4. The proposed approach, where we model $E(Y_{it}) = f(t)$ during the pre-treatment period for the treated subjects and use it to forecast the potential outcomes in the post-treatment period. We estimate $f(t)$ using 3-degree-of-freedom natural cubic splines (a sketch of this estimator follows the list).

5. The proposed approach, where we estimate $f(t)$ using a linear function of time.

6. The proposed approach, where we use the known function of time $f(t) = \beta t + \gamma \alpha^{t-75} E[U_{i,75} \mid U_{i,75} > 0]$ to forecast future time points. This estimator is not feasible in practice, but we use it here for comparison with the approaches that estimate this unknown function.
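A minimal sketch of estimator 4 above, assuming a cubic-polynomial basis stands in for the 3-degree-of-freedom natural cubic spline (the basis choice and the function name are our own):

```python
import numpy as np

def stationarity_estimate(Y, treated, t0=76, degree=3):
    """Fit f(t) on pre-treatment outcomes of each treated unit, forecast the
    post-treatment Y(0), and average the observed-minus-forecast differences."""
    time = np.arange(1, Y.shape[1] + 1)
    pre, post = time < t0, time >= t0
    X = np.vander(time / time.max(), degree + 1)          # simple polynomial basis
    effects = []
    for i in np.where(treated)[0]:
        coef, *_ = np.linalg.lstsq(X[pre], Y[i, pre], rcond=None)
        yhat0 = X[post] @ coef                            # forecast of Y_it(0)
        effects.append(np.mean(Y[i, post] - yhat0))
    return float(np.mean(effects))
```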
The results from this simulation can be found in Figure A.5. The bias results for both the DID and synthetic control estimators are as expected given the derivations above. The DID estimator is biased in nearly every situation, except when there is no association between the unmeasured confounder and the outcome ($\gamma = 0$). The synthetic control estimator shows increasing bias as both $\gamma$ and the autocorrelation increase. The direction of the bias, which is not shown here, is as expected: the synthetic control estimator tends to overestimate the treatment effect, while the DID estimator overestimates the unknown trend given by $E(Y_{i,76}(0) - Y_{i,75}(0) \mid T_{i0} = 76)$, which leads to underestimation of the causal effect. The two-way fixed effects estimator also has substantial bias for larger values of the autocorrelation and $\gamma$. The stationarity approaches do not suffer from this bias. When we estimate a nonlinear function of time, shown in the middle-right panel of Figure A.5, we obtain relatively small bias in all situations: not as small as when the true underlying $f(t)$ is known (bottom-right panel), but a reasonably good approximation of that ideal. The model assuming stationarity with a linear function of time (bottom-left panel) also does reasonably well, but is more biased than the nonlinear version, since the true underlying function is nonlinear.
C.4 Simulation study with seasonal unmeasured confounder
We now explore a related situation where the unmeasured confounder has a seasonal trend instead of following an AR(1) process. We use the same simulation structure as above, with $n = 30$ units and $T = 80$ time periods, and the same structure for defining the start time of treatment adoption and simulating the potential outcomes. Specifically, we simulate the treatment start time as

$$T_{i0} = \begin{cases} 76 & U_{i,75} > 0 \\ \infty & \text{otherwise,} \end{cases}$$

and generate the potential outcomes according to

$$Y_{it}(0) = \beta t + \gamma U_{it} + \epsilon_{it}, \qquad \epsilon_{it} \sim N(0, 1).$$

We now generate the unmeasured variable as $U_{it} = \sin(t/3 + a_i) + \eta_{it}$, where $\eta_{it} \sim N(0, \sigma^2_t)$ and $a_i$ is a randomly sampled integer between 1 and 10. We utilize the same estimators as before, but we exclude the linear stationarity estimator, which clearly will not work in this nonlinear setting, and the known stationarity estimator, which is not feasible in practice. The results can be found in Figure A.6, which shows the bias of the four estimators considered across a range of values for $\sigma^2_t$ and $\gamma$. We see that the DID estimator again has the most bias of all of the estimators, while the synthetic control and two-way fixed effect estimators are biased for large values of $\gamma$ and small values of $\sigma^2_t$. The proposed approach based on stationarity is relatively unbiased for all parameter values considered.
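For reference, the seasonal confounder can be generated as follows (a short sketch; the noise level, seed, and variable names are our own choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n, T, sigma_t = 30, 80, 0.5
a = rng.integers(1, 11, size=n)                  # unit-specific phase a_i in {1, ..., 10}
t = np.arange(1, T + 1)
# U_it = sin(t/3 + a_i) + eta_it with eta_it ~ N(0, sigma_t^2)
U = np.sin(t[None, :] / 3 + a[:, None]) + rng.normal(scale=sigma_t, size=(n, T))
```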
D Computational details for MCMC sampling
Here we discuss the details of the MCMC algorithm used to sample from the multivariate Bayesian structural time series model utilized in the paper. We also present computational details for a vector autoregressive model that can be used for the same forecasts in Section E. We discuss sampling for a particular structural time series model, though additional components, such as seasonal terms, are straightforward to incorporate into the proposed Gibbs sampler. Specifically, we will be sampling from the following model:
$$\begin{aligned}
Y_t &= \mu_t + \epsilon_t \\
\mu_t &= \mu_{t-1} + \delta_{t-1} + \eta^\mu_t \\
\delta_t &= \delta_{t-1} + \eta^\delta_t \\
\eta^\mu_t &\sim N(0, D_\mu), \qquad \eta^\delta_t \sim N(0, D_\delta), \qquad \epsilon_t \sim N(0, \Sigma),
\end{aligned}$$
where $D_\mu$ is a diagonal matrix with elements given by $\sigma^2_{\mu,i}$ for $i = 1, \ldots, n$, and $D_\delta$ is defined analogously with variance parameters $\sigma^2_{\delta,i}$ for $i = 1, \ldots, n$. Given this model specification, one can show the conditional updates for the vectors of mean and trend states at time period 1 to be given by
$$\delta_1 \mid \cdot \sim N\Big( \big(D_\mu^{-1} + 2 D_\delta^{-1}\big)^{-1} \big[ D_\mu^{-1} (\mu_2 - \mu_1) + D_\delta^{-1} \delta_2 \big],\ \big(D_\mu^{-1} + 2 D_\delta^{-1}\big)^{-1} \Big)$$

$$\mu_1 \mid \cdot \sim N\Big( \big(\Sigma^{-1} + 2 D_\mu^{-1}\big)^{-1} \big[ \Sigma^{-1} Y_1 + D_\mu^{-1} (\mu_2 - \delta_1) \big],\ \big(\Sigma^{-1} + 2 D_\mu^{-1}\big)^{-1} \Big)$$
Next we show the updates for a time $1 < t < T$ at which all units are still fully observed, i.e., $T_{i0} > t$ for all $i = 1, \ldots, n$. These updates are given by

$$\delta_t \mid \cdot \sim N\Big( \big(D_\mu^{-1} + 2 D_\delta^{-1}\big)^{-1} \big[ D_\mu^{-1} (\mu_{t+1} - \mu_t) + D_\delta^{-1} \delta_{t-1} + D_\delta^{-1} \delta_{t+1} \big],\ \big(D_\mu^{-1} + 2 D_\delta^{-1}\big)^{-1} \Big)$$

$$\mu_t \mid \cdot \sim N\Big( \big(\Sigma^{-1} + 2 D_\mu^{-1}\big)^{-1} \big[ \Sigma^{-1} Y_t + D_\mu^{-1} (\mu_{t-1} + \delta_{t-1}) + D_\mu^{-1} (\mu_{t+1} - \delta_t) \big],\ \big(\Sigma^{-1} + 2 D_\mu^{-1}\big)^{-1} \Big)$$
Lastly, we show the updates for time periods at which some units may be in their final time period before treatment initiation, while other units may have already adopted treatment and therefore no longer contribute to the likelihood of our model. To do this, we must first introduce some additional notation. Let the $*$ superscript denote vectors and matrices that only contain the data from the units who have yet to receive treatment. For instance, $Y^*_t$ is a vector of outcomes at time $t$ for individuals with $T_{i0} > t$. Similarly, $\Sigma^*$ is a $k \times k$ matrix, where $k$ is the number of individuals for whom $T_{i0} > t$, and it corresponds to the submatrix of $\Sigma$ that only has the rows and columns corresponding to the untreated units at time $t$. Lastly, let the $0$ subscript correspond to vectors and matrices that have zeroes for the indices of individuals with $T_{i0} = t + 1$, i.e., those who receive treatment in the next time period. For instance, $\mu^*_{t,0}$ is a vector of length $k$ with the values of $\mu_t$ for units with $T_{i0} > t$ and zeroes for individuals with $T_{i0} = t + 1$. The updates for the mean and trend parameters at time $t$ for individuals with $T_{i0} > t$ are given by

$$\delta^*_t \mid \cdot \sim N\Big( \big((D^*_{\mu,0})^{-1} + (D^*_\delta)^{-1} + (D^*_{\delta,0})^{-1}\big)^{-1} \big[ (D^*_{\mu,0})^{-1} (\mu^*_{t+1,0} - \mu^*_{t,0}) + (D^*_\delta)^{-1} \delta^*_{t-1} + (D^*_\delta)^{-1} \delta^*_{t+1,0} \big],\ \big((D^*_{\mu,0})^{-1} + (D^*_\delta)^{-1} + (D^*_{\delta,0})^{-1}\big)^{-1} \Big)$$

$$\mu^*_t \mid \cdot \sim N\Big( \big((\Sigma^*)^{-1} + (D^*_\mu)^{-1} + (D^*_{\mu,0})^{-1}\big)^{-1} \big[ (\Sigma^*)^{-1} Y^*_t + (D^*_\mu)^{-1} (\mu^*_{t-1} + \delta^*_{t-1}) + (D^*_\mu)^{-1} (\mu^*_{t+1,0} - \delta^*_{t,0}) \big],\ \big((\Sigma^*)^{-1} + (D^*_\mu)^{-1} + (D^*_{\mu,0})^{-1}\big)^{-1} \Big)$$
That concludes the updates for the mean and trend parameters; however, we also have to update the variance parameters $(\sigma^2_{\delta,i}, \sigma^2_{\mu,i})$ for $i = 1, \ldots, n$. We assign these independent inverse-gamma priors with hyperparameters $a_\sigma$ and $b_\sigma$. This leads to conjugate updates within the Gibbs sampler that are given by

$$\sigma^2_{\delta,i} \mid \cdot \sim IG\Big( a_\sigma + \tfrac{T_{i0} - 1}{2},\ b_\sigma + \delta^2_{1i}/2 + \tfrac{1}{2} \sum_{t=2}^{T_{i0} - 1} (\delta_{ti} - \delta_{t-1,i})^2 \Big)$$

$$\sigma^2_{\mu,i} \mid \cdot \sim IG\Big( a_\sigma + \tfrac{T_{i0} - 1}{2},\ b_\sigma + \mu^2_{1i}/2 + \tfrac{1}{2} \sum_{t=2}^{T_{i0} - 1} (\mu_{ti} - \mu_{t-1,i} - \delta_{t-1,i})^2 \Big)$$
One can iterate through each of the steps described above to implement a Gibbs sampler that updates all parameters in our model.
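As an illustration of the last step, a sketch of the conjugate inverse-gamma draws (Python/NumPy; the array layout, default hyperparameters, and names are our own, and finite adoption times are assumed):

```python
import numpy as np

def draw_state_variances(delta, mu, T0, a_sigma=0.01, b_sigma=0.01, rng=None):
    """Draw (sigma^2_delta,i, sigma^2_mu,i) from their IG full conditionals.
    delta, mu: (T, n) arrays of current state draws; T0: integer adoption times."""
    rng = rng or np.random.default_rng()
    n = delta.shape[1]
    sig2_d, sig2_m = np.empty(n), np.empty(n)
    for i in range(n):
        k = int(T0[i]) - 1                 # number of pre-treatment state terms
        d, m = delta[:k, i], mu[:k, i]
        shape = a_sigma + k / 2.0
        rate_d = b_sigma + d[0]**2 / 2 + 0.5 * np.sum(np.diff(d)**2)
        rate_m = b_sigma + m[0]**2 / 2 + 0.5 * np.sum((m[1:] - m[:-1] - d[:-1])**2)
        sig2_d[i] = 1.0 / rng.gamma(shape, 1.0 / rate_d)   # IG(shape, rate) draw
        sig2_m[i] = 1.0 / rng.gamma(shape, 1.0 / rate_m)
    return sig2_d, sig2_m
```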
E Alternative model specification
In addition to the structural time series model used throughout the manuscript, we explored a vector autoregressive model as an alternative model for the outcome process over time. Specifically, to account for both spatial and temporal dependencies, we specify a local-mean first-order vector autoregressive model of the form
$$Y_t = f(t) + A\big(Y_{t-1} - f(t-1)\big) + \epsilon_t, \tag{5}$$

where $\epsilon_t \sim N(0_n, \Sigma)$. For a more general discussion of these models, see Banbura and van Vlodrop (2018). The vector of functions $f(t)$ accounts for unit-specific intercepts and trends over time. These will be estimated using basis functions by allowing $f_i(t) = \sum_{k=1}^K \beta_{ik} \phi_k(t)$, where $\{\phi_k(t)\}_{k=1}^K$ are prespecified basis functions that include an intercept. $A$ is an $n \times n$ matrix of parameters that control the extent of temporal dependence over time. Diagonal elements of $A$ allow for dependence across time within each subject, while off-diagonal elements of $A$ dictate the amount of dependence across subjects over time. As in the manuscript, the error term $\epsilon_t$ allows for spatial dependence across units in the study at a particular time period through the covariance matrix $\Sigma$.
E.1 Sparsity of A matrix
In high-dimensional time series settings, we do not have sufficient data to estimate all $n^2$ parameters in the $A$ matrix, and some form of dimension reduction or shrinkage is required. Both shrinkage priors (Bańbura et al., 2010; Kastner and Huber, 2017; Ghosh et al., 2019) and point-mass priors (Korobilis, 2013) have been used to improve estimation of $A$. One complicating factor unique to our setting is that the observed time series lengths in the pre-treatment period may vary drastically across subjects. This complicates computation, as nearly all algorithms have been developed for situations with equal numbers of time periods. Additionally, this can cause a problem for forecasting, as it is difficult to use values from one precinct to predict the values of another if they are observed for drastically different lengths of time. For this reason, we force $A_{ij} = 0$ for any subjects $i$ and $j$ that initiate treatment at substantially different times. This greatly increases the sparsity in $A$, leading to more stable estimation, and avoids problems due to time series being observed at very different time points. Further dimension reduction can be achieved by setting $A_{ij} = 0$ for subjects that are not geographic neighbors. This is based on the fact that geographic neighbors should be more correlated than geographically distant subjects, and is a strategy we utilize for the New York City policing data. The nonzero entries of $A$, along with the parameters $\beta$ for the time trends, are given independent normal prior distributions with diffuse variances, as the dimension of the parameter space has been sufficiently reduced to alleviate the need for shrinkage priors.
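One way to encode these two sparsity rules as a boolean mask is sketched below (the cutoff max_gap, quantifying "substantially different times," is our own placeholder, as are the names):

```python
import numpy as np

def build_A_mask(T0, neighbors, max_gap=6):
    """Allowed (nonzero) entries of A: subjects must adopt treatment within
    `max_gap` periods of each other and be geographic neighbors.
    `neighbors` is a boolean adjacency matrix; never-treated units have T0 = inf."""
    T0 = np.asarray(T0, dtype=float)
    close_in_time = np.abs(T0[:, None] - T0[None, :]) <= max_gap
    mask = close_in_time & neighbors
    np.fill_diagonal(mask, True)            # always allow own-lag dependence
    return mask                              # A[i, j] forced to 0 where mask is False
```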
E.2 Computational details for MCMC sampling
All unknown parameters have full conditional distributions from known families, so the model can be implemented using a standard Gibbs sampling algorithm. First we detail the Gibbs sampling update for $\beta_k$ for $k = 1, \ldots, K$. Let us define

$$\widetilde{R}_t = Y_t - \sum_{j \neq k} X_j(t) \beta_j - A Y_{t-1} + A \sum_{j \neq k} X_j(t-1) \beta_j,$$

for $t = 2, \ldots, T$, with the first time point defined as

$$\widetilde{R}_1 = Y_1 - \sum_{j \neq k} X_j(1) \beta_j.$$
Further, we must define the following:
$$\widetilde{X}_k(t) = X_k(t) - A X_k(t-1),$$

for $t = 2, \ldots, T$, and $\widetilde{X}_k(1) = X_k(1)$. Next, define $O_t = \{j : T_{j0} > t\}$ to be the set of subjects who have not yet been exposed to the treatment. We will use the $*$ superscript to denote versions of all relevant vectors and matrices that have the elements corresponding to indices not in $O_t$ set to zero. Specifically, let $Y^*_t$ be defined such that

$$Y^*_{it} = \begin{cases} Y_{it} & i \in O_t \\ 0 & i \notin O_t. \end{cases}$$

Identical notation will be used for $\widetilde{R}_t$. For matrices, we will let $\widetilde{X}^*_j(t)$ be equal to $\widetilde{X}_j(t)$ except that all values in row $i$ and column $i$ are set to zero if $i \notin O_t$. Lastly, we will let $\Sigma^{*\,-1}_t$ be an $n \times n$ matrix with any elements in row and column $i$ set to zero for all $i \notin O_t$; the remaining elements of $\Sigma^{*\,-1}_t$ are set to the inverse of the submatrix of $\Sigma$ defined by the indices in $O_t$.
The update for $\beta_k$ then proceeds as follows: for $k = 1, \ldots, K$, sample $\beta_k$ from a multivariate normal distribution with mean $M$ and variance $V$ defined by

$$V = \Big( \sum_{t=1}^T \widetilde{X}^*_k(t)^T \Sigma^{*\,-1}_t \widetilde{X}^*_k(t) \Big)^{-1}, \qquad M = V \sum_{t=1}^T \widetilde{X}^*_k(t)^T \Sigma^{*\,-1}_t \widetilde{R}_t.$$
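A sketch of this draw (Python/NumPy; the inputs are assumed precomputed per the definitions above, and the diffuse-prior precision term is our own numerical safeguard for the zeroed-out rows):

```python
import numpy as np

def draw_beta_k(R_list, X_list, Sinv_list, prior_prec=1e-6, rng=None):
    """One Gibbs draw beta_k ~ N(M, V). R_list[t], X_list[t], Sinv_list[t] hold
    R~_t, X~*_k(t), and Sigma*_t^{-1} for t = 1..T (already masked/zero-padded)."""
    rng = rng or np.random.default_rng()
    n = X_list[0].shape[0]
    P = prior_prec * np.eye(n)            # diffuse normal prior precision
    b = np.zeros(n)
    for R, X, Sinv in zip(R_list, X_list, Sinv_list):
        P += X.T @ Sinv @ X               # accumulate X~*' Sigma*^{-1} X~*
        b += X.T @ Sinv @ R
    V = np.linalg.inv(P)
    return rng.multivariate_normal(V @ b, V)
```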
Now we detail the update for $A$, which can be done in a very similar manner to $\beta_k$ if structured properly. To simplify the updates, assume that we are updating one element in each row of $A$ simultaneously, so that we update $n$ values at a time, one for each subject in the data. This is nearly identical to the situation posed when updating the $n$-vector $\beta_k$, with some slight modifications. We again need to define a residual value,

$$\widetilde{E}_t = Y_t - \sum_{k=1}^K X_k(t) \beta_k - A_0 \Big( Y_{t-1} - \sum_{k=1}^K X_k(t-1) \beta_k \Big),$$

where $A_0$ is equal to $A$ with the $n$ elements we are updating all set to zero. If we let $j_i$ be the index of $A_i$ that we are updating, we can define $W_t$ to be a diagonal matrix with the $(i, i)$ element equal to the $j_i$ element of $Y_{t-1} - \sum_{k=1}^K X_k(t-1) \beta_k$. We again use the $*$ superscript to zero the relevant elements of all vectors and matrices, as we did for the update of $\beta_k$. We can then update the vector of $n$ values from $A$ from a multivariate normal distribution with mean $M$ and variance $V$ defined by

$$V = \Big( \sum_{t=1}^T W^{*\,T}_t \Sigma^{*\,-1}_t W^*_t \Big)^{-1}, \qquad M = V \sum_{t=1}^T W^{*\,T}_t \Sigma^{*\,-1}_t \widetilde{E}_t.$$
This process is then iterated until all elements of $A$ have been updated. If each row of $A$ has exactly $q$ nonzero elements, this process is simply iterated $q$ times. If there is an unequal number of nonzero terms in the rows of $A$, the process is repeated $q_{\max}$ times, where $q_{\max}$ is the maximum number of nonzero elements in a row of $A$. In this setting, rows of $A$ that have fewer than $q_{\max}$ nonzero elements can either recycle their nonzero elements, in which case they get updated more than once per MCMC iteration, or the relevant matrices and vectors in the construction of $M$ and $V$ can have the indices corresponding to these rows set to zero. In the latter setup we update the parameters of $A$ from a multivariate normal with mean and variance given by only the elements of $M$ and $V$ that correspond to the indices being updated.
E.2.1 Posterior predictive distribution and causal effects
Once we have posterior samples of $\beta_k$ and $A$, we can produce posterior samples of the outcome values in the absence of treatment, denoted by $\widetilde{Y}(\infty)$. For all time periods before the treatment is initiated, this value is observed and known; we use time series forecasting from our model to predict these values after treatment initiation. Let $t_{\min}$ be the first time point at which treatment is initiated in the study. The following algorithm generates the $m$-th posterior draw from $P(\widetilde{Y}(\infty) \mid Y, X)$: for $t = t_{\min}, \ldots, T$, perform the following steps:

1. Draw values $\beta^{(m)}_k$ for $k = 1, \ldots, K$ and $A^{(m)}$ from the posterior distribution of both parameters.

2. Obtain residuals from the previous time point as

$$r_{t-1} = \widetilde{Y}^{(m)}_{t-1}(\infty) - \sum_{k=1}^K X_k(t-1) \beta^{(m)}_k.$$

3. For the current time point, calculate

$$M_t = \sum_{k=1}^K X_k(t) \beta^{(m)}_k + A^{(m)} r_{t-1}.$$

4. If $T_{i0} \leq t$ for all $i$, draw $\widetilde{Y}^{(m)}_t(\infty)$ from a multivariate normal distribution with mean $M_t$ and variance $\Sigma$. If there exists an $i$ such that $T_{i0} > t$, split the mean and covariance matrices as

$$M_t = \begin{pmatrix} M_{t1} \\ M_{t2} \end{pmatrix}, \qquad \Sigma = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix},$$
where $M_{t1}$ represents the components of $M_t$ corresponding to subjects $i$ with $T_{i0} \leq t$, and $\Sigma_{11}$ is the covariance matrix of the residual errors for the same set of subjects. We let $\widetilde{Y}^{(m)}_{t1}(\infty)$ and $\widetilde{Y}^{(m)}_{t2}(\infty)$ be defined similarly. We are able to observe $\widetilde{Y}^{(m)}_{t2}(\infty) = Y_{t2}(\infty)$, whereas we draw $\widetilde{Y}^{(m)}_{t1}(\infty)$ from a multivariate normal distribution with mean

$$M_{t1} + \Sigma_{12} \Sigma_{22}^{-1} \big( Y_{t2}(\infty) - M_{t2} \big)$$

and variance $\Sigma_{11} - \Sigma_{12} \Sigma_{22}^{-1} \Sigma_{21}$. Now that we have posterior draws of $\widetilde{Y}_t(\infty)$, we automatically have posterior draws of the treatment effect at any time point as $Y_{t,\mathrm{obs}} - \widetilde{Y}_t(\infty)$, the difference between the observed data (under treatment) and the prediction of what would have happened in the absence of treatment.
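Step 4's conditional draw, sketched below (Python/NumPy; the mask-based interface and names are our own):

```python
import numpy as np

def draw_conditional(M_t, Sigma, unobs, y_obs, rng=None):
    """Draw the unobserved block of Y_t(inf) given the observed block.
    `unobs` is a boolean mask of units with T_i0 <= t; y_obs holds the
    observed Y_t(inf) of the still-untreated units."""
    rng = rng or np.random.default_rng()
    o = ~unobs
    S11 = Sigma[np.ix_(unobs, unobs)]
    S12 = Sigma[np.ix_(unobs, o)]
    S22 = Sigma[np.ix_(o, o)]
    K = S12 @ np.linalg.inv(S22)                 # regression of block 1 on block 2
    m = M_t[unobs] + K @ (y_obs - M_t[o])        # conditional mean
    V = S11 - K @ S12.T                          # conditional covariance
    return rng.multivariate_normal(m, V)
```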
E.3 Alternating least squares estimate of Σ
Here we detail how we find initial estimates of $(\hat{f}(t), \hat{A})$, which can be used to construct an estimate of the residual covariance matrix. We construct $f(t)$ as $f_i(t) = \sum_{k=1}^K \beta_{ik} \phi_k(t)$, so our model can be expressed as

$$Y_t = f(t) + A \big( Y_{t-1} - f(t-1) \big) + \epsilon_t = \sum_{k=1}^K X_k(t) \beta_k + A \Big( Y_{t-1} - \sum_{k=1}^K X_k(t-1) \beta_k \Big) + \epsilon_t,$$

where $X_k(t)$ is an $n \times n$ diagonal matrix with each diagonal element equal to $\phi_k(t)$, and $\beta_k$ is a vector of length $n$ holding the coefficients $\beta_{ik}$ for $i = 1, \ldots, n$. We construct an algorithm to estimate all unknown parameters, which are given by $(\beta_1, \ldots, \beta_K, A)$. Note that we adopt the convention that $A_i = [A_{i1}, \ldots, A_{in}]$. Our algorithm iterates across all unknown parameters, using least squares at each step while conditioning on the current estimates of the remaining parameters. We do this until the estimates have converged, which we assess by checking whether the $\ell_2$ norm between successive updates has dropped below a pre-chosen level $\delta$. Below are the specific updates for each parameter involved in the iterative algorithm.
E.3.1 Update of β k
Let us first describe how to estimate $\beta_k$ given current estimates of the other parameters, $\hat{\beta}_j$ for $j \neq k$, and $\hat{A}$. Let us define

$$\widetilde{R}_t = Y_t - \sum_{j \neq k} X_j(t) \hat{\beta}_j - \hat{A} Y_{t-1} + \hat{A} \sum_{j \neq k} X_j(t-1) \hat{\beta}_j,$$

for $t = 2, \ldots, T$, with the first time point defined as

$$\widetilde{R}_1 = Y_1 - \sum_{j \neq k} X_j(1) \hat{\beta}_j.$$

Lastly, we must define

$$\widetilde{X}_k(t) = X_k(t) - \hat{A} X_k(t-1),$$

for $t = 2, \ldots, T$, and $\widetilde{X}_k(1) = X_k(1)$. As we are estimating the parameters with least squares, our goal is to minimize

$$\sum_{t=1}^T \big\| \widetilde{R}_t - \widetilde{X}_k(t) \beta_k \big\|^2.$$

Taking the derivative of this expression with respect to $\beta_k$, setting it equal to zero, and solving for $\beta_k$, we obtain the estimate

$$\hat{\beta}_k = \Big( \sum_{t=1}^T \widetilde{X}_k(t)^T \widetilde{X}_k(t) \Big)^{-1} \sum_{t=1}^T \widetilde{X}_k(t)^T \widetilde{R}_t.$$
E.3.2 Update of A
To update $A$ we can separately estimate $A_i$ for $i = 1, \ldots, n$. As described in the manuscript, many elements of $A_i$ will be zero by construction. To simplify notation, let $A^*_i$ be a vector containing only the nonzero elements of $A_i$. For instance, if $S = \{s : A_{is} \neq 0\}$ and $|S| = q$, then $A^*_i = (A_{iS_1}, \ldots, A_{iS_q})$. We adopt the same convention for the vectors $\beta_k$ and $Y_t$, and we let $X^*_k(t)$ be the rows of $X_k(t)$ corresponding to the indices in $S$. We can now write our model as

$$Y_{it} = \sum_{k=1}^K \beta_{ik} \phi_k(t) + A_i \Big( Y_{t-1} - \sum_{k=1}^K X_k(t-1) \hat{\beta}_k \Big) = \sum_{k=1}^K \beta_{ik} \phi_k(t) + A^*_i \Big( Y^*_{t-1} - \sum_{k=1}^K X^*_k(t-1) \hat{\beta}^*_k \Big).$$

Now define $\widetilde{E}_i = (\widetilde{E}_{i2}, \ldots, \widetilde{E}_{iT})$, where $\widetilde{E}_{it} = Y_{it} - \sum_{k=1}^K \hat{\beta}_{ik} \phi_k(t)$. Further, define $\widetilde{W}_i$ to be a $(T-1) \times q$ matrix whose row $t$ is given by

$$\widetilde{W}_{it} = Y^*_{t-1} - \sum_{k=1}^K X^*_k(t-1) \hat{\beta}^*_k.$$

Our least squares estimate of $A^*_i$ is then

$$\hat{A}^*_i = \big( \widetilde{W}_i^T \widetilde{W}_i \big)^{-1} \widetilde{W}_i^T \widetilde{E}_i.$$
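In code, this row update might look as follows (a sketch; the argument names and the scatter-back convention are ours):

```python
import numpy as np

def ls_update_A_row(E_i, W_i, support, n):
    """Least-squares update of row i of A on its nonzero support S.
    E_i: residual vector (E_i2, ..., E_iT); W_i: (T-1, q) matrix with rows
    Y*_{t-1} - sum_k X*_k(t-1) beta*_k restricted to the q supported columns."""
    a_star, *_ = np.linalg.lstsq(W_i, E_i, rcond=None)   # (W_i' W_i)^{-1} W_i' E_i
    row = np.zeros(n)
    row[support] = a_star                                # scatter back; zeros elsewhere
    return row
```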
E.3.3 Update of Σ

Once estimates of all parameters are obtained from the above algorithm, fitted values $\hat{Y}_t$ can be obtained as

$$\hat{Y}_1 = \sum_{k=1}^K X_k(1) \hat{\beta}_k, \qquad \hat{Y}_t = \sum_{k=1}^K X_k(t) \hat{\beta}_k + \hat{A} \Big( Y_{t-1} - \sum_{k=1}^K X_k(t-1) \hat{\beta}_k \Big) \quad \text{for } t = 2, \ldots, T.$$
Once these have been obtained, we can construct an estimate of the covariance matrix, starting from the sample covariance matrix

$$S = \frac{1}{T - K - 1} \sum_{t=1}^T (Y_t - \hat{Y}_t)(Y_t - \hat{Y}_t)^T.$$
Lastly, this covariance matrix is unstable unless T is large relative to n, which is not the case in the policing example described in the manuscript. To improve estimation, we will enforce sparsity on the inverse of Σ by solving the following constrained optimization problem:
$$\hat{\Omega} = \underset{\Omega}{\operatorname{argmin}} \; \operatorname{tr}(\Omega S) - \log \det \Omega \quad \text{such that} \quad \Omega \in Q,$$

where $Q$ is the space of all positive semi-definite matrices whose $(i, j)$ element is zero for any $i$ and $j$ that are not neighbors in the data. $\hat{\Omega}$ can then be inverted to provide our final estimate of $\Sigma$.
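One simple way to approximate this constrained fit is to optimize over only the free entries of Ω with a generic optimizer. The sketch below is our own rough stand-in, not the solver used in the paper; a dedicated graphical-model routine would be faster and more reliable.

```python
import numpy as np
from scipy.optimize import minimize

def sparse_precision(S, mask):
    """Minimize tr(Omega S) - logdet(Omega) over symmetric Omega whose entries
    outside `mask` (boolean, assumed True on the diagonal) are fixed at zero."""
    n = S.shape[0]
    iu = np.triu_indices(n)
    free = mask[iu]                          # which upper-triangular entries are free

    def unpack(theta):
        vals = np.zeros(free.size)
        vals[free] = theta
        O = np.zeros((n, n))
        O[iu] = vals
        return O + np.triu(O, 1).T           # symmetrize

    def objective(theta):
        O = unpack(theta)
        sign, logdet = np.linalg.slogdet(O)
        if sign <= 0:
            return 1e12                      # step left the positive-definite cone
        return np.trace(O @ S) - logdet

    theta0 = np.eye(n)[iu][free]             # start at the identity matrix
    res = minimize(objective, theta0, method="Powell")
    return unpack(res.x)
```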
E.4 Simulation results
Here we present simulation results analogous to those in the manuscript, but for the VAR model described above. The simulation setup is identical to the one presented in the manuscript, and the results can be found in Figure A.7. The results are fairly similar with respect to the marginal estimands $\Delta(q)$, as this approach achieves nearly the nominal 95% coverage rate, though with the VAR model coverage dips into the 86% range for longer-term causal effects. Another key difference arises in the estimation of heterogeneous treatment effects, where the VAR model leads to worse coverage of heterogeneous estimands, as seen in the right panel of Figure A.7. If instead of forecasting 10 time periods into the future we forecast only 3, the coverage of the heterogeneous estimands improves and is closer to the nominal level. While it is intuitive that forecasting farther into the future is more difficult, the state space model used in the main manuscript did not suffer from this limitation. For brevity we do not show the results here, but we applied this VAR model to simulations from all four outcomes of interest, and it generally performs slightly worse than the model used in the manuscript.
E.5 Results on NYC policing study
Here we present the findings of our NYC neighborhood policing analysis when using VAR models to forecast the potential outcomes in the absence of the policy. The estimates of $\Delta(q)$ for each of the four outcomes as a function of $q$ can be found in Figure A.8. We see a very similar story as with the models in the main manuscript: there is a strong effect of the policy on both misdemeanor and proactive arrests that appears more sustained for proactive arrests, while the estimated effects for both violent crimes and the racial disparity of proactive arrests are very close to zero, indicating little to no effect of the policy on these outcomes. We can also examine whether the effects of neighborhood policing vary by observed characteristics or locations of New York City. First, looking at whether the covariates modify the treatment effect in Figure A.9, we see a similar story as in the main manuscript: none of the covariates strongly affects the magnitude or direction of the effect of neighborhood policing. Lastly, we investigate whether the treatment effect differs across the five regions defined by the clustering algorithm of the manuscript. Previously, we had seen very negative effects in certain areas of New York City and very little effect of the policy in other neighborhoods. We see very similar estimates here (Table 1), with significant and negative effects in clusters 1, 3, and 5, and very little evidence of a treatment effect in clusters 2 and 4. Overall, the results are fairly similar across the VAR and state space modeling approaches, which increases our confidence in the main findings of the manuscript.
Table 1: Estimates of the treatment effect on proactive arrests for each cluster when using a VAR outcome model.

Cluster    Treatment Effect (95% CI)
1          -10.58 (-18.11, -3.5)
2            1.38 (-7.33, 10.37)
3          -25.05 (-39.39, -10.16)
4           -1.70 (-11.49, 8.01)
5          -34.34 (-53.42, -15.56)
F Results on NYC policing study using existing estimators
Here we apply both a difference-in-differences and a synthetic control estimator to estimate the marginal estimands in the neighborhood policing study in NYC. We utilize the same DID and synthetic control estimators described in Section C that are applicable to the staggered adoption setting. One difference between these estimators and the one proposed in the manuscript is that they require at least one control observation that is never treated during the study, while every precinct in NYC eventually adopts neighborhood policing. To avoid this issue, we estimate the effect of the policy on a subset of the data that is treated earlier, while using precincts that are treated at later times as control units. Specifically, 6 precincts become treated at time point 153 in our study, 6 become treated at time point 150, and all other precincts are treated on or before time period 147. For this reason, we drop the precincts treated at time point 150 and include those treated at time point 153 as control precincts. Due to this restriction, we also only estimate treatment effects for the first five time periods after treatment adoption, so that even precincts treated at time 147 can utilize these control precincts. It is important to note that while we perform these analyses to confirm that we obtain results similar to those from our approach in the manuscript, these are slightly different estimands that average over a subset of the precincts. Any differences between our approach and these approaches could be due to the statistical approach taken, the underlying assumptions associated with each approach, or the fact that we are looking at a slightly different estimand. Also note that we do not consider estimands highlighting heterogeneity by covariates here, as these approaches are not currently developed for that purpose.
The results for all four outcomes considered in the manuscript can be seen in Figure A.10. We see relatively similar results to those seen in the manuscript, which provides additional evidence for the overall findings of our approach. Neither approach finds any effect of neighborhood policing on the racial disparity in proactive arrests. In terms of both misdemeanor and proactive arrests, the DID approach provides extremely similar results to those seen in the manuscript. The DID estimator finds a strong, negative effect of neighborhood policing on proactive arrests, and finds a moderate effect on misdemeanor arrests that slightly decreases in strength over time. The synthetic control estimator also finds highly similar results to the proposed approach in terms of point estimates for both proactive and misdemeanor arrests. They have slightly wider 95% confidence intervals, however, which leads to the intervals generally covering 0. The increased width in the confidence intervals is at least partially caused by the fact that this analysis uses a subset of the precincts for estimation of the treatment effect, and this decreased sample size would be expected to lead to increased uncertainty. In terms of violent crimes, the synthetic control estimator finds no effect of neighborhood policing, which closely aligns with the results seen in the manuscript, while the DID estimator finds a moderately negative effect on violent crimes. Overall, these results paint a relatively similar picture as those presented in the manuscript, which is that the most pronounced effects of neighborhood policing are on both proactive and misdemeanor arrests.
G Simulations on other outcomes of interest
In the manuscript, we focused our simulations on misdemeanor arrests, though we examine proactive arrests, violent crimes, and the racial disparity in proactive arrests as well. Here, we present identical simulation studies to the homogeneous treatment effect simulation of the main manuscript, though we use the three additional outcomes for the data in the simulation. The stationarity assumption, which is critical to our approach, is unique to each outcome and therefore we must run this simulation and model checking across all outcomes to ensure validity of our results. The full results for violent crimes, proactive arrests, and racial disparities in proactive arrests can all be found in Figures A.11, A.12, and A.13, respectively. Here, we briefly summarize these results and how they compare with those seen in the manuscript for misdemeanor arrests. Importantly, the credible interval coverage for all marginal estimands remains very close to 0.95 for all three simulations, which echoes the results of the simulations for misdemeanor arrests in the main manuscript. Heterogeneous estimands also depict a similar story, as we are also able to achieve coverage very close to the nominal rate. The bias of the marginal treatment effects is relatively similar to the simulation from the main manuscript as biases are relatively low, and they tend to increase as we estimate causal effects farther into the future. This bias is not substantial enough to greatly affect the coverage probabilities for these estimands, which are close to 95%. Overall, these results suggest that our approach is able to estimate treatment effects for all four outcomes of interest with a reasonably high amount of statistical validity. It appears, at least in the pre-treatment period, that the stationarity assumption is reasonable and that we can obtain accurate estimates of treatment effects of interest.
Acknowledgements

The authors would like to thank Georgia Papadogeorgou, Aaron Molstad, and Rohit Patra for extremely insightful comments on the manuscript.

H Additional simulation studies

Here we present additional simulation results that are again based on the NYC policing data. We present results from simulation studies that evaluate our approach with smoothed estimates of $\Delta(q)$ and in a simulation with heterogeneous treatment effects.

H.1 Smooth estimates of $\Delta(q)$

Here we run the same simulation study as in the homogeneous simulation study of the main manuscript, except we now let the true $\Delta(q)$ be a smooth function of $q$. In particular, we let $\Delta(q) = 10 + Z_q \beta_q$, where $Z_q$ are three-degrees-of-freedom natural splines evaluated at $q$ and $\beta_q = (-2, -4, -6)$. We use the proposed approach in two ways: one that assumes smoothness of $\Delta(q)$ and one that does not. The model that does not assume smoothness simply takes the posterior distribution of $\Delta_{i,\,T_{i0}+q,\,T_{i0}}$ and directly calculates the posterior distribution of the sample average treatment effect by averaging over the units in the sample. The approach that assumes smoothness takes every posterior sample of $\Delta_{i,\,T_{i0}+q,\,T_{i0}}$ and regresses these individual treatment effects against a three-degrees-of-freedom spline representation of $q$. The fitted values from this model are then used as posterior draws of $\Delta_{i,\,T_{i0}+q,\,T_{i0}}$, and calculating sample average treatment effects proceeds analogously. We can see in the left panel of Figure A.14 that both estimators provide credible interval coverages for $\Delta(q)$ that are at or near the nominal 95% rate, with the smoothed estimates having slightly lower coverage. One key difference can be seen in the right panel of Figure A.14, as the standard deviations of the estimates coming from the model assuming smoothness are generally smaller than those from the model that does not assume smoothness of $\Delta(q)$.

H.2 Heterogeneous treatment effects

Here we run the same simulation study as in the homogeneous simulation study of the main manuscript, except we now let the treatment effect for each precinct be proportional to $X_i \beta$. The results are nearly identical to those from the homogeneous treatment effect setting in the manuscript. We see effectively no bias of the marginal treatment effects, and interval coverages that are close to the nominal 95% rate. We see the same story for the heterogeneous estimands as well, as our approach is able to achieve nearly the nominal coverage rate for each covariate in the study, which shows the ability of our approach to estimate heterogeneous treatment effects in the NYC policing example.

Figure 2: Results from the homogeneous treatment effect simulation study. The left panel shows estimates of $\Delta(q)$ for $q = 0, \ldots, 9$. Estimates are mean shifted so that an unbiased estimator would be centered at zero. The middle panel shows coverage for all marginal estimands, while the right panel shows coverage for heterogeneous estimands.

Figure 3: Estimates and 95% credible intervals for time-specific effects $\Delta(q)$ of neighborhood policing on misdemeanor arrests (first panel), proactive arrests (second panel), violent crimes (third panel), and the difference of black and white proactive arrests (fourth panel).

Figure 4: Estimates of coefficients from the heterogeneous treatment effect functions.

Figure 5: Illustration of the results of the clustering algorithm on NYC precincts as well as the estimates of the treatment effect on proactive arrests for each of these clusters.

Figure A.1: Treatment status of precincts in two successive months. Blue denotes a unit that has already adopted the policy, while green denotes units that have not yet adopted the policy and have a neighboring precinct with the policy.

Figure A.2: Estimates of $I(q)$ for $q = 0, 1, \ldots, 4$ under each of the four outcomes considered.

Figure A.3: Posterior distribution of $\beta$ for each of the four outcomes considered. A value of $\beta = 0$ indicates there is no spillover of the treatment effect.

Figure A.4: Estimates of $\Delta_{\mathrm{dir}}(q)$ for $q = 0, 1, \ldots, 9$ under each of the four outcomes considered.

Figure A.5: Absolute value of the bias of the various estimators under the time-varying unmeasured confounder simulation across a range of values of $\rho$ and $\gamma$.

Figure A.6: Absolute value of the bias of the various estimators under the time-varying unmeasured confounder simulation across a range of values of $\sigma^2_t$ and $\gamma$.

Figure A.7: Results from the simulation study for misdemeanor arrests when using vector autoregressive models.

Figure A.8: Estimates and 95% credible intervals for time-specific effects $\Delta(q)$ of neighborhood policing on misdemeanor arrests (first panel), proactive arrests (second panel), violent crimes (third panel), and the difference of black and white proactive arrest rates (fourth panel) when using a VAR outcome model.

Figure A.9: Estimates of coefficients from the heterogeneous treatment effect functions when using VAR models for time series forecasting.

Figure A.10: Estimates of the marginal effect of neighborhood policing in the first five time periods after treatment initiation for both a DID and synthetic control estimator.

Figure A.14: [Caption lost in extraction; the recoverable panel titles are "No smoothing", "Smoothed estimates", "Coverage of aggregate estimands" ($t = 1, \ldots, 10$), and "Coverage of heterogeneous estimands" ($X_1, \ldots, X_{11}$).]

Figure A.15: Results from the heterogeneous treatment effect simulation study. The left panel shows the estimates of $\Delta(q)$ for $q = 0, \ldots, 9$; estimates are mean shifted so that an unbiased estimator will be centered at zero. The middle panel shows the coverage of $\Delta(q)$, while the right panel shows coverage of the heterogeneous estimands.
Athey, S. and Imbens, G. W. (2018). Design-based analysis in difference-in-differences settings with staggered adoption. Technical report, National Bureau of Economic Research.

Balzer, L. B., Petersen, M. L., van der Laan, M. J., and Collaboration, S. (2016). Targeted estimation and inference for the sample average treatment effect in trials with and without pair-matching. Statistics in Medicine, 35(21):3717-3732.

Bańbura, M., Giannone, D., and Reichlin, L. (2010). Large Bayesian vector auto regressions. Journal of Applied Econometrics, 25(1):71-92.

Banbura, M. and van Vlodrop, A. (2018). Forecasting with Bayesian vector autoregressions with time variation in the mean.

Ben-Michael, E., Feller, A., and Rothstein, J. (2019). Synthetic controls and weighted event studies with staggered adoption. arXiv preprint arXiv:1912.03290.

Bernal, J. L., Cummins, S., and Gasparrini, A. (2017). Interrupted time series regression for the evaluation of public health interventions: a tutorial. International Journal of Epidemiology, 46(1):348-355.

Bojinov, I., Tu, Y., Liu, M., and Xu, Y. (2019). Causal inference from observational data: Estimating the effect of contributions on visitation frequency at LinkedIn. arXiv preprint arXiv:1903.07755.

Bratton, W. J. (2015). The NYPD plan of action and the neighborhood policing plan: A realistic framework for connecting police and communities. http://home.nyc.gov.

Brodersen, K. H., Gallusser, F., Koehler, J., Remy, N., Scott, S. L., et al. (2015). Inferring causal impact using Bayesian structural time-series models. The Annals of Applied Statistics, 9(1):247-274.

Callaway, B. and Sant'Anna, P. H. (2021a). did: Difference in differences. R package version 2.1.1.

Callaway, B. and Sant'Anna, P. H. (2021b). Difference-in-differences with multiple time periods. Journal of Econometrics.

Menchetti, F. and Bojinov, I. (2020). Estimating causal effects in the presence of partial interference using multivariate Bayesian structural time series models. arXiv e-prints, arXiv-2006.

Miles, C. H., Petersen, M., and van der Laan, M. J. (2019). Causal inference when counterfactuals depend on the proportion of all subjects exposed. Biometrics, 75(3):768-777.

Miratrix, L., Anderson, C., Henderson, B., Redcross, C., and Valentine, E. (2019). Simulating for uncertainty with interrupted time series designs.

Natapoff, A. (2018). Punishment without crime: How our massive misdemeanor system traps the innocent and makes America more unequal. Hachette UK.

National Academies of Sciences, Engineering, and Medicine (2018). Proactive policing: Effects on crime and communities. National Academies Press.

O'Neill, J. P. (2018). The Police Commissioner's Report 2018. New York City Police Department.

Papadogeorgou, G., Mealli, F., and Zigler, C. M. (2019). Causal inference with interfering units for cluster and population level treatment allocation programs. Biometrics, 75(3):778-787.

Papadogeorgou, G., Mealli, F., Zigler, C. M., Dominici, F., Wasfy, J. H., and Choirat, C. (2018). Causal impact of the hospital readmissions reduction program on hospital readmissions and mortality. arXiv preprint arXiv:1809.09590.

Picard, R. (2015). Geoinpoly: Stata module to match geographic locations to shapefile polygons.

Puelz, D., Basse, G., Feller, A., and Toulis, P. (2019). A graph-theoretic approach to randomization tests of causal effects under general interference. arXiv preprint arXiv:1910.10862.

Schell, T. L., Griffin, B. A., and Morral, A. R. (2018). Evaluating methods to estimate the effect of state laws on firearm deaths: A simulation study. RAND Corporation.

Scott, S. L. and Varian, H. R. (2014). Predicting the present with Bayesian structural time series. International Journal of Mathematical Modelling and Numerical Optimisation, 5(1-2):4-23.

Shaikh, A. and Toulis, P. (2019). Randomization tests in observational studies with staggered adoption of treatment. University of Chicago, Becker Friedman Institute for Economics Working Paper, (2019-144).

Sherman, L. W. and Eck, J. E. (2003). Policing for crime prevention. In Evidence-based crime prevention, pages 309-343. Routledge.

Verbitsky-Savitz, N. and Raudenbush, S. W. (2012). Causal inference under interference in spatial settings: a case study evaluating community policing program in Chicago. Epidemiologic Methods, 1(1):107-130.
| [
"https://github.com/jantonelli111/HeterogeneousTEpanel"
]
|
[
"From Free-Energy Profiles to Activation Free Energies",
"From Free-Energy Profiles to Activation Free Energies"
]
| [
"Johannes C B Dietschreit \nDepartment of Materials Science and Engineering\nMassachusetts Institute of Technology\n02139CambridgeMassachusettsUSA\n",
"Dennis J Diestler \nUniversity of Nebraska-Lincoln\n68583LincolnNebraskaUSA\n",
"Andreas Hulm \nChair of Theoretical Chemistry\nDepartment of Chemistry\nUniversity of Munich (LMU)\nButenandtstr. 7D-81377MünchenGermany\n",
"Christian Ochsenfeld \nChair of Theoretical Chemistry\nDepartment of Chemistry\nUniversity of Munich (LMU)\nButenandtstr. 7D-81377MünchenGermany\n\nMax Planck Institute for Solid State Research\nHeisenbergstr. 1D-70569StuttgartGermany\n",
"Rafael Gómez-Bombarelli \nDepartment of Materials Science and Engineering\nMassachusetts Institute of Technology\n02139CambridgeMassachusettsUSA\n"
]
| [
"Department of Materials Science and Engineering\nMassachusetts Institute of Technology\n02139CambridgeMassachusettsUSA",
"University of Nebraska-Lincoln\n68583LincolnNebraskaUSA",
"Chair of Theoretical Chemistry\nDepartment of Chemistry\nUniversity of Munich (LMU)\nButenandtstr. 7D-81377MünchenGermany",
"Chair of Theoretical Chemistry\nDepartment of Chemistry\nUniversity of Munich (LMU)\nButenandtstr. 7D-81377MünchenGermany",
"Max Planck Institute for Solid State Research\nHeisenbergstr. 1D-70569StuttgartGermany",
"Department of Materials Science and Engineering\nMassachusetts Institute of Technology\n02139CambridgeMassachusettsUSA"
]
| []
| Given a chemical reaction going from reactant (R) to the product (P) on a potential energy surface (PES) and a collective variable (CV) discriminating between R and P, we define the free-energy profile (FEP) as the logarithm of the marginal Boltzmann distribution of the CV. This FEP is not a true free energy. Nevertheless, it is common to treat the FEP as the "free-energy" analog of the minimum potential energy path and to take the activation free energy, ∆F ‡ RP , as the difference between the maximum at the transition state and the minimum at R. We show that this approximation can result in large errors. The FEP depends on the CV and is therefore not unique. For the same reaction different, discriminating CVs can yield different ∆F ‡ RP . We derive an exact expression for the activation free energy that avoids this ambiguity. We find ∆F ‡ RP to be a combination of the probability of the system being in the reactant state, the probability density on the dividing surface, and the thermal de Broglie wavelength associated with the transition. We apply our formalism to simple analytic models and realistic chemical systems and show that the FEP-based approximation applies only at low temperatures for CVs with a small effective mass. Most chemical reactions occur on complex, high-dimensional PES that cannot be treated analytically and pose the added challenge of choosing a good CV. We study the influence of that choice and find that, while the reaction free energy is largely unaffected, ∆F ‡ RP is quite sensitive. | 10.1063/5.0102075 | [
"https://export.arxiv.org/pdf/2206.02893v2.pdf"
]
| 249,431,611 | 2206.02893 | 7d90dd07af5bc5d65ce9de718364d88eb4f22cde |
From Free-Energy Profiles to Activation Free Energies
Johannes C B Dietschreit
Department of Materials Science and Engineering
Massachusetts Institute of Technology
02139CambridgeMassachusettsUSA
Dennis J Diestler
University of Nebraska-Lincoln
68583LincolnNebraskaUSA
Andreas Hulm
Chair of Theoretical Chemistry
Department of Chemistry
University of Munich (LMU)
Butenandtstr. 7D-81377MünchenGermany
Christian Ochsenfeld
Chair of Theoretical Chemistry
Department of Chemistry
University of Munich (LMU)
Butenandtstr. 7D-81377MünchenGermany
Max Planck Institute for Solid State Research
Heisenbergstr. 1D-70569StuttgartGermany
Rafael Gómez-Bombarelli
Department of Materials Science and Engineering
Massachusetts Institute of Technology
02139CambridgeMassachusettsUSA
From Free-Energy Profiles to Activation Free Energies
(Dated: 20.4.2023)
Given a chemical reaction going from reactant (R) to the product (P) on a potential energy surface (PES) and a collective variable (CV) discriminating between R and P, we define the free-energy profile (FEP) as the logarithm of the marginal Boltzmann distribution of the CV. This FEP is not a true free energy. Nevertheless, it is common to treat the FEP as the "free-energy" analog of the minimum potential energy path and to take the activation free energy, $\Delta F^{\ddagger}_{\mathrm{RP}}$, as the difference between the maximum at the transition state and the minimum at R. We show that this approximation can result in large errors. The FEP depends on the CV and is therefore not unique. For the same reaction, different discriminating CVs can yield different $\Delta F^{\ddagger}_{\mathrm{RP}}$. We derive an exact expression for the activation free energy that avoids this ambiguity. We find $\Delta F^{\ddagger}_{\mathrm{RP}}$ to be a combination of the probability of the system being in the reactant state, the probability density on the dividing surface, and the thermal de Broglie wavelength associated with the transition. We apply our formalism to simple analytic models and realistic chemical systems and show that the FEP-based approximation applies only at low temperatures for CVs with a small effective mass. Most chemical reactions occur on complex, high-dimensional PES that cannot be treated analytically and pose the added challenge of choosing a good CV. We study the influence of that choice and find that, while the reaction free energy is largely unaffected, $\Delta F^{\ddagger}_{\mathrm{RP}}$ is quite sensitive.
I. INTRODUCTION
Computer simulations of chemical systems are valuable for the explanation of their experimental counterparts. In the case of chemical reactions, quantities of primary interest are equilibrium constants and reaction rate constants, or quantities directly related to these, i.e., the reaction free energy $\Delta F_{\mathrm{RP}}$ (difference between free energies of products and reactants) and the activation free energy $\Delta F^{\ddagger}_{\mathrm{RP}}$ (the difference between free energies of transition state and reactants). Indeed, the computation of such free energy differences has a long history. [1][2][3][4][5][6][7] The kinetics of a chemical reaction can be modeled as a transition from a reactant well (R) on the potential energy surface (PES) to a product well (P). The two local minima are separated by a potential energy barrier that must be overcome as the atomic configuration changes and the reaction progresses. The total configuration space is partitioned into (hyper)volumes corresponding to R and P by a dividing (hyper)surface, the separatrix. The atomic rearrangement is described by a collective variable (CV) (or reaction coordinate), which is a function of some subset of Cartesian coordinates that gives the degree of reaction progress (e.g., 0 at R and 1 at P). In order to describe a reaction well, one needs to choose a "good" CV, i.e., one that distinguishes properly between configurations of R and P. The CV is chosen so that it has two non-overlapping domains that correspond to the domains of R and P. It is practically impossible to find the optimal CV for a complex realistic system. 8 One must therefore base the choice of CV either on chemical intuition or on recently developed machine learning-based methods. [9][10][11][12][13] The free-energy profile (FEP) 14 (also referred to as the potential of mean force) is defined, up to a scaling constant, as the logarithm of the marginal Boltzmann distribution of the CV (Fig. 1). The FEP is determined in practice by molecular dynamics (MD) or Monte Carlo simulations. Because R and P are often separated by high potential energy barriers that are not overcome on simulation timescales, special simulation techniques, such as importance-sampling algorithms, must often be employed to sample configuration space properly. [15][16][17][18][19][20][21][22] These algorithms usually directly yield the FEP.

a) Electronic mail: [email protected]

Contrary to what the name implies, the FEP is not a true Helmholtz or Gibbs free energy. 23 Treating the FEP as if it were a free-energy analog of the minimum energy path is pervasive in the field and rarely acknowledged explicitly as the approximation that it is. Differences in the FEP between local extrema are then misinterpreted as reaction and activation free energies (see red highlight in Fig. 1). We have recently shown that this misconception leads to significant errors in reaction free energies, $\Delta F_{\mathrm{RP}}$. 23
Contrary to what the name implies, the FEP is not a true Helmholtz or Gibbs free energy. 23 Treating the FEP as if it were a free-energy analog of the minimum energy path is pervasive in the field and rarely acknowledged explicitly as the approximation that it is. Differences in the FEP between local extrema are then misinterpreted as reaction and activation free energies (see red highlight in Fig. 1).We have recently shown that this misconception leads to significant errors in reaction free energies, ∆F RP . 23 The choice of the CV has a large influence on the FEP. In fact, the FEP has no meaning independent of the CV [23][24][25] and the structure of the FEP (e.g., the breadth and depth of local extrema or even their existence) depends on the CV. Thus, a treatment that relies solely on the shape of the FEP yields CV-dependent activation free energies. Moreover, kinetic quantities (e.g., ∆F ‡ RP ) derived from the FEP, which depends solely on the PES and does not account for particle masses, must be approximations. The rigorous formula for ∆F ‡ RP derived here (see green highlight of Fig. 1 and Sec. II E) is independent of the precise mathematical form of the CV, as long as it discriminates between R and P. We show below that a poor choice of CV has an even bigger impact on ∆F ‡ RP than on ∆F RP .
The remainder of the article is organized as follows. In Section II we first derive an expression for the rate constant $k_{R \to P}$. Then, using the Eyring equation, we derive the connection between $\Delta F^{\ddagger}_{\mathrm{RP}}$ and $k_{R \to P}$. The physical interpretation of the components that constitute the correct activation free energy is discussed. In Section III we employ simple analytic models to assess the error incurred by the common practice of taking $\Delta F^{\ddagger}_{\mathrm{RP}}$ to be the difference between the values of the FEP at the maximum (transition state) and the minimum at R. Section IV is devoted to an analysis of the sensitivity of $\Delta F_{\mathrm{RP}}$ and $\Delta F^{\ddagger}_{\mathrm{RP}}$ to the choice of the CV. To emphasize the errors that can result from estimating $\Delta F^{\ddagger}_{\mathrm{RP}}$ directly from the FEP, we examine in Section V a numerical one-dimensional model and two realistic chemical processes. Section VI consists of a summary of our findings and a discussion of open questions on the computation of the activation free energy. Our conclusions are summarized in Section VII.
II. THEORY
A. Description of the System
The interconversion of R and P is represented by the chemical reaction
$$\mathrm{R} \rightleftharpoons \mathrm{P}. \tag{1}$$
State $\alpha$ (= R, P) is defined by the region of configuration space it occupies, designated by $\Omega_\alpha$. Thus, we define the configuration integral associated with the state $\alpha$ by

$$Z_\alpha = \int_{\Omega_\alpha} dx\, e^{-\beta U(x)}. \tag{2}$$

Here $x = (x_1, x_2, \ldots, x_{3N})^T$ denotes the column vector of Cartesian coordinates that specify the atomic configuration; $dx = \prod_{i=1}^{3N} dx_i$ is the $3N$-dimensional volume element; $U(x)$ is the potential energy surface (PES), and $\beta \equiv 1/k_B T$. Only those configurations $x$ that belong to $\Omega_\alpha$ contribute to $Z_\alpha$, which is the effective volume of configuration space occupied by state $\alpha$. We assume that $\Omega_\mathrm{R}$ and $\Omega_\mathrm{P}$ constitute the whole configuration space available to the system and that they are separated by a $(3N-1)$-dimensional dividing (hyper)surface, normally taken to contain the ridge of the barrier of the PES between the minima corresponding to R and P.
The course of the reaction can be monitored by a (scalar) CV (or reaction coordinate), $\xi(x)$, which is a function of a subset of the atomic coordinates that gives a measure of the progress of the reaction. The CV is chosen such that $\Omega_\mathrm{R}$ and $\Omega_\mathrm{P}$ correspond to non-overlapping domains of the CV. Ideally the gradient of $\xi(x)$ should be normal to the dividing surface, on which the CV assumes a particular value $z_\mathrm{TS}$. In this case the CV discriminates properly between R and P.
It is convenient to introduce mass-weighted coordinates
$$\tilde{x} = M^{1/2} x, \tag{3}$$

where $M$ stands for the $3N \times 3N$ diagonal matrix of atomic masses. In terms of mass-weighted coordinates the Hamiltonian is

$$H = \frac{1}{2} \sum_{i=1}^{3N} p_i^2 + U(\tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_{3N}) = \frac{1}{2} p^T p + U(\tilde{x}), \tag{4}$$

where $p_i = \dot{\tilde{x}}_i$ is the momentum conjugate to the coordinate $\tilde{x}_i$. Henceforth we employ the condensed notation of the second line of eq. (4), where $p$ stands for the column vector of momenta.
B. Curvilinear Coordinates
The treatment of the reaction rate is facilitated by employment of a special set of coordinates, one of which is the CV. Hence, we transform from mass-weighted coordinates to a complete set of curvilinear coordinates, $\mathbf{q} = \mathbf{q}(\tilde{\mathbf{x}})$, of which we take $q_1(\tilde{\mathbf{x}}) = \xi(\tilde{\mathbf{x}})$. From the inverse transformation $\tilde{\mathbf{x}} = \tilde{\mathbf{x}}(\mathbf{q})$ we obtain

$$\dot{\tilde{\mathbf{x}}} = \mathbf{J}\dot{\mathbf{q}}, \tag{5}$$

where $[\mathbf{J}]_{ij} = \partial\tilde{x}_i/\partial q_j$ is an element of the Jacobian. The momentum conjugate to $\mathbf{q}$ is

$$\mathbf{p} = \mathbf{M}_q\,\dot{\mathbf{q}}, \tag{6}$$

where

$$\mathbf{M}_q = \mathbf{J}^T\mathbf{J}, \tag{7}$$

the mass matrix in curvilinear coordinates, is also referred to as the mass-metric tensor (see, for example, Refs. 2, 26-28). In general, $\mathbf{M}_q$ is a full matrix. The Hamiltonian is given in curvilinear coordinates by

$$H = \frac{1}{2}\mathbf{p}^T\mathbf{M}_q^{-1}\mathbf{p} + U(\mathbf{q}). \tag{8}$$

From eq. (7) we deduce the following expression for the effective inverse mass matrix

$$[\mathbf{M}_q^{-1}]_{ij} = \sum_{k=1}^{3N}[\mathbf{J}^{-1}]_{ik}\,[\mathbf{J}^{-1\,T}]_{kj} = (\nabla_{\tilde{x}}q_i)^T(\nabla_{\tilde{x}}q_j), \tag{9}$$

where we employ $[\mathbf{J}^{-1}]_{ik} = \partial q_i/\partial\tilde{x}_k$ and $(\nabla_{\tilde{x}}q_i)^T = (\partial q_i/\partial\tilde{x}_1, \partial q_i/\partial\tilde{x}_2, \ldots, \partial q_i/\partial\tilde{x}_{3N})$ is the 3N-dimensional mass-weighted gradient. Using eq. (3), we get from eq. (9)

$$[\mathbf{M}_q^{-1}]_{ij} = (\nabla_x q_i)^T\,\mathbf{M}^{-1}\,(\nabla_x q_j). \tag{10}$$

Note the distinction between $\nabla_x$ for the Cartesian gradient and $\nabla_{\tilde{x}}$ for the gradient with respect to mass-weighted coordinates.
C. Reaction Rate Constant
We assume the system to be in thermodynamic equilibrium. Then the rate of the forward reaction equals the rate of the backward reaction,

$$k_{R\to P}\,P(R) = k_{P\to R}\,P(P), \tag{11}$$

where $k_{R\to P}$ and $k_{P\to R}$ are the forward and backward rate constants, and $P(R)$ and $P(P)$ are the respective probabilities of observing R and P. The rate can also be expressed in terms of the frequency ν of crossing the dividing surface in either the forward or backward direction (i.e., of the number of times per unit time that $\xi(\tilde{\mathbf{x}}) - z_{TS}$ changes sign). Since the forward and backward rates are equal, either rate must equal ν/2. Thus, focusing on the forward rate, we have from eq. (11)

$$k_{R\to P} = \frac{\nu}{2P(R)}. \tag{12}$$

The following alternative expression for the rate constant is frequently used: 29-34

$$k_{R\to P} = \frac{\left\langle \dot{\xi}\,\Theta(\dot{\xi})\,\delta(\xi(\tilde{\mathbf{x}}) - z_{TS}) \right\rangle_{p,q}}{\left\langle \Theta(z_{TS} - \xi(\tilde{\mathbf{x}})) \right\rangle_{p,q}}. \tag{13}$$

Here $\langle\cdot\rangle_{p,q}$ denotes the ensemble average over all of phase space, δ the Dirac delta function, Θ the Heaviside function, and $\dot{\xi}$ the time derivative of the CV. The equivalency of the two expressions is proven in the supplementary material.
D. Frequency of Crossing the Dividing Surface
The frequency of crossing the dividing surface can be expressed formally as the time average of the frequency with which $\xi(\tilde{\mathbf{x}}) - z_{TS}$ changes sign: 35

$$\nu = \lim_{\tau\to\infty}\frac{1}{\tau}\int_0^\tau dt\,\left|\frac{d}{dt}\Theta[\xi(\tilde{\mathbf{x}}(t)) - z_{TS}]\right| = \lim_{\tau\to\infty}\frac{1}{\tau}\int_0^\tau dt\,\left|(\dot{\tilde{\mathbf{x}}}(t))^T\,\nabla_{\tilde{x}}\xi(\tilde{\mathbf{x}}(t))\right|\,\delta(\xi(\tilde{\mathbf{x}}(t)) - z_{TS}). \tag{14}$$

A proof of this expression is provided in the supplementary material. Assuming the system to be ergodic, we can recast the time average as an ensemble average

$$\nu = \frac{\int d\tilde{\mathbf{x}}\int d\tilde{\mathbf{p}}\; e^{-\beta H}\,\left|\dot{\tilde{\mathbf{x}}}^T\,\nabla_{\tilde{x}}\xi(\tilde{\mathbf{x}})\right|\,\delta(\xi(\tilde{\mathbf{x}}) - z_{TS})}{\int d\tilde{\mathbf{x}}\int d\tilde{\mathbf{p}}\; e^{-\beta H}}, \tag{15}$$

where H is given by eq. (4). We next transform from mass-weighted to curvilinear coordinates. From eqs. (5), (6), and (7) we get

$$\dot{\tilde{\mathbf{x}}}^T\,\nabla_{\tilde{x}}\xi = \mathbf{p}^T\mathbf{J}^{-1}\,\nabla_{\tilde{x}}\xi = \sum_{i=1}^{3N} p_i\,(\nabla_{\tilde{x}}q_i)^T\,\nabla_{\tilde{x}}\xi, \tag{16}$$

where the second equality invokes the definition of the inverse Jacobian. Substitution of eq. (16) into eq. (15) and transformation to curvilinear coordinates yields

$$\nu = \frac{\int d\mathbf{q}\; e^{-\beta U(\mathbf{q})}\int d\mathbf{p}\; e^{-\frac{\beta}{2}\mathbf{p}^T\mathbf{M}_q^{-1}\mathbf{p}}\,\left|\sum_{i=1}^{3N} p_i\,(\nabla_{\tilde{x}}q_i)^T\,\nabla_{\tilde{x}}\xi\right|\,\delta(\xi(\mathbf{q}) - z_{TS})}{\int d\mathbf{q}\; e^{-\beta U(\mathbf{q})}\int d\mathbf{p}\; e^{-\frac{\beta}{2}\mathbf{p}^T\mathbf{M}_q^{-1}\mathbf{p}}}. \tag{17}$$
To simplify this expression we exploit the freedom afforded by curvilinear coordinates. While the "first" is chosen to be the CV, the remaining 3N − 1 are as yet unspecified. Hence, we require that $q_2, q_3, \ldots, q_{3N}$ be orthogonal to $q_1 = \xi$, which constraint is expressed by

$$(\nabla_{\tilde{x}}q_i)^T\,\nabla_{\tilde{x}}\xi = 0, \quad i = 2, 3, \ldots, 3N. \tag{18}$$

In general, the construction of the orthogonal set can be achieved in a variety of ways. 16 Invoking eq. (18), we can express the kinetic energy as

$$\frac{1}{2}\mathbf{p}^T\mathbf{M}_q^{-1}\mathbf{p} = \frac{1}{2}\sum_{i=1}^{3N}\sum_{j=1}^{3N} p_i\,(\nabla_{\tilde{x}}q_i)^T(\nabla_{\tilde{x}}q_j)\,p_j = \frac{1}{2}|\nabla_{\tilde{x}}\xi|^2 p_1^2 + \frac{1}{2}\sum_{i=2}^{3N}\sum_{j=2}^{3N} p_i\,(\nabla_{\tilde{x}}q_i)^T(\nabla_{\tilde{x}}q_j)\,p_j = \frac{1}{2}|\nabla_{\tilde{x}}\xi|^2 p_1^2 + \frac{1}{2}\mathbf{p}'^T\mathbf{M}'^{-1}\mathbf{p}', \tag{19}$$

where in analogy to eq. (9) we define the (3N − 1) × (3N − 1) inverse mass matrix $\mathbf{M}'^{-1}$ and the (3N − 1)-dimensional momentum vector $\mathbf{p}' = (p_2, p_3, \ldots, p_{3N})^T$. Likewise, we can simplify eq. (16):

$$\sum_{i=1}^{3N} p_i\,(\nabla_{\tilde{x}}q_i)^T\,\nabla_{\tilde{x}}\xi = |\nabla_{\tilde{x}}\xi|^2\,p_1. \tag{20}$$

Plugging eqs. (19) and (20) into eq. (17), we get

$$\nu = \frac{\int d\mathbf{q}\; e^{-\beta U(\mathbf{q})}\,\delta(\xi(\mathbf{q}) - z_{TS})\int_{-\infty}^{\infty}dp_1\,|p_1|\,e^{-\frac{\beta}{2}|\nabla_{\tilde{x}}\xi|^2 p_1^2}\,|\nabla_{\tilde{x}}\xi|^2\int d\mathbf{p}'\, e^{-\frac{\beta}{2}\mathbf{p}'^T\mathbf{M}'^{-1}\mathbf{p}'}}{\int d\mathbf{q}\; e^{-\beta U(\mathbf{q})}\int d\mathbf{p}\; e^{-\frac{\beta}{2}\mathbf{p}^T\mathbf{M}_q^{-1}\mathbf{p}}}. \tag{21}$$
Performing the integration on $p_1$ gives

$$\nu = 2k_BT\,\frac{\int d\mathbf{q}\; e^{-\beta U(\mathbf{q})}\,\delta(\xi(\mathbf{q}) - z_{TS})\,\cdot\, 1\,\cdot\,\int d\mathbf{p}'\, e^{-\frac{\beta}{2}\mathbf{p}'^T\mathbf{M}'^{-1}\mathbf{p}'}}{\int d\mathbf{q}\; e^{-\beta U(\mathbf{q})}\int d\mathbf{p}\; e^{-\frac{\beta}{2}\mathbf{p}^T\mathbf{M}_q^{-1}\mathbf{p}}}. \tag{22}$$

Inserting the identity $1 = |\nabla_{\tilde{x}}\xi|\,(2\pi k_BT)^{-1/2}\int_{-\infty}^{\infty}dp_1\, e^{-\beta|\nabla_{\tilde{x}}\xi|^2 p_1^2/2}$ into eq. (22) at the place indicated, we obtain

$$\nu = \sqrt{\frac{2k_BT}{\pi}}\,\frac{\int d\mathbf{q}\; e^{-\beta U(\mathbf{q})}\int d\mathbf{p}\; e^{-\frac{\beta}{2}\mathbf{p}^T\mathbf{M}_q^{-1}\mathbf{p}}\,|\nabla_{\tilde{x}}\xi|\,\delta(\xi(\mathbf{q}) - z_{TS})}{\int d\mathbf{q}\; e^{-\beta U(\mathbf{q})}\int d\mathbf{p}\; e^{-\frac{\beta}{2}\mathbf{p}^T\mathbf{M}_q^{-1}\mathbf{p}}}. \tag{23}$$

Transforming back to Cartesian coordinates yields

$$\nu = \sqrt{\frac{2k_BT}{\pi}}\,\left\langle |\nabla_{\tilde{x}}\xi|\,\delta(\xi(\mathbf{x}) - z_{TS})\right\rangle, \tag{24}$$

where $\langle\cdot\rangle$ indicates the ensemble average over configuration space. Using the fact that

$$\rho(z) = \langle\delta(\xi(\mathbf{x}) - z)\rangle = Z^{-1}\int d\mathbf{x}\;\delta(\xi(\mathbf{x}) - z)\,e^{-\beta U(\mathbf{x})} \tag{25}$$

is the normalized probability density of observing an atomic configuration $\mathbf{x}$ such that $\xi(\mathbf{x}) = z$, we can recast eq. (24) as

$$\nu = \sqrt{\frac{2k_BT}{\pi}}\,\rho(z_{TS})\,\big\langle|\nabla_{\tilde{x}}\xi|\big\rangle_{z_{TS}} = \sqrt{\frac{2k_BT}{\pi}}\,\rho(z_{TS})\,\Big\langle\sqrt{(\nabla_x\xi)^T\mathbf{M}^{-1}(\nabla_x\xi)}\Big\rangle_{z_{TS}} = \Big\langle\sqrt{\frac{2k_BT}{\pi m_\xi}}\Big\rangle_{z_{TS}}\,\rho(z_{TS}), \tag{26}$$
where $\langle\cdot\rangle_{z_{TS}}$ signifies an average over the dividing surface. The second line of eq. (26) follows from eq. (3); the third line implicitly defines $m_\xi$, which we interpret as the effective mass of the pseudo-particle associated with the coordinate ξ(x):

$$m_\xi^{-1} = (\nabla_x\xi)^T\,\mathbf{M}^{-1}\,(\nabla_x\xi) = \left[\mathbf{M}_q^{-1}\right]_{11}, \tag{27}$$

which is the 1,1 element of the inverse mass-metric tensor (see eq. (10)). 2,26-28 Finally, combining eqs. (12) and (26), we obtain

$$k_{R\to P} = \left\langle\sqrt{\frac{k_BT}{2\pi m_\xi}}\right\rangle_{z_{TS}}\frac{\rho(z_{TS})}{P(R)}. \tag{28}$$
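Since eq. (27) involves only the Cartesian gradient of the CV, $m_\xi^{-1}$ is straightforward to evaluate per configuration by automatic differentiation. The following minimal sketch (our illustration, not code from this work) does so for a simple interatomic-distance CV; the two masses and the coordinates are placeholders.

```python
import torch

masses = torch.tensor([12.011, 12.011])  # amu; one entry per atom (illustrative)

def xi(x):
    # x: (n_atoms, 3) tensor of Cartesian coordinates; CV = distance between atoms 0 and 1
    return torch.linalg.norm(x[0] - x[1])

def inverse_effective_mass(x, masses):
    x = x.clone().requires_grad_(True)
    (grad,) = torch.autograd.grad(xi(x), x)     # Cartesian gradient of the CV
    return (grad**2 / masses[:, None]).sum()    # (grad xi)^T M^{-1} (grad xi), eq. (27)

x = torch.tensor([[0.0, 0.0, 0.0], [1.54, 0.0, 0.0]])
print(1.0 / inverse_effective_mass(x, masses))  # ~6.0 amu
```

For a distance CV the result reduces to the familiar reduced mass of the two atoms, consistent with the constant $m_\xi$ quoted for the cyclization example in Section V C.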
E. Free Energy of Activation
Eyring's equation relates the rate constant to a free energy of activation by defining a modified equilibrium constant for the formation of the activated complex from reactant R (see, for example, Ref. 36). In the present notation the equation is

$$k_{R\to P} = \frac{k_BT}{h}\,e^{-\beta\Delta F^{\ddagger}_{RP}}, \tag{29}$$

where h is Planck's constant. We use the symbol F for the Helmholtz free energy in order to distinguish it from the free-energy profile denoted by A (see eq. (32)). We solve eq. (29) for the activation free energy and combine the result with eq. (28) to get

$$\Delta F^{\ddagger}_{RP} = -k_BT\ln\left[\frac{\rho(z_{TS})\,\langle\lambda_\xi\rangle_{z_{TS}}}{P(R)}\right], \tag{30}$$

where $\lambda_\xi \equiv \sqrt{h^2/2\pi m_\xi k_BT}$. We interpret $\lambda_\xi$ as the de Broglie thermal wavelength of the pseudo-particle associated with the CV. By expanding the logarithm in eq. (30) we can recast the "exact" expression for the activation free energy as

$$\Delta F^{\ddagger}_{RP} = -k_BT\ln\rho(z_{TS}) + k_BT\ln P(R) - k_BT\ln\langle\lambda_\xi\rangle_{z_{TS}} = A(z_{TS}) + k_BT\ln\int_{\Omega_R}dz\,\rho(z) - k_BT\ln\langle\lambda_\xi\rangle_{z_{TS}}. \tag{31}$$
The second line of eq. (31) depends on the definition of the free-energy profile (FEP) 16,23

$$A(z) = -k_BT\ln\rho(z), \tag{32}$$

and on the relation 23

$$P(R) = \int_{\Omega_R}dz\,\rho(z). \tag{33}$$
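All three ingredients of eq. (30) are thus plain (CV-conditioned) ensemble averages. As an illustration only, the following sketch estimates $\Delta F^{\ddagger}_{RP}$ from unbiased samples of the CV, assuming R corresponds to z > z_TS and working in units of kJ/mol, amu, Å, and ps (in which 1 kJ/mol = 100 amu Å² ps⁻² and h ≈ 39.90 amu Å² ps⁻¹); the variable names and the histogram/window estimators are ours, not the authors' implementation.

```python
import numpy as np

def activation_free_energy(z, inv_m, z_ts, T=300.0, bins=200, half_width=0.05):
    # z: CV samples (Angstrom); inv_m: per-sample 1/m_xi (1/amu); z_ts: dividing value
    kB_T = 0.0083145 * T                 # kJ/mol
    kB_T_md = kB_T * 100.0               # same energy in amu*A^2/ps^2
    h = 39.903                           # Planck's constant in amu*A^2/ps
    # marginal density rho(z) from a normalized histogram
    hist, edges = np.histogram(z, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    rho_ts = np.interp(z_ts, centers, hist)        # rho(z_TS), 1/Angstrom
    p_R = np.mean(z > z_ts)                        # P(R); R taken as z > z_TS here
    # CV-conditioned average of the thermal wavelength (window must contain samples)
    sel = np.abs(z - z_ts) < half_width
    lam = np.mean(np.sqrt(h**2 * inv_m[sel] / (2.0 * np.pi * kB_T_md)))  # Angstrom
    return -kB_T * np.log(rho_ts * lam / p_R)      # kJ/mol, eq. (30)
```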
A frequently employed procedure is to set the activation free energy equal to the difference between the maximum of the FEP at $z_{TS}$ and the minimum at $z_{R,min}$:

$$\Delta\tilde{F}^{\ddagger}_{RP} = A(z_{TS}) - A(z_{R,min}). \tag{34}$$

We place a tilde on this formula to distinguish it from the "exact" one in eq. (30). Thus, $\Delta\tilde{F}^{\ddagger}_{RP}$ can be viewed as an approximation. For example, if the density is strongly peaked about $z_{R,min}$, then $k_BT\ln P(R) \approx -A(z_{R,min})$, according to eqs. (32) and (33). Under this condition the approximate formula agrees with the exact one, except for the term $-k_BT\ln\langle\lambda_\xi\rangle_{z_{TS}}$. Therefore the influence of distortions of the coordinate system induced by ξ(x) is ignored by $\Delta\tilde{F}^{\ddagger}_{RP}$, as is the influence of mass (see eq. (27)). An alternative recasting of the exact formula for the activation free energy, eq. (30), is instructive. Invoking the relations 23
$$q_R = \frac{Z_R}{\Lambda} \tag{35}$$

and

$$P(R) = \frac{Z_R}{Z}, \tag{36}$$

where $q_R$ is the molecular partition function of R and $\Lambda \equiv \prod_{i=1}^{3N}\sqrt{h^2/2\pi m_i k_BT}$ (the product of all Cartesian de Broglie wavelengths), we rewrite the exact expression as

$$\Delta F^{\ddagger}_{RP} = -k_BT\ln\frac{Z\,\rho(z_{TS})\,\langle\lambda_\xi\rangle_{z_{TS}}}{\Lambda\,q_R} = -k_BT\ln\frac{Z\,\rho(z_{TS})\,\langle\lambda_\xi\rangle_{z_{TS}}}{\Lambda} + k_BT\ln q_R. \tag{37}$$
The second term on the right side of eq. (37) is the (negative of the) free energy of R. 23 Likewise, if we regard

$$q^{\ddagger} \equiv \frac{Z\,\rho(z_{TS})\,\langle\lambda_\xi\rangle_{z_{TS}}}{\Lambda}$$

as the effective partition function with z fixed at $z_{TS}$, then the first term is the free energy of the constrained system. That $q^{\ddagger}$ has the stated character can be demonstrated explicitly in case the curvilinear coordinates form a complete orthogonal set. Then we can rewrite eq. (37) as

$$\Delta F^{\ddagger}_{RP} = -k_BT\ln q^{\ddagger} + k_BT\ln q_R = F^{\ddagger} - F_R. \tag{38}$$
This form of ∆F ‡ RP is very intuitive: The activation free energy is the difference between the free energy of the system constrained to the dividing surface, F ‡ , and the free energy of the reactant, F R . Moreover, it is noteworthy that eq. (38) assumes the same form as the corresponding expression derived by conventional transition state theory. 36
III. IMPACT OF APPROXIMATING THE ACTIVATION FREE ENERGY
In order to gauge the error incurred by approximating the activation free energy ∆ F ‡ RP (eq. (34)) in comparison to the "exact" ∆F ‡ RP (eq. (30)) we study the behavior of two analytically treatable models. Each consists of a single particle of mass m moving in one dimension. The PESs are meant to represent a system with two minima, which are approximated either by square wells (SW) or parabolic (harmonic oscillator) wells (HO). Their detailed treatment is presented in the supplementary material. We take the difference between approximate and "exact" activation free energy as a correction term, which we derive to be:
$$\mathrm{corr}_{SW} = \Delta F^{\ddagger}_{SW} - \Delta\tilde{F}^{\ddagger} = k_BT\ln\sqrt{2\pi k_BT\,m L_R^2/h^2} \tag{39}$$

$$\mathrm{corr}_{HO} = \Delta F^{\ddagger}_{HO} - \Delta\tilde{F}^{\ddagger} = k_BT\ln\sqrt{(2\pi)^2 k_B^2 T^2\,m/(h^2 k)} \tag{40}$$
In eq. (39) L R denotes the width of the reactant square well. In eq. (40) k is the force constant of the harmonic well.
We note that ∆ F ‡ does not depend on particle mass (m) (as it is only derived from a marginal Boltzmann distribution), and in the one-dimensional case neither on temperature (T ), nor parameters of the PES (L R and k). Thus, we regard the difference as a correction of ∆ F ‡ that accounts for the influence of these parameters. Though the corrections for the two models exhibit different dependencies on the parameters, they can nevertheless be correlated. We note directly, for example, that both corrections increase at the same rate with increasing m. Further, both increase with increasing T , although corr HO increases more rapidly. Concerning the PES parameters, we observe that corr SW increases with increasing L R , whereas corr HO increases with increasing k −1 . This is expected, since as k decreases the harmonic potential broadens, allowing the particle to move in an effectively larger domain of R, just as an increase in L R does.
The one-dimensional HO model can be roughly correlated with realistic multi-dimensional systems. We observe that $\nu_R = \sqrt{k/m}/2\pi$ is the frequency of oscillation of the particle about the minimum $x_{R,min}$. Hence, we can recast the correction given by eq. (40) as

$$\mathrm{corr}_{HO} = k_BT\ln(k_BT/h\nu_R). \tag{41}$$
For reactions carried out around room temperature T° = 300 K, a reference frequency ν° = k_BT°/h ≈ 6.0 × 10¹² s⁻¹ can be defined. Thus, for molecular vibrations around this frequency, the correction is negligible. In the typical case, where the masses of constituent atoms (e.g., H, C, and O) are small and the bonds are stiff, ν_R → ν° and the correction is small. On the other hand, for reactions involving more massive atoms and "soft" degrees of freedom, ν_R < ν° and we expect substantial corrections.
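To put numbers on this, the following short sketch evaluates eq. (41); the two test frequencies are illustrative choices, not values from the article.

```python
import numpy as np

kB = 1.380649e-23    # J/K
h = 6.62607015e-34   # J s
NA = 6.02214076e23   # 1/mol

def corr_HO(T, nu_R):
    # eq. (41): correction in kJ/mol for a harmonic reactant well of frequency nu_R
    return kB * T * np.log(kB * T / (h * nu_R)) * NA / 1000.0

print(corr_HO(300.0, 6.0e12))  # ~0.1 kJ/mol near the reference frequency
print(corr_HO(300.0, 1.0e12))  # ~4.6 kJ/mol for a soft ~33 cm^-1 mode
```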
IV. THE INFLUENCE OF THE CHOICE OF CV
The validity of the formulas describing the activation free energy (eq. (30)), and the reaction free energy 23
$$\Delta F_{RP} = -k_BT\ln\frac{P(P)}{P(R)}, \tag{42}$$
depend on the assumption that the CV distinguishes properly between R and P, as defined by the dividing surface S. Thus, knowledge of S is crucial to the proper choice of CV. For low-dimensional model systems the choice is generally clear, but for realistic multidimensional systems one usually has little or no information about S and must base the choice on heuristics and chemical intuition. Such intuitive CVs can lead to significant errors. In this Section we systematically explore the influence of the choice of the CV on $\Delta F_{RP}$ and $\Delta F^{\ddagger}_{RP}$. For this purpose we employ the following model: a single particle of mass m moving in two dimensions on the PES

$$U(x,y) = \epsilon\left(y^4 + x^4 - b x^2 - c x\right), \tag{43}$$

a contour plot of which is shown in Fig. 2. The particle coordinates x and y are given in units of Å and the energy in units of kJ/mol. The parameters ε, b, and c are taken to be 25 kJ mol⁻¹ Å⁻⁴, 2 Å², and 0.25 Å³, respectively. The parameter ε effectively controls the height of the barrier of the PES between R and P; c controls the difference between the minima of R and P. The values are chosen to yield realistic free energies ($\Delta F_{RP}$ = −12.28 kJ/mol and $\Delta F^{\ddagger}_{RP}$ = 16.06 kJ/mol, which is roughly the activation free energy of the internal rotation of butane 37). The ideal CV is ξ(x,y) = x and the dividing surface coincides with the line x = x_max = −0.06725 (see Fig. 2). Clearly, ∇ξ·∇U vanishes on S, which is the constraint that should be obeyed by a CV that properly discriminates between R and P. 35 To vary the choice of the CV systematically, we define the CV by

$$\xi(x,y) = ax + (1-a)y, \tag{44}$$
where a is restricted to the interval [0, 1]. We determine the value of a by specifying the angle θ between ∇ξ and $\mathbf{e}_S$, the unit vector parallel with the true separatrix S (i.e., $\mathbf{e}_y$). In other words, a, and therefore ξ, are determined by the condition $\frac{\nabla\xi}{|\nabla\xi|}\cdot\mathbf{e}_S = \cos\theta$. (Details of the calculation are provided in the supplementary material.) Corresponding to a given θ (i.e., a given choice of the CV) is a "trial" separatrix S(θ), which is a line having the equation $y = -a(x - x_{max})/(1-a)$, where $x_{max}$ is the x-coordinate of the saddle point on the PES. When a = 1, then θ = 90°. In this limit S(90°) coincides with S. As a decreases from 1 to 0, S(θ) rotates counterclockwise about the point $(x_{max}, 0)$. The trial separatrix S(45°) is shown in Fig. 2. In the limit a = 0, $\nabla\xi = \mathbf{e}_y$ and θ = 0. Hence, S(0°) is normal to S, which makes ξ(x,y) = y the worst possible choice of the CV.
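As a small numerical illustration (ours, not the authors' code): with ∇ξ = (a, 1−a) and $\mathbf{e}_S = \mathbf{e}_y$, the defining condition reduces to $(1-a)/\sqrt{a^2+(1-a)^2} = \cos\theta$, which can be solved for a on a grid.

```python
import numpy as np

def a_from_theta(theta_deg):
    # solve (1-a)/sqrt(a^2+(1-a)^2) = cos(theta) for a in [0, 1]
    target = np.cos(np.radians(theta_deg))
    grid = np.linspace(0.0, 1.0, 100001)
    lhs = (1.0 - grid) / np.sqrt(grid**2 + (1.0 - grid)**2)
    return grid[np.argmin(np.abs(lhs - target))]

print(a_from_theta(90.0))  # 1.0 (ideal CV, xi = x)
print(a_from_theta(45.0))  # 0.5
print(a_from_theta(0.0))   # 0.0 (worst CV, xi = y)
```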
For a given θ, we calculate the probability density ρ(z) using eq. (25). As shown in the supplementary material, the evaluation of the required double integrals is facilitated by transforming from Cartesian to orthogonal coordinates q 1 = ξ(x, y) and q 2 = (a − 1)x + ay. We obtain the FEP using eq. (32). Illustrative plots of ρ(z) and A(z) are shown in Fig. 3a-c for three CV choices. The local maximum of the FEP, z max , defines the domains of R and P. We note, however, that the FEPs for θ < 32 • lack any such local maximum. We henceforth ignore these choices, as the CV cannot distinguish R from P at all.
As a measure of the quality of the chosen CV, we adopt a modification of the procedure introduced previously, 23 which was to monitor the quantity $D(z) = \langle|\nabla\xi(\mathbf{x})\cdot\nabla U(\mathbf{x})|\rangle_z$. We note that $D(z_{TS})$ is exactly zero on S for the ideal CV (i.e., the one that discriminates perfectly between R and P). However, away from S, or in case the choice of CV is not ideal, D(z) is difficult to interpret, because it depends so strongly on the local gradient of the PES. To ameliorate this defect we propose a scaled, dimensionless orthogonality measure defined by

$$D_s(z) = \left\langle\left|\frac{\nabla\xi(\mathbf{x})}{|\nabla\xi(\mathbf{x})|}\cdot\frac{\nabla U(\mathbf{x})}{|\nabla U(\mathbf{x})|}\right|\right\rangle_z, \tag{45}$$

where we replace the gradients of U and ξ with their corresponding unit vectors. Thus, $D_s(z_{TS})$ is zero on S for the ideal CV, where the gradients of U and ξ are perpendicular, and unity where they are parallel.
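A minimal sketch of how $D_s(z)$ can be estimated from equilibrium samples of the two-dimensional model follows (our illustration; the binning width is an arbitrary choice, and bins without samples yield NaN).

```python
import numpy as np

eps, b, c = 25.0, 2.0, 0.25   # Section IV parameters

def grad_U(x, y):
    # gradient of the PES of eq. (43)
    return np.stack([eps * (4.0*x**3 - 2.0*b*x - c), eps * 4.0*y**3], axis=-1)

def D_s(samples, a, z_grid, width=0.05):
    # samples: (N, 2) array of (x, y) configurations drawn from the Boltzmann ensemble
    v = np.array([a, 1.0 - a])        # grad(xi) for xi = a*x + (1-a)*y
    z = samples @ v                   # CV value of each sample
    u_xi = v / np.linalg.norm(v)
    gU = grad_U(samples[:, 0], samples[:, 1])
    u_U = gU / np.linalg.norm(gU, axis=1, keepdims=True)
    dots = np.abs(u_U @ u_xi)         # |unit-gradient dot products|, eq. (45)
    return np.array([dots[np.abs(z - z0) < width].mean() for z0 in z_grid])
```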
One can see in Fig. 3d that for the ideal CV ξ(x, y) = x, D and D s have very sharp roots at z TS , indicating that the CV is orthogonal to the separatrix. Because of the symmetry of the PES, the two measures have two additional roots located at the minima of reactant and product. D s does not actually reach zero on account of the finite numerical resolution of our computation. However, the sharp minima are still visible. Figures 3e and 3f show the orthogonality measure for non-ideal CVs. The shape of the D-measures changes drastically. Most significantly, the sharp root or minimum at the maximum of the FEP turns into a local maximum for both D and D s , which is an unmistakable sign that results for these CVs cannot be trusted (see the dependence of the ∆F ‡ RP on θ in Fig. 4).
Using the numerically computed ρ(z), we calculate the reaction free energy and activation free energy, which are given, respectively, by eq. (42) and eq. (30), where we set z TS = z max . In Fig. 4 we plot ∆F RP , ∆F ‡ RP , D(z max ), and D s (z max ) as functions of θ. Fig. 4d shows clearly how sensitive D s (z max ) is to the choice of CV. At θ = 90 • D s (z max ) vanishes, since the chosen CV coincides with the ideal one. But as θ decreases, D s (z max ) rises sharply over a narrow interval of about 10 • . That is, for large θ, ∇U and ∇ξ are almost orthogonal, whereas with decreasing θ they become nearly parallel. The fall off of D s (z max ) as θ decreases from about 45 • is due to the interference of force vectors that are almost isotropically distributed, and result in essentially randomized alignment of the force and CV gradient vectors.
Since ρ(z) is strongly peaked around the minima of R and P (see Fig. 3), the choices of CV in the range of 45 • to 90 • separate the minima well. As a consequence, ∆F RP is essentially independent of the choice in this range (see Fig. 4a). In other words, over this range of choices one obtains an accurate value of the reaction free energy. Only for θ < 45 • , where the CV begins to fail to discriminate between R and P, does the error in ∆F RP set in rapidly. As seen in Fig. 4b, the activation free energy is dramatically more sensitive than ∆F RP to the choice of CV. It deviates from the correct value by more than "chemical accuracy" (1 kcal/mol) at θ ≈ 60 • . For θ < 40 • , ∆F ‡ RP even becomes negative. If this were correct, the rate of reaction would decrease with increasing temperature. This apparent sensitivity can be reasoned as follows. All points on the true separatrix have very low likelihood. A trial separatrix with θ < 90 • includes more likely configurations and therefore overestimates ρ(z TS ). Since the true ρ(z TS ) is very small, the relative error is large. For large probabilities, e.g., P(R), the same absolute error would incur a much smaller relative error. The relative error in the density directly translates to an absolute error in the activation free energy because of the logarithm of ρ(z TS ) (see eq. (31)).
The fact that ∆F RP is largely unaffected by the choice of the CV explains why CVs based purely on chemical intuition can yield reaction free energies comparable with experiment. However, ∆F RP is expected to become somewhat more sensitive to the choice of CV for more complex PES. Compared with the reaction free energy, the activation free energy is generally more sensitive. Hence, to achieve the same accuracy for ∆F ‡ RP and ∆F RP one must choose the CV with a great deal of care.
V. PITFALLS IN THE ESTIMATION OF THE ACTIVATION FREE ENERGY FROM THE FEP
To further illustrate the errors that one may incur by estimating $\Delta F^{\ddagger}_{RP}$ directly from the FEP alone (i.e., by invoking eq. (34)), we consider first a simple one-dimensional model that can be treated for the most part analytically and then models of two real chemical processes.
A. One-dimensional Model
We consider a single particle of mass m moving in one dimension on the PES

$$U(x) = \epsilon\left[\frac{b}{x+5} + e^{-ax^2} - \frac{b}{x-5}\right], \tag{46}$$

where ε, which controls the steepness of the potential barrier, has units of kJ/mol. The parameter a, which controls the width of the barrier, is set to 1 Å⁻² and b = 1 Å. The PES, plotted in Fig. 5a for the case ε = 5 kJ/mol, has two equal minima separated by a maximum at x = 0. Because U(x) diverges as x approaches −5 or 5, the particle is confined to the domain −5 < x < 5. R and P correspond, respectively, to the domains −5 < x < 0 and 0 < x < 5. The symmetry of the PES dictates that P(R) = P(P) = 0.5. Therefore, from eq. (12) we get
$$k_{R\to P} = \frac{\nu}{2P(R)} = \nu = k_{P\to R}, \tag{47}$$

where ν is the crossing frequency. From eqs. (29) and (47), we deduce the following expression:

$$\Delta F^{\ddagger}_{RP} = -k_BT\ln(h\nu/k_BT). \tag{48}$$

We compute ν by molecular dynamics (MD) simulation, as detailed in the supplementary material. MD simulations were carried out at five temperatures in the range of 100-1000 K and for five different particle masses in the range of 1-100 amu.
We consider two CVs: $\xi_1(x) = x$ and $\xi_2(x) = 1/(x+5)$. Using eq. (32), we obtain the corresponding FEPs:

$$A_1(z) = U(z) + k_BT\ln Z \tag{49}$$

$$A_2(z) = U(z^{-1} - 5) + 2k_BT\ln z + k_BT\ln Z. \tag{50}$$
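The distortion induced by the nonlinear CV is easy to reproduce; the following minimal sketch (ours) evaluates both profiles up to the common constant $k_BT\ln Z$. Note that $\xi_2$ maps x ∈ (−5, 5) onto z ∈ (0.1, ∞), so the logarithm is always defined.

```python
import numpy as np

eps, a, b = 5.0, 1.0, 1.0      # kJ/mol, 1/A^2, A (Section V A parameters)
kB_T = 2.494                   # kJ/mol at 300 K

def U(x):
    # model PES of eq. (46)
    return eps * (b / (x + 5.0) + np.exp(-a * x**2) - b / (x - 5.0))

def A1(z):
    # eq. (49), up to the additive constant kB*T*ln(Z)
    return U(z)

def A2(z):
    # eq. (50); z = 1/(x+5), so x = 1/z - 5
    return U(1.0 / z - 5.0) + 2.0 * kB_T * np.log(z)
```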
Setting ε = 5 kJ/mol ensures that even the most massive particle considered crosses the dividing surface at the lowest temperature during the 10 ns time interval of the MD simulation. Figs. 5b and 5c show plots of the FEPs based on eqs. (49) and (50). We note the strong distortion of configuration space induced by $\xi_2(x)$. The domains of R and P are reversed, the minima are not equal, and the maximum of the barrier between R and P does not occur precisely at z = 0.2, the inverse of the position of the maximum of the barrier of the PES at x = 0. Approximate activation free energies obtained according to eq. (34) are listed in Tab. I, along with "exact" values $\Delta F^{\ddagger}_{RP}$ obtained from eq. (30), which yields exactly the same result for both CVs, and from eq. (48) via MD. The excellent agreement between the values obtained from eqs. (30) and (48) is gratifying. According to eq. (49), $\Delta\tilde{F}^{\ddagger}_1$ should be independent of both temperature and particle mass. According to eq. (50), $\Delta\tilde{F}^{\ddagger}_2$ should depend on temperature, but we note that by definition $\Delta\tilde{F}^{\ddagger}_2$ is independent of mass. Tab. I bears out these expectations.
The dominant impression of Tab. I is the severe lack of agreement between approximate and exact activation free energies. The impact of the loss of the symmetry of the PES by $\xi_2$ is particularly evident. Since $z_{TS} \approx z_{max}$, the results in columns a and b, which correspond to the forward reaction, agree quite well, as do those of columns c and d for the backward reaction. However, the magnitudes of the forward and backward activation free energies differ greatly. Even more noteworthy is the contrary dependence of the activation free energy on temperature. For the forward reaction it decreases with T, whereas for the backward reaction it increases markedly with T.
Examination of the exact data reveals the following general trends. At fixed T , ∆F ‡ RP increases with particle mass m; the higher T , the greater the increase. At fixed m, ∆F ‡ RP increases with T ; the greater m, the greater the increase. Those are the same trends observed for the analytical models in Sec. III.
To see the influence of the parameter ε, we set ε = 50 kJ/mol. Unbiased molecular dynamics simulations were not performed for this choice of ε, as no barrier crossings would be observed within the previously employed simulation time. Figure S2 of the supplementary material displays plots of the PES and FEPs and Tab. II lists approximate and exact free energies of activation. In this case the immediate impression from Tab. II is the greatly improved agreement between approximate and "exact" results. Though the symmetry is still lost by $\xi_2$, the distortion is relatively less severe, so that forward and backward activation energies differ less. The contrary dependence of forward and reverse activation energy on T persists, but it is relatively weaker. The trends in $\Delta F^{\ddagger}_{RP}$ noted above for the case ε = 5 kJ/mol hold for ε = 50 kJ/mol, but the observed variations are relatively smaller. For example, whereas the change in $\Delta F^{\ddagger}_{RP}$ for ε = 5 kJ/mol at T = 300 K is about 80% over the range of particle mass considered, it is only 13% for ε = 50 kJ/mol. A similar observation holds for variations of $\Delta F^{\ddagger}_{RP}$ with T at fixed m. We stress that since both CVs perfectly distinguish between R and P, the computed "exact" activation free energy is identical for either, even though the CVs are very dissimilar.
The trends in ∆F ‡ RP noted above for the case = 5 kJ/mol hold for = 50 kJ/mol, but the observed variations are relatively smaller. For example, whereas the change in ∆F ‡ RP for = 5 kJ/mol at T = 300 K is about 80% over the range of particle mass considered, it is only 13% for = 50 kJ/mol. A similar observation holds for variations of ∆F ‡ RP with T at fixed m. We stress that since both CVs perfectly distinguish between R and P, the computed "exact" activation free energy is identical for either, even though the CVs are very dissimilar. We consider the realistic three-dimensional model system pictured in Fig. 6(a): a [Cu(NH 3 ) 2 ] + -complex migrating between cavities (A and B) in chabazite, a mixed crystal of the family of zeolites. This process is of importance in the deactivation of nitrogen oxides where copper-exchanged zeolites are used as catalysts. [38][39][40][41][42] The migration can be regarded as a "chemical reaction", in which the Cu-complex in cavity A or B is the "reactant" or "product", respectively. The reaction consists of the complex diffusing out of cavity A through the 8ring (8 silicon sites) window and into cavity B. Millan et al. 43 have simulated this system by means of ab initio MD combined with umbrella sampling (for details see Ref. 43). The CV they employ, which is depicted in Fig. 6(b), is defined with respect to the 8-ring window that separates the cavities. It is the projection of the vector position of the Cu atom onto the normal to the "average" plane of the central 4 Si and 2 O atoms of the ring that remain nearly in the same plane.
Our primary purpose is to analyze the data of Millan et al. 43 in order to determine the exact values of the reaction free energy and activation free energy for the migration reaction described above. We are especially interested in the effect of mass on the activation free energy. The authors of Ref. 43 supplied the coordinates of the trajectories and the bias used for the umbrella sampling for every frame. We implemented the CV in pyTorch 44 to gain easy access to ∇ξ, and consequently $m_\xi^{-1}$ (see eq. (27)), through the automatic differentiation in Torch. We computed the weights of every frame with an in-house implementation of MBAR. 45 The weights were used to re-compute the FEP and compare it with the result of Millan et al., 43 as well as to compute the conditional ensemble average of $m_\xi^{-1}$ needed for the calculation of $\langle\lambda_\xi\rangle_{z_{TS}}$ (see eq. (30)). The FEPs are plotted in Fig. 7, which shows that the agreement of our FEP with that of Millan et al. 43 is excellent. The probability densities are normalized according to $\int_{-4}^{4} dz\, e^{-\beta A(z)} = 1$. Millan et al. 43 take the maximum of A(z), located at z = 0.35 Å, to be the position of the TS. According to the definition of the CV, the TS should be at z = 0.0 Å. We computed exact and approximate reaction and activation free energies for both choices of the TS. Tab. III shows very clearly the large influence of mass on the activation free energy. Further, the approximate free energies ($\Delta\tilde{F}_{AB}$ and $\Delta\tilde{F}^{\ddagger}_{AB}$) obtained by us agree well with those of Millan et al. 43 The precise choice of $z_{TS}$ has little effect on the activation free energies, because the FEP is quite flat around z = 0.
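For concreteness, a minimal sketch of such a reweighted conditional average is given below (our illustration, not the code used here); z, w, and inv_m are assumed to be per-frame arrays of CV values, MBAR weights, and autodiff values of $m_\xi^{-1}$, and the temperature must be supplied by the caller.

```python
import numpy as np

def conditional_lambda(z, w, inv_m, z_ts, T, half_width=0.1):
    # <lambda_xi>_{z_TS}: weighted average of the thermal wavelength over frames
    # whose CV value lies within a narrow window around z_TS.
    kB_T = 0.0083145 * T * 100.0   # kB*T in amu*A^2/ps^2 (1 kJ/mol = 100 amu*A^2/ps^2)
    h = 39.903                     # Planck's constant in amu*A^2/ps
    sel = np.abs(z - z_ts) < half_width
    lam = np.sqrt(h**2 * inv_m[sel] / (2.0 * np.pi * kB_T))   # Angstrom
    return np.average(lam, weights=w[sel])
```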
Since Millan et al. 43 used the same CV for all of the systems they simulated, the correction of the activation free energy should be about the same for all. Therefore the correction should not affect the ordering of the barriers (∆ F ‡ AB ) they determined approximately. However, we would expect any comparison with experimental activation barriers to depend strongly on the difference between the approximate and exact treatments.
C. Chemically Realistic Model - Radical Cyclization
As a second chemical example, we consider the intramolecular cyclization of the 5-hexenyl radical (see Fig. 8), a radical clock reaction. 46 The forward reaction involves the formation of a new single bond and the conversion of a C-C double bond to a single bond.
Carbon single bonds are usually stiff and have high activation barriers, as reflected in the experimental activation free energy for the cyclization, $\Delta F^{\ddagger}_{exp}$(300 K) = 42 ± 4 kJ/mol. 47 Hence, we expect the approximate relation in eq. (34) to hold. As CV we choose the distance between the two carbon atoms (C1 and C5) that form a new bond, ξ = d(C1 − C5). The associated mass $m_\xi$ is constant and equal to the reduced mass of the two carbon atoms (i.e., 6 amu). The system was simulated at 300 K by means of ab initio MD at the ωB97M-V/def2-TZVP 48,49 level of theory and solvated in benzene with the COSMO continuum solvation model. 50 We employed WTM-eABF 51-53 as the enhanced-sampling algorithm. The unbiased weights were recovered with the recently developed combination of eABF and MBAR. 54 Details of the simulation are given in the supplementary material. The FEP (Fig. 9a) shows one deep minimum for P (methylcyclopentane radical) and three shallow minima for R (5-hexenyl radical). We take all configurations with z > 2.2 Å to belong to R. The scaled orthogonality measure $D_s$ (eq. (45), Fig. 9b) is lower than 0.25 for almost the entire range of z values, rising sharply only at the ends of the simulated range. The plot of $D_s$ shows a clear local minimum near the local maximum of the FEP, indicating that it is a good CV.
In Tab. IV we can see that the exact reaction and activation free energies obtained from eqs. (42) and (30), respectively, agree well with the approximate ones. Hence, this example confirms that eq. (34) does hold in cases of high barriers, low temperatures, light CVs, and narrow wells about the minima of R and P on the PES.
VI. DISCUSSION AND CONNECTION TO PRIOR WORK
This study is not the first work to present expressions for the rate constant and activation free energy based on transition state theory. 29-34,55 However, previous work often lacks a stepwise derivation of its expression for the rate constant. Further, Refs. 55 and 34, which also present equations for the activation free energy, still include local differences of the FEP in their final expressions, which can thus be interpreted as corrections to the approximate treatment. Because of complex notation it is difficult to verify whether their expressions are equivalent to our eq. (30). It is perhaps due to the complexity of the equations and lack of physical interpretability that their expressions have not been widely adopted. Therefore, we are motivated to present a meticulous and straightforward derivation of the exact formula (eq. (30)) for the activation free energy $\Delta F^{\ddagger}_{RP}$ for the two-state process from a reactant R to a product P in a novel form. The formula involves three key quantities having clear physical interpretations. Two of these, $\rho(z_{TS})$ and $P(R) = \int_{\Omega_R} dz\,\rho(z)$, depend only on ρ(z), the marginal probability density that the CV ξ(x) takes the value z. The third, $\langle\lambda_\xi\rangle_{z_{TS}}$, can be rewritten as $\sqrt{h^2/2\pi k_BT}\,\langle\sqrt{m_\xi^{-1}}\rangle_{z_{TS}}$ to indicate explicitly the dependence on the effective mass of the pseudo-particle associated with the CV. The three clearly defined terms also facilitate implementation.
The presence of the factor $\langle\sqrt{m_\xi^{-1}}\rangle_{z_{TS}}$ in the exact formula for $k_{R\to P}$ (eq. (28)) shows that knowledge of ρ(z) (or alternatively A(z)) alone is insufficient to determine the rate constant $k_{R\to P}$. We note that in the "conventional" transition state theory 36 the rate constant is expressed in terms of canonical partition functions for reactant and activated complex (minus that associated with the CV (reaction coordinate)) and the discrete masses of the atoms enter into them. In the present treatment the effective mass $m_\xi$ depends not only on the discrete masses of atoms but also on the gradient of the CV (see eq. (27)). If the CV is linear in the Cartesian coordinates, then $\langle m_\xi^{-1}\rangle_{z_{TS}}$ is readily expressible explicitly in terms of the discrete masses. 56 In general, however, the CV-conditioned ensemble average must be computed.
The "gauge-independent geometric" free-energy profile, given by
A G (z) = −k B T ln [ρ(z) |∇ x ξ| z ] = −k B T ln ρ(z) m −1 ξ z .(51)
has been proposed 24,25 as an alternative to the "standard" FEP (eq. (32)). Since the geometric FEP at the transition point is related to ∆F ‡ RP according to
A G (z TS ) − k B T ln h 2 2πk B T = −k B T ln ρ(z TS ) λ ξ TS = ∆F ‡ RP − k B T ln P(R) ,(52)
it is also referred to as the "kinetic" free-energy profile. 57 On one hand, like A(z), A G (z) cannot alone provide ∆F ‡ RP . On the other, unlike A(z), A G (z) cannot alone furnish ∆F RP . The essential reason is that e −βA G (z) is generally not a probability density, whereas e −βA(z) always is.
We remark on an apparent inconsistency in the dimensions of terms in eq. (31), as noted in Ref. 57. We observe that the dimensions of ρ(z) are those of ξ −1 and the dimensions of λ ξ are those of ξ. The argument of the logarithm is therefore dimensionless, as it should be. Thus, there is no inconsistency. It appears only because of the tendency to overlook that the definition of the FEP includes an implicit scaling factor, which is unfortunately rarely, if ever, pointed out. The same remarks apply as well to the geometric FEP.
VII. CONCLUSION
Our applications of the exact formula for the activation free energy demonstrate how significant errors can arise when ∆F ‡ RP is approximated simply by the difference between the values of the FEP at the transition state and reactant.
The often employed procedure to obtain $\Delta F^{\ddagger}_{RP}$ solely from the FEP (by taking the difference between the values at the transition state and reactant (eq. (34))) is an approximation. If ρ(z) is strongly peaked in the vicinity of the minimum of R (i.e., at low temperature and small effective mass $m_\xi$), then eq. (34) may be satisfactory (see Section V C). However, it is especially questionable when the temperature is high, $m_\xi$ is large, and the barrier of the PES between R and P is low (see Section V B).
The exact formula for ∆F ‡ RP (eq. (30)) assumes implicitly that the CV is good (i.e., it is orthogonal to the separatrix). According to our study of the two-dimensional model PES with a systematically variable CV, as the CV becomes less good the reliability of ∆F ‡ RP decreases markedly, while that of the reaction free energy ∆F RP is only slightly affected. We conclude that one must choose the CV with considerable caution in order to achieve the same accuracy for both kinetic and thermodynamic properties.
The exact formulas for ∆F ‡ RP (eq. (30)) and ∆F RP (eq. (42)) depend only on CV-conditioned ensemble averages, which are readily available from enhanced-sampling simulations via reweighting techniques. 45,54,[58][59][60][61][62][63] Therefore, it should be more convenient to use these formulas than to resort to alternative special sampling strategies such as infrequent metadynamics. 64,65 In light of the results of the present study and those of our prior work 23 , we recommend less reliance on the FEP alone and more on the exact formulas, which can be easily evaluated from data provided by commonly employed advanced-sampling algorithms. The exact formulas are more reliable and can be clearly related to experimental data. In this regard we agree with Ref. 34 that use of the FEP alone should be discouraged, except we think that ∆F ‡ RP is a better touchstone for comparison between theory and experiment than the rate constant itself.
SUPPLEMENTARY MATERIAL
The supplementary material contains the following: (1) proof that eq. (14) yields the frequency of crossing the dividing surface; (2) proof of the equivalency of eqs. (12) and (13); (3) analytical one-dimensional models of Section III; (4) computational details of Section IV; (5) computation of the frequency of crossing the dividing surface; (6) plots of the FEPs for models of Section V A with large ε; (7) computational details of Section V C.
AUTHOR DECLARATIONS
Conflict of Interest
The authors have no conflicts of interest to disclose.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request.

Appendix A: Proof that Eq. (14) Yields the Frequency of Crossing the Dividing Surface

Starting with eq. (14) of the article,
$$\nu = \lim_{\tau\to\infty}\frac{1}{\tau}\int_0^\tau dt\,\left|\frac{d}{dt}\Theta[\xi(x(t)) - z_{TS}]\right|, \tag{A1}$$

we apply the chain rule of differentiation to obtain

$$\nu = \lim_{\tau\to\infty}\frac{1}{\tau}\int_0^\tau dt\,\left|\frac{d\Theta}{d\xi}\,\dot{\xi}(t)\right| = \lim_{\tau\to\infty}\frac{1}{\tau}\int_0^\tau dt\,\left|\frac{d\Theta}{d\xi}\right|\,|\dot{\xi}(t)| = \lim_{\tau\to\infty}\frac{1}{\tau}\int_0^\tau dt\,\delta[\xi(t) - z_{TS}]\,|\dot{\xi}(t)|. \tag{A2}$$

We now utilize the property of the Dirac distribution 66

$$\delta[f(t)] = \sum_i\frac{\delta(t - t_i)}{|(df/dt)_{t=t_i}|}, \tag{A3}$$

where $f(t_i) = 0$ and $(df/dt)_{t=t_i} \neq 0$, to recast eq. (A2) as

$$\nu = \lim_{\tau\to\infty}\frac{1}{\tau}\int_0^\tau dt\,\sum_i\frac{\delta(t - t_i)}{|\dot{\xi}(t_i)|}\,|\dot{\xi}(t)| = \lim_{\tau\to\infty}\frac{1}{\tau}\sum_i\frac{1}{|\dot{\xi}(t_i)|}\int_0^\tau dt\,\delta(t - t_i)\,|\dot{\xi}(t)| = \lim_{\tau\to\infty}\frac{1}{\tau}\sum_{j=1}^{N_\tau} 1 = \lim_{\tau\to\infty}\frac{N_\tau}{\tau}, \tag{A4}$$
where N τ is the number of zeroes of ξ(t) − z TS on the interval [0, τ ], which is equal to the number of times ξ(t)− z TS changes sign during the interval. Hence, N τ /τ is just the frequency of crossing the dividing surface.
Appendix B: Proof of the Equivalency of Eqs. (12) and (13)

We assume that in general the CV is "good" in that it distinguishes properly between R and P (i.e., no configuration of R has the same value of the CV as a configuration of P). Moreover, implicit in eq. (13) is the assumption that the value of $z_{TS} - \xi(\tilde{\mathbf{x}})$ is positive for configurations of R and negative for those of P. It follows that

$$\left\langle\Theta[z_{TS} - \xi(\tilde{\mathbf{x}})]\right\rangle_{p,q} = P(R), \tag{B1}$$
where the Heaviside function is 1 in the domain of R, where its argument is positive.
Assuming the system to be ergodic, we can replace the time average in eq. (A2) with the ensemble average

$$\nu = \left\langle\delta[\xi(\tilde{\mathbf{x}}) - z_{TS}]\,|\dot{\xi}(\tilde{\mathbf{x}})|\right\rangle_{p,q}. \tag{B2}$$
We note that since ν is the frequency of crossings due to both forward and reverse reactions, the absolute value of the rate of change of the CV is necessary to prevent cancellations of forward and reverse contributions. Further, because the system is taken to be in thermodynamic equilibrium, the forward and reverse reactions occur with the same frequency. Hence, we can simply count reactions in one direction, say forward from R to P, where $\dot{\xi} > 0$. Then the absolute value of $\dot{\xi}$ becomes unnecessary. The sign of the velocity is enforced by introducing a Heaviside function. Thus, we have

$$\frac{\nu}{2} = \left\langle\delta[\xi(\tilde{\mathbf{x}}) - z_{TS}]\,\dot{\xi}(\tilde{\mathbf{x}})\,\Theta(\dot{\xi})\right\rangle_{p,q}. \tag{B3}$$
This is precisely the numerator of the expression in eq. (13) in the article. Dividing eq. (B3) by eq. (B1) finally gives

$$\frac{\nu}{2P(R)} = \frac{\left\langle\delta[\xi(\tilde{\mathbf{x}}) - z_{TS}]\,\dot{\xi}(\tilde{\mathbf{x}})\,\Theta(\dot{\xi})\right\rangle_{p,q}}{\left\langle\Theta[z_{TS} - \xi(\tilde{\mathbf{x}})]\right\rangle_{p,q}}. \tag{B4}$$
Appendix C: Analytical One-Dimensional Models of Section III
Here we treat a one-dimensional system consisting of a single particle of mass m moving on a PES with two minima separated by a maximum. We consider two model PESs, comparing the approximate and "exact" free energies of activation.
We take the PES of the first to be a square well specified piecewise by

$$U_{SW}(x) = \begin{cases}\infty, & x < 0\\ \epsilon_R, & 0 < x < L_R - \delta\\ \epsilon_B, & L_R - \delta < x < L_R + \delta\\ \epsilon_P, & L_R + \delta < x < L\\ \infty, & L < x\end{cases} \tag{C1}$$
Assuming that δ ≪ L, $\epsilon_B > \epsilon_R$, and $\epsilon_B > \epsilon_P$, and taking the CV to be ξ(x) = x, we derive the probability density

$$\rho_{SW}(z) = Z^{-1}\,e^{-\beta U_{SW}(z)}. \tag{C2}$$
Hence the probability of observing R is

$$P_{SW}(R) = \int_{\Omega_R} dz\,\rho_{SW}(z) = Z^{-1}\,L_R\,e^{-\beta\epsilon_R}. \tag{C3}$$
According to eq. (30), we have for the "exact" free energy of activation

$$\Delta F^{\ddagger}_{SW} = -k_BT\ln\frac{\rho_{SW}(z_{TS})\,\langle\lambda_\xi\rangle_{z_{TS}}}{P_{SW}(R)} = -k_BT\ln\left[\frac{e^{-\beta\epsilon_B}}{Z}\,\frac{Z}{L_R\,e^{-\beta\epsilon_R}}\,\sqrt{\frac{h^2}{2\pi m k_BT}}\right] = \epsilon_B - \epsilon_R - k_BT\ln\sqrt{\frac{h^2}{2\pi m k_BT L_R^2}}, \tag{C4}$$
where we use the relation $\langle\lambda_\xi\rangle_{z_{TS}} = \sqrt{h^2/2\pi m k_BT}$. For the second model we consider a double-well PES having minima of $\epsilon_R$ at $x_{R,min}$ and $\epsilon_P$ at $x_{P,min}$, separated by a maximum of $\epsilon_B$ at the transition state. We approximate this PES about the minima by the harmonic-oscillator (HO) approximation (e.g., $U(x) \approx U_{HO}(x) = \epsilon_R + \frac{k}{2}(x - x_{R,min})^2$), where the force constant is $k = \left(\frac{d^2U}{dx^2}\right)_{x=x_{R,min}}$. We again take the CV to be ξ(x) = x. Thus, the probability of observing R is
$$P(R) = \int_{\Omega_R} dz\,e^{-\beta U(z)}\,Z^{-1} \approx Z^{-1}\int_{-\infty}^{\infty} dx\,e^{-\beta\left(\epsilon_R + \frac{k}{2}(x - x_{R,min})^2\right)} = \frac{e^{-\beta\epsilon_R}}{Z}\sqrt{\frac{2\pi}{k\beta}}, \tag{C5}$$
where we approximate the probability density in the domain of R by $\rho_{HO}(z) = Z^{-1}e^{-\beta U_{HO}(z)}$. Using eq. (30), we obtain the "exact" activation free energy

$$\Delta F^{\ddagger}_{HO} = -k_BT\ln\frac{\rho_{HO}(z_{TS})\,\langle\lambda_\xi\rangle_{z_{TS}}}{P_{HO}(R)} = -k_BT\ln\left[\frac{e^{-\beta\epsilon_B}}{Z}\,\frac{Z}{e^{-\beta\epsilon_R}}\sqrt{\frac{k}{2\pi k_BT}}\,\sqrt{\frac{h^2}{2\pi m k_BT}}\right] = \epsilon_B - \epsilon_R - k_BT\ln\sqrt{h^2 k/[(2\pi k_B)^2 m T^2]}, \tag{C6}$$
where we again invoke the relation $\langle\lambda_\xi\rangle_{z_{TS}} = \sqrt{h^2/2\pi m k_BT}$. According to eq. (34), the approximate activation free energy for both models is given by:

$$\Delta\tilde{F}^{\ddagger} = A(z_{TS}) - A(z_{R,min}) = k_BT\ln\frac{\rho(z_{R,min})}{\rho(z_{TS})} = U(z_{TS}) - U(z_{R,min}) = \epsilon_B - \epsilon_R \tag{C7}$$
for both models. Comparing eq. (C7) with eq. (C4) and with eq. (C6), we see that the difference between "exact" and approximate activation free energies is, respectively,

$$\mathrm{corr}_{SW} = \Delta F^{\ddagger}_{SW} - \Delta\tilde{F}^{\ddagger} = k_BT\ln\sqrt{2\pi k_BT\,m L_R^2/h^2} \tag{C8}$$

$$\mathrm{corr}_{HO} = \Delta F^{\ddagger}_{HO} - \Delta\tilde{F}^{\ddagger} = k_BT\ln\sqrt{(2\pi)^2 k_B^2 T^2\,m/(h^2 k)}. \tag{C9}$$
yields $m = -a/(1-a)$, which, when substituted back into eq. (D7), gives

$$\mathbf{e}_{S(\theta)} = \frac{(a-1)\,\mathbf{e}_x + a\,\mathbf{e}_y}{\sqrt{a^2 + (1-a)^2}}. \tag{D9}$$
Computation of the Probability Density
The marginal probability density is given by

$$\rho(z) = Z^{-1}\int dx\int dy\; e^{-\beta U(x,y)}\,\delta(\xi(x,y) - z), \tag{D10}$$

where

$$Z = \int dx\int dy\; e^{-\beta U(x,y)}. \tag{D11}$$

To facilitate the evaluation of the double integrals, we transform from Cartesian coordinates to the orthogonal coordinates defined by

$$q_1 = \xi(x,y) = ax + (1-a)y \tag{D12}$$

$$q_2 = (a-1)x + ay. \tag{D13}$$

That $\nabla q_1\cdot\nabla q_2 = 0$ is manifest. The inverse transformation is

$$x = \frac{aq_1 + (a-1)q_2}{d}, \qquad y = \frac{(1-a)q_1 + aq_2}{d}, \tag{D14}$$

where $d = a^2 + (1-a)^2$. Hence, the Jacobian is

$$\mathbf{J} = \begin{pmatrix}\partial x/\partial q_1 & \partial x/\partial q_2\\ \partial y/\partial q_1 & \partial y/\partial q_2\end{pmatrix} = \begin{pmatrix}a/d & (a-1)/d\\ (1-a)/d & a/d\end{pmatrix}. \tag{D15}$$
From eq. (D10) we have

$$\rho(z) = Z^{-1}\int dq_1\int dq_2\,|J|\,e^{-\beta U(x,y)}\,\delta(q_1 - z) = Z^{-1}\int dq_2\,|J|\,e^{-\beta U(x,y)}, \tag{D16}$$

where the Cartesian coordinates that are the arguments of the PES are given in terms of $q_1 = \xi$ and $q_2$ by eq. (D14). Using eq. (D15) and the definition of d, we get |J| = 1/d. Hence,

$$\rho(z) = \frac{\int dq_2\,e^{-\beta U(x,y)}}{\int dq_1\int dq_2\,e^{-\beta U(x,y)}}. \tag{D17}$$
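A minimal numerical sketch of eq. (D17) (ours, not the authors' code) evaluates the $q_2$ quadrature at fixed $q_1 = z$ for the model parameters of Section IV; the integration range and grid resolution are arbitrary choices.

```python
import numpy as np

eps, b, c = 25.0, 2.0, 0.25
beta = 1.0 / 2.494            # mol/kJ at 300 K

def U(x, y):
    # model PES of eq. (43)
    return eps * (y**4 + x**4 - b * x**2 - c * x)

def rho(z_grid, a, q2_max=4.0, n=2001):
    d = a**2 + (1.0 - a)**2
    q2 = np.linspace(-q2_max, q2_max, n)
    dens = []
    for q1 in z_grid:
        x = (a * q1 + (a - 1.0) * q2) / d     # inverse transform, eq. (D14)
        y = ((1.0 - a) * q1 + a * q2) / d
        dens.append(np.trapz(np.exp(-beta * U(x, y)), q2))
    dens = np.array(dens)
    return dens / np.trapz(dens, z_grid)      # normalize over the z grid
```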
Appendix E: Computation of the Frequency of Crossing the Dividing Surface

Details of the MD Simulation

For each simulation corresponding to a given temperature and particle mass, ten independent Langevin dynamics simulations were carried out with a friction constant of 1 ps⁻¹, a time step of 1 fs, and a total time of 10 ns. The system was propagated using the velocity Verlet algorithm. Crossing frequencies and activation free energies were computed for each simulation independently; only the final values were used for averages and estimation of the standard deviation.
Use of the Heaviside Function
Approximating the time derivative by the forward finite-difference formula, we rewrite eq. (A1) as

$$\nu = \frac{1}{\tau}\sum_{i=1}^{N_f-1}\Delta t\,\frac{\left|\Theta(\xi(t_{i+1}) - z_{TS}) - \Theta(\xi(t_i) - z_{TS})\right|}{\Delta t} = \frac{1}{\tau}\sum_{i=1}^{N_f-1}\left|\Theta(\xi(t_{i+1}) - z_{TS}) - \Theta(\xi(t_i) - z_{TS})\right|, \tag{E1}$$

where the summation on i is over consecutive time steps of length Δt, $N_f$ is the number of steps of the MD simulation, and $\tau = (N_f - 1)\Delta t$ is the duration of the MD trajectory. That the expression in eq. (E1), which is straightforward to implement, yields a proper count may be seen as follows. If, at time step i, $\xi(t_i)$ and $\xi(t_{i-1})$ are both greater than or both less than $z_{TS}$, the contribution is zero, since $z_{TS}$ is not crossed during the step. If, on the other hand, $\xi(t_{i-1}) < z_{TS}$ and $\xi(t_i) > z_{TS}$, or $\xi(t_{i-1}) > z_{TS}$ and $\xi(t_i) < z_{TS}$, then the contribution is 1, as $z_{TS}$ is crossed in one direction or the other during the step. We note that Δt must be sufficiently small that highly frequent crossings are not inadvertently missed.
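A minimal sketch of this counter (ours, not the production code): counting sign changes of $\xi(t) - z_{TS}$ along a discretized trajectory.

```python
import numpy as np

def crossing_frequency(xi_traj, z_ts, dt):
    # xi_traj: 1D array of CV values at consecutive MD steps separated by dt (ps)
    above = xi_traj > z_ts                      # Theta(xi - z_TS) per frame
    n_cross = np.count_nonzero(above[1:] != above[:-1])
    return n_cross / ((len(xi_traj) - 1) * dt)  # crossings per ps, eq. (E1)
```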
Use of the Dirac Delta Function and the Atomic Velocities
We consider here an alternative approach to the computation of ν. We begin by recasting eq. (A2) as

$$\nu = \lim_{\tau\to\infty}\frac{1}{\tau}\int_0^\tau dt\,\delta[\xi(t) - z_{TS}]\,\left|(\nabla_x\xi)^T\cdot\dot{\mathbf{x}}(t)\right|. \tag{E2}$$

Here we express the rate of change of the CV as

$$\dot{\xi}(t) = \frac{d\xi}{d\mathbf{x}}\cdot\frac{d\mathbf{x}}{dt} = (\nabla_x\xi)^T\cdot\dot{\mathbf{x}}(t), \tag{E3}$$

where $(\nabla_x\xi)^T = (\partial\xi/\partial x_1, \partial\xi/\partial x_2, \ldots, \partial\xi/\partial x_{3N})$ is the 3N-dimensional gradient. Discretizing the integration on time, we rewrite eq. (E2) as

$$\nu = \frac{1}{\tau}\sum_{i=0}^{N_f-1}\Delta t\,\delta[\xi(t_i) - z_{TS}]\,\left|(\nabla_x\xi(t_i))^T\cdot\dot{\mathbf{x}}(t_i)\right| = \frac{1}{N_f-1}\sum_{i=0}^{N_f-1}\delta[\xi(t_i) - z_{TS}]\,\left|(\nabla_x\xi(t_i))^T\cdot\dot{\mathbf{x}}(t_i)\right|. \tag{E4}$$
In practice, of course, the duration of the MD simulation, and therefore the number of time steps, are finite. It is very unlikely that during a finite simulation $\xi(t_i)$ is ever exactly equal to $z_{TS}$. Hence, almost every configuration x(t) gives zero contribution. To circumvent this problem we introduce a continuous function to represent the delta function approximately. We begin by defining the continuous approximation to the Heaviside function

$$\Theta(x;\alpha) = \frac{1}{1 + e^{-\alpha x}}, \tag{E5}$$

where α is a positive real number having dimension reciprocal length. (Observe that we can formally express the true Heaviside "function" by $\Theta(x) = \lim_{\alpha\to\infty}\Theta(x;\alpha)$.) The corresponding delta function is given by the derivative

$$\delta(x;\alpha) = \frac{d\Theta(x;\alpha)}{dx} = \frac{\alpha\,e^{-\alpha x}}{(1 + e^{-\alpha x})^2}. \tag{E6}$$

Note that δ(x; α) satisfies exactly the relation

$$\int_{-\infty}^{\infty} dx\,\delta(x;\alpha) = 1. \tag{E7}$$
We can now rewrite eq. (E4) as

$$\nu_\alpha = \frac{1}{N_f-1}\sum_{i=0}^{N_f-1}\delta[\xi(t_i) - z_{TS};\alpha]\,\left|(\nabla_x\xi(t_i))^T\cdot\dot{\mathbf{x}}(t_i)\right|, \tag{E8}$$

where $\delta[\xi(t_i) - z_{TS};\alpha]$ is approximated by eq. (E6). The index α emphasizes the dependence of the crossing frequency on this methodological parameter.
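A minimal sketch of this estimator (ours); note that δ(x; α) is even in x, so the exponent can be taken with an absolute value for numerical safety.

```python
import numpy as np

def nu_alpha(xi_traj, xidot_traj, z_ts, alpha):
    # smoothed-delta estimate of the crossing frequency, eqs. (E6) and (E8);
    # xidot_traj holds per-frame CV velocities (grad xi)^T . xdot
    ax = np.abs(alpha * (xi_traj - z_ts))
    delta = alpha * np.exp(-ax) / (1.0 + np.exp(-ax))**2
    return np.mean(delta * np.abs(xidot_traj))
```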
The quality of this approximation depends on the choice of α. On one hand, if α is too small, MD frames that are far away from the dividing surface contribute significantly, thus yielding too large ν. On the other hand, if α is too large, all frames are weighted so lightly that ν is too small. The influence of α is shown graphically in Fig. 10. Note that we plot the magnitude of the difference so that small differences are visible on the logarithmic scale. One can clearly see that extreme choices of α can lead to errors in ν of up to a factor of 100. As predicted, $\nu_\alpha$ tends to zero for very large α, which is reflected by the curves approaching 1 (10⁰). The optimal choice of α seems to be between 10² and 10³. Even choices in the range 10¹ to 10⁴ generally yield $\nu_\alpha$'s which are around 0.9ν_ref to 1.1ν_ref, which corresponds at 300 K to an error of approximately 0.25 kJ/mol in $\Delta F^{\ddagger}_{RP}$.
FIG. 1. Schematic summary of the present work showing an FEP with minima corresponding to reactant (R) and product (P) separated by a maximum. Commonly assumed, but incorrect, expression for the activation free energy highlighted in red. The expression derived in this work is highlighted in green.
FIG. 2. Contour plot of the PES (eq. (43)) in units of kJ/mol. Red line is the ideal separatrix S. Dotted blue line is the "trial" separatrix S(θ) for θ = 45°, where θ is the angle between ∇ξ (blue arrow) and S.
FIG. 3. Top panel, plots of probability density (right ordinate, orange curve) and FEP (left ordinate, blue curve); bottom panel, D(z) (left ordinate, blue curve) and Ds(z) (right ordinate, orange curve) for three choices of CV: a) θ = 90°, b) θ = 48° (maximum in Fig. 4c), and c) θ = 32°, the last value for which the FEP still has a detectable local maximum.
FIG. 4. Plots of a) reaction free energy ∆F_RP, b) activation free energy ∆F‡_RP, c) orthogonality criterion D(z_max), and d) scaled criterion Ds(z_max) versus θ. Orange dashed line indicates θ = 45°. Gray dashed line in b) guides the eye to 0 kJ/mol.
FIG. 5. a) PES U(x) with ε = 5 kJ/mol (eq. (46)); b) FEP for CV ξ = x (eq. (49)); c) FEP for CV ξ = 1/(x+5) (eq. (50)).
FIG. 6. (a) Migration of the [Cu(NH₃)₂]⁺ complex from cavity A through the 8-ring window into cavity B. (b) Depiction of the CV.
FIG. 7. Comparison of the FEP obtained in the present study with that reported in Ref. 43.
FIG. 8. Scheme of the intramolecular cyclization of the reactant 5-hexenyl radical to the product methylcyclopentane radical.
FIG. 9. a) Free energy profile for the reaction shown in Fig. 8. b) Orthogonality measure Ds(z).
the MPI-FKF Stuttgart. R.G.-B. acknowledges support from the Jeffrey Cheah Career Development Chair.
FIG. 11. a) PES U(x) with ε = 50 kJ/mol (eq. (46)); b) FEP for CV ξ = x; c) FEP for CV ξ = 1/(x+5).
TABLE I. Activation free energies (kJ mol⁻¹) for the one-dimensional model PES U (eq. (46)) with ε = 5 kJ/mol for selections of temperatures (Kelvin) and particle masses (amu). $\Delta\tilde{F}^{\ddagger}_1 = A_1(z_{TS}) - A_1(z_{R,min})$. Letters above columns specify the following differences: a) $A_2(z_{TS}) - A_2(z_{R,min})$, b) $A_2(z_{max}) - A_2(z_{R,min})$, c) $A_2(z_{TS}) - A_2(z_{P,min})$, and d) $A_2(z_{max}) - A_2(z_{P,min})$. Numbers above columns specify particle masses.

T/K  | $\Delta\tilde{F}^{\ddagger}_1$ | $\Delta\tilde{F}^{\ddagger}_2$: a, b, c, d | $\Delta F^{\ddagger}$ (eq. (30)): m = 1, 9, 25, 49, 100 | $\Delta F^{\ddagger}$ (eq. (48))*: m = 1, 9, 25, 49, 100
100  | 4.53 | 3.77, 3.78, 5.09, 5.10 | 4.50, 5.41, 5.84, 6.12, 6.42 | 4.49±0.10, 5.48±0.13, 5.90±0.29, 6.12±0.27, 6.22±0.26
200  | 4.53 | 3.10, 3.13, 5.70, 5.72 | 5.56, 7.39, 8.24, 8.80, 9.39 | 5.57±0.09, 7.39±0.10, 8.24±0.11, 8.78±0.11, 9.29±0.17
300  | 4.53 | 2.50, 2.55, 6.35, 6.40 | 6.98, 9.72, 11.00, 11.84, 12.73 | 6.97±0.08, 9.67±0.06, 11.02±0.12, 11.83±0.06, 12.79±0.16
500  | 4.53 | 1.43, 1.58, 7.79, 7.94 | 10.39, 14.96, 17.08, 18.48, 19.97 | 10.35±0.08, 14.96±0.05, 17.08±0.08, 18.47±0.10, 19.93±0.10
1000 | 4.53 | 0.00, 0.67, 11.91, 12.58 | 20.55, 29.69, 33.93, 36.73, 39.70 | 20.47±0.11, 29.63±0.16, 33.97±0.18, 36.74±0.19, 39.60±0.17

* ν obtained from MD by means of the Heaviside function (see supplementary material). The number after the ± sign is the standard deviation.
TABLE II. Activation free energies (kJ mol⁻¹) for the one-dimensional model PES U (eq. (46)) with ε = 50 kJ/mol for selections of temperatures (Kelvin) and particle masses (amu). $\Delta\tilde{F}^{\ddagger}_1 = A_1(z_{TS}) - A_1(z_{R,min})$. Letters above columns specify the following differences: a) $A_2(z_{max}) - A_2(z_{R,min})$ and b) $A_2(z_{max}) - A_2(z_{P,min})$. Numbers above columns specify particle masses.

T/K  | $\Delta\tilde{F}^{\ddagger}_1$ | $\Delta\tilde{F}^{\ddagger}_2$: a, b | $\Delta F^{\ddagger}$ (eq. (30)): m = 1, 9, 25, 49, 100
100  | 45.30 | 44.48, 45.85 | 44.32, 45.23, 45.66, 45.94, 46.23
200  | 45.30 | 43.68, 46.40 | 44.50, 46.33, 47.18, 47.74, 48.33
300  | 45.30 | 42.90, 46.96 | 45.12, 47.86, 49.14, 49.98, 50.87
500  | 45.30 | 41.38, 48.09 | 47.12, 51.69, 53.81, 55.21, 56.70
1000 | 45.30 | 37.77, 51.00 | 54.58, 63.72, 67.96, 70.76, 73.73

TABLE III. Comparison of approximate and exact free energies (in kJ mol⁻¹).

              | z_TS/Å | ∆F_AB | $\Delta\tilde{F}_{AB}$ | ∆F‡_AB | $\Delta\tilde{F}^{\ddagger}_{AB}$
Ref. 43       | 0.35   | -     | 1.5  | -    | 17
present study | 0.35   | 2.8   | 1.6  | 26.2 | 18.1
present study | 0.00   | 2.8   | 1.6  | 25.8 | 17.6

TABLE IV. Comparison of approximate and exact free energies (in kJ mol⁻¹) for the reaction shown in Fig. 8.

∆F_RP | $\Delta\tilde{F}_{RP}$ | ∆F‡_RP | $\Delta\tilde{F}^{\ddagger}_{RP}$ | ∆F‡_PR | $\Delta\tilde{F}^{\ddagger}_{PR}$
-49.1 | -51.1 | 48.2 | 49.7 | 97.3 | 100.7
ACKNOWLEDGMENTSThe authors thank Dr. Reisel Millan, who provided full access to their simulations of chabazite and furnishedAppendix D: Computational Details of Section IV Required numerical computations are handled by NumPy67. Plots are generated with Matplotlib 68 .Determination of the Parameter aThe CV in Section IV of the article is given by eq.(40)ξ(x, y) = ax + (1 − a)y ,where a is restricted to the interval [0, 1]. It is determined by specifying the angle θ between ∇ξ and e S , the unit vector parallel with the true separatrix S (i.e., e y ). The angle is related to the two vectors bywhere θ is restricted to the interval [0, π/2]. From eq. (D1) we obtainSubstitution of eq. (D3) into eq. (D2) yieldsSolving this equation for a, we get a ± = sin 2 θ sin 2 θ − cos 2 θ ± sin 4 θ (sin 2 θ − cos 2 θ) 2 − sin 2 θ sin 2 θ − cos 2 θ (D5) We observe that this formula breaks down if sin θ = cos θ (i.e., if θ = π/4). In this case cos θ = 1/ √ 2 and from eq. (D4) we obtain a = 1/2, which corresponds to the CV whose gradient is (e x + e y )/2. The physically acceptable solutions given by eq. (D5) are a + when θ ∈ [0, π/4[ and a − when θ ∈ ]π/4, π/2]Determination of the "Trial" SeparatrixCorresponding to the chosen CV (i.e., to θ) is the "trial" separatrix S(θ), which is a line having the equationwhere the point (x max , 0) is the TS. The slope m is determined by requiring ∇ξ to be orthogonal to the unit vector parallel with S(θ), which is given byThe orthogonality condition We describe here the numerical implementation of the expression for ν (eq. (A1)) in the MD simulation.Appendix F: Plots of the FEPs for Models of Section V A with LargeIn Section V A of the article we consider two CVs: ξ 1 (x) = x and ξ 2 (x) = 1/(x + 5).Fig. 11ashows the the PES eq. (46) for the case a = 1Å −2 , b = 1Å, and = 50 kJ/mol.Figs. 11b and 11cshow plots of the FEPs based on the two CVs just as doesFig. 5of the article.Appendix G: Computation of FEPs for the Cyclization of the Hexenyl RadicalAb-initio MD simulations on DFT level were performed using a development version of the FermiONs++ program package[69][70][71][72]. For this purpose the ωB97M-V functional was applied with the def2-TZVP basis set49,73. To account for solvation in benzene the COSMO continuum solvation model was used50. An optimized minimum energy structure was heated from 0.1 K to 310 K over 3100 time steps with a step size of 0.1 fs. Initial momenta were randomly drawn from the Maxwell-Boltzmann distribution. Velocities were re-scaled every 10 time steps to increase the temperature by 1 K. For production runs the temperature was controlled by a Langevin thermostat with friction coefficient 0.001 fs −1 at 300 K. The time step was set to 0.5 fs. The dynamics was biased along reaction coordinates ξ = d(C1 − C5) with the WTM-eABF method 52,53 applying a recently published Python implementation 54 . For simulations along ξ the extended-variable was coupled to the reaction coordinate with a thermal width of 0.05Å and the system was confined with harmonic walls at 1.0Å and 6.0Å. The bias force was stored on a grid with bin width 0.05Å. The ABF force was scaled up linearly and the full bias applied in bins with more that 200 samples. For the Well-Tempered Metadynamics (WTM) potential Gaussian kernels of height 0.5 kJ/mol and standard deviation 0.1Å for ξ were deposited every 20 steps. The height of new Gaussian hills was scaled down over the course of the simulation with effective temperature of 2000 K. 
Sampling of ξ was performed with a single walker running for about 290 ps. To obtain thermodynamic properties, statistical weights of individual frames were recovered in postprocessing using the MBAR algorithm. 45 For this purpose the sampled probability distribution was repartitioned into a mixture of Gaussian distributions with standard deviation 0.05 Å for ξ. 54
[1] P. Kollman, Chem. Rev. 93, 2395 (1993).
[2] C. Chipot and A. Pohorille, eds., Free Energy Calculations (Springer-Verlag, Berlin Heidelberg, 2007).
[3] C. D. Christ, A. E. Mark, and W. F. van Gunsteren, J. Comput. Chem. 31, 1569 (2009).
[4] C. Chipot, WIREs Comput. Mol. Sci. 4, 71 (2014).
[5] N. Hansen and W. F. van Gunsteren, J. Chem. Theory Comput. 10, 2632 (2014).
[6] R. E. Skyner, J. L. McDonagh, C. R. Groom, T. van Mourik, and J. B. O. Mitchell, Phys. Chem. Chem. Phys. 17, 6174 (2015).
[7] D. L. Mobley and M. K. Gilson, Annu. Rev. Biophys. 46, 531 (2017).
[8] P. G. Bolhuis, C. Dellago, and D. Chandler, Proc. Natl. Acad. Sci. USA 97, 5877 (2000).
[9] D. Mendels, G. Piccini, and M. Parrinello, J. Phys. Chem. Lett. 9, 2776 (2018).
[10] Y. Wang, J. M. L. Ribeiro, and P. Tiwary, Nat. Commun. 10 (2019).
[11] L. Sun, J. Vandermause, S. Batzner, Y. Xie, D. Clark, W. Chen, and B. Kozinsky, J. Chem. Theory Comput. 18, 1549 (2022).
[12] L. Bonati, V. Rizzi, and M. Parrinello, J. Phys. Chem. Lett. 11, 2998 (2020).
[13] D. Wang and P. Tiwary, J. Chem. Phys. 154 (2021).
[14] W. L. Jorgensen, J. Am. Chem. Soc. 111, 3770 (1989).
[15] J. P. Valleau and G. M. Torrie, J. Comput. Phys. 23, 187 (1977).
[16] E. Darve and A. Pohorille, J. Chem. Phys. 115, 9169 (2001).
[17] A. Laio and M. Parrinello, Proc. Natl. Acad. Sci. USA 99, 12562 (2002).
[18] C. Abrams and G. Bussi, Entropy 16, 163 (2014).
[19] V. Spiwok, Z. Sucur, and P. Hosek, Biotechnol. Adv. 33, 1130 (2015).
[20] O. Valsson, P. Tiwary, and M. Parrinello, Annu. Rev. Phys. Chem. 67, 159 (2016).
[21] P. G. Bolhuis, D. Chandler, C. Dellago, and P. L. Geissler, Annu. Rev. Phys. Chem. 53, 291 (2002).
[22] N. V. Plotnikov, S. C. Kamerlin, and A. Warshel, J. Phys. Chem. B 115, 7950 (2011).
[23] J. C. B. Dietschreit, D. J. Diestler, and C. Ochsenfeld, J. Chem. Phys. 156, 114105 (2022).
[24] C. Hartmann and C. Schütte, Physica D 228, 59 (2007).
[25] C. Hartmann, J. C. Latorre, and G. Ciccotti, EPJ Special Topics 200, 73 (2011).
[26] M. Fixman, Proc. Natl. Acad. Sci. USA 71, 3050 (1974).
[27] W. K. den Otter, J. Chem. Phys. 112, 7283 (2000).
[28] W. K. den Otter, J. Chem. Theory Comput. 9, 3861 (2013).
[29] B. J. Berne, M. Borkovec, and J. E. Straub, J. Phys. Chem. 92, 3711 (1988).
[30] E. A. Carter, G. Ciccotti, J. T. Hynes, and R. Kapral, Chem. Phys. Lett. 156, 472 (1989).
[31] P. Hänggi, P. Talkner, and M. Borkovec, Rev. Mod. Phys. 62, 251 (1990).
[32] K. Hinsen and B. Roux, J. Chem. Phys. 106, 3567 (1997).
[33] T. Bučko, S. Chibani, J. F. Paul, L. Cantrel, and M. Badawi, Phys. Chem. Chem. Phys. 19, 27530 (2017).
[34] S. Bailleul, K. Dedecker, P. Cnudde, L. Vanduyfhuys, M. Waroquier, and V. Van Speybroeck, J. Catal. 388, 38 (2020).
[35] E. Vanden-Eijnden and F. A. Tal, J. Chem. Phys. 123, 184103 (2005).
[36] K. J. Laidler, Chemical Kinetics (Harper and Row, 1987), Chap. 4.
[37] M. A. Murcko, H. Castejon, and K. B. Wiberg, J. Phys. Chem. 100, 16162 (1996).
[38] J. H. Kwak, R. G. Tonkyn, D. H. Kim, J. Szanyi, and C. H. Peden, J. Catal. 275, 187 (2010).
[39] F. Gao, J. H. Kwak, J. Szanyi, and C. H. F. Peden, Topics in Catalysis 56, 1441 (2013).
[40] N. Martín, C. R. Boruntea, M. Moliner, and A. Corma, Chem. Commun. 51, 11030 (2015).
[41] E. Borfecchia, P. Beato, S. Svelle, U. Olsbye, C. Lamberti, and S. Bordiga, Chem. Soc. Rev. 47, 8097 (2018).
[42] C. H. Peden, J. Catal. 373, 384 (2019).
[43] R. Millan, P. Cnudde, V. van Speybroeck, and M. Boronat, JACS Au 1, 1778 (2021).
[44] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala, in Advances in Neural Information Processing Systems 32 (Curran Associates, Inc., 2019), pp. 8024-8035.
[45] M. R. Shirts and J. D. Chodera, J. Chem. Phys. 129, 124105 (2008).
[46] D. Griller and K. U. Ingold, Acc. Chem. Res. 13, 317 (1980).
[47] C. Chatgilialoglu, J. Dickhaut, and B. Giese, J. Org. Chem. 56, 6399 (1991).
[48] N. Mardirossian and M. Head-Gordon, J. Chem. Phys. 142, 074111 (2015).
[49] A. Schäfer, H. Horn, and R. Ahlrichs, J. Chem. Phys. 97, 2571 (1992).
[50] A. Klamt and G. Schüürmann, J. Chem. Soc., Perkin Trans. 2, 799 (1993).
[51] A. Lesage, T. Lelievre, G. Stoltz, and J. Henin, J. Phys. Chem. B 121, 3676 (2017).
[52] H. Fu, H. Zhang, H. Chen, X. Shao, C. Chipot, and W. Cai, J. Phys. Chem. Lett. 9, 4738 (2018).
[53] H. Fu, X. Shao, W. Cai, and C. Chipot, Acc. Chem. Res. 52, 3254 (2019).
[54] A. Hulm, J. C. B. Dietschreit, and C. Ochsenfeld, J. Chem. Phys. 157, 024110 (2022).
[55] G. K. Schenter, B. C. Garrett, and D. G. Truhlar, J. Chem. Phys. 119 (2003).
[56] E. Neria, S. Fischer, and M. Karplus, J. Chem. Phys. 105, 1902 (1996).
[57] K. M. Bal, S. Fukuhara, Y. Shibuta, and E. C. Neyts, J. Chem. Phys. 153 (2020).
[58] S. Kumar, J. M. Rosenberg, D. Bouzida, R. H. Swendsen, and P. A. Kollman, J. Comput. Chem. 13, 1011 (1992).
[59] G. Tiana, Eur. Phys. J. B 63, 235 (2008).
[60] M. Bonomi, A. Barducci, and M. Parrinello, J. Comput. Chem. 30, 1615 (2009).
[61] P. Tiwary and M. Parrinello, J. Phys. Chem. B 119, 736 (2015).
[62] T. M. Schäfer and G. Settanni, J. Chem. Theory Comput. 16, 2042 (2020).
[63] M. R. Shirts and A. L. Ferguson, J. Chem. Theory Comput. 16, 4107 (2020).
[64] A. Dickson, P. Tiwary, and H. Vashisth, Curr. Topics Med. Chem. 17, 2626 (2017).
[65] P. Cossio, Biophysical Journal 121, 5a (2022).
[66] C. Cohen-Tannoudji, B. Diu, and F. Laloë, Quantum Mechanics (John Wiley and Sons, New York, 1977).
[67] C. R. Harris, K. J. Millman, S. J. van der Walt, R. Gommers, P. Virtanen, D. Cournapeau, E. Wieser, J. Taylor, S. Berg, N. J. Smith, R. Kern, M. Picus, S. Hoyer, M. H. van Kerkwijk, M. Brett, A. Haldane, J. F. del Río, M. Wiebe, P. Peterson, P. Gérard-Marchant, K. Sheppard, T. Reddy, W. Weckesser, H. Abbasi, C. Gohlke, and T. E. Oliphant, Nature 585, 357 (2020).
[68] J. D. Hunter, Computing in Science & Engineering 9, 90 (2007).
[69] J. Kussmann and C. Ochsenfeld, J. Chem. Phys. 138, 134114 (2013).
[70] J. Kussmann and C. Ochsenfeld, J. Chem. Theory Comput. 11, 918 (2015).
[71] H. Laqua, T. H. Thompson, J. Kussmann, and C. Ochsenfeld, J. Chem. Theory Comput. 16, 1456 (2020).
[72] H. Laqua, J. Kussmann, and C. Ochsenfeld, J. Chem. Phys. 154, 214116 (2021).
[73] N. Mardirossian and M. Head-Gordon, J. Chem. Phys. 144, 214110 (2016).
FIG. 10. Plots of the magnitude of relative difference between the reference crossing frequency computed by eq. (S35) and that computed by eq. (S40) versus α for the one-dimensional model PES in Section V A (ε = 5 kJ/mol) for a selection of masses and temperatures. Each curve is an average over 10 independent MD trajectories. The grey dashed line marks the threshold of 1 % relative deviation.
| []
|
[
"NAS-PINN: NEURAL ARCHITECTURE SEARCH-GUIDED PHYSICS-INFORMED NEURAL NETWORK FOR SOLVING PDES A PREPRINT",
"NAS-PINN: NEURAL ARCHITECTURE SEARCH-GUIDED PHYSICS-INFORMED NEURAL NETWORK FOR SOLVING PDES A PREPRINT"
]
| [
"Yifan Wang ",
"Linlin Zhong [email protected] ",
"\nSchool of Electrical Engineering\nSoutheast University No\n\n",
"\nSipailou\n210096NanjingJiangsu ProvinceP. R. China\n"
]
| [
"School of Electrical Engineering\nSoutheast University No\n",
"Sipailou\n210096NanjingJiangsu ProvinceP. R. China"
]
| []
| Physics-informed neural network (PINN) has been a prevalent framework for solving PDEs since proposed. By incorporating the physical information into the neural network through loss functions, it can predict solutions to PDEs in an unsupervised manner. However, the design of the neural network structure basically relies on prior knowledge and experience, which has caused great trouble and high computational overhead. Therefore, we propose a neural architecture search-guided method, namely NAS-PINN, to automatically search the optimum neural architecture for solving certain PDEs. By relaxing the search space into a continuous one and utilizing masks to realize the addition of tensors in different shapes, NAS-PINN can be trained through a bi-level optimization, where the inner loop optimizes the weights and bias of neural networks and the outer loop the architecture parameters. We verify the ability of NAS-PINN by several numerical experiments including Poisson, Burgers, and Advection equations. The characteristics of effective neural architectures for solving different PDEs are summarized, which can be used to guide the design of neural networks in PINN. It is found that more hidden layers do not necessarily mean better performance and sometimes can be harmful. Especially for Poisson and Advection, a shallow neural network with more neurons is more appropriate in PINNs. It is also indicated that for complex problems, neural networks with residual connection can improve the performance of PINNs.
| null | [
"https://export.arxiv.org/pdf/2305.10127v1.pdf"
]
| 258,741,226 | 2305.10127 | 90da76991168c4ca8ff9221ce5addea0d4387296 |
NAS-PINN: NEURAL ARCHITECTURE SEARCH-GUIDED PHYSICS-INFORMED NEURAL NETWORK FOR SOLVING PDES A PREPRINT
May 15, 2023
Yifan Wang
Linlin Zhong [email protected]
School of Electrical Engineering
Southeast University No
Sipailou
210096, Nanjing, Jiangsu Province, P. R. China
Physics-informed neural network (PINN) has been a prevalent framework for solving PDEs since proposed. By incorporating the physical information into the neural network through loss functions, it can predict solutions to PDEs in an unsupervised manner. However, the design of the neural network structure basically relies on prior knowledge and experience, which has caused great trouble and high computational overhead. Therefore, we propose a neural architecture search-guided method, namely NAS-PINN, to automatically search the optimum neural architecture for solving certain PDEs. By relaxing the search space into a continuous one and utilizing masks to realize the addition of tensors in different shapes, NAS-PINN can be trained through a bi-level optimization, where the inner loop optimizes the weights and bias of neural networks and the outer loop the architecture parameters. We verify the ability of NAS-PINN by several numerical experiments including Poisson, Burgers, and Advection equations. The characteristics of effective neural architectures for solving different PDEs are summarized, which can be used to guide the design of neural networks in PINN. It is found that more hidden layers do not necessarily mean better performance and sometimes can be harmful. Especially for Poisson and Advection, a shallow neural network with more neurons is more appropriate in PINNs. It is also indicated that for complex problems, neural networks with residual connection can improve the performance of PINNs.

* This paper is currently under consideration by a journal.
INTRODUCTION
PDEs are ubiquitous in both theoretical and practical science, including electromagnetism, fluid mechanics, finance and many other fields [1]. This naturally raises the problem of solving PDEs, which is complex and cumbersome. Since most PDEs have no analytical solutions, many numerical methods have been proposed to obtain approximate numerical solutions. Traditional numerical methods such as the finite difference method, the finite element method and the finite volume method basically discretize equations and computational domains into meshes and then obtain numerical solutions in discrete form. Although these traditional methods can attain high precision and have rigorous mathematical foundations, the computational cost increases exponentially with the dimension of the PDE, leading to the curse of dimensionality.
In recent years, the rapid progress of deep learning has sparked a research trend of solving PDEs with deep neural networks (DNN). The practice of solving PDEs with DNNs is supported by the universal approximation theorem [2], which states that DNNs can theoretically approximate any continuous function. Generally, there are two mainstream deep learning approaches to solving PDEs: learning of neural operators and using PDEs as constraints [3]. The former is represented by a series of works based on DeepONet [4] and Fourier Neural Operator (FNO) [5]. This kind of method requires a considerable amount of numerical results as training data and, once trained, the model can handle a family of PDEs with the same form but different equation parameters. However, the training procedure is completely data-driven, neglecting the physical information governed by PDEs. The high computational cost caused by the training data collection and the network parameter optimization is also a problem.

The iconic work of the latter is physics-informed neural network (PINN), which is the research of interest in this paper. PINN, first proposed by Raissi et al. [6], is a framework which embeds the physical information into the neural network by defining an appropriate loss function. The framework leverages the automatic differentiation (AD) [7] feature of deep learning to simplify the computation of partial derivatives and has proven effective in solving PDEs and discovering PDE parameters. Since proposed, PINN has rapidly gained attention from researchers. To solve PDEs in discrete or irregular computational domains, cPINN [8] and XPINN [9], characterized by computational domain decomposition, were proposed. In the original PINN framework, boundary conditions and initial conditions are softly constrained by defined loss functions. To guarantee the constraints, boundary and initial conditions in simple forms can be explicitly encoded into the neural network. Going one step further, the penalty-free neural network (PFNN) [10] employs two independent neural networks for Dirichlet and Neumann boundary conditions respectively, successfully encoding boundary conditions in complicated forms into the model. The gradient of the loss function is another research area of interest. The gradient-optimized PINN [11] and the gradient-enhanced PINN [12] both focus on the gradient information. The former aims to smooth the gradient distribution for faster convergence, and the latter incorporates the gradient information into the loss function for better performance. The AD technique used in PINN, which is fully dependent on values but irrelevant to spatial information, has also attracted much attention. Xiang et al. [13] and Chiu et al. [14] introduced spatial information into the model by substituting AD with numerical differentiation (ND) in different forms and successfully ensured that the solution obtained by the model complies with physical laws. Similar works can be found in [15], which introduced radial basis function finite difference to replace AD. Refs. [16], [17] and [18] discussed the sampling strategy of PINN by adjusting the distribution of sampling points and saw a significant improvement.

However, few works have been found to look into the design of the neural network structure in PINN. The design of neural networks is a significant topic in deep learning and can substantially affect the performance. Up until now, the neural network in PINN has generally been designed on the basis of prior knowledge and experience and has followed similar routines, which is to construct a neural network with 4 to 6 hidden layers and with the same number of neurons for each hidden layer [6, 19]. The relationship between the neural network architecture and the performance of PINN has been studied in several works [20-22], but these efforts were still fragmentary and time-consuming.
Neural architecture search (NAS) is a kind of algorithm to search for the optimum neural network architecture in a specific search space [23]. The traditional NAS algorithm builds architectures by permutations of neural network modules, trains and tests these architectures to determine their performance, and then chooses the best neural network architecture based on the performance ranking. Such a discrete process struggles with the problem of low efficiency and high computational costs. Progressive neural architecture search (PNAS) is a technique devised by Liu et al. [24]. By gradually discarding structures with poor performance during the search phase, PNAS progressively shrinks the search space and successfully improves the efficiency. With the same purpose of reducing the number of parameters and enhancing efficiency, Pham et al. [25] proposed efficient neural architecture search (ENAS), which constructs a hypernetwork for parameter sharing. Similar works can be found in [26] and [27]. Furthermore, Liu et al. [28] led the study of differentiable NAS by proposing differentiable architecture search (DARTS), which relaxes the discrete search space into a continuous one.

This paper proposes a neural architecture search-guided physics-informed neural network (NAS-PINN) by incorporating NAS into the framework of PINN. We realize the automatic search of the best neural architecture for solving a given PDE with a modest quantity of data. Masks are utilized for tensor addition to help search for different numbers of neurons in each layer. The effectiveness of the proposed method is demonstrated by numerical experiments on a range of PDEs. By analyzing the numerical results, the characteristics of efficient neural architectures are summarized for guiding the further research on PINNs.
The rest of the paper is organized as follows. In Section 2, the general form of PDEs is provided. The method of NAS-PINN is described in detail in Section 3. Section 4 presents the numerical results of different PDEs, including Poisson, Burgers and Advection equations. Finally, the work is concluded in Section 5.
PROBLEM STATEMENT
The general form of PDEs can be expressed as:
$$\begin{aligned} &u(t, x)_t + \mathcal{F}(x, u(t, x)) = f(t, x), && x \in \Omega,\ t \in [0, T],\\ &\mathcal{B}(u) = b(t, x), && x \in \partial\Omega,\\ &\mathcal{I}(u) = i(t, x), && t = 0. \end{aligned} \tag{1}$$
where u(t, x) is the latent solution to be determined, u(t, x)_t is the temporal derivative, F(·) is the linear or nonlinear spatial differential operator containing possible orders of spatial derivatives, f(t, x) is the source term, B(·) is the boundary operator calculating boundary values, b(t, x) is the boundary condition, I(·) is the initial operator calculating initial values, i(t, x) is the initial condition, Ω is the computational domain and ∂Ω is its boundary.
By applying NAS-PINN, we will solve PDEs in the form above and discuss the characteristics of efficient neural architectures for different PDEs in the following sections.
METHOD
Physics-Informed Neural Network (PINN)
PINN, first proposed by Raissi et al. [6], is a neural network framework for solving PDEs and discovering PDE parameters by designing an appropriate loss function based on the equation to constrain the network. Here, we focus on the issue of solving PDEs, and a basic framework of PINN is displayed in Figure 1.
Considering a PDE in the form of Eq. (1), the inputs of PINN are the spatial coordinates x and temporal coordinates t of training points called collocation points, and the output is the predicted solution û. By properly designing the loss function and minimizing it through a certain optimization algorithm, e.g. stochastic gradient descent (SGD), Adam [29], L-BFGS [30] or other variants, the output will finally satisfy Eq. (1) when the network successfully converges. According to Eq. (1), the loss function can be defined as follows:
$$\mathrm{Loss} = \omega_F L_F + \omega_B L_B + \omega_I L_I \tag{2}$$

$$L_F = \frac{1}{N_F} \sum_{i=1}^{N_F} l\big(\hat{u}(t_i, x_i)_t + \mathcal{F}(x_i, \hat{u}(t_i, x_i)) - f(t_i, x_i)\big) \tag{3}$$

$$L_B = \frac{1}{N_B} \sum_{i=1}^{N_B} l\big(\mathcal{B}(\hat{u}_i) - b(t_i, x_i)\big) \tag{4}$$

$$L_I = \frac{1}{N_I} \sum_{i=1}^{N_I} l\big(\mathcal{I}(\hat{u}_i) - i(t_i, x_i)\big) \tag{5}$$
where ω_F, ω_B and ω_I are the weighting factors for the different parts of the loss function, N_F, N_B and N_I are the numbers of collocation points in the computational domain, on the boundary and in the initial domain respectively, and l(·) is a certain metric function, usually selected as the L2 norm or one of its variants.
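To make Eqs. (2)-(5) concrete, a minimal PyTorch sketch is given below; it assumes l(·) is the squared error, a plain Dirichlet boundary operator B(·), and a user-supplied `residual` implementing the PDE residual of Eq. (3). The function and argument names are ours, not the authors' code:

```python
import torch

def grad(u, x):
    """du/dx via automatic differentiation, keeping the graph for higher orders."""
    return torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u),
                               create_graph=True)[0]

def pinn_loss(net, residual,
              t_f, x_f,            # collocation points in the domain
              t_b, x_b, b_vals,    # boundary points and boundary values
              t_i, x_i, i_vals,    # initial points and initial values
              w_f=1.0, w_b=1.0, w_i=1.0):
    """Weighted PINN loss of eqs. (2)-(5) with l(.) taken as the squared error.
    `residual(u, t, x)` should return u_t + F(x, u) - f(t, x), built via `grad`."""
    t_f = t_f.requires_grad_(True)
    x_f = x_f.requires_grad_(True)
    u_f = net(torch.cat([t_f, x_f], dim=1))
    L_F = residual(u_f, t_f, x_f).pow(2).mean()                        # eq. (3)
    L_B = (net(torch.cat([t_b, x_b], dim=1)) - b_vals).pow(2).mean()   # eq. (4), Dirichlet B(.)
    L_I = (net(torch.cat([t_i, x_i], dim=1)) - i_vals).pow(2).mean()   # eq. (5)
    return w_f * L_F + w_b * L_B + w_i * L_I                           # eq. (2)
```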
The PINN framework has proven able to solve PDEs in a variety of circumstances, yet the design of the neural network has not received enough attention. For the remainder of this paper, we will focus on the neural architecture by leveraging NAS.
Differentiable NAS
The number of neural network layers is usually fixed in conventional NAS algorithms, and specific selections of operations are provided for each layer. Such a configuration makes the search space discontinuous, so it cannot be optimized through gradient-based methods, which greatly restricts the convergence speed and efficiency of the algorithm [31].
Liu et al. [28] proposed DARTS and introduced the concept of differentiable NAS. Let O be a set composed of candidate operations, any of which represents a certain function o(x) for the input x. By applying a relaxation to candidate operations, the search space can be made continuous:
$$\bar{o}^{(i,j)}(x) = \sum_{o \in \mathcal{O}} \frac{\exp\big(\alpha_o^{(i,j)}\big)}{\sum_{o' \in \mathcal{O}} \exp\big(\alpha_{o'}^{(i,j)}\big)}\, o(x) \tag{6}$$
where $\bar{o}^{(i,j)}(x)$ is the mixed operation between the i-th layer and the j-th layer after relaxation, and $\alpha_o^{(i,j)}$ is the weight of operation o. The discrete process of testing and comparing all possible operation combinations can now be simplified to learning a set of suitable weights $\alpha_o^{(i,j)}$ by a gradient-based optimization method. When the algorithm converges, the relaxed search space can be extracted into a discrete neural architecture by selecting the candidate operation with the highest weight.
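A minimal PyTorch sketch of the relaxation in Eq. (6) could look as follows; the class and attribute names are illustrative assumptions, not the DARTS reference implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Continuous relaxation of eq. (6): a softmax-weighted sum of the
    candidate operations between two layers."""
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)
        self.alpha = nn.Parameter(torch.zeros(len(ops)))  # architecture weights

    def forward(self, x):
        w = F.softmax(self.alpha, dim=0)                  # softmax over operations
        return sum(wi * op(x) for wi, op in zip(w, self.ops))
```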
Since the framework above and the basic idea of NAS were first proposed in the field of computer vision, they mostly concentrate on convolutional neural networks, which consist of convolutional layers with different kernel sizes and different pooling layers [23]. However, in the context of PINN, applying dense neural networks (DNNs) is the common practice. Therefore, our primary goal in this work is to determine the architecture of DNNs in PINNs, i.e. the number of layers and the number of neurons in each layer.
Masks
Although Eq. (6) relaxes the search space into a continuous one, tensor operations only allow tensors of the same shape to be added, making the search for the number of neurons impractical, as shown in Figure 2(a). Inspired by the zero-padding in convolutional neural networks, we can pad neurons to the maximum number k [32], as shown in Figure 2(b). In Figure 2(c), by multiplying the padded neurons by one-zero tensor masks, we deactivate the extra neurons to simulate different numbers of neurons. Finally, by sharing weights, the optional hidden layers can be reduced to one, and the output y can be expressed as:
$$y = \sigma(w \cdot x + b) \cdot \left( [g_1, g_2, g_3] \times \begin{bmatrix} \mathrm{mask}_1 \\ \mathrm{mask}_2 \\ \mathrm{mask}_3 \end{bmatrix} \right) \tag{7}$$
where σ(·) is the activation function, w and b are the weights and bias of a single hidden layer, g_i is a scalar weight for each candidate number of neurons, and mask_i is the corresponding mask of shape 1 × k. Supposing the number of neurons is j, the first j elements of the mask are 1 and the remaining (k − j) elements are 0. To determine the number of layers, we introduce the identity transformation as an operation, which represents skipping the layer, and the output y becomes:
$$y = a_1 \cdot x + a_2 \cdot \sigma(w \cdot x + b) \cdot \left( [g_1, g_2, g_3] \times \begin{bmatrix} \mathrm{mask}_1 \\ \mathrm{mask}_2 \\ \mathrm{mask}_3 \end{bmatrix} \right) \tag{8}$$
where a_1 is the weight of the identity transformation (i.e., skipping this layer) and a_2 is the weight for reserving this layer.
Eq. (8) gives the mapping relation between the input and output of each layer; by applying it repeatedly, we can build a DNN model in which the most suitable layers are selected according to the weights a and the most suitable number of neurons in each layer is decided by the weights g. Here, we collectively denote a and g as α, and w and b as θ.
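Putting Eqs. (7) and (8) together, a single searchable hidden layer can be sketched in PyTorch as below. We assume softmax-normalised weights for g and a (in the spirit of Eq. (6)), inputs already lifted to the maximum width k so the identity branch is shape-compatible, and a tanh activation; these are illustrative choices rather than the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SearchableLayer(nn.Module):
    """One shared hidden layer of maximum width k whose effective neuron count
    is chosen by one-zero masks weighted by g (eq. (7)), plus an identity
    branch weighted by a1 that lets the search skip the layer (eq. (8))."""
    def __init__(self, k, widths, act=torch.tanh):
        super().__init__()
        assert all(j <= k for j in widths)
        self.linear = nn.Linear(k, k)                 # shared weights w, bias b
        self.act = act
        # each mask: first j entries one, remaining k - j entries zero
        masks = torch.stack([torch.cat([torch.ones(j), torch.zeros(k - j)])
                             for j in widths])
        self.register_buffer("masks", masks)          # shape (len(widths), k)
        self.g = nn.Parameter(torch.zeros(len(widths)))  # neuron-count weights
        self.a = nn.Parameter(torch.zeros(2))            # (skip, keep) weights

    def forward(self, x):
        a = F.softmax(self.a, dim=0)
        g = F.softmax(self.g, dim=0)
        h = self.act(self.linear(x)) * (g @ self.masks)  # eq. (7) mixture
        return a[0] * x + a[1] * h                       # eq. (8)
```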
NAS-PINN
Now we can present the whole framework of NAS-PINN, shown in Figure 3, which can be considered a bi-level optimization problem. In the inner loop, the weights and bias θ of the DNN are optimized, while in the outer loop, the optimization objective is to find the best α. The process can be expressed as:
$$\min_{\alpha}\ MSE(\theta^*, \alpha) \quad \text{s.t.}\quad \theta^* = \arg\min_{\theta}\ \mathrm{Loss}(\theta, \alpha) \tag{9}$$
The loss function for the inner loop can be designed as in Eqs. (2)-(5), and the loss function for the outer loop can be written as:
$$MSE = \frac{1}{n} \sum_{i=1}^{n} (\hat{u} - u)^2 \tag{10}$$
where u is the known analytical or numerical solution and n is the number of data points; for the outer loop, the required n can be rather small.
Such a bi-level optimization problem can be solved through alternate optimization, and the corresponding process is demonstrated in Algorithm 1. When the training ends, a discrete neural network model can be derived according to α. Basically, we first determine whether to skip a certain layer by comparing a_1 and a_2. If the layer is reserved, we then decide the number of neurons based on g. If a certain layer is skipped, there is no need to investigate its weights g.
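A minimal PyTorch sketch of this alternate optimization (cf. Algorithm 1), assuming the inner-loop physics loss of Eq. (2) and the outer-loop MSE of Eq. (10) are supplied as callables; the optimizer choices and learning rates are illustrative placeholders, not the authors' settings:

```python
import torch

def train_nas_pinn(net, arch_params, net_params, pinn_loss_fn, mse_fn,
                   n_outer=1000, n_inner=10, lr_theta=1e-3, lr_alpha=1e-3):
    """Alternate optimization: inner steps update the network weights theta on
    the physics loss of eq. (2); every n_inner epochs the architecture weights
    alpha are updated on the small labelled MSE of eq. (10)."""
    opt_theta = torch.optim.Adam(net_params, lr=lr_theta)
    opt_alpha = torch.optim.Adam(arch_params, lr=lr_alpha)
    for epoch in range(1, n_outer + 1):
        opt_theta.zero_grad()
        loss = pinn_loss_fn(net)          # eq. (2), on collocation points
        loss.backward()
        opt_theta.step()
        if epoch % n_inner == 0:
            opt_alpha.zero_grad()
            mse = mse_fn(net)             # eq. (10), on a few labelled points
            mse.backward()
            opt_alpha.step()
```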
In some cases where a_1 and a_2 are relatively close to each other, we assume that skipping the layer and reserving the layer are equally important, and we offer a mixed model. In a mixed model, the layers are combinations of identity transformations and neural network operations. The number of neurons is decided in the same way as in a discrete model, so these layers can be expressed as:
$$y = a_1 \cdot x + a_2 \cdot \sigma(w \cdot x + b) \cdot (g_{\max} \times \mathrm{mask}_{\max})^{T} \tag{11}$$
where g_max is the maximum among all weights g, and mask_max is the one-zero tensor mask corresponding to g_max.
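For illustration, the derivation of a discrete (or mixed) architecture from the trained weights can be sketched as follows, assuming layers shaped like the searchable-layer sketch in Section 3.3 (attributes a, g and masks) and a hypothetical threshold eps:

```python
import torch.nn.functional as F

def derive_architecture(layers, eps=0.6):
    """Extract a discrete plan from trained architecture weights (a sketch).
    `eps` is a hypothetical threshold: layers whose softmaxed a1 and a2 are
    both below it are kept as mixed (residual) layers, cf. eq. (11)."""
    plan = []
    for layer in layers:
        a = F.softmax(layer.a, dim=0)            # (skip, keep) weights
        g = F.softmax(layer.g, dim=0)            # neuron-count weights
        width = int(layer.masks[g.argmax()].sum().item())
        if a[0] < eps and a[1] < eps:
            plan.append(("mixed", width))        # residual layer, eq. (11)
        elif a[1] > a[0]:
            plan.append(("dense", width))        # reserve the layer
        # otherwise the layer is skipped entirely
    return plan
```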
NUMERICAL EXPERIMENTS
In this section, we consider a range of PDEs to test the proposed NAS-PINN and try to find out the characteristics of efficient neural architectures for solving PDEs. Poisson equation, Burgers equation and Advection equation are considered in this work.
Poisson equation
The Poisson equation is a class of basic PDEs describing electromagnetic and thermal fields, widely applied in electromagnetism, mechanical engineering, etc. Here, we consider a 2-D Poisson equation with Dirichlet boundary condition:
$$\begin{aligned} &\Delta\varphi(x, y) = -2\pi^2 \cos(\pi x)\cos(\pi y), && x, y \in \Omega\\ &\varphi(x, y) = \cos(\pi x)\cos(\pi y), && x, y \in \partial\Omega \end{aligned} \tag{12}$$
This equation can be analytically solved:
$$\varphi(x, y) = \cos(\pi x)\cos(\pi y) \tag{13}$$
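As a quick sanity check, one can verify symbolically that Eq. (13) satisfies Eq. (12), e.g. with SymPy:

```python
import sympy as sp

x, y = sp.symbols("x y")
phi = sp.cos(sp.pi * x) * sp.cos(sp.pi * y)               # eq. (13)
laplacian = sp.diff(phi, x, 2) + sp.diff(phi, y, 2)
rhs = -2 * sp.pi**2 * sp.cos(sp.pi * x) * sp.cos(sp.pi * y)
assert sp.simplify(laplacian - rhs) == 0                  # eq. (12) holds
```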
We first consider the Poisson equation in a square computational domain to verify the effectiveness of the proposed NAS-PINN. We construct a relatively small search space, which is a neural network with up to 5 hidden layers and 30, 50 or 70 neurons in each layer. Every possible neural architecture in the discrete search space is trained and tested respectively. Then we use NAS-PINN to search for a neural architecture and investigate whether it is the best one. All 363 architectures in the discrete search space are trained with 500 collocation points randomly sampled in the domain and 100 boundary points uniformly distributed on the boundary. For the architecture search phase, 1000 collocation points and 200 boundary points are sampled by the same strategies as before to search for the best neural architecture. The obtained neural architecture is then trained from scratch in the same way as the 363 architectures. The predicted solutions and error distributions of the best three architectures are shown in Figure 4, and the L2 errors are listed in Table 1. All experiments are repeated 5 times, and the L2 errors are obtained as the average of the 5 repetitions. The architectures are described as sequences in Table 1. The first and the last elements of a sequence stand for the input and output channels, while the other elements give the number of neurons in each hidden layer. For example, the input of architecture No. 98 has size n × 2, where n is the batch size and 2 stands for the coordinates x and y, and its first hidden layer has 70 neurons. The architecture obtained through NAS-PINN is No. 358.
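As a side note, a network given by such a sequence can be assembled in a few lines of PyTorch; the tanh activation is an assumption for illustration:

```python
import torch.nn as nn

def build_pinn_net(seq, act=nn.Tanh):
    """Assemble a dense network from a sequence such as [2, 50, 70, 1]: the
    first and last entries are the input/output channels, the rest are the
    hidden-layer widths."""
    layers = []
    for i in range(len(seq) - 1):
        layers.append(nn.Linear(seq[i], seq[i + 1]))
        if i < len(seq) - 2:               # no activation after the output layer
            layers.append(act())
    return nn.Sequential(*layers)

net = build_pinn_net([2, 50, 70, 1])       # architecture No. 358 (NAS-PINN)
```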
From Table 1 and Figure 4, we can clearly see that the neural architecture found by NAS-PINN has the smallest L2 error and the smallest maximum error value, and its error distribution is improved compared to the other architectures as well. Therefore, the proposed NAS-PINN does find the best neural architecture in the given search space. Besides, although architecture No. 98 also shows relatively good performance (it is one of the best three among the 363 possible architectures), it has many more parameters than the architecture found by NAS-PINN, which indicates that more parameters do not necessarily mean better performance, and an appropriately designed neural architecture appears to be particularly important. Furthermore, the common belief that a deeper neural network is always better does not seem to hold in all circumstances in PINNs. At least for the given Poisson equation, a shallow but wide neural network (a neural network with fewer hidden layers but more neurons in each layer) prevails over the deep ones.
Poisson equation in irregular computational domains
To demonstrate the adaptability of NAS-PINN to irregular computational domains, we further inspect the Poisson equation in Eq. (12) in different computational domains, which include circular, L-shaped and flower-shaped domains. Specifically, the circular domain is a circle with a center at (0.5, 0.5) and a radius of 0.5. The L-shaped domain is the difference set of two squares. The lower left corners of these two squares are at (0, 0) and (1, 1) respectively, while their upper right corners are both at (2, 2).
For irregular computational domains, the search space is set to be a neural network with up to 7 hidden layers, and the number of neurons in each layer can be selected from 10 to 110 in increments of 20. As irregular computational domains are rather complex, the architecture search phase adopts 2500 collocation points and 500 boundary points. To compare with the neural architecture found by NAS-PINN, we manually select three reference neural architectures from the search space based on experience: a) Architecture Giant, the neural network with the most parameters in the search space (7 layers and 110 neurons per layer); b) Architecture Dumpy, a shallow (2 hidden layers) neural network with the most neurons per layer (110 neurons per layer); c) Architecture Slender, a neural network with the most hidden layers (7 layers) but the fewest neurons per layer (10 neurons per layer). The three reference architectures as well as the NAS-PINN-searched architecture are trained from scratch using 500 collocation points and 100 boundary points. The predicted solutions and error distributions in different computational domains are displayed in Figures 5, 6 and 7, and Table 2 gives the corresponding L2 errors. All the experiments are repeated 5 times and the average values are then calculated.
As expected, irregular computational domains are more challenging to deal with than regular domains, which gives rise to an interesting feature: the weights a_1 and a_2 are close to each other. We interpret this feature as a preference for residual structures. As Eq. (11) shows, if a_1 and a_2 are almost at the same level, the mapping relation becomes a summation of an identity transformation and a neural network operation [33]. Therefore, we reserve the residual structures when a_1 and a_2 are both smaller than a threshold ε, and we call such layers mixed layers. Mixed layers are expressed in parentheses in Table 2.
Burgers equation
Burgers equation, as an important part of fluid mechanics and gas dynamics, describes the process of propagation and reflection of shock waves. Here, a time-varying 1-D Burgers equation with periodic boundary condition is considered:
$$\begin{aligned} &u_t + u u_x - (\upsilon/\pi)\, u_{xx} = 0, && x \in [-1, 1],\ t \in [0, 1]\\ &u(0, x) = -\sin(\pi x)\\ &u(t, -1) = u(t, 1) = 0 \end{aligned} \tag{14}$$

where υ is the diffusion coefficient.
The Burgers equation with different values of υ is investigated, and the reference solutions are obtained through the Chebyshev spectral method [34]. The search space and the reference neural architectures are kept the same as those in Section 4.2. In the architecture search phase, we uniformly take 21 points along the t-axis and 250 points along the x-axis. The same points are used to train all the neural architectures from scratch. Table 3 and Figures 8 and 9 give the results of Burgers equations with different diffusion coefficients υ. The experiments are repeated 5 times and the average values are then calculated.
It is found that the predictions of the NAS-PINN-searched architectures are more accurate than those of the reference architectures, and this advantage is especially evident when υ = 0.1. The NAS-PINN-searched architectures generally have 3 or 4 hidden layers, far fewer than the maximum number, further demonstrating that deeper neural networks do not necessarily produce better results, and that a best number of hidden layers does exist for a certain problem. Meanwhile, the NAS-PINN-searched architectures prefer different numbers of neurons for each hidden layer, an option commonly neglected in experience-based neural network design. The comparison with the reference architectures also indicates that for Burgers equations, more hidden layers may be more critical than more neurons, as Architecture Slender acquires equivalent or even better results compared to Architecture Giant.
Advection equation
Advection is one of the most significant processes in atmospheric motion and is basically described by the Advection equation. Here, we consider a 1-D Advection equation:
$$\begin{aligned} &u_t + \beta u_x = 0, && x \in (0, 1),\ t \in (0, 2]\\ &u(0, x) = 0.8 \sin(4\pi x + \pi/4) \end{aligned} \tag{15}$$

where β is the advection speed. This equation has an analytical solution [19]:
$$u(t, x) = 0.8 \sin\big[4\pi(x - \beta t) + \pi/4\big] \tag{16}$$
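For reference, Eq. (16) and the error metric can be sketched in NumPy; the exact L2-error definition used in the tables is not spelled out in the text, so the relative form below is an assumption:

```python
import numpy as np

def advection_exact(t, x, beta):
    """Analytical solution of the Advection equation, eq. (16)."""
    return 0.8 * np.sin(4.0 * np.pi * (x - beta * t) + np.pi / 4.0)

def rel_l2_error(u_pred, u_true):
    """Relative L2 error (assumed definition of the tables' metric)."""
    return np.linalg.norm(u_pred - u_true) / np.linalg.norm(u_true)
```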
The Advection equations with different values of β are investigated. The search space and the reference neural architectures are the same as above. 40 points along the t-axis and 120 points along the x-axis are uniformly taken for the architecture search phase. The same points are used to train all the neural architectures from scratch. Table 4 and Figures 10 and 11 show the results of Advection equations with different advection speeds β, and the average values are taken from 5 independent experiments.
The NAS-PINN-searched neural architectures always achieve the best results, and an obvious pattern emerges: the smaller β is, the deeper the neural network becomes. This pattern also holds for the reference architectures, in that Architecture Dumpy performs best when β = 1 and Architecture Giant performs best when β = 0.1. Architecture Slender almost fails when β = 1, but its results improve as β decreases; when β = 0.1, Architecture Slender even obtains results equivalent to the other two reference architectures. However, the L2 errors indicate that the equation is more difficult to solve when β is larger. This reminds us that a complicated problem does not always need a deep neural network, while a relatively simple problem may require more parameters to solve, and this phenomenon further emphasizes the value of NAS-PINN. Besides, similar to the results of the Burgers equations, when a relatively deep neural network is constructed, different numbers of neurons are preferred.
2-D Burgers equation
To explore the performance and characteristics on high dimensional problems, we consider a time-varying 2-D Burgers equation [35]. As the network takes (t, x) together as input, it can be reckoned as a 3-D problem here:

$$\begin{aligned} &u_t + u\,(u_x + u_y) = 0.1\,(u_{xx} + u_{yy}), && (x, y) \in [0, 1] \times [0, 1],\ t \in [0, 2]\\ &u(0, x, y) = \frac{1}{1 + \exp\left(\frac{x + y}{0.2}\right)}\\ &u(t, x_b, y_b) = \frac{1}{1 + \exp\left(\frac{x_b + y_b - t}{0.2}\right)} \end{aligned} \tag{17}$$

where (x_b, y_b) denotes points on the boundary. This equation can be analytically solved:
$$u(t, x, y) = \frac{1}{1 + \exp\left(\frac{x + y - t}{0.2}\right)} \tag{18}$$
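As in the Poisson case, one can verify symbolically that Eq. (18) satisfies Eq. (17), e.g. with SymPy:

```python
import sympy as sp

t, x, y = sp.symbols("t x y")
u = 1 / (1 + sp.exp((x + y - t) / sp.Rational(1, 5)))     # eq. (18), 0.2 = 1/5
lhs = sp.diff(u, t) + u * (sp.diff(u, x) + sp.diff(u, y))
rhs = sp.Rational(1, 10) * (sp.diff(u, x, 2) + sp.diff(u, y, 2))
assert (lhs - rhs).equals(0)                              # eq. (17) holds
```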
The search space and the reference neural architectures are the same as above. In both the architecture search phase and the training phase, we uniformly take 20 points along the t-axis and 25 points along each of the x-axis and y-axis. Table 5 lists the corresponding L2 errors, and Figures 12-14 show the results of the 2-D Burgers equation in three time slices, t = 0, 1 and 2, respectively. Again, the average values are taken from 5 independent experiments. The architecture obtained by NAS-PINN achieves the minimum L2 error and the best error distributions. Compared to Architectures Dumpy and Slender, the NAS-PINN-searched architecture raises the accuracy by roughly an order of magnitude. The results show that the proposed NAS-PINN adapts well to three-dimensional problems. Similar to the conclusions for the Burgers equations in Section 4.3, having more layers is more crucial for good performance than having more neurons. However, there exists a most suitable number of hidden layers, and too many layers can make the performance deteriorate. Again, it is recommended to vary the number of neurons across layers rather than sticking to the conventional wisdom of keeping the number of neurons constant.
CONCLUSIONS
In this paper, we propose a neural architecture search-guided method, namely NAS-PINN, to automatically search the best neural architecture for solving given PDEs. By constructing the mixed operation and introducing the masks to realize the addition of tensors in different shapes, the architecture search problem can be relaxed into a continuous, bi-level optimization problem. It can search for the most suitable number of hidden layers and number of neurons for each layer in the given search space, and construct the best neural architecture for a given problem.
Through various numerical experiments, we verify the effectiveness of NAS-PINN and show its strong adaptability to irregular computational domains and high dimensional problems. The results further prove that more hidden layers do not necessarily mean better performance, and sometimes more hidden layers can even be harmful. For Poisson and Advection equations, a relatively shallow neural network with more neurons in each layer can prevail over a deep one, which runs counter to common intuition. Regardless of the input dimension, having more layers appears to be crucial to solving Burgers equations, even when the number of neurons is fairly small. The numerical experiments also indicate that a neural network with different numbers of neurons for each layer is preferable to one whose hidden layers all have the same number of neurons. Furthermore, the proposed method can be easily applied to other PDEs and to exploring the characteristics of efficient neural architectures for solving those equations.
NAS-PINN has focused on DNNs so far, and based on the framework of NAS-PINN, the search for convolutional neural networks (CNNs) can be realized in the future. A selection between DNN and CNN can be made automatically as well. Besides, the threshold of whether to reserve mixed layers is worthy of further study.
Figure 1: The framework of PINN.
Figure 2: Masks for searching the number of neurons. (a) Tensors of different shapes cannot be added together. (b) Padding tensors with zero to yield tensors of the same shape. (c) Equivalent transformation by using one-zero tensor masks. (d) Distributive law of multiplication by sharing weights.
Figure 3: The framework of NAS-PINN.
Algorithm 1: NAS-PINN
1: create a DNN whose hidden layers are based on Eq. (8)
2: set the number of epochs n_outer for the outer loop and the number of inner loops n_inner in one outer loop
3: while epoch < n_outer do
4:     update θ using Loss expressed in Eq. (2)
5:     if epoch mod n_inner == 0 then
6:         update α using MSE expressed in Eq. (10)
7:     epoch = epoch + 1
8: derive the discrete neural architecture according to α
Figure 4: Poisson equation: The predicted solutions (a) and error distributions (b) of different neural architectures.
Figure 5: Poisson equation: The predicted solutions (a) and error distributions (b) in the circular computational domain.
Figure 6: Poisson equation: The predicted solutions (a) and error distributions (b) in the L-shaped computational domain.
Figure 7: Poisson equation: The predicted solutions (a) and error distributions (b) in the flower-shaped computational domain.
Figure 8: Burgers equation: The predicted solutions (a) and the error distributions (b) of different neural architectures (υ = 0.1).
Figure 9: Burgers equation: The predicted solutions (a) and the error distributions (b) of different neural architectures (υ = 0.04).
Figure 10: Advection equation: The predicted solutions (a) and the error distributions (b) of different neural architectures (β = 0.1).
Figure 11: Advection equation: The predicted solutions (a) and the error distributions (b) of different neural architectures (β = 0.4).
Figure 13: 2-D Burgers equation: The predicted solutions (a) and error distributions (b) of different neural architectures (t = 1).
Figure 14: 2-D Burgers equation: The predicted solutions (a) and error distributions (b) of different neural architectures (t = 2).
Table 1: Poisson equation: L2 errors of the best three architectures

  Architecture number    Layer sequence               L2 error
  No. 98                 [2, 70, 70, 30, 30, 50, 1]   1.57 × 10^−3
  No. 356                [2, 70, 30, 1]               5.11 × 10^−4
  No. 358 (NAS-PINN)     [2, 50, 70, 1]               4.46 × 10^−4
Table 2: Poisson equation: L2 errors in different computational domains

  Computational domain   Architecture   Layer sequence                      L2 error
  Circle                 NAS-PINN       [2, 110, (50, 50, 50, 30,) 1]       2.25 × 10^−7
  Circle                 Giant          [2, 110 × 7, 1]                     2.55 × 10^−6
  Circle                 Dumpy          [2, 110 × 2, 1]                     3.02 × 10^−7
  Circle                 Slender        [2, 10 × 7, 1]                      1.22 × 10^−5
  L-shaped               NAS-PINN       [2, 110, 110, (10,) 1]              2.05 × 10^−6
  L-shaped               Giant          [2, 110 × 7, 1]                     8.39 × 10^−6
  L-shaped               Dumpy          [2, 110 × 2, 1]                     3.38 × 10^−6
  L-shaped               Slender        [2, 10 × 7, 1]                      4.37 × 10^−4
  Flower-shaped          NAS-PINN       [2, 50, 70, (70, 70,) 70, 110, 1]   6.91 × 10^−6
  Flower-shaped          Giant          [2, 110 × 7, 1]                     8.97 × 10^−6
  Flower-shaped          Dumpy          [2, 110 × 2, 1]                     1.32 × 10^−5
  Flower-shaped          Slender        [2, 10 × 7, 1]                      1.85 × 10^−3
Table 3: Burgers equation: L2 errors with different diffusion coefficients υ

  υ          Architecture   Layer sequence              L2 error
  υ = 0.1    NAS-PINN       [2, 90, 50, 110, 1]         8.87 × 10^−7
  υ = 0.1    Giant          [2, 110 × 7, 1]             1.44 × 10^−6
  υ = 0.1    Dumpy          [2, 110 × 2, 1]             1.52 × 10^−6
  υ = 0.1    Slender        [2, 10 × 7, 1]              1.87 × 10^−6
  υ = 0.07   NAS-PINN       [2, 90, 70, 30, 110, 1]     1.41 × 10^−6
  υ = 0.07   Giant          [2, 110 × 7, 1]             2.15 × 10^−6
  υ = 0.07   Dumpy          [2, 110 × 2, 1]             4.94 × 10^−6
  υ = 0.07   Slender        [2, 10 × 7, 1]              2.35 × 10^−6
  υ = 0.04   NAS-PINN       [2, 110, 110, 70, 110, 1]   1.51 × 10^−6
  υ = 0.04   Giant          [2, 110 × 7, 1]             2.70 × 10^−6
  υ = 0.04   Dumpy          [2, 110 × 2, 1]             1.60 × 10^−5
  υ = 0.04   Slender        [2, 10 × 7, 1]              2.61 × 10^−6
Table 4: Advection equation: L2 errors with different advection speeds β

  β         Architecture   Layer sequence                    L2 error
  β = 1     NAS-PINN       [2, 110, 110, 1]                  1.49 × 10^−4
  β = 1     Giant          [2, 110 × 7, 1]                   6.08 × 10^−4
  β = 1     Dumpy          [2, 110 × 2, 1]                   1.49 × 10^−4
  β = 1     Slender        [2, 10 × 7, 1]                    2.67 × 10^−2
  β = 0.4   NAS-PINN       [2, 90, 90, 90, 90, 110, 1]       1.30 × 10^−6
  β = 0.4   Giant          [2, 110 × 7, 1]                   3.63 × 10^−6
  β = 0.4   Dumpy          [2, 110 × 2, 1]                   1.50 × 10^−6
  β = 0.4   Slender        [2, 10 × 7, 1]                    3.61 × 10^−5
  β = 0.1   NAS-PINN       [2, 110, 50, 50, 70, 30, 90, 1]   2.56 × 10^−6
  β = 0.1   Giant          [2, 110 × 7, 1]                   6.14 × 10^−6
  β = 0.1   Dumpy          [2, 110 × 2, 1]                   6.37 × 10^−6
  β = 0.1   Slender        [2, 10 × 7, 1]                    8.55 × 10^−6
Table 5: 2-D Burgers equation: L2 errors of different neural architectures

  Architecture   Layer sequence             L2 error
  NAS-PINN       [3, 110, 90, 70, 110, 1]   4.29 × 10^−8
  Giant          [3, 110 × 7, 1]            8.52 × 10^−8
  Dumpy          [3, 110 × 2, 1]            1.61 × 10^−7
  Slender        [3, 10 × 7, 1]             1.53 × 10^−7
J. H. Mathews and K. D. Fink. Numerical Methods Using MATLAB. Pearson Prentice Hall, Upper Saddle River, NJ, 2004.
K. Hornik, M. Stinchcombe, and H. White. Multilayer feedforward networks are universal approximators. Neural Networks, 2(5): 359-366, 1989.
S. Huang, W. Feng, C. Tang, and J. Lv. Partial Differential Equations Meet Deep Neural Networks: A Survey. arXiv preprint, arXiv:2211.05567, 2022.
L. Lu, P. Jin, G. Pang, Z. Zhang, and G. E. Karniadakis. Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nature Machine Intelligence, 3(3): 218-229, 2021.
Z. Li, N. B. Kovachki, K. Azizzadenesheli, B. Liu, K. Bhattacharya, A. Stuart, and A. Anandkumar. Fourier Neural Operator for Parametric Partial Differential Equations. In: International Conference on Learning Representations, pp. 1-16, 2021.
M. Raissi, P. Perdikaris, and G. E. Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378: 686-707, 2019.
A. G. Baydin, B. A. Pearlmutter, A. A. Radul, and J. M. Siskind. Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research, 18: 1-43, 2018.
A. D. Jagtap, E. Kharazmi, and G. E. Karniadakis. Conservative physics-informed neural networks on discrete domains for conservation laws: Applications to forward and inverse problems. Computer Methods in Applied Mechanics and Engineering, 365: 113028, 2020.
A. D. Jagtap and G. E. Karniadakis. Extended Physics-Informed Neural Networks (XPINNs): A Generalized Space-Time Domain Decomposition based Deep Learning Framework for Nonlinear Partial Differential Equations. In: AAAI Spring Symposium: MLPS, pp. 2002-2041, 2021.
H. Sheng and C. Yang. PFNN: A penalty-free neural network method for solving a class of second-order boundary-value problems on complex geometries. Journal of Computational Physics, 428: 110085, 2021.
J. Li, J. Chen, and B. Li. Gradient Optimized Physics-Informed Neural Networks (GOPINNs): A Deep Learning Method for Solving the Complex Modified KdV Equation. Nonlinear Dynamics, 107: 781-792, 2022.
J. Yu, L. Lu, X. Meng, and G. E. Karniadakis. Gradient-enhanced physics-informed neural networks for forward and inverse PDE problems. Computer Methods in Applied Mechanics and Engineering, 393: 114823, 2022.
Z. Xiang, W. Peng, W. Zhou, and W. Yao. Hybrid Finite Difference with the Physics-informed Neural Network for solving PDE in complex geometries. arXiv preprint, arXiv:2202.07926, 2022.
P. H. Chiu, J. C. Wong, C. Ooi, M. H. Dao, and Y. S. Ong. CAN-PINN: A Fast Physics-Informed Neural Network Based on Coupled-Automatic-Numerical Differentiation Method. Computer Methods in Applied Mechanics and Engineering, 395: 114909, 2022.
R. Sharma and V. Shankar. Accelerated Training of Physics Informed Neural Networks (PINNs) using Meshless Discretizations. In: Advances in Neural Information Processing Systems, 2022.
M. A. Nabian, R. J. Gladstone, and H. Meidani. Efficient training of physics-informed neural networks via importance sampling. Computer-Aided Civil and Infrastructure Engineering, 36(8): 962-977, 2021.
A. Daw, J. Bu, S. Wang, P. Perdikaris, and A. Karpatne. Rethinking the Importance of Sampling in Physics-informed Neural Networks. arXiv preprint, arXiv:2207.02338, 2022.
W. Peng, W. Yao, W. Zhou, X. Zhang, and W. Yao. Robust Regression with Highly Corrupted Data via Physics Informed Neural Networks. arXiv preprint, arXiv:2210.10646, 2022.
M. Takamoto, T. Praditia, R. Leiteritz, D. MacKinlay, F. Alesiani, D. Pfluger, and M. Niepert. PDEBench: An Extensive Benchmark for Scientific Machine Learning. Advances in Neural Information Processing Systems, 35: 1596-1611, 2022.
L. Zhong, Q. Gu, and B. Wu. Deep learning for thermal plasma simulation: Solving 1-D arc model as an example. Computer Physics Communications, 257: 107496, 2020.
T. G. Grossmann, U. J. Komorowska, J. Latz, and C. B. Schönlieb. Can Physics-Informed Neural Networks beat the Finite Element Method? arXiv preprint, arXiv:2302.04107, 2023.
L. Yuan, Y. Q. Ni, X. Y. Deng, and S. Hao. A-PINN: Auxiliary physics informed neural networks for forward and inverse problems of nonlinear integro-differential equations. Journal of Computational Physics, 462: 111260, 2022.
B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le. Learning transferable architectures for scalable image recognition. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 8697-8710, 2018.
C. Liu, B. Zoph, M. Neumann, J. Shlens, W. Hua, L. J. Li, L. Fei-Fei, A. Yuille, J. Huang, and K. Murphy. Progressive neural architecture search. In: European Conference on Computer Vision, pp. 19-34, 2018.
Y. Shu, W. Wang, and S. Cai. Understanding Architectures Learnt by Cell-based Neural Architecture Search. In: International Conference on Learning Representations, 2020.
G. Bender, P. J. Kindermans, B. Zoph, V. Vasudevan, and Q. Le. Understanding and simplifying one-shot architecture search. In: International Conference on Machine Learning (PMLR), pp. 550-559, 2018.
A. Brock, T. Lim, J. M. Ritchie, and N. J. Weston. SMASH: One-shot model architecture search through hypernetworks. In: International Conference on Learning Representations, 2018.
H. Liu, K. Simonyan, and Y. Yang. DARTS: Differentiable Architecture Search. In: International Conference on Learning Representations, 2018.
D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint, arXiv:1412.6980, 2014.
J. Nocedal. Updating quasi-Newton matrices with limited storage. Mathematics of Computation, 35(151): 773-782, 1980.
C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In: International Conference on Machine Learning (PMLR), pp. 1126-1135, 2017.
A. Wan, X. Dai, P. Zhang, Z. He, Y. Tian, S. Xie, B. Wu, M. Yu, T. Xu, and K. Chen. FBNetV2: Differentiable neural architecture search for spatial and channel dimensions. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12965-12974, 2020.
Deep residual learning for image recognition. K He, X Zhang, S Ren, J Sun, IEEE Conference on Computer Vision and Pattern Recognition. K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In: IEEE Conference on Computer Vision and Pattern Recognition, pp.770-778, 2016.
Chebfun guide. Ta, N Driscoll, L N Hale, Trefethen, Pafnuty PublicationsOxfordTA. Driscoll, N. Hale, and LN. Trefethen. Chebfun guide. Pafnuty Publications, Oxford, 2014.
DeLISA: Deep learning based iteration scheme approximation for solving PDEs. Y Li, Z Zhou, S Ying, Journal of Computational Physics. 451110884Y. Li, Z. Zhou, and S. Ying. DeLISA: Deep learning based iteration scheme approximation for solving PDEs. Journal of Computational Physics, 451: 110884, 2022.
| []
|
[
"Frame Flexible Network",
"Frame Flexible Network"
]
| [
"Yitian Zhang [email protected] \nNortheastern University\n\n",
"Yue Bai [email protected] \nNortheastern University\n\n",
"Chang Liu [email protected] \nNortheastern University\n\n",
"Huan Wang [email protected] \nNortheastern University\n\n",
"Sheng Li [email protected] \nUniversity of Virginia\n\n",
"Yun Fu [email protected] \nNortheastern University\n\n"
]
| [
"Northeastern University\n",
"Northeastern University\n",
"Northeastern University\n",
"Northeastern University\n",
"University of Virginia\n",
"Northeastern University\n"
]
| []
| Existing video recognition algorithms always conduct different training pipelines for inputs with different frame numbers, which requires repetitive training operations and multiplying storage costs. If we evaluate the model using other frames which are not used in training, we observe the performance will drop significantly (seeFig. 1), which is summarized as Temporal Frequency Deviation phenomenon. To fix this issue, we propose a general framework, named Frame Flexible Network (FFN), which not only enables the model to be evaluated at different frames to adjust its computation, but also reduces the memory costs of storing multiple models significantly. Concretely, FFN integrates several sets of training sequences, involves Multi-Frequency Alignment (MFAL) to learn temporal frequency invariant representations, and leverages Multi-Frequency Adaptation (MFAD) to further strengthen the representation abilities. Comprehensive empirical validations using various architectures and popular benchmarks solidly demonstrate the effectiveness and generalization of FFN (e.g., 7.08/5.15/2.17% performance gain at Frame 4/8/16 on Something-Something V1 dataset over Uniformer). Code is available at https://github.com/BeSpontaneous/FFN. | 10.48550/arxiv.2303.14817 | [
"https://export.arxiv.org/pdf/2303.14817v1.pdf"
]
| 257,767,176 | 2303.14817 | 3acca2ff4e8a808c524261cff4acc8bc21b16eea |
Frame Flexible Network
Yitian Zhang [email protected]
Northeastern University
Yue Bai [email protected]
Northeastern University
Chang Liu [email protected]
Northeastern University
Huan Wang [email protected]
Northeastern University
Sheng Li [email protected]
University of Virginia
Yun Fu [email protected]
Northeastern University
Frame Flexible Network
Existing video recognition algorithms always conduct different training pipelines for inputs with different frame numbers, which requires repetitive training operations and multiplying storage costs. If we evaluate the model using other frames which are not used in training, we observe the performance will drop significantly (seeFig. 1), which is summarized as Temporal Frequency Deviation phenomenon. To fix this issue, we propose a general framework, named Frame Flexible Network (FFN), which not only enables the model to be evaluated at different frames to adjust its computation, but also reduces the memory costs of storing multiple models significantly. Concretely, FFN integrates several sets of training sequences, involves Multi-Frequency Alignment (MFAL) to learn temporal frequency invariant representations, and leverages Multi-Frequency Adaptation (MFAD) to further strengthen the representation abilities. Comprehensive empirical validations using various architectures and popular benchmarks solidly demonstrate the effectiveness and generalization of FFN (e.g., 7.08/5.15/2.17% performance gain at Frame 4/8/16 on Something-Something V1 dataset over Uniformer). Code is available at https://github.com/BeSpontaneous/FFN.
Introduction
The growing number of online videos boosts research on video recognition, laying a solid foundation for deep learning, which requires massive data. Compared with image classification, video recognition methods need a series of frames to represent a video, which scales up the computation. Thus, efficiency has always been an essential factor in evaluating these approaches. One existing direction for improving efficiency is designing lightweight networks [9,40] that are hardware friendly. Even though they increase efficiency with an acceptable performance trade-off, these methods cannot make further customized adjustments to meet the dynamically changing resource constraints in real scenarios. In the community, two lines of research have been proposed to resolve this issue. The first is to design networks that can execute at various depths [10] or widths [37] to adjust the computation from the model perspective. The other line of research modifies the resolution of the input data [15,34] to accommodate the cost from the data aspect. However, these methods are carefully designed for 2D CNNs, which may hinder their application to video recognition, where 3D CNNs and Transformer methods are crucial components.
Different from image-related tasks, we need to sample multiple frames to represent a video, and the computational cost grows proportionally to the number of sampled frames. Concretely, the standard protocol trains the same network with different frame numbers separately to obtain multiple models with different performance and computation. This brings challenges for deploying these networks on edge devices, as the parameters are multiplied if we store all models, and downloading and offloading models to switch between them costs non-negligible time. Moreover, the same video may be sampled at various temporal rates on different platforms, so employing a single network trained at a certain frame number for inference cannot withstand the variance of frame numbers in real scenarios.
Training the model with a high frame number (i.e., high temporal frequency) and directly evaluating it at fewer frames (i.e., low temporal frequency) to adjust the cost is a naive and straightforward solution. To test its effectiveness, we compare it with Separated Training (ST), which trains the model at each temporal frequency individually and tests it at the corresponding frame number. We conduct experiments on the 2D-network TSM [18], the 3D-network SlowFast [6] and the Transformer-network Uniformer [16], and find obvious performance gaps between the inference results and ST in Fig. 1, which means these methods exhibit significantly inferior performance if they are not evaluated at the frame number used in training. Further, we conduct the same experiments on networks of different depths, and a similar phenomenon appears. We denote this generally existing phenomenon as Temporal Frequency Deviation.
The potential reason for Temporal Frequency Deviation is explored in Sec. 3 and can be briefly summarized as a shift in normalization statistics. To address this issue, we propose a general framework, named Frame Flexible Network (FFN), which only requires one-time training but can be evaluated at multiple frame numbers with great flexibility. We import several input sequences with different numbers of sampled frames into FFN during training and propose Multi-Frequency Alignment (MFAL) to learn temporal frequency invariant representations for robustness to frame changes. Moreover, we present Multi-Frequency Adaptation (MFAD) to further strengthen the representation abilities of the sub-networks, which helps FFN to exhibit strong performance at different frames during inference.
Although the normalization shifting problem [36,37] and resolution-adaptive networks [15,34] have been studied, we stress that designing frame flexible video recognition frameworks to accommodate the costs and save parameters is non-trivial and has practical significance for the following reasons. First, prior works [15,34] carefully analyzed the detailed structure of 2D convolutions in order to privatize the weights for images of different scales. In contrast, our method does not touch the specific design of the spatial-temporal modeling components and shares their weights for inputs with different frames. This procedure not only enables our method to be easily applied to various architectures (2D/3D/Transformer models), but also enforces FFN to learn temporal frequency invariant representations. Second, it is, indeed, a common practice to conduct Separated Training (ST) in video recognition, which multiplies the memory cost of storing individual models, and the resulting models can hardly withstand variance in temporal frequency, which limits their use in practice. FFN provides a feasible solution to these challenges: it significantly reduces the memory cost of storing multiple models and can be evaluated at different frames to adjust the cost, with even higher accuracy compared to ST.
With the proposed framework, we can resolve Temporal Frequency Deviation and enable these methods to adjust their computation based on the current resource budget by sampling different numbers of frames, trimming the storage costs of ST remarkably. Moreover, we provide a naive solution that enables FFN to be evaluated at any frame, increasing its flexibility during inference. Validation results show that FFN outperforms ST even at frames that are not used in training. The contributions are summarized as follows:
• We reveal the phenomenon of Temporal Frequency Deviation that widely exists in video recognition. It is analyzed in detail and practically inspires our study.
Related Work
Video Recognition has been extensively explored in recent years, and we can summarize the methods into three categories based on their architectures: 1) 2D networks: these methods [17,18,29,30] utilize 2D CNNs as the backbone and specifically design temporal modeling modules for spatial-temporal modeling. 2) 3D networks: a straightforward solution for video recognition is to utilize 3D convolutions [2,6,28], which naturally consider the temporal information in frame sequences. 3) Transformer networks: based on Vision Transformers [4,19], many approaches [5,16,20] have been proposed recently for spatial-temporal learning and have shown powerful performance.
Training-testing Discrepancy widely exists in many scenarios of deep learning. FixRes [27] discovers the deviation of image resolutions between training and testing. Based on this observation, methods [15,34] have been designed to train a universal network that fits images at different resolutions, and [35] further extended this idea to 3D CNNs. Slimmable Neural Networks [36,37] train a shared network which can adjust its width to meet the resource constraints during inference. Different from these prior works, our work is motivated by Temporal Frequency Deviation in video recognition. This finding is essential, as frame sampling is a necessary step for all methods, and the former procedure trains the network with different frame numbers individually, which is parameter-inefficient and memory-consuming.

Parameter-efficient Transfer Learning has attracted researchers' attention in NLP because of the rise of large-scale pre-trained language models. An important research line is to design task-specific adapters [23,24] to achieve parameter efficiency. Recently, the idea of adapters has been extended to vision tasks as well and has shown favorable performance [22,26,39]. In this work, instead of focusing on tuning from large-scale pre-trained models, we present Multi-Frequency Adaptation (MFAD) to increase the representation abilities of the sub-networks.

Dynamic Networks have been widely studied for efficient video recognition in recent years. Some methods [12,32,41] dynamically sample salient frames to reduce temporal redundancy for less cost, while others mainly focus on reducing spatial redundancy by adaptively processing frames at different resolutions [21] or cropping the most salient regions [31] of each frame. Note that these methods are designed to adaptively process every video (e.g., skip frames, crop patches) for efficiency and also require repetitive training to obtain models with different computation. Our work aims to train one model which can be evaluated at different frames to adjust the costs and reduce the parameters of storing multiple models, a problem the mentioned dynamic networks do not solve.
Temporal Frequency Deviation
Nearby Alleviation. We can observe the Temporal Frequency Deviation phenomenon when the models are trained with high frame numbers but evaluated at fewer frames in Fig. 1. To go a step further, we train TSM [18] at 8/12 Frame and evaluate the models at other frames. Fig. 2 shows performance gaps for both models if they are not evaluated at the frame number used in training. Notably, the discrepancies vary in magnitude, and the performance gap is smaller if the inference frame number is close to the training frame number. We denote this phenomenon as Nearby Alleviation, because Temporal Frequency Deviation is less severe at nearby frames.

Normalization Shifting. Prior works [36,37] have studied the problem of normalization shifting in image classification. Specifically, when switching the widths of networks, different numbers of channels lead to different means and variances of the aggregated features, causing inconsistency in feature aggregation. Since we do not adjust the model structure, the question is whether a difference in frame numbers also causes normalization shifting. If we train the model with $v_H$, which has high temporal frequency, and evaluate it with low temporal frequency $v_L$, the input of Batch Normalization (BN) will be the intermediate feature $x_L$ and the corresponding output is:
$$y_L' = \gamma_H \frac{x_L - \mu_H}{\sqrt{\sigma_H^2 + \epsilon}} + \beta_H, \quad (1)$$
where $\mu_H$, $\sigma_H^2$ are calculated from the data distribution of $v_H$, and $\gamma_H$, $\beta_H$ are learned during training with $v_H$. We calculate the statistics of the models trained with $v_L$ and $v_H$ separately and show them in Fig. 3. We can observe a discrepancy in BN statistics at different frame numbers. Note that $\mu$ and $\sigma^2$ are data-dependent, which means the divergence lies intrinsically in the data. Thus, we conjecture that the discrepancy of BN statistics at different frames is an essential factor leading to Temporal Frequency Deviation. Layer Normalization (LN) [1] has been widely used in Transformer-based models, and its statistics are calculated in a similar way to BN, depending on the data distribution. Therefore, we believe the discrepancy of LN statistics is also one of the reasons for Temporal Frequency Deviation in Transformer-based models.
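The mismatch can be reproduced with a small self-contained sketch in which synthetic feature distributions stand in for $x_H$ and $x_L$; none of this is the released code:

```python
import torch
import torch.nn as nn

# Minimal sketch: a BatchNorm layer calibrated on "16-frame" features is
# applied to "4-frame" features whose statistics differ, mimicking Eq. (1).
# The feature distributions below are synthetic stand-ins, not real data.
torch.manual_seed(0)
bn = nn.BatchNorm1d(64)

# Calibrate running statistics (mu_H, sigma_H^2) on high-frequency features.
bn.train()
for _ in range(100):
    x_high = torch.randn(32, 64) * 1.0 + 0.5   # stand-in for x_H
    bn(x_high)

# Evaluate on low-frequency features with a shifted distribution (x_L).
bn.eval()
x_low = torch.randn(32, 64) * 1.5 - 0.3        # stand-in for x_L
y = bn(x_low)
# The output is no longer zero-mean/unit-variance because the running
# statistics were estimated from the high-frequency distribution.
print(y.mean().item(), y.var().item())
```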
Frame Flexible Network
In this section, we first present the training and inference paradigms of Frame Flexible Network (FFN). Then, we propose Multi-Frequency Alignment, which is composed of Weight Sharing and Temporal Distillation, to learn temporal frequency invariant representations. Further, we introduce Multi-Frequency Adaptation, which fits the frequency invariant features to the different sub-networks and further increases their representation abilities. Note that FFN is a general framework which can be built on different architectures (shown in Sec. 5.2); we take a CNN-based method as an example in this part for easier description.
Framework
The goal of our work is to present a method which can be evaluated at multiple frames and exhibits similar or even better performance compared to Separated Training (ST). Based on the analysis in Sec. 3, Temporal Frequency Deviation is less severe if the model is evaluated at frames near those used in training. Therefore, we import several sequences with different numbers of sampled frames into FFN, as shown in Fig. 4. Considering a video $v$ sampled at increasing frame numbers $L$, $M$ and $H$, we obtain $v_L$, $v_M$ and $v_H$ with Low, Medium and High temporal frequency, respectively. These three sequences are utilized in the training phase to construct three sub-networks $F_L(\cdot)$, $F_M(\cdot)$ and $F_H(\cdot)$ accordingly. As for the inference paradigm, we activate the sub-network whose frame number corresponds to that of the input. In this manner, we build the computational stream that enables FFN to be evaluated with different frames during inference and to adjust the computational costs accordingly.
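To make the dispatch concrete, a minimal skeleton might look as follows; the class name, the shared backbone argument, and the assertion behavior are illustrative assumptions rather than the released implementation:

```python
import torch
import torch.nn as nn

class FFNSketch(nn.Module):
    """Illustrative skeleton of the FFN paradigm: the spatial-temporal
    backbone is shared, and a sub-network is selected by frame number."""
    def __init__(self, backbone: nn.Module, frame_numbers=(4, 8, 16)):
        super().__init__()
        self.backbone = backbone            # shared across sub-networks
        self.frame_numbers = frame_numbers  # L, M, H

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (batch, frames, channels, height, width)
        t = video.shape[1]
        # Activate the sub-network matching the input frame number.
        assert t in self.frame_numbers, "no matching sub-network"
        return self.backbone(video)

# Training step (sketch): forward v_L, v_M, v_H through their sub-networks.
# model = FFNSketch(backbone=my_video_backbone)
# p_L, p_M, p_H = model(v_L), model(v_M), model(v_H)
```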
Multi-Frequency Alignment
Prior resolution-adaptive networks [15,34] carefully privatize the weights of 2D convolutions to learn scale-aware representations for inputs with different resolutions. Recently, several works [25,33] have been proposed to maximize the mutual information of the same video at different temporal frequencies for contrastive learning in video recognition. The core idea is that the same video instance at different speeds should share high similarity in terms of its discriminative semantics. Inspired by these works, we propose Multi-Frequency Alignment (MFAL), which leverages Weight Sharing and Temporal Distillation to efficiently expand the network and enforce the model to learn temporal frequency invariant representations. Weight Sharing. Given video $v$, we have $v_L$, $v_M$, and $v_H$ with increasing temporal frequency and decreasing action speed because of the difference in sampled frames. We share the weights of the convolutions and the classifier across the three sub-networks in order to find a group of parameters $\theta$ that mutually model the spatial-temporal relationships for inputs with different temporal frequencies:
$$p_* = F_*(v_*; \theta), \quad (2)$$
where $* \in \{L, M, H\}$ and $p$ stands for the predictions. Compared to specialized convolutions, Weight Sharing is parameter-efficient, as it stores only one set of weights that can be applied to different input frames. Moreover, it exhibits great potential for better performance (shown in Tab. 4), as it enforces the model to learn temporal frequency invariant representations, implicitly providing the prior knowledge that the same video at different temporal frequencies belongs to the same class and making the model robust to temporal frequency variance. Temporal Distillation. In most cases, video recognition models trained with $v_H$ have better performance, as the network has access to more information from the original video. Therefore, we consider $p_H$ to be the most 'accurate' prediction among the three, as $v_H$ has the most sampled frames. Applying the Cross-Entropy loss to $p_H$, we can update the parameters of $F_H(\cdot)$ by:
$$\mathcal{L}_{CE} = -\sum_{k=1}^{K} \hat{y}_k \log p_H^k, \quad (3)$$
where $\hat{y}_k$ is the one-hot label of class $k$ and there are $K$ classes in total. Directly calculating the CE loss on $p_L$ and $p_M$ is a straightforward solution to update the parameters in $F_L(\cdot)$ and $F_M(\cdot)$, but it leads to some problems. Firstly, the weights of the convolutions are shared across the three sub-networks, and the optimal parameters for $v_L$ after optimization may not fit $v_M$ and $v_H$ well. Moreover, optimizing the CE loss of $p_L$ and $p_M$ leads to less favorable convolution parameters compared to calculating only Eq. 3, as their inputs contain less information than $v_H$, which may result in inferior performance. Consequently, we utilize the KL divergence [14] loss to involve $p_L$ and $p_M$ in the computational graph and update the parameters of $F_L(\cdot)$ and $F_M(\cdot)$ using:
$$\mathcal{L}_{KL} = -\sum_{k=1}^{K} p_H^k \log \frac{p_M^k}{p_H^k} - \sum_{k=1}^{K} p_H^k \log \frac{p_L^k}{p_H^k}. \quad (4)$$
As the weights of the convolutions are shared across the three sub-networks, optimizing Eq. 4 enforces the predictions of the student ($p_L$ and $p_M$) and teacher ($p_H$) networks to be as similar as possible and transfers the good knowledge from $F_H(\cdot)$ to $F_L(\cdot)$ and $F_M(\cdot)$. Considering the two losses in a uniform manner, we update the parameters of FFN by:
$$\mathcal{L} = \mathcal{L}_{CE} + \lambda \cdot \mathcal{L}_{KL}, \quad (5)$$
where $\lambda$ is an introduced hyperparameter to balance the two terms; we simply set $\lambda = 1$ in our implementations without fine-tuning the hyperparameter. Considering Weight Sharing and Temporal Distillation uniformly, $\mathcal{L}_{CE}$ provides inter-class supervisory information to enlarge the distance between videos belonging to different classes, and $\mathcal{L}_{KL}$ further adds intra-instance knowledge to the network training, i.e., $p_L$, $p_M$ and $p_H$ should share high similarity with each other, as temporal frequency variance does not change the class of the video. In this way, we not only enforce FFN to learn temporal frequency invariant representations, but also ensure it can be easily applied to different structures, as we do not touch the specific design of the inner spatial-temporal modeling modules.
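As a concrete reference, the combined objective of Eqs. (3)-(5) can be sketched as follows; detaching the teacher distribution $p_H$ (stop-gradient) is our assumption, as the gradient treatment is not spelled out here:

```python
import torch
import torch.nn.functional as F

def mfal_loss(p_l, p_m, p_h, target, lam=1.0):
    """Sketch of Eqs. (3)-(5): CE on the high-frequency logits p_h,
    KL distillation from p_h (teacher) to p_m and p_l (students).
    p_l, p_m, p_h are raw logits of shape (batch, num_classes)."""
    ce = F.cross_entropy(p_h, target)                 # Eq. (3)
    teacher = F.softmax(p_h, dim=1).detach()          # stop-grad: an assumption
    # F.kl_div(log_probs, probs) computes KL(teacher || student), matching Eq. (4).
    kl_m = F.kl_div(F.log_softmax(p_m, dim=1), teacher, reduction="batchmean")
    kl_l = F.kl_div(F.log_softmax(p_l, dim=1), teacher, reduction="batchmean")
    return ce + lam * (kl_m + kl_l)                   # Eq. (5)
```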
Multi-Frequency Adaptation
In the previous section, we proposed MFAL to enforce FFN to learn temporal frequency invariant representations. Here, we present Multi-Frequency Adaptation (MFAD) to better fit the frequency invariant features to the different sub-networks, which further strengthens their representations.
According to our analysis in Sec. 3, normalization shifting is one of the reasons that leads to Temporal Frequency Deviation. Formally, we denote the intermediate features for $v_L$, $v_M$ and $v_H$ as $x_L$, $x_M$ and $x_H$, respectively. Similar to [36,37], we provide specialized normalization for the different input sequences $v_L$, $v_M$ and $v_H$:
$$y_* = \gamma_* \frac{x_* - \mu_*}{\sqrt{\sigma_*^2 + \epsilon}} + \beta_*, \quad (6)$$
where $* \in \{L, M, H\}$, and each private normalization learns its own $\gamma$ and $\beta$ and calculates the corresponding $\mu$, $\sigma^2$ during training. Note that this procedure introduces negligible computation and parameters, as normalization is a simple transformation and its parameters are often less than 1% of the model size.
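For illustration, one plausible way to realize Eq. (6) is a small module that holds one private BatchNorm per training frame number; the class name and tensor layout below are assumptions:

```python
import torch
import torch.nn as nn

class FrequencyBN(nn.Module):
    """Sketch of Eq. (6): one private BatchNorm per training frame number,
    selected by the temporal frequency of the current input."""
    def __init__(self, channels: int, frame_numbers=(4, 8, 16)):
        super().__init__()
        self.norms = nn.ModuleDict(
            {str(t): nn.BatchNorm2d(channels) for t in frame_numbers}
        )

    def forward(self, x: torch.Tensor, frames: int) -> torch.Tensor:
        # x: (batch*frames, channels, h, w) in the usual 2D-CNN video layout
        return self.norms[str(frames)](x)
```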
Weight Alteration. Though Weight Sharing is necessary for MFAL, it may be difficult to find a single set of parameters that displays strong representation ability at all frames without further adaptation. Considering a shared convolution with weights $W$, the outputs for the different sequences are:
$$y_* = W \otimes x_*, \quad (7)$$
where $\otimes$ stands for convolution, which applies the same transformation to inputs with different temporal frequencies. We propose to alter the shared weights for each sub-network to diversify the parameters and strengthen their representation abilities through the transformation:
$$y_* = \phi_* \otimes W \otimes x_*, \quad (8)$$
which can also be written as:
$$y_* = W_* \otimes x_*, \quad W_* = \phi_* \otimes W, \quad (9)$$
where $\phi$ is a Depth-Wise convolution layer [3] in each Convolution Block, which converts the shared weights $W$ into diversified weights $W_*$. In this way, we increase the representation ability of FFN through a simple and efficient transformation. Given that video recognition methods often use pre-trained models, we include a residual structure [8] so that the added module does not break the original computational graph of the pre-trained models and can restore their behavior. Similarly, we also include Weight Alteration in the Transformer Block, choosing the insertion location following [22], as shown in Fig. 5. Note that the Depth-Wise convolution is lightweight, so adding it introduces negligible parameters and computation.
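A minimal PyTorch sketch of such an altered block is given below; keeping one depth-wise layer per training frame number follows our reading of the $\phi_*$ notation, and the near-identity initialization remark describes one way to preserve the pre-trained graph rather than a detail stated here:

```python
import torch
import torch.nn as nn

class WeightAlteration(nn.Module):
    """Sketch of Eqs. (8)-(9): a per-frequency depth-wise convolution with a
    residual connection applied after a shared convolution."""
    def __init__(self, channels: int, frame_numbers=(4, 8, 16)):
        super().__init__()
        self.alter = nn.ModuleDict({
            str(t): nn.Conv2d(channels, channels, kernel_size=3,
                              padding=1, groups=channels)   # depth-wise
            for t in frame_numbers
        })

    def forward(self, y: torch.Tensor, frames: int) -> torch.Tensor:
        # Residual form: if the depth-wise weights start near zero, the block
        # is close to the identity and the pre-trained graph stays intact.
        return y + self.alter[str(frames)](y)
```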
Experiments
In this part, we validate Frame Flexible Network (FFN) on various architectures and benchmarks. First, we provide several baseline solutions and compare them with FFN. Further, we apply our method to different architectures and datasets to prove its generalization ability. Moreover, we provide a naive inference paradigm to enable FFN to be evaluated at any frame. Finally, we conduct detailed ablations and analyses to validate the effectiveness of our designs.
Experiment Settings
Datasets. We conduct experiments on four datasets: (1) Something-Something V1 & V2 [7] include 98k and 194k videos, respectively. They contain strong temporal dependency and show the most significant Temporal Frequency Deviation phenomenon among all datasets. (2) Kinetics400 [11] is a large-scale dataset with 400 classes. (3) HMDB51 [13] is composed of 6,766 videos which can be categorized into 51 classes. We utilize the original three training/testing splits for training and evaluation.

Implementation Details. We uniformly sample 4/8/16 frames for $v_L$, $v_M$ and $v_H$ in all methods except for SlowFast [6], which samples 16/32/64 frames for the fast pathway.
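For reference, a minimal sketch of uniform frame sampling is shown below; TSM-style codebases often use segment-based sampling during training, so this dense uniform indexing is only one plausible reading:

```python
import torch

def uniform_sample(video: torch.Tensor, num_frames: int) -> torch.Tensor:
    """Sketch of the uniform sampling used to build v_L/v_M/v_H
    (e.g., num_frames in {4, 8, 16}). video: (T, C, H, W)."""
    t = video.shape[0]
    # Pick num_frames indices evenly spread over the T available frames.
    idx = torch.linspace(0, t - 1, num_frames).long()
    return video[idx]

# e.g., v_l, v_m, v_h = (uniform_sample(video, n) for n in (4, 8, 16))
```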
Main Results
Comparison with Baseline Methods. Tab. 1 shows that Proportional Sampling and Mixed Sampling help to alleviate Temporal Frequency Deviation, as the performance at Frame 4/8 is better than the inference results of the model trained with the standard protocol. Nevertheless, the increase comes at the cost of an accuracy drop at Frame 16. We then adjust the hyperparameters, and the results show that both methods provide only a trade-off solution for this problem: if the performance at low frame numbers is better, the results at high frame numbers are worse.
Fine-tuning helps the baseline method to achieve comparable performance with ST at 4 Frame, but at the cost of forgetting the knowledge at 16 Frame. Ensemble outperforms ST at Frame 8 at the cost of multiplied computation. Besides, its performance at 4/16 Frame is worse than ST, which means it still cannot effectively resolve the Temporal Frequency Deviation problem. In contrast, FFN shows stronger results than ST and Ensemble at all frames with negligible added computation. Moreover, compared to ST and Ensemble, which need repetitive training operations and multiplied storage costs, our method is trained only once but can be evaluated at multiple frames, significantly reducing the parameters needed to save multiple models, which facilitates applications on edge devices.

Performance Analysis across Architectures. We further validate FFN on different architectures in Fig. 6. We first build our method on TSM [18], which does not contain any parameters in its temporal modeling module. FFN exhibits performance advantages at all frames compared to baseline TSM and ST. Then, we implement FFN on TEA [17], which involves convolutions and normalization in the temporal modeling module, and our results also surpass ST at all frames. Moreover, we extend FFN to the 3D-network SlowFast [6] and the Transformer-network Uniformer [16]. The results exhibit similar improvements at all frame numbers compared to ST, which validates the flexibility and generalization ability of our method. Note that FFN introduces less than 1% extra computation but reduces the memory cost of storing individual models by multiple times.

Performance Analysis across Datasets. In this part, we empirically evaluate FFN on various datasets in Fig. 7, including Something-Something V2, Kinetics400 and HMDB51. The first observation is that the Temporal Frequency Deviation phenomenon is less obvious on Kinetics400 and HMDB51, as these two datasets contain less temporal information. Nevertheless, FFN consistently improves the accuracy of ST on these datasets as well. For example, there are 2.71/1.95/1.19% performance gains at Frame 4/8/16 on Kinetics400, which further demonstrates the generalization ability of our design.
Inference at Any Frame
We have shown that FFN can outperform ST at the frame numbers used in training, but evaluation at other frames not included in training remains untouched. Motivated by Nearby Alleviation in Sec. 3, we provide a naive inference paradigm that enables FFN to be evaluated at any frame. Given a frame number $n$ at the inference phase, we calculate the frame difference with $L$, $M$ and $H$, and activate the sub-network with the minimal difference for validation. If the frame difference is the same for two sub-networks, we choose the one corresponding to the higher frame number by default. In this manner, we can evaluate at frame numbers that are not used in training.

Inbound Results. Fig. 8 shows that FFN outperforms ST at all frames within the range of 4-16 used in training, though the improvement at 12 Frame is less obvious than at other frames, as it lies in the middle of 8/16 Frame and benefits the least from Nearby Alleviation.

Outbound Results. Moreover, we evaluate FFN at frames outside the range of 4-16. One can observe that FFN even exhibits better performance compared to ST at Frame 2/18/20, which further demonstrates its generalization capacity at unseen frames.
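The selection rule can be expressed in a few lines; the helper below is a hypothetical sketch of it:

```python
def select_subnetwork(n: int, frame_numbers=(4, 8, 16)) -> int:
    """Sketch of the any-frame inference rule: pick the trained frame number
    closest to n; ties go to the higher frame number."""
    return min(frame_numbers, key=lambda t: (abs(t - n), -t))

# e.g., select_subnetwork(6) -> 8 (tie between 4 and 8 resolved upward),
#       select_subnetwork(20) -> 16
```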
Ablation
Input Sequences Combinations. Shown in Tab. 2, we import different numbers of sequences to FFN and evaluate their performance at various frames. First, we observe that FFN(2) outperforms ST at 4/16 Frame, which are used in training, but its performance at 8/12 Frame is worse than ST because of the missing middle sequence in training. In contrast, both FFN(3) and FFN(4) obtain higher accuracy at all frames compared to ST, which can be attributed to the utilization of the middle sequence in training, so that the Temporal Frequency Deviation at nearby frames is mitigated by Nearby Alleviation. FFN(4) obtains better results at Frame 4/8/12 because of the added sequence, but it costs more time and resources during training.

Frame Numbers. We sample more frames for $v_H$ in this section and import four sequences with 4/8/16/24 frames to FFN, respectively. The first observation from Tab. 3 is that the performance of TSM-ST (24F) is even slightly lower than TSM-ST (16F), which can be attributed to the relatively simple temporal modeling of TSM. However, FFN still obtains better performance than ST at all frames and achieves the highest accuracy at 24 Frame, owing to the design of Temporal Distillation.

Design Choices. We conduct an ablation to verify the effectiveness of our designs in Tab. 4. First, we build FFN with shared normalization and observe an obvious performance drop at 4/16 Frame due to the shift in normalization statistics. Then, we remove Weight Alteration in the convolution block, and it exhibits worse performance at all frames, which proves the strength of Multi-Frequency Adaptation (MFAD), as it increases the representation abilities of the sub-networks at the corresponding frames. Further, we optimize FFN by calculating the CE loss on the predictions of all sub-networks respectively, without the KL divergence loss. The results are clearly worse than ST, which suggests that Temporal Distillation is a necessary component of Multi-Frequency Alignment (MFAL). Finally, we specialize the convolutions (w/o WS) and note that this operation multiplies the parameters by three. We observe that both FFN and FFN (w/o WA) outperform specialized convolutions at all frames with fewer parameters, which demonstrates the effectiveness of MFAL in learning temporal frequency invariant representations.
Conclusion and Limitations
In this paper, we reveal Temporal Frequency Deviation phenomenon and propose Frame Flexible Network (FFN) to address it. Specifically, we propose Multi-Frequency Alignment to learn temporal frequency invariant representations and present Multi-Frequency Adaptation to further strengthen the representation ability. Extensive experiments demonstrate that FFN, which only requires one-shot training, can be evaluated at multiple frames and outperforms Separated Training with significantly fewer parameters, making it favorable for applications on edge devices.
One limitation of FFN is that it requires more GPU memory during training, as we import several input sequences. Second, FFN introduces slightly more computation because of Weight Alteration. In future work, we are interested in improving the training efficiency of FFN.
Acknowledgment
Research was sponsored by the DEVCOM Analysis Center and was accomplished under Cooperative Agreement Number W911NF-22-2-0001. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
Supplementary Material
A. Implementation Details
The training data is randomly cropped to 224 × 224 and we perform random flipping except on the Something-Something datasets. At the inference stage, all frames are center-cropped to 224 × 224 except for SlowFast [6], which adopts a resolution of 256 × 256 for evaluation. We use one-clip one-crop evaluation per video, except for Uniformer [16], which utilizes a one-clip three-crop evaluation protocol. We train all models on NVIDIA Tesla V100 GPUs and adopt the same training hyperparameters as the official implementations.
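For illustration, a hypothetical torchvision-style rendering of these preprocessing settings (not the authors' released pipeline) could look as follows:

```python
from torchvision import transforms

# Hypothetical preprocessing matching the described settings: random
# 224x224 crop for training and 224x224 center crop for inference; random
# flipping would be skipped on Something-Something, whose labels are
# direction-sensitive.
train_tf = transforms.Compose([
    transforms.RandomCrop(224),
    transforms.RandomHorizontalFlip(),  # omit on Something-Something
])
eval_tf = transforms.CenterCrop(224)
```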
B. Results of Different Depths

As we have shown in the main text, the Temporal Frequency Deviation phenomenon exists at different depths of the network, which means it has no relation to the representation ability. But whether FFN can address this issue at other depths remains a question. As the previous experiments are built on ResNet-50 [8], we conduct experiments on ResNet-18 and ResNet-101 and include their results in Tab. 5. The results show that FFN outperforms Separated Training (ST) at different frame numbers, which proves that FFN can effectively resolve the Temporal Frequency Deviation problem regardless of the depth of the deep network.
C. Results of Different Middle Sequences
Another design choice in our method is the selection of the middle sequence $v_M$, as $v_L$ and $v_H$ are usually set first based on the range of computation. Thus, we sample 8/10/12 frames for $v_M$ respectively and evaluate the models at various frame numbers in Tab. 6. When we sample 8 frames for $v_M$, FFN obtains the best performance at 8 Frame compared to the other two choices, and the phenomenon is the same when sampling 10 or 12 frames for $v_M$. This meets our expectation, as the specialized normalization for $v_M$ learns its corresponding transformation. Overall, all three choices lead to consistent improvement over Separated Training (ST) at all frames.
D. Any Frame Inference of Input Sequences Combinations
In the main text, we conducted the ablation of input sequence combinations. We further validate the three models at more fine-grained frame numbers with the proposed inference paradigm, and the results are shown in Tab. 7. One can observe that FFN(2) obtains lower accuracy compared to ST at 6/8/10 Frame because of the missing middle sequence, while FFN(4) achieves the highest performance at 8/10/12 Frame, as the introduced sequence at Frame 12 alleviates the Temporal Frequency Deviation nearby.

E. Further Verification of Nearby Alleviation

In previous parts, we conducted experiments which train the model at Frame 8/12/16 and evaluate its performance at different frames. Here we further train the model at 4 Frame and show the validation results in Fig. 9. Similarly, we observe that frames close to 4 exhibit the slightest performance drop, as their normalization statistics are more similar to those of frame 4, which further verifies the Nearby Alleviation phenomenon.

F. Statistics of Normalization Shifting

We have shown the calculated normalization statistics, Mean: $\mu$ and Variance: $\sigma^2$, in previous sections. In this part, we further include the calculated statistics of Scale: $\gamma$ and Bias: $\beta$ in Fig. 10. One can observe that the two curves are not aligned with each other, which further demonstrates that the discrepancy of BN statistics is an important reason for the Temporal Frequency Deviation phenomenon, and specializing the normalization operations in deep networks is an intuitive way to resolve normalization shifting.
G. Validation of Normalization Shifting
To further prove that our method can mitigate the normalization shifting problem, we compare the BN statistics of ST (16F) and FFN (16F), trained with TSM [18] on the Something-Something V1 [7] dataset. As shown in Fig. 11, the two curves are well-aligned with each other, which demonstrates that the calculated statistics are very similar and that the normalization shifting problem is alleviated by FFN.
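One simple way such per-layer statistics could be collected for plotting is sketched below; this is a generic helper under our own assumptions, not the authors' tooling:

```python
import torch.nn as nn

def collect_bn_stats(model: nn.Module):
    """Sketch of how the BN statistics plotted in Figs. 3/10/11 could be
    gathered: running mean/var and gamma/beta per BatchNorm layer."""
    stats = {}
    for name, m in model.named_modules():
        if isinstance(m, nn.BatchNorm2d):
            stats[name] = {
                "mean": m.running_mean.detach().clone(),
                "var": m.running_var.detach().clone(),
                "gamma": m.weight.detach().clone(),
                "beta": m.bias.detach().clone(),
            }
    return stats
```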
H. Quantitative Results
In the Experiments section, we show the performance analysis of FFN across architectures and datasets in figures; we also provide the corresponding quantitative results in Tab. 8 and Tab. 9 for reference.
Figure 1. Temporal Frequency Deviation phenomenon widely exists in video recognition. All methods are trained with a high frame number and evaluated at other frames to compare with Separated Training (ST), which individually trains the model at different frames, on the Something-Something V1 dataset.
Figure 2. Nearby Alleviation phenomenon. The TSM model is trained at 8 Frame and 12 Frame separately on the Something-Something V1 dataset and evaluated at other frames.
Figure 3. Batch Normalization statistics at various layers. TSM models are trained at 4 Frame and 16 Frame separately, and the statistics are calculated from the fourth stage of ResNet-50.
Figure 4. Illustration of Frame Flexible Network (FFN). During training, given inputs with different temporal frequency $v_L$, $v_M$ and $v_H$, we propose Multi-Frequency Alignment, which involves Weight Sharing and Temporal Distillation, for temporal frequency invariant learning. Besides, we present Multi-Frequency Adaptation to fit the temporal invariant features to different sub-networks to further increase the representation abilities. During inference, we activate the sub-network which has the corresponding frame number with the input.
For the baseline results, we train all methods with $v_H$ and evaluate them at $v_L$, $v_M$ and $v_H$. Separated Training (ST) denotes training the network at $v_L$, $v_M$, and $v_H$ individually and evaluating each model at the frame number used in training.

Baseline Methods. In addition to Separated Training (ST) introduced before, we provide four more baseline methods for this problem: (1) Mixed Sampling: We sample 4 and 16 frames for $v_i^L$ and $v_i^H$, respectively. Then we randomly choose 4 consecutive frames $v_i^{H'}$ from $v_i^H$ and apply mixup [38] to integrate $v_i^L$ into $v_i^H$. The hyperparameter $\rho$ decides the probability of applying Mixed Sampling at each iteration. (2) Proportional Sampling: We let the network randomly sample 4 or 16 frames at each iteration, as this pair shows the most significant Temporal Frequency Deviation phenomenon. The hyperparameter $\varrho$ denotes the probability of sampling 16 frames at each iteration. (3) Fine-tuning: We first train the model at 16 Frame and then fine-tune it at 4 Frame. (4) Ensemble: We use the models individually trained at 4, 8 and 16 Frame and ensemble them by averaging to form a new model. A sketch of Proportional Sampling is given below.
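As announced above, a minimal sketch of the Proportional Sampling baseline; treating prob_16 as the paper's $\varrho$ and using dense uniform indexing are assumptions:

```python
import random
import torch

def proportional_sample(video: torch.Tensor, prob_16: float = 0.5) -> torch.Tensor:
    """Sketch of the Proportional Sampling baseline: at each training
    iteration, sample 16 frames with probability prob_16 (the paper's
    varrho), otherwise 4 frames. video: (T, C, H, W)."""
    num_frames = 16 if random.random() < prob_16 else 4
    idx = torch.linspace(0, video.shape[0] - 1, num_frames).long()
    return video[idx]
```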
Figure 6. Validation results across different video recognition architectures on the Something-Something V1 dataset, including 2D-network, 3D-network and Transformer-network. The improvements of FFN over ST are listed in the table.
Figure 7. Validation results across various video recognition datasets. The improvements of FFN over ST are listed in the table.
Figure 9. Validation results of TSM trained at 4 Frame on the Something-Something V1 dataset.
Figure 10. Batch Normalization statistics at various layers. TSM models are trained at 4 Frame and 16 Frame separately, and the statistics are calculated from the fourth stage of ResNet-50.
Figure 11. Batch Normalization statistics at various layers. TSM-ST is trained at 16 Frame and both models are evaluated at 16 Frame as well. The statistics are calculated from the fourth stage of ResNet-50.
Figure 5. Specific designs of Weight Alteration, Convolution Block, and Transformer Block in Frame Flexible Network (FFN). Weight Alteration is a Depth-wise convolution layer with a residual structure, inserted into each Convolution Block and Transformer Block.
Table 1. Comparison with baseline methods on the Something-Something V1 dataset. GFLOPs stands for the average computational cost to process a single video. The best results are bold-faced, the second best results are marked in color, and the improvements are shown. Cells report Top-1 Acc. (%) / GFLOPs.

| Method | Specification | Parameters | 4 Frame (4F) | 8 Frame (8F) | 16 Frame (16F) |
| TSM [18] | - | 25.6M | 20.60 / 16.4 | 37.36 / 32.7 | 48.55 / 65.4 |
| TSM-Mixed | ρ = 0.50 | 25.6M | 27.89 / 16.4 | 41.07 / 32.7 | 48.44 / 65.4 |
| TSM-Mixed | ρ = 0.75 | 25.6M | 30.43 / 16.4 | 42.56 / 32.7 | 47.81 / 65.4 |
| TSM-Proportional | ϱ = 0.50 | 25.6M | 37.56 / 16.4 | 44.82 / 32.7 | 45.37 / 65.4 |
| TSM-Proportional | ϱ = 0.75 | 25.6M | 32.06 / 16.4 | 43.15 / 32.7 | 47.14 / 65.4 |
| TSM-Fine-tuning | 16F→4F | 25.6M | 39.95 / 16.4 | 40.37 / 32.7 | 28.96 / 65.4 |
| TSM-Ensemble | - | 25.6×3M | 35.88 / 16.4×3 | 46.25 / 32.7×3 | 46.82 / 65.4×3 |
| TSM-ST | - | 25.6×3M | 39.71 / 16.4 | 45.63 / 32.7 | 48.55 / 65.4 |
| TSM-FFN | - | 25.7M | 42.85 (2.90↑) / 16.4 | 48.20 (1.95↑) / 32.8 | 50.79 (2.24↑) / 65.5 |
Figure 8. Validation results of FFN at various frame numbers on Something-Something V1. The improvements of FFN over ST are listed in the table. Outbound results are denoted with *.

| Frame | 2F* | 4F | 6F | 8F | 10F | 12F | 14F | 16F | 18F* | 20F* |
| ∆ Acc1. (%) | +0.12 | +3.14 | +2.84 | +2.57 | +1.50 | +1.19 | +2.46 | +2.24 | +2.35 | +2.73 |

Table 2. Experiments with different input sequence combinations on Something-Something V1. The best results are bold-faced. Numbers are Top-1 Acc. (%).

| Method | Sequences | 4F | 8F | 12F | 16F |
| TSM-ST [18] | - | 39.71 | 45.63 | 47.71 | 48.55 |
| TSM-FFN(2) | 4/16 | 41.69 | 37.93 | 48.10 | 49.79 |
| TSM-FFN(3) | 4/8/16 | 42.85 | 48.20 | 48.90 | 50.79 |
| TSM-FFN(4) | 4/8/12/16 | 43.40 | 48.66 | 49.77 | 50.63 |
Table 3. Validation results of FFN with more sampled frames on Something-Something V1. The best results are bold-faced. Numbers are Top-1 Acc. (%).

| Method | Parameters | 4F | 8F | 16F | 24F |
| TSM-ST [18] | 25.6×4M | 39.71 | 45.63 | 48.55 | 47.90 |
| TSM-FFN | 25.7M | 41.28 | 46.72 | 49.79 | 49.95 |

Table 4. Ablation of design choices of FFN on Something-Something V1. SN, TD, WA, WS denote Specialized Normalization, Temporal Distillation, Weight Alteration, and Weight Sharing, respectively. The best results are bold-faced. Numbers are Top-1 Acc. (%).

| Method | Parameters | Specification | 4F | 8F | 16F |
| TSM-ST [18] | 25.6×3M | - | 39.71 | 45.63 | 48.55 |
| TSM-FFN | 25.7M | w/o SN | 35.79 | 46.80 | 44.62 |
| TSM-FFN | 25.6M | w/o WA | 41.91 | 47.92 | 49.84 |
| TSM-FFN | 25.7M | w/o TD | 39.61 | 45.65 | 48.07 |
| TSM-FFN | 25.6×3M | w/o WS | 41.51 | 47.16 | 48.23 |
| TSM-FFN | 25.7M | - | 42.85 | 48.20 | 50.79 |
Table 5. Experiments with different depths on Something-Something V1. The best results are bold-faced. Numbers are Top-1 Acc. (%).

| Method | v_L | v_M | v_H |
| TSM(R18) [18] | 16.82 | 33.12 | 42.95 |
| TSM(R18)-ST | 32.33 | 38.21 | 42.95 |
| TSM(R18)-FFN | 36.83 (4.50↑) | 41.61 (3.40↑) | 43.57 (0.62↑) |
| TSM(R101) [18] | 22.15 | 39.30 | 49.57 |
| TSM(R101)-ST | 40.76 | 46.96 | 49.57 |
| TSM(R101)-FFN | 45.15 (4.39↑) | 50.24 (3.28↑) | 51.79 (2.22↑) |
Table 6. Experiments with different middle sequences on Something-Something V1. The best results are bold-faced. Numbers are Top-1 Acc. (%).

| Method | v_M | 4 Frame | 6 Frame | 8 Frame | 10 Frame | 12 Frame | 14 Frame | 16 Frame |
| TSM [18] | - | 20.60 | 30.23 | 37.36 | 42.72 | 45.97 | 47.49 | 48.55 |
| TSM-ST | - | 39.71 | 43.73 | 45.63 | 47.31 | 47.71 | 48.01 | 48.55 |
| TSM-FFN | 8F | 42.85 | 46.57 | 48.20 | 48.81 | 48.90 | 50.47 | 50.79 |
| TSM-FFN | 10F | 43.10 | 44.77 | 47.81 | 49.26 | 49.63 | 50.67 | 51.12 |
| TSM-FFN | 12F | 42.92 | 43.57 | 46.82 | 48.85 | 49.73 | 50.40 | 50.79 |

Table 7. Any frame inference results of input sequence combinations on Something-Something V1. The best results are bold-faced. Numbers are Top-1 Acc. (%).

| Method | Sequences | 4 Frame | 6 Frame | 8 Frame | 10 Frame | 12 Frame | 14 Frame | 16 Frame |
| TSM [18] | - | 20.60 | 30.23 | 37.36 | 42.72 | 45.97 | 47.49 | 48.55 |
| TSM-ST | - | 39.71 | 43.73 | 45.63 | 47.31 | 47.71 | 48.01 | 48.55 |
| TSM-FFN(2) | 4/16 | 41.69 | 42.07 | 37.93 | 46.11 | 48.10 | 49.37 | 49.79 |
| TSM-FFN(3) | 4/8/16 | 42.85 | 46.57 | 48.20 | 48.81 | 48.90 | 50.47 | 50.79 |
| TSM-FFN(4) | 4/8/12/16 | 43.40 | 46.51 | 48.66 | 48.92 | 49.77 | 50.11 | 50.63 |
Table 8. Quantitative results of the different-architectures experiments on Something-Something V1. The best results are bold-faced. Numbers are Top-1 Acc. (%).

| Method | v_L | v_M | v_H |
| TSM [18] | 20.60 | 37.36 | 48.55 |
| TSM-ST | 39.71 | 45.63 | 48.55 |
| TSM-FFN | 42.85 (3.14↑) | 48.20 (2.57↑) | 50.79 (2.24↑) |
| TEA [17] | 21.78 | 41.49 | 51.23 |
| TEA-ST | 41.36 | 48.37 | 51.23 |
| TEA-FFN | 44.97 (3.61↑) | 51.61 (3.24↑) | 54.04 (2.81↑) |
| SlowFast [6] | 15.08 | 35.08 | 45.88 |
| SlowFast-ST | 39.91 | 44.12 | 45.88 |
| SlowFast-FFN | 43.90 (3.99↑) | 47.11 (2.99↑) | 47.27 (1.39↑) |
| Uniformer [16] | 22.38 | 47.98 | 56.71 |
| Uniformer-ST | 44.33 | 51.49 | 56.71 |
| Uniformer-FFN | 51.41 (7.08↑) | 56.64 (5.15↑) | 58.88 (2.17↑) |

Table 9. Quantitative results of the different-datasets experiments on TSM. The best results are bold-faced. Numbers are Top-1 Acc. (%).

| Method | Dataset | v_L | v_M | v_H |
| TSM [18] | Sth-Sth V2 | 31.52 | 51.55 | 61.02 |
| TSM-ST | Sth-Sth V2 | 53.38 | 59.29 | 61.02 |
| TSM-FFN | Sth-Sth V2 | 56.07 (2.69↑) | 61.86 (2.57↑) | 63.61 (2.59↑) |
| TSM [18] | Kinetics400 | 64.10 | 69.77 | 73.16 |
| TSM-ST | Kinetics400 | 66.25 | 70.38 | 73.16 |
| TSM-FFN | Kinetics400 | 68.96 (2.71↑) | 72.33 (1.95↑) | 74.35 (1.19↑) |
| TSM [18] | HMDB51 | 42.16 | 46.38 | 48.30 |
| TSM-ST | HMDB51 | 44.74 | 46.77 | 48.30 |
| TSM-FFN | HMDB51 | 45.67 (0.93↑) | 47.67 (0.90↑) | 48.80 (0.50↑) |
[1] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
[2] Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? A new model and the Kinetics dataset. In CVPR, 2017.
[3] François Chollet. Xception: Deep learning with depthwise separable convolutions. In CVPR, 2017.
[4] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
[5] Haoqi Fan, Bo Xiong, Karttikeya Mangalam, Yanghao Li, Zhicheng Yan, Jitendra Malik, and Christoph Feichtenhofer. Multiscale vision transformers. In ICCV, 2021.
[6] Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. SlowFast networks for video recognition. In ICCV, 2019.
[7] Raghav Goyal, Samira Ebrahimi Kahou, Vincent Michalski, Joanna Materzynska, Susanne Westphal, Heuna Kim, Valentin Haenel, Ingo Fruend, Peter Yianilos, Moritz Mueller-Freitag, et al. The "something something" video database for learning and evaluating visual common sense. In ICCV, 2017.
[8] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
[9] Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
[10] Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, and Kilian Q. Weinberger. Multi-scale dense networks for resource efficient image classification. arXiv preprint arXiv:1703.09844, 2017.
[11] Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al. The Kinetics human action video dataset. arXiv preprint arXiv:1705.06950, 2017.
[12] Bruno Korbar, Du Tran, and Lorenzo Torresani. SCSampler: Sampling salient clips from video for efficient action recognition. In ICCV, 2019.
[13] Hildegard Kuehne, Hueihan Jhuang, Estíbaliz Garrote, Tomaso Poggio, and Thomas Serre. HMDB: A large video database for human motion recognition. In ICCV, 2011.
[14] Solomon Kullback. Information Theory and Statistics. Courier Corporation, 1997.
[15] Duo Li, Anbang Yao, and Qifeng Chen. Learning to learn parameterized classification networks for scalable input images. In ECCV, 2020.
[16] Kunchang Li, Yali Wang, Junhao Zhang, Peng Gao, Guanglu Song, Yu Liu, Hongsheng Li, and Yu Qiao. UniFormer: Unifying convolution and self-attention for visual recognition. arXiv preprint arXiv:2201.09450, 2022.
[17] Yan Li, Bin Ji, Xintian Shi, Jianguo Zhang, Bin Kang, and Limin Wang. TEA: Temporal excitation and aggregation for action recognition. In CVPR, 2020.
[18] Ji Lin, Chuang Gan, and Song Han. TSM: Temporal shift module for efficient video understanding. In ICCV, 2019.
[19] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin Transformer: Hierarchical vision transformer using shifted windows. In ICCV, 2021.
[20] Ze Liu, Jia Ning, Yue Cao, Yixuan Wei, Zheng Zhang, Stephen Lin, and Han Hu. Video Swin Transformer. In CVPR, 2022.
[21] Yue Meng, Chung-Ching Lin, Rameswar Panda, Prasanna Sattigeri, Leonid Karlinsky, Aude Oliva, Kate Saenko, and Rogerio Feris. AR-Net: Adaptive frame resolution for efficient action recognition. In ECCV, 2020.
[22] Junting Pan, Ziyi Lin, Xiatian Zhu, Jing Shao, and Hongsheng Li. ST-Adapter: Parameter-efficient image-to-video transfer learning for action recognition. arXiv preprint arXiv:2206.13559, 2022.
[23] Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, and Iryna Gurevych. AdapterFusion: Non-destructive task composition for transfer learning. arXiv preprint arXiv:2005.00247, 2020.
[24] Jonas Pfeiffer, Andreas Rücklé, Clifton Poth, Aishwarya Kamath, Ivan Vulić, Sebastian Ruder, Kyunghyun Cho, and Iryna Gurevych. AdapterHub: A framework for adapting transformers. arXiv preprint arXiv:2007.07779, 2020.
[25] Ankit Singh, Omprakash Chakraborty, Ashutosh Varshney, Rameswar Panda, Rogerio Feris, Kate Saenko, and Abir Das. Semi-supervised action recognition with temporal contrastive learning. In CVPR, 2021.
[26] Yi-Lin Sung, Jaemin Cho, and Mohit Bansal. VL-Adapter: Parameter-efficient transfer learning for vision-and-language tasks. In CVPR, 2022.
[27] Hugo Touvron, Andrea Vedaldi, Matthijs Douze, and Hervé Jégou. Fixing the train-test resolution discrepancy. In NeurIPS, 2019.
[28] Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri. Learning spatiotemporal features with 3D convolutional networks. In ICCV, 2015.
[29] Limin Wang, Zhan Tong, Bin Ji, and Gangshan Wu. TDN: Temporal difference networks for efficient action recognition. In CVPR, 2021.
[30] Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang, and Luc Van Gool. Temporal segment networks: Towards good practices for deep action recognition. In ECCV, 2016.
[31] Yulin Wang, Zhaoxi Chen, Haojun Jiang, Shiji Song, Yizeng Han, and Gao Huang. Adaptive focus for efficient video recognition. arXiv preprint arXiv:2105.03245, 2021.
[32] Zuxuan Wu, Hengduo Li, Caiming Xiong, Yu-Gang Jiang, and Larry Steven Davis. A dynamic frame selection framework for fast video recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.
[33] Ceyuan Yang, Yinghao Xu, Bo Dai, and Bolei Zhou. Video representation learning with visual tempo consistency. arXiv preprint arXiv:2006.15489, 2020.
[34] Taojiannan Yang, Sijie Zhu, Chen Chen, Shen Yan, Mi Zhang, and Andrew Willis. MutualNet: Adaptive convnet via mutual learning from network width and resolution. In ECCV, 2020.
[35] Taojiannan Yang, Sijie Zhu, Matias Mendieta, Pu Wang, Ravikumar Balakrishnan, Minwoo Lee, Tao Han, Mubarak Shah, and Chen Chen. MutualNet: Adaptive convnet via mutual learning from different model configurations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
Universally slimmable networks and improved training techniques. Jiahui Yu, S Thomas, Huang, ICCV. 25Jiahui Yu and Thomas S Huang. Universally slimmable net- works and improved training techniques. In ICCV, 2019. 2, 3, 5
Jiahui Yu, Linjie Yang, Ning Xu, Jianchao Yang, Thomas Huang, arXiv:1812.08928Slimmable neural networks. arXiv preprintJiahui Yu, Linjie Yang, Ning Xu, Jianchao Yang, and Thomas Huang. Slimmable neural networks. arXiv preprint arXiv:1812.08928, 2018. 1, 2, 3, 5
Hongyi Zhang, Moustapha Cisse, David Yann N Dauphin, Lopez-Paz, arXiv:1710.09412mixup: Beyond empirical risk minimization. arXiv preprintHongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimiza- tion. arXiv preprint arXiv:1710.09412, 2017. 6
Minivit: Compressing vision transformers with weight multiplexing. Jinnian Zhang, Houwen Peng, Kan Wu, Mengchen Liu, Bin Xiao, Jianlong Fu, Lu Yuan, CVPR. 2022Jinnian Zhang, Houwen Peng, Kan Wu, Mengchen Liu, Bin Xiao, Jianlong Fu, and Lu Yuan. Minivit: Compressing vi- sion transformers with weight multiplexing. In CVPR, 2022. 3
Shufflenet: An extremely efficient convolutional neural network for mobile devices. Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, Jian Sun, CVPR. Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. Shufflenet: An extremely efficient convolutional neural net- work for mobile devices. In CVPR, 2018. 1
Look more but care less in video recognition. Yitian Zhang, Yue Bai, Huan Wang, Yi Xu, Yun Fu, arXiv:2211.099922022arXiv preprintYitian Zhang, Yue Bai, Huan Wang, Yi Xu, and Yun Fu. Look more but care less in video recognition. arXiv preprint arXiv:2211.09992, 2022. 3
| [
"https://github.com/BeSpontaneous/FFN."
]
|
[
"Two-component GW calculations: Cubic scaling implementation and comparison of vertex corrected and partially self-consistent GW variants",
"Two-component GW calculations: Cubic scaling implementation and comparison of vertex corrected and partially self-consistent GW variants"
]
| [
"Arno Förster \nTheoretical Chemistry\nSoftware for Chemistry and Materials NV\nVrije Universiteit\nDe Boelelaan 1083NL-1081 HV, 1081HVAmsterdam, AmsterdamNLThe Netherlands, The Netherlands\n",
"˚, ;Erik Van Lenthe [email protected] \nTheoretical Chemistry\nSoftware for Chemistry and Materials NV\nVrije Universiteit\nDe Boelelaan 1083NL-1081 HV, 1081HVAmsterdam, AmsterdamNLThe Netherlands, The Netherlands\n",
"Edoardo Spadetto \nTheoretical Chemistry\nSoftware for Chemistry and Materials NV\nVrije Universiteit\nDe Boelelaan 1083NL-1081 HV, 1081HVAmsterdam, AmsterdamNLThe Netherlands, The Netherlands\n",
"Lucas Visscher \nTheoretical Chemistry\nSoftware for Chemistry and Materials NV\nVrije Universiteit\nDe Boelelaan 1083NL-1081 HV, 1081HVAmsterdam, AmsterdamNLThe Netherlands, The Netherlands\n"
]
| [
"Theoretical Chemistry\nSoftware for Chemistry and Materials NV\nVrije Universiteit\nDe Boelelaan 1083NL-1081 HV, 1081HVAmsterdam, AmsterdamNLThe Netherlands, The Netherlands",
"Theoretical Chemistry\nSoftware for Chemistry and Materials NV\nVrije Universiteit\nDe Boelelaan 1083NL-1081 HV, 1081HVAmsterdam, AmsterdamNLThe Netherlands, The Netherlands",
"Theoretical Chemistry\nSoftware for Chemistry and Materials NV\nVrije Universiteit\nDe Boelelaan 1083NL-1081 HV, 1081HVAmsterdam, AmsterdamNLThe Netherlands, The Netherlands",
"Theoretical Chemistry\nSoftware for Chemistry and Materials NV\nVrije Universiteit\nDe Boelelaan 1083NL-1081 HV, 1081HVAmsterdam, AmsterdamNLThe Netherlands, The Netherlands"
]
| []
| We report an all-electron, atomic orbital (AO) based, two-component (2C) implementation of the GW approximation (GWA) for closed-shell molecules. Our algorithm is based on the space-time formulation of the GWA and uses analytical continuation of the self-energy, and pair-atomic density fitting (PADF) to switch between AO and auxiliary basis. By calculating the dynamical contribution to the GW self-energy at a quasi-one-component level, our 2C GW algorithm is only about a factor of two to three slower than in the scalar relativistic case. Additionally, we present a 2C implementation of the simplest vertex correction to the self-energy, the statically screened G3W2 correction. Comparison of first ionization potentials of a set of 67 molecules with heavy elements (a subset of the SOC81 set) calculated with our implementation against results from the WEST code reveals mean absolute deviations of around 70 meV for G0W0@PBE and G0W0@PBE0. These are most likely due to technical differences in both implementations, most notably the use of different basis sets, pseudopotential approximations, different treatment of the frequency dependency of the self-energy, and the choice of the 2C-Hamiltonian. However, how much each of these differences contributes to the observed discrepancies is unclear at the moment. Finally, we assess the performance of some (partially self-consistent) variants of the GWA for the calculation of first IPs by comparison to vertical experimental reference values. G0W0@PBE0 (25 % exact exchange) and G0W0@BHLYP (50 % exact exchange) perform best with mean absolute deviations (MAD) of about 200 meV. Explicit treatment of spin-orbit effects at the 2C level is crucial for systematic agreement with experiment. On the other hand, eigenvalue-only self-consistent GW (evGW) and quasi-particle self-consistent GW (qsGW) significantly overestimate the IPs. Perturbative G3W2 corrections increase the IPs and therefore improve the agreement with experiment in cases where G0W0 alone underestimates the IPs. With a MAD of only 140 meV, 2C-G0W0@PBE0 + G3W2 is in best agreement with the experimental reference values.
"https://export.arxiv.org/pdf/2303.09979v2.pdf"
]
| 258,741,072 | 2303.09979 | 2ad2d7cef0405aa5671e36caf24efee37c05c420 |
Two-component GW calculations: Cubic scaling implementation and comparison of vertex corrected and partially self-consistent GW variants
Arno Förster
Theoretical Chemistry
Software for Chemistry and Materials NV
Vrije Universiteit
De Boelelaan 1083, NL-1081 HV Amsterdam, The Netherlands
Erik Van Lenthe [email protected]
Theoretical Chemistry
Software for Chemistry and Materials NV
Vrije Universiteit
De Boelelaan 1083, NL-1081 HV Amsterdam, The Netherlands
Edoardo Spadetto
Theoretical Chemistry
Software for Chemistry and Materials NV
Vrije Universiteit
De Boelelaan 1083, NL-1081 HV Amsterdam, The Netherlands
Lucas Visscher
Theoretical Chemistry
Software for Chemistry and Materials NV
Vrije Universiteit
De Boelelaan 1083, NL-1081 HV Amsterdam, The Netherlands
Two-component GW calculations: Cubic scaling implementation and comparison of vertex corrected and partially self-consistent GW variants
We report an all-electron, atomic orbital (AO) based, two-component (2C) implementation of the GW approximation (GWA) for closed-shell molecules. Our algorithm is based on the space-time formulation of the GWA and uses analytical continuation of the self-energy, and pair-atomic density fitting (PADF) to switch between AO and auxiliary basis. By calculating the dynamical contribution to the GW self-energy at a quasi-one-component level, our 2C GW algorithm is only about a factor of two to three slower than in the scalar relativistic case. Additionally, we present a 2C implementation of the simplest vertex correction to the self-energy, the statically screened G3W2 correction. Comparison of first ionization potentials of a set of 67 molecules with heavy elements (a subset of the SOC81 set) calculated with our implementation against results from the WEST code reveals mean absolute deviations of around 70 meV for G0W0@PBE and G0W0@PBE0. These are most likely due to technical differences in both implementations, most notably the use of different basis sets, pseudopotential approximations, different treatment of the frequency dependency of the self-energy, and the choice of the 2C-Hamiltonian. However, how much each of these differences contributes to the observed discrepancies is unclear at the moment. Finally, we assess the performance of some (partially self-consistent) variants of the GWA for the calculation of first IPs by comparison to vertical experimental reference values. G0W0@PBE0 (25 % exact exchange) and G0W0@BHLYP (50 % exact exchange) perform best with mean absolute deviations (MAD) of about 200 meV. Explicit treatment of spin-orbit effects at the 2C level is crucial for systematic agreement with experiment. On the other hand, eigenvalue-only self-consistent GW (evGW) and quasi-particle self-consistent GW (qsGW) significantly overestimate the IPs. Perturbative G3W2 corrections increase the IPs and therefore improve the agreement with experiment in cases where G0W0 alone underestimates the IPs. With a MAD of only 140 meV, 2C-G0W0@PBE0 + G3W2 is in best agreement with the experimental reference values.
Introduction
Due to its favorable price-to-performance ratio, the GW approximation (GWA) 1,2 (G: single-particle Green's function, W: screened electron-electron interaction) is one of the most popular methods for the calculation of charged excitations in finite systems. 3,4 Over the last decade, the GWA has been implemented into a large number of electronic structure codes 5-20 and GW implementations for massively parallel architectures, 17,21-24 low-order scaling implementations, 15,16,18,19,25 effectively linear scaling stochastic formulations, 26,27 fragment-based approaches 28-31 or embedding techniques 32-34 have enabled applications of the GW method to large biomolecules, 16,35 nanostructures 24,31,36 or interfaces. 24 A large number of studies has by now contributed to a thorough understanding of the impact of technical aspects of these implementations, like the choice of single-particle basis, pseudopotential (PP) approximations, or frequency treatment, 16,37-41 as well as the performance of various GW approaches for the first ionization potentials (IP) and electron affinities (EA) of weakly correlated organic molecules. 42-49 More recently, the GWA has also been benchmarked for core excitations 50-54 and strongly correlated systems like open-shell molecules 55 or transition metal compounds with partially filled 3d shells. 56-62 Fully self-consistent GW (scGW) calculations are relatively expensive, technically demanding, and not necessarily very accurate for the calculation of IPs and EAs. 43,46,48 Instead, the much cheaper perturbative G0W0 approach 63,64 or its eigenvalue-only self-consistent extension (evGW) is typically the method of choice. Despite their often excellent accuracy, these methods fail when the KS orbitals for which the GW corrections are evaluated are qualitatively wrong. 35,44,46 In the quasi-particle self-consistent GW method (qsGW), 65-67 the frequency-dependent and non-Hermitian GW self-energy is mapped self-consistently to an effective static and Hermitian non-local potential which is a functional of the non-interacting single-particle Green's function. Therefore, the results are strictly independent of the KS density functional which is used as starting point for the calculation. 16,35 The available benchmark data suggest that for molecules qsGW is at least as accurate as G0W0. 20,49,68 Less is known about the accuracy of the GWA for molecules containing heavier elements.
One reason for this is that for those systems only a limited number of accurate first-principles results are available. 69,70 Another reason is that comparison to experimental data is complicated by spin-orbit coupling (SOC), whose explicit treatment requires implementing the GWA in a two-component (2C) framework. While Aryasetiawan and coworkers have generalized Hedin's equations to spin-dependent interactions 71,72 more than a decade ago, only a few 2C implementations of the GWA for molecules have been realized so far. 73-77 Probably the most systematic study of SOC effects in molecules has been performed by Scherpelz and Govoni 74 who compiled a set of 81 molecules containing heavy elements (referred to as SOC81 in the following). 74 They performed two-component (2C) GW@PBE 78 and GW@PBE0 79,80 calculations for this set using the WEST code 21,24 and found that SOC can shift scalar relativistic (1C) first ionization potentials by up to 400 meV for molecules containing iodine. 74 Interestingly, they observed that the 1C results were often closer to experiment than the 2C ones. Also, the fact that GW@PBE and GW@PBE0 are not necessarily very accurate for molecules 46,48,81,82 suggests that the good performance of those methods for these systems might at least partially be due to fortuitous error cancellation.
The accuracy of G 0 W 0 calculations based on starting points with a higher fraction of exact exchange has however not been systematically investigated for molecules containing heavy elements. Also, little is known about the performance of partially self-consistent approaches.
In efforts to improve on the GW approximation, the role of higher-order terms in the expansion of the electronic self-energy in terms of W (vertex corrections) has also been assessed over the last years for small and medium molecules. 45,49,81,83-88 The available results suggest that they generally fail to improve consistently over the best available GW variants when they are combined with QP approximations. 49,89,90 However, they can remove some of the starting point dependence of G0W0 81,87 and often tremendously improve the description of electron affinities. 84,91 With the exception of one recent study which focused on first-row transition metal oxides, 87 the available benchmark results are limited to charged valence excitations in mostly organic molecules. It is not known how these methods perform for molecules containing heavier elements, where electron correlation and screening effects might be stronger.
In this work, we address some of these open questions. We present systematic benchmarks of 2C-GWA at different levels of self-consistency, ranging from G 0 W 0 to qsGW . We also investigate the effect of the statically screened G3W 2 term 49 on the QP energies in a 2C framework. Our calculations are performed using a newly developed 2C (qs)GW implementation, a generalization of our atomic orbital based qsGW and G 0 W 0 algorithms. 15,16 Our 2C implementation retains the same favorable scaling with system size and increases the prefactor of the calculations by only a factor of two compared to the 1C case. This relatively small increase in computational effort is achieved by calculating the dynamical contributions to the electron self-energy at a quasi-one-component level. Therefore, our new implementation also allows us to describe SOC effects in large molecules. All other quantities, including the polarizability, are treated at the full 2C level without any further approximations.
The remainder of this paper is organized as follows: In section 2, we review the 2C-GW working equations and give a detailed overview of our implementation. After describing the details of our calculations in section 3, we report the results of our detailed benchmark calculations in section 4: First, to assess the influence of the different technical parameters in both implementations, we compare G0W0@PBE0 IPs for SOC81 to the ones from Scherpelz and Govoni. 74 We then use our new implementation to calculate the first ionization potentials of the molecules in the SOC81 database using some of the most accurate available GW approaches: qsGW, eigenvalue-only self-consistent GW (evGW), eigenvalue-only self-consistent GW with fixed screened interaction after the first iteration (evGW0), and G0W0 based on hybrid starting points with different fractions of exact exchange. Finally, section 5 summarizes and concludes this work.
Theory
GW approximation and G3W 2 correction
The central object of this work is the GW+G3W2 self-energy,

$\Sigma^{GW+G3W_2}(1,2) = \Sigma^{H}(1,2) + \Sigma^{GW}(1,2) + \Sigma^{G3W_2}(1,2)\,. \quad (1)$
Here,
$\Sigma^{H}(1,2) = v_H(1)\,\delta(1,2) = -i\,\delta(1,2)\int d3\; v_c(1,3)\, G(3,3^+)\,, \quad (2)$

with the Hartree potential $v_H$,
$\Sigma^{GW}(1,2) = i\, G(1,2)\, W(1,2) \quad (3)$

and

$\Sigma^{G3W_2}(1,2) = -\int d3\, d4\; G(1,3)\, W(1,4)\, G(3,4)\, G(4,2)\, W(3,2)\,. \quad (4)$
Space, spin, and imaginary time indices are collected as $1 = (r_1, \sigma_1, i\tau_1)$. W is the screened Coulomb interaction, which is obtained from the Dyson equation
W p1, 2q " W p0q p1, 2q`ż d3d4W p0q p1, 3qP p0q p3, 4qW p4, 2q .(5)
Here,
$W^{(0)}(1,2) = v_c(r_1, r_2)\,\delta_{\sigma_1\sigma_2}\,\delta(t_1 - t_2)\,, \quad (6)$
is the bare Coulomb interaction and $P^{(0)}$ is the polarizability in the random phase approximation (RPA),
$P^{(0)}(1,2) = -i\, G(1,2)\, G(2,1)\,. \quad (7)$
Finally, G is the interacting single-particle Green's function, which is connected to its noninteracting counterpart $G^{(0)}$ by a Dyson equation with the electronic self-energy (1) as its kernel,

$G(1,2) = G^{(0)}(1,2) + \int d3\, d4\; G^{(0)}(1,3)\,\Sigma(3,4)\, G(4,2)\,. \quad (8)$
If necessary, one can transform all quantities to imaginary frequency using the Laplace transform 92

$f(i\omega) = -i \int d\tau\; F(i\tau)\, e^{i\omega\tau}\,. \quad (9)$
The self-consistent solution of eqs. (3), (5), (7) and (8) is referred to as GW approximation.
Typically, (8) is approximated. To this end, one defines an auxiliary Green's function $G^{(s)}$ which is related to $G^{(0)}$ by

$G^{(s)}(1,2) = G^{(0)}(1,2) + \int d3\, d4\; G^{(0)}(1,3)\, v_{Hxc}(3,4)\, G^{(s)}(4,2)\,, \quad (10)$

where $v_{Hxc}$ is a (potentially local) generalized Kohn-Sham 93-95 Hartree-exchange-correlation potential. G is then obtained from $G^{(s)}$ by

$G(1,2) = G^{(s)}(1,2) + \int d3\, d4\; G^{(s)}(1,3)\left[\Sigma_{Hxc}(3,4) - v_{Hxc}(3,4)\right] G(4,2)\,. \quad (11)$
In the basis of molecular orbitals (MO) $\{\phi_k\}$, $G^{(s)}$ is diagonal,

$G^{(s)}_{pp'}(i\tau) = \Theta(i\tau)\, G^{>}_{pp'}(i\tau) - \Theta(-i\tau)\, G^{<}_{pp'}(i\tau)\,, \quad (12)$
with greater and lesser propagators being defined as

$G^{>}_{pp'}(i\tau) = -i\,\Theta(\epsilon_p)\, e^{-\epsilon_p \tau} \quad (13)$

and

$G^{<}_{pp'}(i\tau) = -i\,\Theta(-\epsilon_p)\, e^{-\epsilon_p \tau}\,. \quad (14)$
Here, it is understood that all QP energies $\epsilon_k$ and KS eigenvalues $\epsilon^{KS}_k$ are measured relative to the chemical potential $\mu$, which we place in the middle of the HOMO-LUMO gap. $\Theta$ is the Heaviside step function and $p, q, r, s, \dots$ denote spinors. Under the assumption that the KS eigenstates are a good approximation to the GW eigenstates, the off-diagonal elements of the operator $\Sigma_{Hxc} - v_{Hxc}$ in (11) can be neglected. This leads to

$[\Sigma_{xc}]_{pp}(\epsilon_p) - [v_{xc}]_{pp} = \epsilon_p - \epsilon^{KS}_p\,. \quad (15)$
Solving this equation as a perturbative correction is referred to as G0W0, while in evGW, eqs. (3), (5), (7) and (15) are solved self-consistently instead.
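To make the fixed-point character of eq. (15) concrete, the following minimal Python sketch iterates the diagonal QP equation to self-consistency. The callable `sigma_c`, the toy one-pole self-energy, and all parameter names are hypothetical and only illustrate the structure of a G0W0 update; the production code works with analytically continued self-energy matrix elements instead.

```python
# Minimal sketch: fixed-point solution of the diagonal QP equation (15).
import numpy as np

def solve_qp(eps_ks, sigma_c, sigma_x, v_xc, tol=1e-6, max_iter=100):
    """Iterate eps = eps_ks + sigma_x + Re sigma_c(eps) - v_xc to a fixed point."""
    eps = eps_ks  # start from the KS eigenvalue
    for _ in range(max_iter):
        eps_new = eps_ks + sigma_x + sigma_c(eps).real - v_xc
        if abs(eps_new - eps) < tol:
            return eps_new
        eps = eps_new
    return eps  # not converged; production codes would damp or bisect

# toy one-pole correlation self-energy (illustration only)
eps_qp = solve_qp(-0.5, lambda w: 0.1 / (w - 1.0 + 0.01j), -0.3, -0.25)
```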
Splitting the operator $\Sigma_{Hxc} - v_{Hxc}$ in (11) into Hermitian and anti-Hermitian part and discarding the latter one, the solution of (11) can be restricted to its QP part only. 96-99 Restricting the self-energy further to its static limit, a single-particle problem similar to the KS equations is obtained,

$\sum_q \left\{ [\Sigma^{H}_{Hxc}]_{pq} - [v_{Hxc}]_{pq} \right\} \phi_q(r) = \left(\epsilon_p - \epsilon^{KS}_p\right)\phi_p(r)\,, \quad (16)$

where $\Sigma^H = \tfrac{1}{2}\left(\Sigma + \Sigma^{\dagger}\right)$ denotes the Hermitian part of the self-energy. Solving eqs. (3), (5), (7) and (16) self-consistently is referred to as the qsGW 65-67 approximation. 100 There are many possible ways to construct the qsGW Hamiltonian. 67,101-104 In our implementation, we use the expression

$\left[\Sigma^{(GW)}(\{\epsilon_n\})\right]_{pq} = \begin{cases} \left[\Sigma^{(GW)}(\epsilon_p)\right]_{pq} & p = q \\ \left[\Sigma^{(GW)}(\tilde{\epsilon})\right]_{pq} & \text{else}\,, \end{cases} \quad (17)$
with˜ " 0. If, as in our implementation, 16 the self-energy on the real frequency axis is calculated via analytical continuation (AC), eq. (17) is numerically more stable 16,105 than constructions of the qsGW Hamiltonian in which also the off-diagonal elements are evaluated at the QP energies. 10,67 Kramers-restricted two-component formalism
Kramers-restricted two-component formalism

Recently, a 2C implementation of the GWA for Kramers-unrestricted systems has been presented by Holzer with $O(N^4)$ scaling with system size. 77 In this work we will focus on applications to closed-shell molecules with no internal or external magnetic fields. This allows us to simplify the treatment considerably, as it is possible to define a Kramers-restricted set of spinors in which pairs of spinors are related by time-reversal symmetry.
We expand each molecular spinor in a primary basis of atomic orbitals (AO), $\{\chi_\mu\}_{\mu = 1,\dots,N_{bas}}$, as

$\phi_k(r) = \begin{pmatrix} \phi^{\uparrow}_k(r) \\ \phi^{\downarrow}_k(r) \end{pmatrix} = \sum_\mu \begin{pmatrix} b_{k\uparrow\mu}\,\chi_\mu(r) \\ b_{k\downarrow\mu}\,\chi_\mu(r) \end{pmatrix} = \sum_\mu \begin{pmatrix} (b^R_{k\uparrow\mu} + i\, b^I_{k\uparrow\mu})\,\chi_\mu(r) \\ (b^R_{k\downarrow\mu} + i\, b^I_{k\downarrow\mu})\,\chi_\mu(r) \end{pmatrix}\,, \quad (18)$
where $\uparrow$ ($\sigma = \tfrac{1}{2}$) and $\downarrow$ ($\sigma = -\tfrac{1}{2}$) denote the different projections of spin on the z-axis. Each spinor $\phi_k$ can be related by the time-reversal symmetry or Kramers' operator $\hat{K}$ to a Kramers' partner $\phi_{\bar{k}}$ with the same energy, $\epsilon_k = \epsilon_{\bar{k}}$,

$\hat{K}\phi_k = \hat{K}\begin{pmatrix} \phi^{\uparrow}_k(r) \\ \phi^{\downarrow}_k(r) \end{pmatrix} = \begin{pmatrix} -\phi^{\downarrow *}_k(r) \\ \phi^{\uparrow *}_k(r) \end{pmatrix} = \begin{pmatrix} -\phi^{\downarrow,R}_k(r) + i\,\phi^{\downarrow,I}_k(r) \\ \phi^{\uparrow,R}_k(r) - i\,\phi^{\uparrow,I}_k(r) \end{pmatrix} = \phi_{\bar{k}}\,. \quad (19)$
Using quaternion algebra it is possible to reduce the dimension of the matrices that need to be considered to half the original size. 106 Alternatively, one may keep the full dimension, but use the spinor pairing to define matrices as either real or imaginary. We will take the latter approach in this work. Denoting pairs of spinors with $(p, \bar{p})$, noting that $\hat{K}\phi_{\bar{p}} = -\phi_p$, and transforming a purely imaginary diagonal operator A that obeys $A_{pp} = A_{\bar{p}\bar{p}}$, we can deduce

$A_{\mu\nu,\uparrow\uparrow} = \sum_p b_{p\uparrow\mu}\, A_{pp}\, b^{*}_{p\uparrow\nu} + \sum_p b_{\bar{p}\uparrow\mu}\, A_{\bar{p}\bar{p}}\, b^{*}_{\bar{p}\uparrow\nu} = -A^{*}_{\mu\nu,\downarrow\downarrow}$
$A_{\mu\nu,\downarrow\uparrow} = \sum_p b_{p\downarrow\mu}\, A_{pp}\, b^{*}_{p\uparrow\nu} + \sum_p b_{\bar{p}\downarrow\mu}\, A_{\bar{p}\bar{p}}\, b^{*}_{\bar{p}\uparrow\nu} = A^{*}_{\mu\nu,\uparrow\downarrow}\,. \quad (20)$
It is convenient to split this operator into real and imaginary components, and we use the character of the MO coefficient products to label real (superscript R) and imaginary (superscript I) parts of the operator,

$A^R_{\mu\nu,\sigma\sigma'} = \sum_p b^R_{p\sigma\mu} A_{pp} b^R_{p\sigma'\nu} + \sum_p b^R_{\bar{p}\sigma\mu} A_{\bar{p}\bar{p}} b^R_{\bar{p}\sigma'\nu} + \sum_p b^I_{p\sigma\mu} A_{pp} b^I_{p\sigma'\nu} + \sum_p b^I_{\bar{p}\sigma\mu} A_{\bar{p}\bar{p}} b^I_{\bar{p}\sigma'\nu} \quad (21)$

and

$A^I_{\mu\nu,\sigma\sigma'} = \sum_p b^R_{p\sigma\mu} A_{pp} b^I_{p\sigma'\nu} + \sum_p b^R_{\bar{p}\sigma\mu} A_{\bar{p}\bar{p}} b^I_{\bar{p}\sigma'\nu} - \sum_p b^I_{p\sigma\mu} A_{pp} b^R_{p\sigma'\nu} - \sum_p b^I_{\bar{p}\sigma\mu} A_{\bar{p}\bar{p}} b^R_{\bar{p}\sigma'\nu}\,. \quad (22)$
The time-ordered single-particle Green's function fulfills eq. (20) and therefore in AO basis obeys the relations

$G^{\gtrless}_{\mu\nu,\uparrow\uparrow}(i\tau) = -G^{\gtrless\,*}_{\mu\nu,\downarrow\downarrow}(i\tau)\,, \qquad G^{\gtrless}_{\mu\nu,\uparrow\downarrow}(i\tau) = G^{\gtrless\,*}_{\mu\nu,\downarrow\uparrow}(i\tau)\,. \quad (23)$
It is sometimes also convenient to re-express these quantities in a spin matrix basis. We then get (denoting the unit matrix as 0, and the Pauli spin matrices as x, y and z)

$G^{\gtrless,0}_{\mu\nu}(i\tau) = G^{\gtrless}_{\mu\nu,\uparrow\uparrow}(i\tau) + G^{\gtrless}_{\mu\nu,\downarrow\downarrow}(i\tau) = 2\, G^{\gtrless,R}_{\mu\nu,\uparrow\uparrow}(i\tau)$
$G^{\gtrless,x}_{\mu\nu}(i\tau) = G^{\gtrless}_{\mu\nu,\uparrow\downarrow}(i\tau) + G^{\gtrless}_{\mu\nu,\downarrow\uparrow}(i\tau) = 2\, G^{\gtrless,I}_{\mu\nu,\uparrow\downarrow}(i\tau)$
$G^{\gtrless,y}_{\mu\nu}(i\tau) = i\, G^{\gtrless}_{\mu\nu,\uparrow\downarrow}(i\tau) - i\, G^{\gtrless}_{\mu\nu,\downarrow\uparrow}(i\tau) = 2i\, G^{\gtrless,R}_{\mu\nu,\uparrow\downarrow}(i\tau)$
$G^{\gtrless,z}_{\mu\nu}(i\tau) = G^{\gtrless}_{\mu\nu,\uparrow\uparrow}(i\tau) - G^{\gtrless}_{\mu\nu,\downarrow\downarrow}(i\tau) = 2\, G^{\gtrless,I}_{\mu\nu,\uparrow\uparrow}(i\tau)\,, \quad (24)$
which more clearly shows the relation to 1-component theories in which only the first Green's function has a non-zero value.
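The mapping of eq. (24) is a simple linear recombination of the four spin blocks. A short sketch (array names are generic; each argument is one spin block of $G^{\gtrless}$ in the AO basis):

```python
import numpy as np

def pauli_components(g_uu, g_ud, g_du, g_dd):
    """Recombine spin blocks into the Pauli basis, following eq. (24)."""
    g0 = g_uu + g_dd
    gx = g_ud + g_du
    gy = 1j * (g_ud - g_du)
    gz = g_uu - g_dd
    return g0, gx, gy, gz
```

In the 1C limit only `g0` is non-zero, which is what the quasi-one-component treatment of the self-energy below exploits.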
Polarizability in imaginary time
We next consider the polarizability. 71,72,107 Whereas in the complete formalism of Aryasetiawan and Biermann 71 the polarizability includes the response of the charge density to magnetic fields as well as the induction of current densities, both of these are considered strictly zero in a Kramers-restricted formalism. We can then define the relevant part of the polarizability in AO basis as
$P^{(0)}_{\mu\nu\sigma,\kappa\lambda\sigma'}(i\tau) = -i\, G^{>}_{\mu\sigma,\kappa\sigma'}(i\tau)\, G^{<}_{\nu\sigma',\lambda\sigma}(-i\tau)\,. \quad (25)$
Kramers symmetry implies

$\sum_{\sigma,\sigma' = \uparrow,\downarrow} i\, G^{>,I}_{\mu\sigma,\kappa\sigma'}(i\tau)\, G^{<,R}_{\nu\sigma',\lambda\sigma}(-i\tau) + i\, G^{>,R}_{\mu\sigma,\kappa\sigma'}(i\tau)\, G^{<,I}_{\nu\sigma',\lambda\sigma}(-i\tau) = 0\,, \quad (27)$

as well as

$P^{(0)}_{\mu\nu\uparrow,\kappa\lambda\uparrow}(i\tau) = P^{(0)}_{\mu\nu\downarrow,\kappa\lambda\downarrow}(i\tau)\,, \qquad P^{(0)}_{\mu\nu\uparrow,\kappa\lambda\downarrow}(i\tau) = P^{(0)}_{\mu\nu\downarrow,\kappa\lambda\uparrow}(i\tau)\,. \quad (28)$

We prove these relations in appendix A. Already in the primary AO basis this would reduce the number of matrix elements that are to be calculated considerably. Further efficiency can be gained by expanding the polarizability and the Coulomb potential in a basis of auxiliary functions $\{f_\alpha\}_{\alpha = 1,\dots,N_{aux}}$, with products of primary basis functions being expressed as
$\chi_\mu(r)\,\chi_\nu(r) = \sum_\alpha c_{\mu\nu\alpha}\, f_\alpha(r)\,. \quad (29)$
To calculate the fitting coefficients, we use the pair-atomic density fitting (PADF) method 108-113 in the implementation of ref. 114. The following working equations are however completely general and can be implemented using any type of density fitting (DF). For instance, global density fitting using the overlap kernel 115 (also known as RI-SVS) or the attenuated Coulomb kernel, 116,117 which have already been used to achieve low-scaling GW implementations, 18,19 would be suitable choices as well.
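As a generic illustration of the fitting step behind eq. (29), the sketch below solves a global density-fitting problem with an overlap-type metric (in the spirit of RI-SVS); the PADF method used in this work instead restricts each AO pair to auxiliary functions on the two parent atoms, which is not reproduced here. All array names are assumptions.

```python
import numpy as np

def fit_coefficients(s_aux, t):
    """Solve s_aux @ c[mu,nu,:] = t[mu,nu,:] for all AO pairs, where
    s_aux[a,b] = <f_a|f_b> and t[mu,nu,a] = <chi_mu chi_nu|f_a>."""
    nbas, _, naux = t.shape
    c = np.linalg.solve(s_aux, t.reshape(-1, naux).T).T
    return c.reshape(nbas, nbas, naux)
```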
For the polarizability we can eliminate the explicit dependence on spin in the transformation to the auxiliary basis and work with the spin-summed form

$P^{(0)}_{\alpha\beta}(i\tau) = \sum_{\sigma\sigma' = \uparrow,\downarrow} c_{\mu\nu\alpha}\, P^{(0)}_{\mu\kappa\sigma,\nu\lambda\sigma'}(i\tau)\, c_{\kappa\lambda\beta}\,. \quad (30)$
Likewise we define spin-independent representations of the Coulomb potential and screened interaction in the auxiliary basis as

$v_{\alpha\beta} = \int dr\, dr'\; f_\alpha(r)\, v_c(r, r')\, f_\beta(r') \quad (31)$

$W_{\alpha\beta}(i\tau) = \int dr\, dr'\; f_\alpha(r)\, W(r, r', i\tau)\, f_\beta(r')\,. \quad (32)$
Our final expression for the polarizability is
P p0q αβ piτ q "´2ic µνα ! G ą R µκ,ÒÒ piτ qG ă R νλ,ÒÒ piτ q´G ą I µκ,ÒÒ piτ qG ă I νλ,ÒÒ piτ q G ą R µκ,ÒÓ piτ qG ă R νλ,ÒÓ piτ q´G ą I µκ,ÒÓ piτ qG ă I νλ,ÒÓ piτ q ) c κλβ ,(33)
or equivalently
P p0q αβ piτ q "´1 2 ic µνα ! G ą 0 µκ piτ qG ă 0 νλ piτ q´G ą x µκ piτ qG ă x νλ piτ q G ą y µκ piτ qG ă y νλ piτ q´G ą z µκ piτ qG ă z νλ piτ q ( c κλβ .(34)
The first term in this expression is equivalent to the corresponding one in the spin-restricted 1C formalism. 15 Evaluation of (33) or (34) is therefore exactly four times more expensive than in a scalar relativistic calculation. Equation (33) can be implemented with quadratic scaling with system size using PADF. 15
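For illustration, the contraction in eq. (34) can be written down directly with dense tensor algebra (a sketch; the production code avoids the O(N^4) intermediate and exploits the sparsity of the fit coefficients to reach quadratic scaling):

```python
import numpy as np

def polarizability_tau(c, g_gt, g_lt):
    """Sketch of eq. (34) at one imaginary time point. c[mu,nu,alpha] are
    fit coefficients; g_gt and g_lt are dicts holding the Pauli components
    '0', 'x', 'y', 'z' of the greater/lesser Green's function."""
    prod = np.einsum('mk,nl->mnkl', g_gt['0'], g_lt['0'])
    for s in ('x', 'y', 'z'):
        prod -= np.einsum('mk,nl->mnkl', g_gt[s], g_lt[s])
    return -0.5j * np.einsum('mna,mnkl,klb->ab', c, prod, c)
```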
Polarizability in imaginary frequency and MO basis
The AO based implementation of the polarizability is advantageous only for rather large molecules and is computationally inefficient for the molecules in the SOC81 database, which typically contain just a few, often heavy, atoms. The AO based algorithms become advantageous when the local nature of the atomic orbitals can be exploited. 15 This is only possible when the system is spatially extended and many functions in the basis set decay fast with the distance from the nucleus on which they are centered.
Especially for small systems with many heavy atoms, implementations in the canonical basis are much faster since in those systems the locality of the AO basis cannot be exploited.
For this reason we also implemented the polarizability in the MO representation. In the following, we will use $i, j, \dots$ to label occupied, and $a, b, \dots$ to label virtual orbitals. Using eq. (12) and these indices, eq. (25) becomes

$P^{(0)}_{aiai}(i\tau) = -i\,\Theta(\tau)\, e^{-(\epsilon_a - \epsilon_i)\tau} - i\,\Theta(-\tau)\, e^{-(\epsilon_i - \epsilon_a)\tau} \quad (35)$

in the MO basis. Using (9), the corresponding expression on the imaginary frequency axis is

$P^{(0)}_{aiai}(i\omega) = -\frac{1}{\epsilon_a - \epsilon_i - i\omega} - \frac{1}{\epsilon_a - \epsilon_i + i\omega}\,. \quad (36)$
Using the last equation on the r.h.s. of (18) and (29), we can write down a transformation from the auxiliary basis to the MO basis as

$\phi^{\dagger}_i(r)\,\phi_a(r) = \sum_\alpha c_{ia\alpha}\, f_\alpha(r)\,, \quad (37)$

with

$c_{ia\alpha} = \sum_{\mu\kappa} \left(b^{*}_{i\uparrow\mu} b_{a\uparrow\kappa} + b^{*}_{i\downarrow\mu} b_{a\downarrow\kappa}\right) c_{\mu\kappa\alpha} = c^{R}_{ia\alpha} + i\, c^{I}_{ia\alpha}$

$c^{R}_{ia\alpha} = \sum_{\mu\kappa} \left(b^R_{i\uparrow\mu} b^R_{a\uparrow\kappa} + b^I_{i\uparrow\mu} b^I_{a\uparrow\kappa} + b^R_{i\downarrow\mu} b^R_{a\downarrow\kappa} + b^I_{i\downarrow\mu} b^I_{a\downarrow\kappa}\right) c_{\mu\kappa\alpha}$

$c^{I}_{ia\alpha} = \sum_{\mu\kappa} \left(b^R_{i\uparrow\mu} b^I_{a\uparrow\kappa} - b^I_{i\uparrow\mu} b^R_{a\uparrow\kappa} + b^R_{i\downarrow\mu} b^I_{a\downarrow\kappa} - b^I_{i\downarrow\mu} b^R_{a\downarrow\kappa}\right) c_{\mu\kappa\alpha}\,. \quad (38)$
Using this expression, eq. (36) becomes

$P^{(0)}_{\alpha\beta}(i\omega) = c_{ai\alpha}\, P^{(0)}_{aiai}(i\omega)\, c_{ai\beta} = 2\left\{ c^R_{ia\alpha}\,\mathrm{Re}\,P^{(0)}_{aiai} - c^I_{ia\alpha}\,\mathrm{Im}\,P^{(0)}_{aiai} \right\} c^R_{ia\beta} + 2\left\{ c^R_{ia\alpha}\,\mathrm{Im}\,P^{(0)}_{aiai} + c^I_{ia\alpha}\,\mathrm{Re}\,P^{(0)}_{aiai} \right\} c^I_{ia\beta}\,. \quad (39)$
Screened interaction and self-energy
If necessary, the polarizability is transformed to the imaginary frequency axis, where the screened interaction is calculated in the basis of auxiliary functions using eq. (5),

$W_{\alpha\beta}(i\omega) = v_{\alpha\beta} + \sum_{\gamma\delta} v_{\alpha\gamma}\, P^{(0)}_{\gamma\delta}(i\omega)\, W_{\delta\beta}(i\omega)\,. \quad (40)$
For the evaluation of the self-energy, we partition the screened Coulomb interaction as
$\widetilde{W} = W - v\,. \quad (41)$
This allows us to use different approximations for the dynamical and static contributions to the self-energy. To evaluate the self-energy on the imaginary frequency axis, we first define the time-ordered self-energy 118

$\Sigma_{xc}(i\tau) = \Sigma_x + \Theta(\tau)\,\Sigma^{>}_c(i\tau) - \Theta(-\tau)\,\Sigma^{<}_c(i\tau)\,. \quad (42)$
Here, the greater and lesser components of the self-energy are given by

$[\Sigma^{\gtrless}_c]_{\mu\nu,\sigma\sigma'}(i\tau) = i\, G^{\gtrless}_{\kappa\lambda,\sigma\sigma'}(i\tau)\, c_{\mu\kappa\alpha}\, \widetilde{W}_{\alpha\beta}(i\tau)\, c_{\nu\lambda\beta}\,, \quad (43)$

and the singular contribution (Fock term) as

$[\Sigma_x]_{\mu\nu,\sigma\sigma'} = i\, G^{<}_{\kappa\lambda,\sigma\sigma'}(i\tau \to 0^-)\, c_{\mu\kappa\alpha}\, v_{\alpha\beta}\, c_{\nu\lambda\beta}\,. \quad (44)$
Dynamical contribution. In the basis of Pauli matrices, (43) can be expanded as

$[\Sigma^{\gtrless}_c]_{\mu\nu}(i\tau) = i \begin{pmatrix} G^{\gtrless,0}_{\kappa\lambda}(i\tau) + G^{\gtrless,z}_{\kappa\lambda}(i\tau) & G^{\gtrless,x}_{\kappa\lambda}(i\tau) - i\, G^{\gtrless,y}_{\kappa\lambda}(i\tau) \\ G^{\gtrless,x}_{\kappa\lambda}(i\tau) + i\, G^{\gtrless,y}_{\kappa\lambda}(i\tau) & G^{\gtrless,0}_{\kappa\lambda}(i\tau) - G^{\gtrless,z}_{\kappa\lambda}(i\tau) \end{pmatrix} c_{\mu\kappa\alpha}\, \widetilde{W}_{\alpha\beta}(i\tau)\, c_{\nu\lambda\beta}\,. \quad (45)$
In the correlation part of the self-energy we only calculate the contribution due to $G^{\gtrless,0}$, i.e., $G^{\gtrless,x}$, $G^{\gtrless,y}$ and $G^{\gtrless,z}$ are set to zero. Therefore, using (24), eq. (45) reduces to

$[\Sigma^{\gtrless}_c]_{\mu\nu}(i\tau) = 2i \begin{pmatrix} G^{\gtrless,R}_{\kappa\lambda,\uparrow\uparrow}(i\tau) & 0 \\ 0 & G^{\gtrless,R}_{\kappa\lambda,\uparrow\uparrow}(i\tau) \end{pmatrix} c_{\mu\kappa\alpha}\, \widetilde{W}_{\alpha\beta}(i\tau)\, c_{\nu\lambda\beta}\,. \quad (46)$
This quantity has the same form as in the 1C formalism and is evaluated in the same way as in our 1C implementation. 15 Notice also that $G^{\gtrless,R}$ carries a prefactor of $-i$ due to the definitions in eqs. (13) and (14). We Fourier transform (43) to the imaginary frequency axis using eq. (9), for which we follow the treatment of Liu et al. 119 From there, the self-energy is transformed back to the MO basis and analytically continued to real frequencies using the algorithm by Vidberg and Serene. 120 For details on the AC for G0W0 and qsGW we refer to our previous work. 15,16
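For reference, a minimal Thiele continued-fraction (Padé) interpolation in the spirit of the Vidberg-Serene algorithm is sketched below: it fits self-energy samples on the imaginary axis and returns a callable that can be evaluated at (complex or real) frequencies. This is a bare-bones sketch without the numerical safeguards a production AC needs.

```python
import numpy as np

def thiele_pade(z, f):
    """Pade interpolant through points (z_i, f_i) via Thiele's recursion."""
    n = len(z)
    g = np.zeros((n, n), dtype=complex)
    g[0] = f
    for i in range(1, n):
        g[i, i:] = (g[i - 1, i - 1] - g[i - 1, i:]) / (
            (z[i:] - z[i - 1]) * g[i - 1, i:])
    a = np.diag(g).copy()

    def evaluate(w):
        acc = np.zeros_like(np.asarray(w, dtype=complex))
        for i in range(n - 1, 0, -1):
            acc = a[i] * (w - z[i - 1]) / (1.0 + acc)
        return a[0] / (1.0 + acc)

    return evaluate
```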
Hartree-exchange contribution. Equation (44) is recovered from (45) by replacing $\widetilde{W}(i\tau)$ with $v_c$ and using $D = G^{<}(i\tau \to 0^-)$ instead of $G^{<}(i\tau)$. The resulting expression is identical to the ones typically implemented in 2C-Hartree-Fock codes, 121,122

$[\Sigma_x]_{\mu\nu} = \begin{pmatrix} D^{0}_{\kappa\lambda} + D^{z}_{\kappa\lambda} & D^{x}_{\kappa\lambda} - i\, D^{y}_{\kappa\lambda} \\ D^{x}_{\kappa\lambda} + i\, D^{y}_{\kappa\lambda} & D^{0}_{\kappa\lambda} - D^{z}_{\kappa\lambda} \end{pmatrix} c_{\mu\kappa\alpha}\, v_{\alpha\beta}\, c_{\nu\lambda\beta}\,, \quad (47)$
where the different components of D are obtained as the $i\tau \to 0$ limit of eq. (24). In qsGW, we also need to evaluate the block-diagonal Hartree contribution to the self-energy,

$[\Sigma_H]_{\mu\nu} = \begin{pmatrix} D^{0}_{\kappa\lambda} & 0 \\ 0 & D^{0}_{\kappa\lambda} \end{pmatrix} c_{\mu\nu\alpha}\, v_{\alpha\beta}\, c_{\kappa\lambda\beta}\,. \quad (48)$

The full qsGW Hamiltonian is then constructed according to eq. (17), and eq. (16) is solved in the MO basis from the previous iteration. The new set of MO expansion coefficients and QP energies is then used to evaluate eq. (24) in the next iteration.
The G3W 2 Correction
As explained in ref. 49, we evaluate the contribution of the G3W2 term to the self-energy as a perturbative correction to the solution of the GWA. Relying on the assumption that GW already gives rather accurate QP energies, we expand $\Sigma^{G3W_2}$ around the GW QP energies and obtain

$\epsilon^{GW+G3W_2}_p = \epsilon^{GW}_p + \Sigma^{G3W_2}_{pp}(\epsilon^{GW}_p) \quad (49)$
at zeroth order, where $\Sigma^{G3W_2}_{pp}$ is evaluated using the GW QP energies obtained from the solution of (15) or (16). We restrict ourselves to the statically screened G3W2 self-energy, which is obtained from (4) by replacing both $W(1,2)$ with $W(1,2)\delta(t_1 - t_2)$. 49 In terms of $G^{(s)}$ and in a basis of single-particle states (in case of G0W0 or evGW this would be the basis of KS states, in case of qsGW the basis of qsGW eigenstates), this term becomes 123

$\Sigma^{G3W_2}_{pp}(\epsilon_p) = \sum_i^{occ} \sum_{ab}^{virt} \frac{W(i\omega{=}0)_{paib}\, W(i\omega{=}0)_{aibp}}{\epsilon_a + \epsilon_b - \epsilon_i - \epsilon_p} - \sum_{ij}^{occ} \sum_a^{virt} \frac{W(i\omega{=}0)_{piaj}\, W(i\omega{=}0)_{iajp}}{\epsilon_a - \epsilon_i - \epsilon_j + \epsilon_p}\,, \quad (50)$
with

$W(i\omega{=}0)_{pqrs} = \int dr\, dr'\; \phi_p(r)\,\phi^{\dagger}_q(r)\, W(r, r', i\omega{=}0)\, \phi_r(r')\,\phi^{\dagger}_s(r')\,. \quad (51)$
Using the transformation eqs. (37) and (38) we write (51) as
$W(i\omega{=}0)_{pqrs} = \sum_\alpha d_{pq\alpha}\, c_{rs\alpha}\,, \quad (52)$
with
$d_{pq\alpha} = \sum_\beta c_{pq\beta}\, W(i\omega{=}0)_{\alpha\beta}\,. \quad (53)$
When complex matrix algebra is used, inserting this transformation into (50) increases the computational effort by a factor of 16 (notice that the denominator is always real) compared to the 1C case. To reduce the computational effort, we use real matrix algebra and define the intermediates

$W^{R/I,R/I}_{pqrs} = \sum_\alpha d^{R/I}_{pq\alpha}\, c^{R/I}_{rs\alpha}\,, \qquad e_{pqrs} = W^{R,R}_{pqrs} - W^{I,I}_{pqrs}\,, \qquad f_{pqrs} = W^{R,I}_{pqrs} + W^{I,R}_{pqrs}\,. \quad (54)$

The final self-energy correction (50) is then evaluated as
Σ G3W 2 pp p p q " occ ÿ i virt ÿ ab e paib e aibp´fpaib f aibp a` b´ i´ p´o cc ÿ ij virt ÿ a e piaj e iajp´fpiaj f iajp a´ i´ j` p .(55)
Here, the by far most expensive step is the calculation of the first four intermediates defined in the first equation of (54). Therefore, evaluating (55) is four times more expensive than the corresponding 1C expression.
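A dense sketch of eq. (55) using the intermediates e and f from eq. (54) (stored here as full four-index arrays, which is only feasible for very small systems; names and storage layout are assumptions):

```python
import numpy as np

def g3w2_correction(p, eps, no, e, f):
    """Statically screened G3W2 correction for level p, cf. eq. (55).
    e, f: real intermediates of eq. (54), shape (n, n, n, n);
    no: number of occupied spinors; eps: QP energies."""
    eo, ev = eps[:no], eps[no:]
    # first term, index order [a, i, b]
    num1 = (e[p, no:, :no, no:] * e[no:, :no, no:, p]
            - f[p, no:, :no, no:] * f[no:, :no, no:, p])
    den1 = ev[:, None, None] - eo[None, :, None] + ev[None, None, :] - eps[p]
    # second term, index order [i, a, j]
    num2 = (e[p, :no, no:, :no] * e[:no, no:, :no, p]
            - f[p, :no, no:, :no] * f[:no, no:, :no, p])
    den2 = ev[None, :, None] - eo[:, None, None] - eo[None, None, :] + eps[p]
    return (num1 / den1).sum() - (num2 / den2).sum()
```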
Computational Details
Choice of 2C-Hamiltonian
The 2C GW equations have been implemented in a locally modified development version of the Slater Type Orbital (STO) based ADF engine 124 within the Amsterdam Modeling Suite (AMS2022). 125 In principle, the implementation is independent of the particular choice of the 2C Hamiltonian. In this work, we use the zeroth-order regular approximation (ZORA) Hamiltonian by van Lenthe et al., 126-128 which can be written as 128
$h^{ZORA}_1(r) = \hat{h}^{ZORA,SR}_1(r) + \hat{h}^{ZORA,SO}_1(r)\,. \quad (56)$

The first term,

$\hat{h}^{ZORA,SR}_1(r) = v_{ext}(r) + \vec{p}\;\frac{c^2}{2c^2 - v_{ext}(r)}\;\vec{p}\,, \quad (57)$

describes scalar relativistic effects, and we use this Hamiltonian in all 1C calculations. The second term,

$\hat{h}^{ZORA,SO}_1(r) = \frac{c^2}{\left(2c^2 - v_{ext}(r)\right)^2}\;\vec{\sigma}\cdot\left(\nabla v_{ext}(r) \times \vec{p}\right)\,, \quad (58)$

accounts for SOC. We employ the Hamiltonian (56) in all of the following 2C calculations. We also tested two Hamiltonians obtained from an exact transformation of the 4-component Dirac equation to 2 components (X2C and RA-X2C, respectively; in the latter variant, a regular approach to calculate the transformation matrix is used). 129,130 In the X2C and RA-X2C methods implemented in ADF, first the 4-component Dirac equation for a model potential (MAPA) of the molecule is calculated for the given basis set, using the modified Dirac equation (MDE) by Dyall 131 for X2C, or using the regular approach 132 to the modified Dirac equation (RA-MDE) for RA-X2C. In the basis set limit the MDE and the RA-MDE should yield the same results for the model potential (MAPA), but using a finite basis set, the results for MDE and RA-MDE will differ. 133 In a next step, these 4-component equations are transformed to a 2C form. 134 We found that the particular choice of 2C Hamiltonian (ZORA, X2C or RA-X2C) only affects the final ionization potentials (IP) by a few 10 meV.
Basis Sets
In all calculations, we expand the spinors in (18) in all-electron STO basis sets of triple- and quadruple-ζ quality (TZ3P and QZ6P, respectively). 135 The STO type basis sets in ADF are restricted to a maximum angular momentum of l = 3, which complicates reaching the basis set limit for individual QP energies. 41,136 This is especially true for heavier elements with occupied d- or f-shells, where higher angular momenta functions are needed to polarize the basis. 137 The numerical atomic orbital (NAO) based BAND engine 138,139 of AMS can be used with basis functions of arbitrary angular momenta. To obtain converged QP energies we therefore augment our TZ3P and QZ6P basis sets with higher angular momenta functions and calculate scalar relativistic QP energies. In the choice of the higher angular momenta functions we follow the construction of the Sapporo-DKH3-(T,Q)ZP-2012 basis sets 140,141 for all elements in the fourth to the sixth row of the periodic table. In the following we denote these basis sets as TZ3P+ and QZ6P+. Except for the Lanthanides, where the highest angular momenta are l = 5 and l = 6, the augmented TZ (QZ) basis set typically contains basis functions with angular momentum up to l = 4 (l = 5) for elements beyond the third row. The basis set definitions are included in the supporting information.

To calculate our final QP energies we first calculate complete basis set (CBS) limit extrapolated scalar relativistic QP energies with the BAND code using the expression

$\epsilon^{CBS}_n = \frac{N^{QZ}_{bas}\,\epsilon_n(QZ6P{+}) - N^{TZ}_{bas}\,\epsilon_n(TZ3P{+})}{N^{QZ}_{bas} - N^{TZ}_{bas}}\,, \quad (61)$

where $\epsilon_n(QZ6P{+})$ ($\epsilon_n(TZ3P{+})$) denotes the value of the QP energy calculated with QZ6P+ (TZ3P+) and $N^{QZ}_{bas}$ and $N^{TZ}_{bas}$ denote the respective numbers of basis functions (in spherical harmonics, so that there are e.g. 5 d and 7 f functions). This expression is commonly used for the extrapolation of GW QP energies to the complete basis set limit for localized basis functions. 37 Spin-orbit corrections $\Delta^{2C}_n$ are then calculated with ADF using the QZ6P basis set,

$\Delta^{2C}_n(QZ6P) = \epsilon^{2C}_n(QZ6P) - \epsilon^{1C}_n(QZ6P)\,, \quad (62)$

and added to (61) to obtain our final 2C QP energies.
This choice is well justified since the major part of the correction to the KS QP energies comes from the scalar relativistic part of the GW correction. The spin-orbit correction and the G3W2 corrections are typically of the order of only a few hundred meV in magnitude (see also the explicit values in the supporting information). Therefore, even relatively large errors in these quantities will only have a minor effect on the final results.
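The composition of the final QP energies described above amounts to two small formulas, sketched here (the labeling of eqs. (61) and (62) follows the reconstruction above and should be read as illustrative):

```python
def cbs_extrapolate(eps_tz, n_tz, eps_qz, n_qz):
    """Two-point extrapolation linear in 1/N_bas, cf. eq. (61)."""
    return (n_qz * eps_qz - n_tz * eps_tz) / (n_qz - n_tz)

def final_2c_qp_energy(eps_cbs_1c, eps_2c_qz6p, eps_1c_qz6p):
    """Add the QZ6P spin-orbit correction of eq. (62) to the 1C CBS limit."""
    return eps_cbs_1c + (eps_2c_qz6p - eps_1c_qz6p)
```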
Technical Details
We perform G 0 W 0 calculations using PBE, PBE0 and BHLYP 142 orbitals and eigenvalues.
The latter functional contains 50 % of exact exchange, which is typically the optimal fraction for G0W0 QP energies for organic molecules. 44,82,143 evGW and qsGW calculations are performed starting from PBE0 orbitals and eigenvalues. In all calculations we set the numerical quality to VeryGood. 123 The auxiliary bases used to expand 4-point correlation functions are automatically generated from products of primary basis functions. For this, we use a variant of an algorithm introduced in ref. 113 which has recently been implemented in ADF and BAND. 114 The size of the auxiliary basis in this approach can be tuned by a single threshold which we set to $\epsilon_{aux} = 1 \times 10^{-10}$ in all partially self-consistent calculations and to $\epsilon_{aux} = 1 \times 10^{-8}$ for G0W0. This corresponds to a very large auxiliary basis which is typically around 12 times larger than the primary basis and eliminates PADF errors for relative energies of medium molecules almost completely. 114 Imaginary time and imaginary frequency variables are discretized using non-uniform bases $T = \{\tau_\alpha\}_{\alpha=1,\dots,N_\tau}$ and $W = \{\omega_\alpha\}_{\alpha=1,\dots,N_\omega}$ of sizes $N_\tau$ and $N_\omega$, respectively, tailored to each system. More precisely, (9) is implemented as
F piω α q " Ω pcq αβ F piτ β q (63) F piω α q " Ω psq αβ F piτ β q ,(64)
where F and F denote even and odd parts of F . The transformation from imaginary frequency to imaginary time only requires the (pseudo)inversion of Ω pcq and Ω psq , respectively.
Our procedure to calculate $\Omega^{(c)}$ and $\Omega^{(s)}$ as well as T and W follows Kresse and coworkers. 119,144,145 The technical specifications of our implementation have been described in the appendix of ref. 135.
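A sketch of how such transform matrices can be obtained, following the general idea of fitting the kernel on a set of model functions (the grids, model pole positions and least-squares construction here are illustrative, not the actual procedure of refs. 119, 144, 145):

```python
import numpy as np

def cosine_transform_matrix(omegas, taus, poles):
    """Fit Omega^(c) of eq. (63) such that exp(-x*tau) -> 2x/(x^2+omega^2)
    is reproduced in a least-squares sense for a dense grid of poles x."""
    k = np.exp(-np.outer(poles, taus))                    # time-domain models
    target = 2 * poles[:, None] / (poles[:, None] ** 2 + omegas[None, :] ** 2)
    coeffs, *_ = np.linalg.lstsq(k, target, rcond=None)   # k @ coeffs = target
    return coeffs.T                                       # shape (n_omega, n_tau)

# the inverse transform uses the pseudoinverse of the returned matrix
```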
Convergence acceleration
For the molecules in the SOC81 set, we have found that the evGW and evGW0 calculations converge within 5-8 iterations within an accuracy of a few meV when the DIIS implementation of ref. 146 is used. All evGW results presented in this work have been obtained using this DIIS implementation with a convergence criterion of 3 meV.

On the other hand, using our own DIIS implementation of ref. 16, the qsGW equations often do not converge for the systems in the SOC81 set. As discussed in the literature, 135,147 this issue is related to multiple QP solutions which seem to occur frequently in systems containing heavy elements. More sophisticated DIIS algorithms might offer a solution to this problem. 148 In addition to the switching between the QP peaks there is additional numerical strain which most likely arises from precision issues from the AC of the self-energy. Especially problematic are the off-diagonal elements of the self-energy matrix, which should be zero at convergence. For a more detailed discussion we refer to our previous work. 16

In this work, we have found a linear mixing strategy with adaptive mixing parameter $\alpha_{mix}$ to lead to stable convergence of the qsGW SCF procedure after typically around 15 iterations. Specifically, we start the self-consistency cycle with $\alpha^{(0)}_{mix}$ and adapt $\alpha_{mix}$ in each iteration. In case the SCF error increases, we reset the mixing parameter to $\alpha^{(0)}_{mix}$.
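A sketch of this adaptive linear mixing (the starting value, growth factor and error measure are illustrative choices, not the actual parameters of our implementation):

```python
import numpy as np

def mixed_scf(update, h0, alpha0=0.2, grow=1.5, tol=1e-5, max_iter=50):
    """Linear mixing with adaptive parameter: grow alpha while the SCF
    error decreases, reset it to alpha0 whenever the error increases."""
    h, alpha, err_old = h0, alpha0, np.inf
    for _ in range(max_iter):
        h_new = update(h)
        err = np.linalg.norm(h_new - h)
        if err < tol:
            return h_new
        alpha = alpha0 if err > err_old else min(1.0, grow * alpha)
        h = (1.0 - alpha) * h + alpha * h_new
        err_old = err
    return h
```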
Results
Comparison to WEST
In this section, we compare our results for SOC81 to the ones calculated by Scherpelz et al. 74 with the WEST code. 21,24
Multi-solution Cases
Before discussing the results in detail, we notice that Scherpelz and Govoni identified in total 14 systems 149 in the SOC81 set for which the non-linear QP equations (15) have multiple solutions for G0W0@PBE. 74 All of these solutions can be found graphically in the sum-over-states formalism (analytical integration of the self-energy) 8,150,151 or with contour deformation techniques. AC, however, typically fails to detect all solutions in these cases. Furthermore, the resulting QP energies will be rather inaccurate since it is impossible to build a Padé model which reliably represents the energy dependence of self-energy matrix elements with strongly varying frequency dependence (see ref. 74 for examples). 50,153 The occurrence of multiple solutions can be an artefact of the starting point used in a G0W0 calculation. 39 It can also be caused by a break-down of the single QP picture due to pronounced static correlation effects. The occurrence of multiple solutions complicates the comparison of our results to WEST, since it is not clear if the same solutions are compared. It also complicates the extrapolation of results to the CBS limit since it is unclear if the same QP solution is found for all basis sets. Also, comparison to experimental data is difficult since it is unclear if the detected solutions correspond to QP or to satellite peaks in the experimental spectra. For all these reasons, we decided to exclude these systems from the following benchmark. This leaves us with 67 systems to which we refer to as SOC81*.
Scalar relativistic Ionization potentials
[Table 1: First ionization potentials from ADF/BAND and WEST for SOC81*, scalar relativistic and 2C, for G0W0@PBE and G0W0@PBE0; only the column headers survived extraction.]

As indicated by the mean signed deviations (MSD) in table 1, ADF/BAND tends to predict lower IPs than WEST, independent of the starting point of the G0W0 calculation. With mean absolute deviations (MAD) of 100 meV for G0W0@PBE and 90 meV for G0W0@PBE0 in the scalar relativistic case and of 70 meV each in the 2C case, the deviations are of the same order of magnitude as the ones we obtained for the GW100 database. 39,135 Several technical aspects of the GW implementations in ADF/BAND and WEST, which are summarized in table 2, might contribute to the observed deviations. As discussed in the preceding section, these are mainly related to the different frequency treatments in both codes as well as differences in the single-particle basis. Importantly, WEST is based on PPs while we used all-electron basis sets in all ADF and BAND calculations. As already discussed extensively by Scherpelz and Govoni, 74 the choice of the PP and the partitioning of core, semi-core and valence electrons might heavily affect the values of the IPs. For instance, in ref. 74, it was shown that using different valence configurations for iodine might induce changes in IPs of the order of one eV. In all-electron calculations, this issue is completely avoided. However, possible issues might arise from inconsistencies in the augmentation of the TZ3P and QZ6P basis sets with additional high-l functions. While it can be verified by comparison of TZ3P (QZ6P) results to their TZ3P+ (QZ6P+) counterparts that adding higher angular momenta functions improves the quality of the AO basis, the effect is typically more pronounced on the TZ than on the QZ level. This might then lead to larger inaccuracies in the CBS limit extrapolation than in plane-wave based implementations.
Changes in Ionization Potentials due to Spin-Orbit Coupling
Finally, the agreement between ADF/BAND and WEST is slightly better for the 2C than for the scalar relativistic calculations. This can be explained by the different division of scalar and spin-orbit relativistic effects in both codes (see table 2). In particular, the division between scalar relativistic and SOC effects is not unique and depends on the method of separation. 133 This is also illustrated by the data shown in fig. 1 where we plot the difference between the first IP in the scalar and the 2C relativistic case calculated with WEST (x-axis) against the one calculated with ADF. Overall, we find good agreement between both implementations.
Figure 1: Comparison of the IP shift due to spin-orbit coupling as calculated with ADF compared to WEST for G0W0@PBE and G0W0@PBE0. All values are in eV.

WEST tends to predict slightly larger shifts due to SO coupling than ADF, especially for
G0W0@PBE. This most likely indicates that ADF/BAND recovers more of the relativistic effects in the scalar relativistic treatment than WEST. At the G0W0@PBE level we also notice one significant outlier (CI4) where ADF/BAND predicts significantly larger shifts due to SO coupling than WEST.

Comparison to experiment

In this section, we compare the different (partially self-consistent) GW variants against experimental IPs. Table 3 shows the first IPs calculated at the 2C level using (61) with six different flavors of GW: G0W0 based on PBE, PBE0 and BHLYP orbitals and eigenvalues (G0W0@PBE, G0W0@PBE0 and G0W0@BHLYP, respectively), evGW using PBE0 orbitals and eigenvalues (evGW@PBE0), eigenvalue-only self-consistent GW where the screened interaction is fixed at the PBE0 level (evGW0@PBE0), and qsGW. MADs of all considered methods are shown in table 4. The deviations to experiment are also visualized in figure 2. Since we take into account SO effects and since our IPs are complete basis set limit extrapolated, vertical experimental IPs are a reliable reference. Besides errors due to the technical parameters discussed in section 4, other potential sources of uncertainty are the neglect of vibronic effects in our calculations, as well as errors in experimental geometries. Due to the lack of high-quality data from other ab initio calculations, these experimental reference values are however the most suitable for our purpose.
In contrast to the cited benchmark studies, evGW slightly, and qsGW more pronounced, overestimate the reference values. As shown in figure 2, qsGW is comparable with G 0 W 0 @PBE in showing a larger spread of errors than the best performing methods. The weak performance of this method might be due to the stronger screening in the investigated systems which is typically underestimated by qsGW . This then leads to overestimated IPs and HOMO-LUMO gaps. This issue which is well documented for solids 66,101,[154][155][156] and it has been shown that it can be overcome by inclusion of an effective two-point kernel from timedependent DFT or the Bethe-Salpeter equation (BSE) with a statically screened exchange kernel. [157][158][159][160] Our results indicate that it might be worthwhile to explore such options also for molecular systems.
With a MAD of 150 meV, the best performing GW method is eigenvalue-only self-consistent GW with the screened interaction kept fixed at the PBE0 level (evGW0@PBE0). In an evGW calculation the QP gaps increase during the iterations, leading to underestimated screening. This is compensated for by keeping the screening fixed at the PBE0 level, which explains the good performance of this method. It should be noted that despite the partial self-consistency, 2C-evGW0 is a particularly economic method in our implementation: the 2C polarizability only needs to be evaluated once, while the self-energy, which is recalculated in each iteration, is effectively of 1C form.

Figure 2: Distribution of the deviations of IPs (in eV) obtained with different 2C methods to the experimental reference values.
Effect of the perturbative G3W 2 correction
The perturbative inclusion of the G3W2 term increases the first IPs. In contrast, in ref. 49 it was shown that the G3W2 term tends to decrease the IPs in the ACC24 set. As shown in figure 3b), in case of G0W0@PBE0 the inclusion of this contribution improves agreement with experiment, while for G0W0@BHLYP and the partially self-consistent methods it worsens it (figure 3c)-f)). Typically, the contribution of the G3W2 term to the IP is only of the order of about 0.1 eV. However, in some cases we observe very large G3W2 shifts of up to 0.5 eV, for instance for RuO4 and OsO4 for all GW methods. This worsens agreement with experiment, but their larger effect underlines the importance of vertex corrections for these systems. Out of all tested methods, with a MAD of only 140 meV, G0W0@PBE0 + G3W2 is the most accurate.

Figure 3: Distribution of the deviations to experimental reference values of IPs. Shown for each method are results for scalar relativistic, 2C and 2C calculations with perturbative G3W2 correction. All values are in eV.
Shift of ionization potentials due to spin-orbit coupling
Generally, the SOC correction is negative, i.e. it reduces the scalar relativistic IPs. This means that in case of G0W0@PBE0 the scalar relativistic results are in better agreement with experiment than the 2C ones. This is shown in figure 3b). On the other hand, for the accurate partially self-consistent approaches, but also for G0W0@BHLYP, as shown in figure 3c) to figure 3e), it is crucial to take into account SOC. These observations are also reflected in the MSD and MADs shown in table 4. Finally, in figure 4 we investigate the change in first IPs due to the explicit treatment of SOC among the different GW methods. On the x-axis, we plot the evGW IPs and on the y-axis the G0W0 ones for different starting points. A higher amount of exact exchange in the underlying exchange-correlation functional increases the difference between the IPs at the 1C and the 2C level. The same effect as for evGW can also be observed for qsGW (see supporting information). This can be explained by considering the more (less) pronounced relativistic contraction of the lower (upper) components of a degenerate orbital set that is split by the spin-orbit interaction. 161 The ionization takes place from the upper, more diffuse, orbitals in which the exchange interaction is decreased as compared to the orbitals obtained with a scalar relativistic method. These changes in the exchange interaction induced by relativity are incompletely captured by an approximate exchange density functional, resulting in a too small spin-orbit splitting. Employing some non-local exchange, as done in DFT with hybrid functionals, or some form of self-consistency is required to obtain the full magnitude of this subtle effect of relativity.

Figure 4: Differences of 2C QP energies to 1C QP energies with G0W0 using different starting points (x-axis) compared to evGW. All values are in eV.
Conclusions
We have presented an all-electron, AO based 2C implementation of the GWA for closed-shell molecules in the ADF 124 and BAND 139 engines of AMS. 125 As in our 1C GW implementation, 15 we leverage the space-time formulation of the GWA, AC of the self-energy, and the PADF approximation to transform between the representations of 4-point correlation func-tions in the AO and the auxiliary basis to achieve formally cubic scaling with system size. 15 The AO-based implementation of the 2C-GWA is particularly efficient: The evaluation of the polarizability is only four times slower than in a 1C calculation. We furthermore only consider the 1-component contribution to the Green's function to evaluate the dynamical part of the self-energy. All in all, this leads to a 2C algorithm which is only about two to three times more expensive than its 1C counterpart.
While the effect of SOC can faithfully be estimated by combining a 2C DFT calculation with a scalar relativistic GW calculation, 74 the new implementation will be particularly useful to calculate optical excitations within the 2C-BSE@GW method.
To verify the correctness of our implementation we have calculated the first IPs of a subset of 67 out of the 81 molecules in the SOC81 dataset, 74 which excludes the multi-solution cases. We have then compared our results to the ones calculated by Scherpelz and Govoni with the WEST code. 74 For scalar relativistic G0W0@PBE and G0W0@PBE0 first IPs, we found MADs to the WEST results of 100 meV and 90 meV, respectively. With MADs of 70 meV each, the agreement at the 2C level is better than in the scalar relativistic case, which can be rationalized by the different partition of scalar and spin-orbit relativistic effects in both codes. Reaching agreement between GW codes for molecules containing heavy elements is challenging due to relativistic effects and potentially larger errors due to incomplete single-particle basis and PPs. As for the GW100 database, 37 further benchmark results using different types of single-particle basis, for instance Gaussian type orbitals, will be necessary to clarify the origin of the discrepancies between both codes.
Finally, we have used the new implementation to assess the accuracy of G0W0 based on different starting points and of partially self-consistent approaches for the first IPs of the molecules in the SOC81 set. evGW and qsGW overestimate the experimental vertical ionization energies. Especially the latter method performs poorly, which is in contrast to its good performance for small and medium, predominantly organic molecules. 49 A further complication for the systems considered here is the frequent occurrence of multiple QP solutions, which are difficult to describe correctly with Padé models of the frequency-dependence of the self-energy in an AC treatment. It is important to address this issue, since systems containing heavy elements, including transition metal compounds where problems with AC are ubiquitous, will be among the targets of 2C implementations. AC can be avoided by using analytical integration of the self-energy 8,150,151 or contour deformation techniques. 21,50,74,152 AC of the screened interaction can also be combined with CD of the self-energy 153,163 to compute a single matrix element of the self-energy in the MO basis with cubic scaling with system size. This technique is therefore suitable for G0W0 and also for evGW or BSE@GW calculations where Hedin shifts 54,164 or other rigid scissor-like shifts of the KS spectrum 19,75,165 can be employed to avoid the explicit calculation of all diagonal elements of the self-energy. Since in qsGW the full self-energy matrix is needed, such an algorithm would scale as O(N^5) with system size and is therefore only suitable for small molecules. Together with the already mentioned convergence problems as well as the generally poor performance for the systems considered herein, this is in principle a strong argument against the use of qsGW for such systems.
A Proof of Eqs. (27) and (28)
In this appendix we prove eqs. (27) and (28), which are valid under Kramers symmetry. We employ relation eq. (19) to first prove (28). In real space,

$P^{(0)}(r\uparrow, r'\uparrow, i\tau) = -i \sum_{ia} e^{-(\epsilon_a - \epsilon_i)\tau}\, \phi^{\uparrow}_i(r)\,\phi^{\uparrow *}_i(r')\,\phi^{\uparrow}_a(r')\,\phi^{\uparrow *}_a(r) = -i \sum_{ia} e^{-(\epsilon_a - \epsilon_i)\tau}\, \phi^{\downarrow *}_{\bar{i}}(r)\,\phi^{\downarrow}_{\bar{i}}(r')\,\phi^{\downarrow *}_{\bar{a}}(r')\,\phi^{\downarrow}_{\bar{a}}(r) = P^{(0)}(r'\downarrow, r\downarrow, i\tau) = P^{(0)}(r\downarrow, r'\downarrow, i\tau)\,,$

with the last equality due to the symmetry of $P^{(0)}$. In the same way, we also show the identity

$P^{(0)}(r\uparrow, r'\downarrow, i\tau) = -i \sum_{ia} e^{-(\epsilon_a - \epsilon_i)\tau}\, \phi^{\uparrow}_i(r)\,\phi^{\downarrow *}_i(r')\,\phi^{\downarrow}_a(r')\,\phi^{\uparrow *}_a(r) = -i \sum_{ia} e^{-(\epsilon_a - \epsilon_i)\tau}\, \phi^{\downarrow *}_{\bar{i}}(r)\,\phi^{\uparrow}_{\bar{i}}(r')\,\phi^{\uparrow *}_{\bar{a}}(r')\,\phi^{\downarrow}_{\bar{a}}(r) = P^{(0)}(r'\uparrow, r\downarrow, i\tau) = P^{(0)}(r\downarrow, r'\uparrow, i\tau)\,.$
After transformation to the AO basis, these are the identities in (28).
Equation (27),

$\sum_{\sigma,\sigma' = \uparrow,\downarrow} i\, G^{>,I}_{\mu\kappa,\sigma\sigma'}(i\tau)\, G^{<,R}_{\nu\lambda,\sigma'\sigma}(-i\tau) + i\, G^{>,R}_{\mu\kappa,\sigma\sigma'}(i\tau)\, G^{<,I}_{\nu\lambda,\sigma'\sigma}(-i\tau) = 0\,,$

follows from the cancellation of terms in the sums due to the identities

$G^{>,I}_{\mu\kappa,\uparrow\uparrow}(i\tau)\, G^{<,R}_{\nu\lambda,\uparrow\uparrow}(-i\tau) = -G^{>,I}_{\mu\kappa,\downarrow\downarrow}(i\tau)\, G^{<,R}_{\nu\lambda,\downarrow\downarrow}(-i\tau)$
$G^{>,R}_{\mu\kappa,\uparrow\uparrow}(i\tau)\, G^{<,I}_{\nu\lambda,\uparrow\uparrow}(-i\tau) = -G^{>,R}_{\mu\kappa,\downarrow\downarrow}(i\tau)\, G^{<,I}_{\nu\lambda,\downarrow\downarrow}(-i\tau)$
$G^{>,I}_{\mu\kappa,\uparrow\downarrow}(i\tau)\, G^{<,R}_{\nu\lambda,\downarrow\uparrow}(-i\tau) = -G^{>,I}_{\mu\kappa,\downarrow\uparrow}(i\tau)\, G^{<,R}_{\nu\lambda,\uparrow\downarrow}(-i\tau)$
$G^{>,R}_{\mu\kappa,\uparrow\downarrow}(i\tau)\, G^{<,I}_{\nu\lambda,\downarrow\uparrow}(-i\tau) = -G^{>,R}_{\mu\kappa,\downarrow\uparrow}(i\tau)\, G^{<,I}_{\nu\lambda,\uparrow\downarrow}(-i\tau)\,.$
These relations follow directly from eq. (24), as in each of the four terms there is exactly one sign change upon applying Kramers' symmetry.

B Computational timings

In this appendix we compare the computational timings of 1C and 2C GW calculations in our implementation. We report here timings for Tris(2-phenylpyridine)iridium [Ir(ppy)3], a molecule with 320 electrons which is widely used in organic light-emitting diodes (OLEDs) due to its high quantum yields, enabled by thermally activated delayed fluorescence (TADF). 166 Timing results for the full complex at the TZ3P and QZ6P level using the ADF engine are shown in table 5. Systems like Ir(ppy)3 which contain many first- and second-row atoms are suitable for AO-based implementations since they can exploit sparsity in the AO basis. For clusters of heavy elements, for instance the Pb14Se13 cluster considered in ref. 74, MO-based implementations are more suitable, even though their asymptotic scaling with system size is less favorable.
As one would expect from the equations in section 2, independently of the basis set the calculation of the polarizability is four times slower in the 2C case, while the timings for the other most time-consuming parts of a G 0 W 0 calculation remain the same. In the QZ calculations, the timings are dominated by the calculation of the polarizability and therefore the 2C calculation is slower compared to the 1C calculation than for the TZ calculations.
A single iteration of a partially self-consistent calculation (both evGW and qsGW) is as time-consuming as a G0W0 calculation. An evGW0 calculation is more economical than an evGW calculation, since the polarizability needs to be evaluated only once, saving about a factor of 2 in each iteration.
occupied d-or f -shells where higher angular momenta functions are needed to polarize the basis.137 The numerical atomic orbital (NAO) based BAND engine138,139 of AMS can be used with basis functions of arbitrary angular momenta. To obtain converged QP energies we therefore augment our TZ3P and QZ6P basis sets with higher angular momenta functions and calculate scalar relativistic QP energies. In the choice of the higher angular momenta functions we follow the construction of the Sapporo-DKH3-(T,Q)ZP-2012 basis sets140,141 for all elements in the fourth to the sixth row of the periodic table. In the following we denote these basis sets as TZ3P+ and QZ6P+. Except for the Lanthanides, where the highest angular momenta are l " 5 and l " 6, the augmented TZ (QZ) basis set typically contains basis functions with angular momentum up to l " 4 (l " 5) for elements beyond the third row. The basis set definitions are included in the supporting information.To calculate our final QP energies we first calculate complete basis set (CBS) limit extrapolated scalar relativistic QP energies with the BAND code using the expression Z3P`q) denotes the value of the QP energy calculated with QZ6P+ (TZ3P+) and N QZ bas and N T Z bas denote the respective numbers of basis functions (in spherical harmonics so that there are e.g. 5 d and 7 f functions). This expression is commonly used for the extrapolation of GW QP energies to the complete basis set limit for localized basis functions.37 Spin-orbit corrections ∆ 2C n are then calculated with ADF using the QZ6P basis set, ∆ 2C n pQZ6P q "
44,82,143 evGW and qsGW calculations are performed starting from PBE0 orbitals and eigenvalues. In all calculations we set the numerical quality to VeryGood.123 The auxiliary bases used to expand 4-point correlation functions are automatically generated from products of primary basis functions. For this, we use a variant of an algorithm introduced in ref.113 which has recently been implemented in ADF
converge within 5
5-8 iterations within an accuracy of a few meV when the DIIS implementation of ref. 146 is used. All evGW results presented in this work have been obtained using this DIIS implementation with a convergence criterion of 3 meV.
iteration. In case the SCF error increases, we reset the mixing parameter to α p0q mix .
As indicated by the mean signed deviations (MSD) in table 1, ADF/BAND tends to predict lower IPs than WEST, independent of the starting point of the G 0 W 0 calculation. With mean absolute deviations (MAD) of 100 meV for G 0 W 0 @PBE and 90 meV for G 0 W 0 @PBE0 in the scalar relativistic case and of 70 meV each in the 2C case, the deviations are of the same order of magnitude as the ones we obtained for the GW100 database.39,135 Several technical aspects of the GW implementations in ADF/BAND and WEST which are summarized in table 2 might contribute to the observed deviations. As discussed in the preceding section, these are mainly related to the different frequency treatments in both codes as well as differences in the single-particle basis. Importantly, WEST is based on PPs while we used all-electron basis sets in all ADF and BAND calculations.As already discussed extensively by Scherpelz and Govoni, 74 the choice of the PP and the partitioning of core, semi-core and valence electrons might heavily affect the values of the IPs. For instance, in ref. 74, it was shown that using different valence configurations for iodine might induce changes in IPs of the order of one eV. In all-electron calculations, this issue is completely avoided. However, possible issues might arise from inconsistencies in the augmentation of the TZ3P and QZ6P basis sets with additional high-l functions. While it can be verified by comparison of TZ3P (QZ6P) results
Figure 2 :
2Distribution of the deviations of IPs (in eV) obtained with different 2C methods to the experimental reference values
We have calculated the first IPs of the SOC81* set with six different flavors of GW: G0W0 based on PBE, PBE0, and BHLYP orbitals and eigenvalues (G0W0@PBE, G0W0@PBE0, and G0W0@BHLYP, respectively), evGW using PBE0 orbitals and eigenvalues (evGW@PBE0), eigenvalue-only self-consistent GW in which the screened interaction is kept fixed at the PBE0 level (evGW0@PBE0), and qsGW. MADs of all considered methods are shown in table 4. The deviations to experiment are also visualized in figure 2.
As shown in figure 3b), in case of G0W0@PBE0 the inclusion of this contribution improves agreement with experiment, while for G0W0@BHLYP and the partially self-consistent methods it worsens it (figures 3c)-f)). Typically, the contribution of the G3W2 term to the IP is only of the order of about 0.1 eV. However, in some cases we observe very large G3W2 shifts of up to 0.5 eV, for instance for RuO4 and OsO4 with all GW methods. This worsens agreement with experiment, but their large effect underlines the importance of vertex corrections for these systems. Out of all tested methods, G0W0@PBE0 + G3W2, with a MAD of only 140 meV, agrees best with the experimental reference values.
Figure 3: Distribution of the deviations to experimental reference values of IPs. Shown for each method are results for scalar relativistic, 2C, and 2C calculations with perturbative G3W2 correction. All values are in eV.

Figure 4: Differences of 2C QP energies to 1C QP energies with G0W0 using different starting points (x-axis) compared to evGW. All values are in eV.

For these systems, it is crucial to take into account SOC. These observations are also reflected in the MSD and MADs shown in table 4. Finally, in figure 4 we investigate the change in first IPs due to the explicit treatment of SOC among the different GW methods. On the x-axis, we plot the evGW IPs and on the y-axis the G0W0 ones for different starting points. A higher amount of exact exchange in the underlying exchange-correlation functional increases the difference between the IPs at the 1C and the 2C level. The same effect as for evGW can also be observed for qsGW (see supporting information). This can be explained by considering the more (less) pronounced relativistic
Evaluating the polarizability naively in the 2C case increases the computational effort by a factor of 16 (notice that the denominator is always real) compared to the 1C case. To reduce the computational effort, we use real matrix algebra and define the intermediates

W^{R/I,R/I}_{pqrs} = Σ_α d^{R/I}_{pqα} c^{R/I}_{rsα},

e_{pqrs} = W^{R,R}_{pqrs} - W^{I,I}_{pqrs},

f_{pqrs} = W^{R,I}_{pqrs} + W^{I,R}_{pqrs}.
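In code, these intermediates amount to four real matrix multiplications; a minimal numpy sketch, with the composite indices pq and rs flattened and hypothetical array names, could look as follows:

```python
import numpy as np

def build_e_f(dR, dI, cR, cI):
    """e and f from the four real products W^{X,Y} = d^X @ c^{Y,T};
    dX has shape (n_pq, n_aux), cY has shape (n_rs, n_aux)."""
    W = {(x, y): d @ c.T for x, d in (("R", dR), ("I", dI))
                          for y, c in (("R", cR), ("I", cI))}
    e = W["R", "R"] - W["I", "I"]
    f = W["R", "I"] + W["I", "R"]
    return e, f

rng = np.random.default_rng(1)
dR, dI, cR, cI = (rng.normal(size=(6, 3)) for _ in range(4))
e, f = build_e_f(dR, dI, cR, cI)  # four real GEMMs instead of complex algebra
```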
describes scalar relativistic effects, and we use this Hamiltonian in all 1C calculations. The second term

ĥ_1^{ZORA,SO}(r) = c^2 / (2c^2 - v_ext(r))^2 σ·(∇v_ext(r) × p)    (58)

accounts for SOC. We employ the Hamiltonian (56) in all of the following 2C calculations. We also tested two Hamiltonians obtained from an exact transformation of the 4-component Dirac equation to 2 components (X2C and RA-X2C, respectively; in the latter variant, a regular approach to calculate the transformation matrix is used).129,130 In the X2C and RA-X2C methods implemented in ADF, first the 4-component Dirac equation for a model potential (MAPA) of the molecule is calculated for the given basis set, using the modified Dirac equation (MDE) by Dyall131 for X2C, or using the regular approach132 to the modified Dirac equation (RA-MDE) for RA-X2C. In the basis set limit, the MDE and the RA-MDE should yield the same results for the model potential (MAPA), but for a finite basis set, the results for MDE and RA-MDE will differ.
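To get a feeling for this operator, the following small Python example (an illustrative bare-Coulomb model, not the molecular potential used in ADF) evaluates the radial prefactor c^2/(2c^2 - v_ext(r))^2:

```python
import numpy as np

C = 137.035999  # speed of light in atomic units

def zora_so_prefactor(r, Z):
    """Radial prefactor c^2 / (2c^2 - v_ext(r))^2 of the ZORA spin-orbit
    term for a bare Coulomb potential v_ext(r) = -Z/r (illustration only)."""
    v = -Z / np.asarray(r, dtype=float)
    return C**2 / (2.0 * C**2 - v) ** 2

print(zora_so_prefactor(np.array([1e-4, 0.1, 1.0]), Z=77))  # e.g. iridium
```

In contrast to the constant prefactor 1/(4c^2) of the Pauli spin-orbit operator, the ZORA prefactor vanishes near the nucleus, which regularizes the operator for heavy elements.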
Multiple solutions of the QP equation can be identified with full-frequency treatments of the self-energy,150,151 or with contour deformation techniques,21,50,74,152 by plotting the self-energy matrix elements as a function of frequency. Also in cases where QP spectra are calculated with different basis sets, it is then possible to identify the matching peaks in the individual spectra and perform a reliable extrapolation to the CBS limit. AC, however, typically fails to detect all solutions in these cases. Furthermore, the resulting QP energies will be rather inaccurate, since it is impossible to build a Padé model which reliably represents the energy dependence of self-energy matrix elements with a strongly varying frequency dependence (see ref. 74 for examples).50,153 The occurrence of multiple solutions can be an artefact of the starting point used in a G0W0 calculation.39 It can also be caused by a breakdown of the single-QP picture due to pronounced static correlation effects. The occurrence of multiple solutions complicates the comparison of our results to WEST, since it is not clear if the same solutions are compared. It also complicates the extrapolation of results to the CBS limit, since it is unclear if the same QP solution is found for all basis sets. Comparison to experimental data is also difficult, since it is unclear if the detected solutions correspond to QP or to satellite peaks in the experimental spectra. For all these reasons, we decided to exclude these systems from the following benchmark. This leaves us with 67 systems, to which we refer as SOC81*.
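The following self-contained Python sketch illustrates this graphical strategy: scan the real-frequency axis for sign changes of the QP-equation residual and refine each bracket by bisection (the QP solver listed for ADF/BAND in table 2); the model self-energy is purely illustrative:

```python
import numpy as np

def qp_solutions(sigma_re, eps0, grid):
    """Find all solutions of w = eps0 + Re Sigma(w) on a frequency grid:
    bracket sign changes of the residual f(w) and refine by bisection.
    Sign changes at poles of Sigma are discarded via a residual check."""
    f = lambda w: w - eps0 - sigma_re(w)
    roots = []
    for a, b in zip(grid[:-1], grid[1:]):
        if f(a) * f(b) < 0.0:
            lo, hi = a, b
            for _ in range(80):
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            w = 0.5 * (lo + hi)
            if abs(f(w)) < 1e-6:  # genuine root, not a pole crossing
                roots.append(w)
    return roots

# Toy self-energy with a pole; the QP equation then has two real solutions.
sigma = lambda w: 0.3 / (w + 10.2) - 0.5
print(qp_solutions(sigma, eps0=-10.0, grid=np.linspace(-12.0, -8.0, 400)))
```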
Table 1: Scalar relativistic and 2C G0W0@PBE and G0W0@PBE0 ionization potentials (IP) for the SOC81* database calculated with ADF/BAND. The corresponding values from WEST are given for comparison. All values are in eV.
Table 2: Comparison of the implementations of 2C-G0W0 in WEST and ADF/BAND.

                           WEST                   ADF/BAND
Single-particle basis      Plane-wave             Slater-type orbital
All-electron               No                     Yes
Frequency treatment        Contour deformation    Analytical continuation
QP equations               Secant method          Bisection
Relativistic Hamiltonian   2C-pseudopotentials    ZORA
2C self-energy             Static part only       Static and dynamic part
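To make the analytic-continuation entry of table 2 concrete, a standard construction is the N-point continued-fraction Padé interpolant of Vidberg and Serene, fitted to self-energy samples on the imaginary axis and evaluated at real frequencies; the sketch below uses a rational test function instead of an actual self-energy matrix element:

```python
import numpy as np

def thiele_pade(z, f):
    """N-point continued-fraction Pade interpolant through (z_i, f_i),
    built with the standard recursive scheme; returns a callable model."""
    z = np.asarray(z, dtype=complex)
    n = len(z)
    g = np.zeros((n, n), dtype=complex)
    g[0] = np.asarray(f, dtype=complex)
    for p in range(1, n):
        g[p, p:] = (g[p - 1, p - 1] - g[p - 1, p:]) / (
            (z[p:] - z[p - 1]) * g[p - 1, p:])
    a = g.diagonal().copy()

    def model(w):
        acc = np.ones_like(np.asarray(w, dtype=complex))
        for k in range(n - 1, 0, -1):  # evaluate the fraction bottom-up
            acc = 1.0 + a[k] * (w - z[k - 1]) / acc
        return a[0] / acc
    return model

# Samples on the imaginary axis, evaluation on the real axis; exact here
# because the test function is itself rational.
zs = 1j * np.array([1.0, 2.0, 3.0])
model = thiele_pade(zs, 1.0 / (zs + 2.0))
print(model(0.5))  # ~0.4 = 1/(0.5 + 2)
```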
Table 1 shows our scalar relativistic and 2C IPs using G0W0@PBE and G0W0@PBE0 and, for comparison, the corresponding values from ref. 74 calculated with the WEST code.
Table 3: First ionization potentials (IP) for the SOC81* database calculated with different 2C GW methods. All values are in eV.

Name      G0W0@PBE  G0W0@PBE0  G0W0@BHLYP  evGW0@PBE0  evGW@PBE0  qsGW   exp.
Al2Br6    10.30     10.70      10.98       10.92       11.09      11.24  10.97
AlBr3     10.44     10.81      11.06       11.03       11.19      11.31  10.91
AlI3       9.19      9.53       9.76        9.69        9.83       9.72   9.66
AsBr3      9.76     10.09      10.33       10.26       10.38      10.50  10.21
AsCl3     10.53     10.88      11.15       11.05       11.17      11.40  10.90
AsF3      12.38     12.80      13.14       13.03       13.21      13.46  13.00
AsF5      14.47     15.30      15.81       15.74       16.13      16.62  15.53
AsH3      10.42     10.54      10.70       10.70       10.78      10.79  10.58
AsI3       8.70      9.11       9.34        9.19        9.28       9.41   9.00
(Continued on next page)
Table 4: Mean signed deviations (MSD) and mean absolute deviations (MAD) to experiment for the SOC81* set for different 1C-GW, 2C-GW, and 2C-G3W2 calculations, for different starting points and different levels of partial self-consistency. All values are in eV.

              G0W0@PBE  G0W0@PBE0  G0W0@BHLYP  evGW0  evGW   qsGW
MSD
1C-GW           -0.45     -0.04       0.23      0.18   0.35   0.43
2C-GW           -0.54     -0.14       0.12      0.07   0.23   0.35
2C-GW+G3W2      -0.46     -0.06       0.22      0.15   0.35   0.47
MAD
1C-GW            0.45      0.16       0.27      0.21   0.36   0.44
2C-GW            0.54      0.20       0.19      0.15   0.26   0.39
2C-GW+G3W2       0.46      0.14       0.25      0.20   0.37   0.49
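The MSD and MAD statistics of table 4 are plain averages of signed and absolute deviations; as a spot check in Python, applied to the 2C-G0W0@PBE0 column of the first five molecules in table 3:

```python
import numpy as np

def msd_mad(calc, exp):
    """Mean signed and mean absolute deviation of calculated vs. experiment."""
    d = np.asarray(calc) - np.asarray(exp)
    return d.mean(), np.abs(d).mean()

calc = [10.70, 10.81, 9.53, 10.09, 10.88]  # G0W0@PBE0, first five molecules
exp = [10.97, 10.91, 9.66, 10.21, 10.90]   # experimental references
print(msd_mad(calc, exp))  # (-0.128, 0.128) eV for this subset
```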
Consistent with previous benchmarks on several sets of small and medium molecules,43,44,46,48,49,82 G0W0@PBE greatly underestimates the first IPs. G0W0@PBE0 and G0W0@BHLYP perform much better, with G0W0@PBE0 showing a tendency to underestimate and G0W0@BHLYP to overestimate the experimental reference values. BHLYP contains 50 % of exact exchange.162

Both methods are outperformed by G0W0 based on PBE0 and BHLYP starting points with fractions of 25 % and 50 % of exact exchange. With a MAD of 150 meV, out of all GW methods the best agreement with experiment is achieved when the screened interaction is kept fixed at the PBE0 level in an eigenvalue-only self-consistent calculation (evGW0@PBE0). Including SOC effects through explicit 2C calculations lowers the IPs, while the inclusion of the statically screened G3W2 correction increases them. Since G0W0@PBE0 alone tends to underestimate the experimental reference values, 2C-G0W0@PBE0 + G3W2 profits from favorable error cancellation and, with a MAD of 140 meV, is in excellent agreement with the experimental reference values. In our benchmarks, we restricted ourselves to 67 out of the 81 molecules in the SOC81 benchmark set; for the other cases, the non-linear QP equation (15) has multiple solutions.74
Table 5: Computational timings and first IP of Ir(ppy)3 for different basis sets at the 1C and 2C level using G0W0@PBE0.

                         TZ3P            QZ6P
                      1C      2C      1C      2C
N_bas                    1566            2895
Total    [core h]     41      82      728     1995
P^(0)    [core h]     14      53      409     1655
W        [core h]      4       4       30       30
Σ        [core h]     21      20      205      213
first IP [eV]         6.09    5.81    6.13    5.78
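From the timings in table 5, one can make explicit how far the measured 2C overhead stays below the formal factor of 16 mentioned above; a few lines of Python suffice:

```python
timings = {  # core hours (1C, 2C) per step, taken from table 5
    "TZ3P": {"total": (41, 82), "P0": (14, 53), "W": (4, 4), "Sigma": (21, 20)},
    "QZ6P": {"total": (728, 1995), "P0": (409, 1655), "W": (30, 30), "Sigma": (205, 213)},
}
for basis, steps in timings.items():
    # 2C/1C cost ratio per step: ~2-4 for P^(0) and total, ~1 for W and Sigma
    print(basis, {k: round(t2c / t1c, 1) for k, (t1c, t2c) in steps.items()})
```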
P^{(0)}_{μνσ,κλσ'}(iτ) = iΘ(τ) G^>_{μκ,σσ'}(iτ) G^<_{νλ,σ'σ}(-iτ) + iΘ(-τ) G^<_{μκ,σσ'}(iτ) G^>_{νλ,σ'σ}(-iτ).    (25)

Due to the symmetry P^{(0)}(iτ) = P^{(0)}(-iτ), we can focus on the first term, which we split in terms of real (R) and imaginary (I) components,

G^>_{μκ,σσ'}(iτ) G^<_{νλ,σ'σ}(-iτ) = G^{>,R}_{μκ,σσ'}(iτ) G^{<,R}_{νλ,σ'σ}(-iτ) - G^{>,I}_{μκ,σσ'}(iτ) G^{<,I}_{νλ,σ'σ}(-iτ) + i G^{>,I}_{μκ,σσ'}(iτ) G^{<,R}_{νλ,σ'σ}(-iτ) + i G^{>,R}_{μκ,σσ'}(iτ) G^{<,I}_{νλ,σ'σ}(-iτ).    (26)
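Equation (26) is nothing but the element-wise real/imaginary decomposition of a product of two complex numbers; a short numpy check, with random matrices standing in for the Green's function blocks:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))  # plays G>(i tau)
B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))  # plays G<(-i tau)

lhs = A * B  # element-wise, one complex product per index pair, as in eq. (26)
rhs = (A.real * B.real - A.imag * B.imag) + 1j * (A.imag * B.real + A.real * B.imag)
print(np.allclose(lhs, rhs))  # True
```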
Supporting Information Available

All quasiparticle energies calculated in this work. All basis set files.
References

Hedin, L. New method for calculating the one-particle Green's function with application to the electron-gas problem. Phys. Rev. 1965, 139, A796.

Martin, R. M.; Reining, L.; Ceperley, D. M. Interacting Electrons; Cambridge University Press, 2016.

Reining, L. The GW approximation: content, successes and limitations. Wiley Interdiscip. Rev. Comput. Mol. Sci. 2018, 8, e1344.

Golze, D.; Dvorak, M.; Rinke, P. The GW Compendium: A Practical Guide to Theoretical Photoemission Spectroscopy. Front. Chem. 2019, 7, 377.

Ren, X.; Rinke, P.; Blum, V.; Wieferink, J.; Tkatchenko, A.; Sanfilippo, A.; Reuter, K.; Scheffler, M. Resolution-of-identity approach to Hartree-Fock, hybrid density functionals, RPA, MP2 and GW with numeric atom-centered orbital basis functions. New J. Phys. 2012, 14, 053020.

Caruso, F.; Rinke, P.; Ren, X.; Scheffler, M.; Rubio, A. Unified description of ground and excited states of finite systems: The self-consistent GW approach. Phys. Rev. B 2012, 86, 081102(R).

Caruso, F.; Rinke, P.; Ren, X.; Rubio, A.; Scheffler, M. Self-consistent GW: All-electron implementation with localized basis functions. Phys. Rev. B 2013, 88, 075105.

van Setten, M. J.; Weigend, F.; Evers, F. The GW-method for quantum chemistry applications: Theory and implementation. J. Chem. Theory Comput. 2013, 9, 232-246.

Kaplan, F.; Weigend, F.; Evers, F.; van Setten, M. J. Off-diagonal self-energy terms and partially self-consistency in GW calculations for single molecules: Efficient implementation and quantitative effects on ionization potentials. J. Chem. Theory Comput. 2015, 11, 5152-5160.

Kaplan, F.; Harding, M. E.; Seiler, C.; Weigend, F.; Evers, F.; van Setten, M. J. Quasi-Particle Self-Consistent GW for Molecules. J. Chem. Theory Comput. 2016, 12, 2528-2541.

Bruneval, F.; Rangel, T.; Hamed, S. M.; Shao, M.; Yang, C.; Neaton, J. B. MOLGW 1: Many-body perturbation theory software for atoms, molecules, and clusters. Comput. Phys. Commun. 2016, 208, 149-161.

Foerster, D.; Koval, P.; Sánchez-Portal, D. An O(N^3) implementation of Hedin's GW approximation for molecules. J. Chem. Phys. 2011, 135, 074105.

Koval, P.; Foerster, D.; Sánchez-Portal, D. Fully self-consistent GW and quasiparticle self-consistent GW for molecules. Phys. Rev. B 2014, 89, 155417.

Mejia-Rodriguez, D.; Kunitsa, A.; Aprà, E.; Govind, N. Scalable Molecular GW Calculations: Valence and Core Spectra. J. Chem. Theory Comput. 2021, 17, 7504-7517.

Förster, A.; Visscher, L. Low-Order Scaling G0W0 by Pair Atomic Density Fitting. J. Chem. Theory Comput. 2020, 16, 7381-7399.

Förster, A.; Visscher, L. Low-Order Scaling Quasiparticle Self-Consistent GW for Molecules. Front. Chem. 2021, 9, 736591.

Wilhelm, J.; Del Ben, M.; Hutter, J. GW in the Gaussian and Plane Waves Scheme with Application to Linear Acenes. J. Chem. Theory Comput. 2016, 12, 3623-3635.

Wilhelm, J.; Golze, D.; Talirz, L.; Hutter, J.; Pignedoli, C. A. Toward GW Calculations on Thousands of Atoms. J. Phys. Chem. Lett. 2018, 9, 306-312.

Wilhelm, J.; Seewald, P.; Golze, D. Low-scaling GW with benchmark accuracy and application to phosphorene nanosheets. J. Chem. Theory Comput. 2021, 17, 1662-1677.

Ke, S. H. All-electron GW methods implemented in molecular orbital space: Ionization energy and electron affinity of conjugated molecules. Phys. Rev. B 2011, 84, 205415.

Govoni, M.; Galli, G. Large Scale GW Calculations. J. Chem. Theory Comput. 2015, 11, 2680-2696.

Del Ben, M.; da Jornada, F. H.; Canning, A.; Wichmann, N.; Raman, K.; Sasanka, R.; Yang, C.; Louie, S. G.; Deslippe, J. Large-scale GW calculations on pre-exascale HPC systems. Comput. Phys. Commun. 2019, 235, 187-195.

Del Ben, M.; da Jornada, F. H.; Antonius, G.; Rangel, T.; Louie, S. G.; Deslippe, J.; Canning, A. Static subspace approximation for the evaluation of G0W0 quasiparticle energies within a sum-over-bands approach. Phys. Rev. B 2019, 99, 125128.

Yu, V. W. Z.; Govoni, M. GPU Acceleration of Large-Scale Full-Frequency GW Calculations. J. Chem. Theory Comput. 2022, 18, 4690-4707.

Duchemin, I.; Blase, X. Cubic-Scaling All-Electron GW Calculations with a Separable Density-Fitting Space-Time Approach. J. Chem. Theory Comput. 2021, 17, 2383-2393.

Vlček, V.; Rabani, E.; Neuhauser, D.; Baer, R. Stochastic GW Calculations for Molecules. J. Chem. Theory Comput. 2017, 13, 4997-5003.

Vlček, V.; Li, W.; Baer, R.; Rabani, E.; Neuhauser, D. Swift GW beyond 10,000 electrons using sparse stochastic compression. Phys. Rev. B 2018, 98, 075107.

Fujita, T.; Noguchi, Y. Development of the fragment-based COHSEX method for large and complex molecular systems. Phys. Rev. B 2018, 98, 205140.

Fujita, T.; Noguchi, Y.; Hoshi, T. Charge-transfer excited states in the donor/acceptor interface from large-scale GW calculations. J. Chem. Phys. 2019, 151, 114109.

Winter, M.; Bousquet, M. H. E.; Jacquemin, D.; Duchemin, I.; Blase, X. Photoluminescent properties of the carbon-dimer defect in hexagonal boron-nitride: A many-body finite-size cluster approach. Phys. Rev. Mater. 2021, 5, 095201.

Amblard, D.; D'Avino, G.; Duchemin, I.; Blase, X. Universal polarization energies for defects in monolayer, surface, and bulk hexagonal boron nitride: A finite-size fragments GW approach. Phys. Rev. Mater. 2022, 6, 064008.

Romanova, M.; Vlček, V. Decomposition and embedding in the stochastic GW self-energy. J. Chem. Phys. 2020, 153, 134103.

Weng, G.; Vlček, V. Efficient treatment of molecular excitations in the liquid phase environment via stochastic many-body theory. J. Chem. Phys. 2021, 155, 054104.

Tölle, J.; Deilmann, T.; Rohl, M.; Neugebauer, J. Subsystem-Based GW/Bethe-Salpeter Equation. J. Chem. Theory Comput. 2021, 17, 2186-2199.

Förster, A.; Visscher, L. Quasiparticle Self-Consistent GW-Bethe-Salpeter Equation Calculations for Large Chromophoric Systems. J. Chem. Theory Comput. 2022, 18, 6779-6793.

Borin Barin, G.; Sun, Q.; Di Giovannantonio, M.; Du, C. Z.; Wang, X. Y.; Llinas, J. P.; Mutlu, Z.; Lin, Y.; Wilhelm, J.; Overbeck, J.; Daniels, C.; Lamparski, M.; Sahabudeen, H.; Perrin, M. L.; Urgel, J. I.; Mishra, S.; Kinikar, A.; Widmer, R.; Stolz, S.; Bommert, M.; Pignedoli, C.; Feng, X.; Calame, M.; Müllen, K.; Narita, A.; Meunier, V.; Bokor, J.; Fasel, R.; Ruffieux, P. Growth Optimization and Device Integration of Narrow-Bandgap Graphene Nanoribbons. Small 2022, 18, 2202301.

van Setten, M. J.; Caruso, F.; Sharifzadeh, S.; Ren, X.; Scheffler, M.; Liu, F.; Lischner, J.; Lin, L.; Deslippe, J. R.; Louie, S. G.; Yang, C.; Weigend, F.; Neaton, J. B.; Evers, F.; Rinke, P. GW100: Benchmarking G0W0 for Molecular Systems. J. Chem. Theory Comput. 2015, 11, 5665-5687.

Maggio, E.; Kresse, G. Correlation energy for the homogeneous electron gas: Exact Bethe-Salpeter solution and an approximate evaluation. Phys. Rev. B 2016, 93, 235113.

Govoni, M.; Galli, G. GW100: Comparison of Methods and Accuracy of Results Obtained with the WEST Code. J. Chem. Theory Comput. 2018, 14, 1895-1909.

Gao, W.; Chelikowsky, J. R. Real-Space Based Benchmark of G0W0 Calculations on GW100: Effects of Semicore Orbitals and Orbital Reordering. J. Chem. Theory Comput. 2019, 15, 5299-5307.

Bruneval, F.; Maliyov, I.; Lapointe, C.; Marinica, M.-C. Extrapolating unconverged GW energies up to the complete basis set limit with linear regression. J. Chem. Theory Comput. 2020, 16, 4399-4407.

Bruneval, F. GW approximation of the many-body problem and changes in the particle number. Phys. Rev. Lett. 2009, 103, 1-4.

Marom, N.; Caruso, F.; Ren, X.; Hofmann, O. T.; Körzdörfer, T.; Chelikowsky, J. R.; Rubio, A.; Scheffler, M.; Rinke, P. Benchmark of GW methods for azabenzenes. Phys. Rev. B 2012, 86, 245127.

Bruneval, F.; Marques, M. Benchmarking the starting points of the GW approximation for molecules. J. Chem. Theory Comput. 2013, 9, 324-329.

Ren, X.; Marom, N.; Caruso, F.; Scheffler, M.; Rinke, P. Beyond the GW approximation: A second-order screened exchange correction. Phys. Rev. B 2015, 92, 081104(R).

Knight, J. W.; Wang, X.; Gallandi, L.; Dolgounitcheva, O.; Ren, X.; Ortiz, J. V.; Rinke, P.; Körzdörfer, T.; Marom, N. Accurate Ionization Potentials and Electron Affinities of Acceptor Molecules III: A Benchmark of GW Methods. J. Chem. Theory Comput. 2016, 12, 615-626.

Rangel, T.; Hamed, S. M.; Bruneval, F.; Neaton, J. B. Evaluating the GW Approximation with CCSD(T) for Charged Excitations Across the Oligoacenes. J. Chem. Theory Comput. 2016, 12, 2834-2842.

Caruso, F.; Dauth, M.; van Setten, M. J.; Rinke, P. Benchmark of GW Approaches for the GW100 Test Set. J. Chem. Theory Comput. 2016, 12, 5076-5087.

Förster, A.; Visscher, L. Exploring the statically screened G3W2 correction to the GW self-energy: Charged excitations and total energies of finite systems. Phys. Rev. B 2022, 105, 125121.

Golze, D.; Wilhelm, J.; van Setten, M. J.; Rinke, P. Core-Level Binding Energies from GW: An Efficient Full-Frequency Approach within a Localized Basis. J. Chem. Theory Comput. 2018, 14, 4856-4869.

van Setten, M. J.; Costa, R.; Viñes, F.; Illas, F. Assessing GW Approaches for Predicting Core Level Binding Energies. J. Chem. Theory Comput. 2018, 14, 877-883.

Golze, D.; Keller, L.; Rinke, P. Accurate Absolute and Relative Core-Level Binding Energies from GW. J. Phys. Chem. Lett. 2020, 11, 1840-1847.

Yao, Y.; Golze, D.; Rinke, P.; Blum, V.; Kanai, Y. All-Electron BSE@GW Method for K-Edge Core Electron Excitation Energies. J. Chem. Theory Comput. 2022, 18, 1569-1583.

Li, J.; Jin, Y.; Rinke, P.; Yang, W.; Golze, D. Benchmark of GW Methods for Core-Level Binding Energies. J. Chem. Theory Comput. 2022, 18, 7570-7585.

Mansouri, M.; Casanova, D.; Koval, P.; Sánchez-Portal, D. GW approximation for open-shell molecules: A first-principles study. New J. Phys. 2021, 23.

Körbel, S.; Boulanger, P.; Duchemin, I.; Blase, X.; Marques, M.; Botti, S. Benchmark many-body GW and Bethe-Salpeter calculations for small transition metal molecules. J. Chem. Theory Comput. 2014, 10, 3934-3943.

Berardo, E.; Kaplan, F.; Bhaskaran-Nair, K.; Shelton, W. A.; van Setten, M. J.; Kowalski, K.; Zwijnenburg, M. A. Benchmarking the Fundamental Electronic Properties of Small TiO2 Nanoclusters by GW and Coupled Cluster Theory Calculations. J. Chem. Theory Comput. 2017, 13, 3814-3828.

Hung, L.; Bruneval, F.; Baishya, K.; Ögüt, S. Benchmarking the GW Approximation and Bethe-Salpeter Equation for Groups IB and IIB Atoms and Monoxides. J. Chem. Theory Comput. 2017, 13, 2135-2146.

Shi, B.; Weissman, S.; Bruneval, F.; Kronik, L.; Ögüt, S. Photoelectron spectra of copper oxide cluster anions from first principles methods. J. Chem. Phys. 2018, 149, 064306.

Byun, Y.-M.; Ögüt, S. Practical GW scheme for electronic structure of 3d-transition-metal monoxide anions: ScO-, TiO-, CuO-, and ZnO-. J. Chem. Phys. 2019, 151, 134305.

Rezaei, M.; Ögüt, S. Photoelectron spectra of early 3d-transition metal dioxide molecular anions from GW calculations. J. Chem. Phys. 2021, 154, 094307.

Wang, X.; Gao, S.; Zhao, M.; Marom, N. Benchmarking time-dependent density functional theory for singlet excited states of thermally activated delayed fluorescence chromophores. Phys. Rev. Res. 2022, 4, 033147.

Hybertsen, M. S.; Louie, S. G. First-principles theory of quasiparticles: Calculation of band gaps in semiconductors and insulators. Phys. Rev. Lett. 1985, 55, 1418-1421.

Hybertsen, M. S.; Louie, S. G. Electron correlation in semiconductors and insulators: Band gaps and quasiparticle energies. Phys. Rev. B 1986, 34, 5390.

Faleev, S. V.; van Schilfgaarde, M.; Kotani, T. All-electron self-consistent GW approximation: Application to Si, MnO, and NiO. Phys. Rev. Lett. 2004, 93, 126406.

van Schilfgaarde, M.; Kotani, T.; Faleev, S. Quasiparticle self-consistent GW theory. Phys. Rev. Lett. 2006, 96, 226402.

Kotani, T.; van Schilfgaarde, M.; Faleev, S. V. Quasiparticle self-consistent GW method: A basis for the independent-particle approximation. Phys. Rev. B 2007, 76, 165106.

Gui, X.; Holzer, C.; Klopper, W. Accuracy Assessment of GW Starting Points for Calculating Molecular Excitation Energies Using the Bethe-Salpeter Formalism. J. Chem. Theory Comput. 2018, 14, 2127-2136.

Akinaga, Y.; Nakajima, T. Two-component relativistic equation-of-motion coupled-cluster methods for excitation energies and ionization potentials of atoms and molecules. J. Phys. Chem. A 2017, 121, 827-835.

Shee, A.; Saue, T.; Visscher, L.; Severo Pereira Gomes, A. Equation-of-motion coupled-cluster theory based on the 4-component Dirac-Coulomb(-Gaunt) Hamiltonian. Energies for single electron detachment, attachment, and electronically excited states. J. Chem. Phys. 2018, 149.

Aryasetiawan, F.; Biermann, S. Generalized Hedin's equations for quantum many-body systems with spin-dependent interactions. Phys. Rev. Lett. 2008, 100, 116402.

Aryasetiawan, F.; Biermann, S. Generalized Hedin equations and σGσW approximation for quantum many-body systems with spin-dependent interactions. J. Phys. Condens. Matter 2009, 21, 6-9.

Kühn, M.; Weigend, F. One-electron energies from the two-component GW method. J. Chem. Theory Comput. 2015, 11, 969-979.

Scherpelz, P.; Govoni, M.; Hamada, I.; Galli, G. Implementation and Validation of Fully Relativistic GW Calculations: Spin-Orbit Coupling in Molecules, Nanocrystals, and Solids. J. Chem. Theory Comput. 2016, 12, 3523-3544.

Holzer, C.; Klopper, W. Ionized, electron-attached, and excited states of molecular systems with spin-orbit coupling: Two-component GW and Bethe-Salpeter implementations. J. Chem. Phys. 2019, 150, 204116.

Franzke, Y. J.; Holzer, C.; Mack, F. NMR Coupling Constants Based on the Bethe-Salpeter Equation in the GW Approximation. J. Chem. Theory Comput. 2022, 18, 1030-1045.

Holzer, C. Practical post-Kohn-Sham methods for time-reversal symmetry breaking references. ChemRxiv 2023, 1-42.

Perdew, J. P.; Burke, K.; Ernzerhof, M. Generalized gradient approximation made simple. Phys. Rev. Lett. 1996, 77, 3865-3868.

Adamo, C.; Barone, V. Toward reliable density functional methods without adjustable parameters: The PBE0 model. J. Chem. Phys. 1999, 110, 6158-6170.

Ernzerhof, M.; Scuseria, G. E. Assessment of the Perdew-Burke-Ernzerhof exchange-correlation functional. J. Chem. Phys. 1999, 110, 5029.

Wang, Y.; Rinke, P.; Ren, X. Assessing the G0W0Γ0(1) Approach: Beyond G0W0 with Hedin's Full Second-Order Self-Energy Contribution. J. Chem. Theory Comput. 2021, 17, 5140-5154.

Zhang, L.; Shu, Y.; Xing, C.; Chen, X.; Sun, S.; Huang, Y.; Truhlar, D. G. Recommendation of Orbitals for G0W0 Calculations on Molecules and Crystals. J. Chem. Theory Comput. 2022, 18, 3523-3537.

Ma, H.; Govoni, M.; Gygi, F.; Galli, G. A Finite-Field Approach for GW Calculations beyond the Random Phase Approximation. J. Chem. Theory Comput. 2019, 15, 154-164.

Vlček, V. Stochastic Vertex Corrections: Linear Scaling Methods for Accurate Quasiparticle Energies. J. Chem. Theory Comput. 2019, 15, 6254-6266.

Pavlyukh, Y.; Stefanucci, G.; van Leeuwen, R. Dynamically screened vertex correction to GW. Phys. Rev. B 2020, 102, 045121.

Bruneval, F.; Dattani, N.; van Setten, M. J. The GW Miracle in Many-Body Perturbation Theory for the Ionization Potential of Molecules. Front. Chem. 2021, 9, 749779.

Wang, Y.; Ren, X. Vertex effects in describing the ionization energies of the first-row transition-metal monoxide molecules. J. Chem. Phys. 2022, 157, 214115.

Mejuto-Zaera, C.; Vlček, V. Self-consistency in GWΓ formalism leading to quasiparticle-quasiparticle couplings. Phys. Rev. B 2022, 106, 165129.

Kutepov, A. L.; Kotliar, G. One-electron spectra and susceptibilities of the three-dimensional electron gas from self-consistent solutions of Hedin's equations. Phys. Rev. B 2017, 96, 035108.

Kutepov, A. L. Self-consistent solution of Hedin's equations: Semiconductors and insulators. Phys. Rev. B 2017, 95, 195120.

Förster, A. Assessment of the second-order statically screened exchange correction to the random phase approximation for correlation energies. J. Chem. Theory Comput. 2022, 18, 5948-5965.

Rieger, M. M.; Steinbeck, L.; White, I. D.; Rojas, H. N.; Godby, R. W. GW space-time method for the self-energy of large systems. Comput. Phys. Commun. 1999, 117, 211-228.

Hohenberg, P.; Kohn, W. Inhomogeneous Electron Gas. Phys. Rev. 1964, 136, B864-B871.

Kohn, W.; Sham, L. J. Self-Consistent Equations Including Exchange and Correlation Effects. Phys. Rev. 1965, 140, A1133.

Seidl, A.; Görling, A.; Vogl, P.; Majewski, J.; Levy, M. Generalized Kohn-Sham schemes and the band-gap problem. Phys. Rev. B 1996, 53, 3764-3774.

Layzer, A. J. Properties of the one-particle Green's function for nonuniform many-fermion systems. Phys. Rev. 1963, 129, 897-907.

Sham, L. J.; Kohn, W. One-particle properties of an inhomogeneous interacting electron gas. Phys. Rev. 1966, 145, 561-567.

Hüser, F.; Olsen, T.; Thygesen, K. S. Quasiparticle GW calculations for solids, molecules, and two-dimensional materials. Phys. Rev. B 2013, 87, 235132.

Nakashima, T.; Raebiger, H.; Ohno, K. Normalization of exact quasiparticle wave functions in the Green's function method guaranteed by the Ward identity. Phys. Rev. B 2021, 104, L201116.

(100) It should be understood that in practice one solves the correspondingly updated equation in the nth iteration, which reduces to (16) for n = 1.

Shishkin, M.; Marsman, M.; Kresse, G. Accurate quasiparticle spectra from self-consistent GW calculations with vertex corrections. Phys. Rev. Lett. 2007, 99, 246403.

Kutepov, A.; Haule, K.; Savrasov, S. Y.; Kotliar, G. Electronic structure of Pu and Am metals by self-consistent relativistic GW method. Phys. Rev. B 2012, 85, 155129.

Kutepov, A. L.; Oudovenko, V. S.; Kotliar, G. Linearized self-consistent quasiparticle GW method: Application to semiconductors and simple metals. Comput. Phys. Commun. 2017, 219, 407-414.

Friedrich, C.; Blügel, S.; Nabok, D. Quasiparticle Self-Consistent GW Study of Simple Metals. Nanomaterials 2022, 12.

Lei, J.; Zhu, T. Gaussian-based quasiparticle self-consistent GW for periodic systems. J. Chem. Phys. 2022, 157, 214114.

Saue, T.; Jensen, H. J. Quaternion symmetry in relativistic molecular calculations: The Dirac-Hartree-Fock method. J. Chem. Phys. 1999, 111, 6211-6222.

Sakuma, R.; Friedrich, C.; Miyake, T.; Blügel, S.; Aryasetiawan, F. GW calculations including spin-orbit coupling: Application to Hg chalcogenides. Phys. Rev. B 2011, 84, 1-10.

Watson, M. A.; Handy, N. C.; Cohen, A. J. Density functional calculations, using Slater basis sets, with exact exchange. J. Chem. Phys. 2003, 119, 6475-6481.

Krykunov, M.; Ziegler, T.; van Lenthe, E. Hybrid density functional calculations of nuclear magnetic shieldings using Slater-type orbitals and the zeroth-order regular approximation. Int. J. Quantum Chem. 2009, 109, 1676-1683.

Merlot, P.; Kjaergaard, T.; Helgaker, T.; Lindh, R.; Aquilante, F.; Reine, S.; Pedersen, T. B. Attractive electron-electron interactions within robust local fitting approximations. J. Comput. Chem. 2013, 34, 1486-1496.

Merlot, P.; Izsák, R.; Borgoo, A.; Kjaergaard, T.; Helgaker, T.; Reine, S. Charge-constrained auxiliary-density-matrix methods for the Hartree-Fock exchange contribution. J. Chem. Phys. 2014, 141, 094104.

Wirz, L. N.; Reine, S. S.; Pedersen, T. B. On Resolution-of-the-Identity Electron Repulsion Integral Approximations and Variational Stability. J. Chem. Theory Comput. 2017, 13, 4897-4906.

Ihrig, A. C.; Wieferink, J.; Zhang, I. Y.; Ropo, M.; Ren, X.; Rinke, P.; Scheffler, M.; Blum, V. Accurate localized resolution of identity approach for linear-scaling hybrid density functionals and for many-body perturbation theory. New J. Phys. 2015, 17, 093020.

Spadetto, E.; Philipsen, P. H. T.; Förster, A.; Visscher, L. Toward Pair Atomic Density Fitting for Correlation Energies with Benchmark Accuracy. J. Chem. Theory Comput. 2023, 19, 1499-1516.

Dunlap, B. I.; Connolly, J. W.; Sabin, J. R. On some approximations in applications of Xα theory. J. Chem. Phys. 1979, 71, 3396-3402.

Feyereisen, M.; Fitzgerald, G.; Komornicki, A. Use of approximate integrals in ab initio theory. An application in MP2 energy calculations. Chem. Phys. Lett. 1993, 208, 359-363.

Jung, Y.; Sodt, A.; Gill, P. M. W.; Head-Gordon, M. Auxiliary basis expansions for large-scale electronic structure calculations. Proc. Natl. Acad. Sci. 2005, 102, 6692-6697.

van Leeuwen, R.; Dahlen, N. E.; Stefanucci, G.; Almbladh, C. O.; von Barth, U. In Time-Dependent Density Functional Theory; Marques, M. A., Ullrich, C. A., Nogueira, F., Rubio, A., Burke, K., Gross, E. K., Eds.; Springer: Heidelberg, 2015; pp 185-217.

Liu, P.; Kaltak, M.; Klimeš, J.; Kresse, G. Cubic scaling GW: Towards fast quasiparticle calculations. Phys. Rev. B 2016, 94, 165109.

Vidberg, H. J.; Serene, J. W. Solving the Eliashberg equations by means of N-point Padé approximants. J. Low Temp. Phys. 1977, 29, 179-192.

Armbruster, M. K.; Weigend, F.; van Wüllen, C.; Klopper, W. Self-consistent treatment of spin-orbit interactions with efficient Hartree-Fock and density functional methods. Phys. Chem. Chem. Phys. 2008, 10, 1748-1756.

Desmarais, J. K.; Flament, J. P.; Erba, A. Spin-orbit coupling from a two-component self-consistent approach. I. Generalized Hartree-Fock theory. J. Chem. Phys. 2019, 151, 074107.

Förster, A.; Franchini, M.; van Lenthe, E.; Visscher, L. A Quadratic Pair Atomic Resolution of the Identity Based SOS-AO-MP2 Algorithm Using Slater Type Orbitals. J. Chem. Theory Comput. 2020, 16, 875-891.

Baerends, E.; Ziegler, T.; Atkins, A.; Autschbach, J.; Baseggio, O.; Bashford, D.; Bérces, A.; Bickelhaupt, F.; Bo, C.; Boerrigter, P.; Cavallo, L.; Daul, C.; Chong, D.; Chulhai, D.; Deng, L.; Dickson, R.; Dieterich, J.; Ellis, D.; van Faassen, M.; Fan, L.; Fischer, T.; Förster, A.; Guerra, C. F.; Franchini, M.; Ghysels, A.; Giammona, A.; van Gisbergen, S.; Goez, A.; Götz, A.; Groeneveld, J.; Gritsenko, O.; Grüning, M.; Gusarov, S.; Harris, F.; van den Hoek, P.; Hu, Z.; Jacob, C.; Jacobsen, H.; Jensen, L.; Joubert, L.; Kaminski, J.; van Kessel, G.; König, C.; Kootstra, F.; Kovalenko, A.; Krykunov, M.; van Lenthe, E.; McCormack, D.; Michalak, A.; Mitoraj, M.; Morton, S.; Neugebauer, J.; Nicu, V.; Noodleman, L.; Osinga, V.; Patchkovskii, S.; Pavanello, M.; Peeples, C.; Philipsen, P.; Post, D.; Pye, C.; Ramanantoanina, H.; Ramos, P.; Ravenek, W.; Reimann, M.; Rodríguez, J.; Ros, P.; Rüger, R.; Schipper, P.; Schlüns, D.; van Schoot, H.; Schreckenbach, G.; Seldenthuis, J.; Seth, M.; Snijders, J.; Solà, M.; Stener, M.; Swart, M.; Swerhone, D.; Tognetti, V.; te Velde, G.; Vernooijs, P.; Versluis, L.; Visscher, L.; Visser, O.; Wang, F.; Wesolowski, T.; van Wezenbeek, E.; Wiesenekker, G.; Wolff, S.; Woo, T.; Yakovlev, A. ADF2022.1 (modified development version). 2022.

Rüger, R.; Franchini, M.; Trnka, T.; Yakovlev, A.; van Lenthe, E.; Philipsen, P.; van Vuren, T.; Klumpers, B.; Soini, T. AMS 2022.1, SCM, Theoretical Chemistry, Vrije Universiteit, Amsterdam, The Netherlands, http://www.scm.com. 2022.

van Lenthe, E.; Baerends, E. J.; Snijders, J. G. Relativistic regular two-component Hamiltonians. J. Chem. Phys. 1993, 99, 4597.

van Lenthe, E.; Baerends, E. J.; Snijders, J. G. Relativistic total energy using regular approximations. J. Chem. Phys. 1994, 101, 9783-9792.

van Lenthe, E.; Snijders, J. G.; Baerends, E. J. The zero-order regular approximation for relativistic effects: The effect of spin-orbit coupling in closed shell molecules. J. Chem. Phys. 1996, 105, 6505-6516.

Dyall, K. G. Interfacing relativistic and nonrelativistic methods. I. Normalized elimination of the small component in the modified Dirac equation. J. Chem. Phys. 1997, 106, 9618.

Kutzelnigg, W.; Liu, W. Quasirelativistic theory equivalent to fully relativistic theory. J. Chem. Phys. 2005, 123, 241102.

Dyall, K. G. An exact separation of the spin-free and spin-dependent terms of the Dirac-Coulomb-Breit Hamiltonian. J. Chem. Phys. 1994, 100, 2118.

Sadlej, A. J.; Snijders, J. G. Spin separation in the regular Hamiltonian approach to solutions of the Dirac equation. Chem. Phys. Lett. 1994, 229, 435-438.

Visscher, L.; van Lenthe, E. On the distinction between scalar and spin-orbit relativistic effects. Chem. Phys. Lett. 1999, 306, 357-365.

van Lenthe, E.; Baerends, E. J.; Snijders, J. G. Construction of the Foldy-Wouthuysen transformation and solution of the Dirac equation using large components only. J. Chem. Phys. 1996, 105, 2373.

Förster, A.; Visscher, L. GW100: A Slater-Type Orbital Perspective. J. Chem. Theory Comput. 2021, 17, 5080-5097.

Stuke, A.; Kunkel, C.; Golze, D.; Todorović, M.; Margraf, J. T.; Reuter, K.; Rinke, P.; Oberhofer, H. Atomic structures and orbital energies of 61,489 crystal-forming organic molecules. Sci. Data 2020, 7, 1-11.

Jensen, F. Atomic orbital basis sets. Wiley Interdiscip. Rev. Comput. Mol. Sci. 2013, 3, 273-295.

te Velde, G.; Baerends, E. J. Precise density-functional method for periodic structures. Phys. Rev. B 1991, 44, 7888-7903.

Philipsen, P.; te Velde, G.; Baerends, E.; Berger, J.; de Boeij, P.; Franchini, M.; Groeneveld, J.; Kadantsev, E.; Klooster, R.; Kootstra, F.; Pols, M.; Romaniello, P.; Raupach, M.; Skachkov, D.; Snijders, J.; Verzijl, C.; Gil, J. C.; Thijssen, J. M.; Wiesenekker, G.; Peeples, C. A.; Schreckenbach, G.; Ziegler, T. BAND 2022.1 (modified development version), SCM, Theoretical Chemistry, Vrije Universiteit, Amsterdam, The Netherlands, http://www.scm.com. 2022.

Noro, T.; Sekiya, M.; Koga, T. Segmented contracted basis sets for atoms H through Xe: Sapporo-(DK)-nZP sets (n = D, T, Q). Theor. Chem. Acc. 2012, 131, 1-8.

Noro, T.; Sekiya, M.; Koga, T. Sapporo-(DKH3)-nZP (n = D, T, Q) sets for the sixth period s-, d-, and p-block atoms. Theor. Chem. Acc. 2013, 132, 1-5.

Becke, A. D. Density-functional thermochemistry. III. The role of exact exchange. J. Chem. Phys. 1993, 98, 5648-5652.

Bruneval, F.; Hamed, S. M.; Neaton, J. B. A systematic benchmark of the ab initio Bethe-Salpeter equation approach for low-lying optical excitations of small organic molecules. J. Chem. Phys. 2015, 142, 244101.

Kaltak, M.; Klimeš, J.; Kresse, G. Low scaling algorithms for the random phase approximation: Imaginary time and Laplace transformations. J. Chem. Theory Comput. 2014, 10, 2498-2507.

Kaltak, M.; Klimeš, J.; Kresse, G. Cubic scaling algorithm for the random phase approximation: Self-interstitials and vacancies in Si. Phys. Rev. B 2014, 90, 054115.

Véril, M.; Romaniello, P.; Berger, J. A.; Loos, P.-F. Unphysical Discontinuities in GW Methods. J. Chem. Theory Comput. 2018, 14, 5220-5228.

Monino, E.; Loos, P.-F. Unphysical Discontinuities, Intruder States and Regularization in GW Methods. J. Chem. Phys. 2022, 231101.

Pokhilko, P.; Yeh, C. N.; Zgid, D. Iterative subspace algorithms for finite-temperature solution of Dyson equation. J. Chem. Phys. 2022, 156, 094101.

In principle, there are 15 systems with multiple solutions. However, for CI4, all three solutions are very close to each other. Therefore, we retain this system in our benchmark.

Bruneval, F. Ionization energy of atoms obtained from GW self-energy or from random phase approximation total energies. J. Chem. Phys. 2012, 136, 194107.

Bintrim, S. J.; Berkelbach, T. C. Full-frequency GW without frequency. J. Chem. Phys. 2021, 154, 041101.

Lebègue, S.; Arnaud, B.; Alouani, M.; Bloechl, P. E. Implementation of an all-electron GW approximation based on the projector augmented wave method without plasmon pole approximation: Application to Si, SiC, AlAs, InAs, NaH, and KH. Phys. Rev. B 2003, 67, 155208.

Duchemin, I.; Blase, X. Robust Analytic-Continuation Approach to Many-Body GW Calculations. J. Chem. Theory Comput. 2020, 16, 1742-1756.
Quasiparticle and optical properties of rutile and anatase TiO2. W Kang, M S Hybertsen, Phys. Rev. B -Condens. Matter Mater. Phys. 85203Kang, W.; Hybertsen, M. S. Quasiparticle and optical properties of rutile and anatase TiO2. Phys. Rev. B -Condens. Matter Mater. Phys. 2010, 82, 085203.
Quasiparticle self-consistent GW theory of III-V nitride semiconductors: Bands, gap bowing, and effective masses. A Svane, N E Christensen, I Gorczyca, M Van Schilfgaarde, A N Chantis, T Kotani, Phys. Rev. B -Condens. Matter Mater. Phys. 115102Svane, A.; Christensen, N. E.; Gorczyca, I.; van Schilfgaarde, M.; Chantis, A. N.; Kotani, T. Quasiparticle self-consistent GW theory of III-V nitride semiconductors: Bands, gap bowing, and effective masses. Phys. Rev. B -Condens. Matter Mater. Phys. 2010, 82, 115102.
Quasiparticle band structure of Zn-IV-N2 compounds. A Punya, W Lambrecht, M Van Schilfgaarde, Phys. Rev. B -Condens. Matter Mater. Phys. 84Punya, A.; Lambrecht, W.; van Schilfgaarde, M. Quasiparticle band structure of Zn- IV-N2 compounds. Phys. Rev. B -Condens. Matter Mater. Phys. 2011, 84, 165204.
Vertex function compliant with the Ward identity for quasiparticle self-consistent calculations beyond GW. A Tal, W Chen, A Pasquarello, Phys. Rev. B. 161104Tal, A.; Chen, W.; Pasquarello, A. Vertex function compliant with the Ward identity for quasiparticle self-consistent calculations beyond GW. Phys. Rev. B 2021, 103, 161104.
Effect of ladder diagrams on optical absorption spectra in a quasiparticle self-consistent GW framework. B Cunningham, M Grüning, P Azarhoosh, D Pashov, M Van Schilfgaarde, Phys. Rev. Mater. 234603Cunningham, B.; Grüning, M.; Azarhoosh, P.; Pashov, D.; van Schilfgaarde, M. Effect of ladder diagrams on optical absorption spectra in a quasiparticle self-consistent GW framework. Phys. Rev. Mater. 2018, 2, 034603.
B Cunningham, M Gruening, D Pashov, M Van Schilfgaarde, Qsgw, arXiv:2106.05759v1Quasiparticle Self consistent GW with ladder diagrams in W. Cunningham, B.; Gruening, M.; Pashov, D.; van Schilfgaarde, M. QSGW: Quasipar- ticle Self consistent GW with ladder diagrams in W. arXiv:2106.05759v1 2021,
Optical response and band structure of LiCoO2 including electron-hole interaction effects. S K Radha, W Lambrecht, B Cunningham, M Grüning, D Pashov, M Van Schilfgaarde, Phys. Rev. B. 115120Radha, S. K.; Lambrecht, W.; Cunningham, B.; Grüning, M.; Pashov, D.; van Schil- fgaarde, M. Optical response and band structure of LiCoO2 including electron-hole interaction effects. Phys. Rev. B 2021, 104, 115120.
Relativity and the Periodic System of Elements. P Pyykko, J P Desclaux, Acc. Chem. Res. 12Pyykko, P.; Desclaux, J. P. Relativity and the Periodic System of Elements. Acc. Chem. Res. 1979, 12, 276-281.
A similarity renormalization group approach to Green's function methods. A Marie, P.-F Loos, arXiv:2303.059842023Marie, A.; Loos, P.-F. A similarity renormalization group approach to Green's function methods. arXiv:2303.05984 2023, 1-14.
Tetrahedron integration method for strongly varying functions: Application to the GT self-energy. C Friedrich, Phys. Rev. B. 75142Friedrich, C. Tetrahedron integration method for strongly varying functions: Appli- cation to the GT self-energy. Phys. Rev. B 2019, 100, 075142.
Assessment of the GW approximation using Hubbard chains. T J Pollehn, A Schindlmayr, R W Godby, J. Phys. Condens. Matter. 11273Pollehn, T. J.; Schindlmayr, A.; Godby., R. W. Assessment of the GW approximation using Hubbard chains. J. Phys. Condens. Matter 1998, 1, 1273.
Simple eigenvalue-self-consistent $GW 0$. V Vlček, R Baer, E Rabani, D Neuhauser, J. Chem. Phys. 174107Vlček, V.; Baer, R.; Rabani, E.; Neuhauser, D. Simple eigenvalue-self-consistent $GW 0$. J. Chem. Phys. 2018, 149, 174107.
Up-Conversion Intersystem Crossing Rates in Organic Emitters for Thermally Activated Delayed Fluorescence: Impact of the Nature of Singlet vs Triplet Excited States. P K Samanta, D Kim, V Coropceanu, J L Brédas, J. Am. Chem. Soc. 139Samanta, P. K.; Kim, D.; Coropceanu, V.; Brédas, J. L. Up-Conversion Intersystem Crossing Rates in Organic Emitters for Thermally Activated Delayed Fluorescence: Impact of the Nature of Singlet vs Triplet Excited States. J. Am. Chem. Soc. 2017, 139, 4042-4051.
| []
|
[
"Coded Mask Instruments for Gamma-Ray Astronomy",
"Coded Mask Instruments for Gamma-Ray Astronomy"
]
| [
"Andrea Goldwurm [email protected] \nDépartement d'Astrophysique /IRFU/DRF\nUniversité Paris Cité\nCNRS\nCEA\nAstroparticule et Cosmologie\nF-75013ParisFrance\n",
"Aleksandra Gros [email protected] \nDépartement d'Astrophysique /IRFU/DRF\nUniversité Paris Cité\nCNRS\nCEA\nAstroparticule et Cosmologie\nF-75013ParisFrance\n",
"Andrea Goldwurm \nCEA-Saclay\n91191Gif-sur-YvetteFrance\n",
"Aleksandra Gros \nUniversité Paris-Saclay\nUniversité Paris Cité\nCEA\nCNRS\n91191Gif-sur-YvetteFrance\n"
]
| [
"Département d'Astrophysique /IRFU/DRF\nUniversité Paris Cité\nCNRS\nCEA\nAstroparticule et Cosmologie\nF-75013ParisFrance",
"Département d'Astrophysique /IRFU/DRF\nUniversité Paris Cité\nCNRS\nCEA\nAstroparticule et Cosmologie\nF-75013ParisFrance",
"CEA-Saclay\n91191Gif-sur-YvetteFrance",
"Université Paris-Saclay\nUniversité Paris Cité\nCEA\nCNRS\n91191Gif-sur-YvetteFrance"
]
| []
| Coded mask instruments have been used in high-energy astronomy for the last forty years now and designs for future hard X-ray/low gamma-ray telescopes are still based on this technique when they need to reach moderate angular resolutions over large field of views, particularly for observations dedicated to the, now flourishing, field of time domain astrophysics. However these systems are somehow unfamiliar to the general astronomers as they actually are two-step imaging devices where the recorded picture is very different from the imaged object and the data processing takes a crucial part in the reconstruction of the sky image. Here we present the concepts of these optical systems applied to high-energy astronomy, the basic reconstruction methods including some useful formulae and the trend of the expected and observed performances as function of the system designs. We review the historical developments and recall the flown space-borne coded mask instruments along with the description of a few relevant examples of major successful implementations and future projects in space astronomy. | 10.1007/978-981-16-4544-0_44-1 | [
"https://export.arxiv.org/pdf/2305.10130v1.pdf"
]
| 258,741,110 | 2305.10130 | 0859266a5dcd7ff262f45fe80425877e95210ee9 |
Coded Mask Instruments for Gamma-Ray Astronomy
17 May 2023
Andrea Goldwurm [email protected]
Département d'Astrophysique /IRFU/DRF
Université Paris Cité
CNRS
CEA
Astroparticule et Cosmologie
F-75013ParisFrance
Aleksandra Gros [email protected]
Département d'Astrophysique /IRFU/DRF
Université Paris Cité
CNRS
CEA
Astroparticule et Cosmologie
F-75013ParisFrance
Andrea Goldwurm
CEA-Saclay
91191Gif-sur-YvetteFrance
Aleksandra Gros
Université Paris-Saclay
Université Paris Cité
CEA
CNRS
91191Gif-sur-YvetteFrance
Coded Mask Instruments for Gamma-Ray Astronomy
17 May 202310.1007/978-981-16-4544-0In: Handbook of X-ray and Gamma-ray Astrophysics, Springer 2023, Singapore, eds. C. Bambi, A. Santangelo, ISBN: 978-981-16-4544-0Coded MasksCoded AperturesImaging SystemsGamma-Ray AstronomyImage DecodingImage ProcessingData Analysis
Coded mask instruments have been used in high-energy astronomy for the last forty years now and designs for future hard X-ray/low gamma-ray telescopes are still based on this technique when they need to reach moderate angular resolutions over large field of views, particularly for observations dedicated to the, now flourishing, field of time domain astrophysics. However these systems are somehow unfamiliar to the general astronomers as they actually are two-step imaging devices where the recorded picture is very different from the imaged object and the data processing takes a crucial part in the reconstruction of the sky image. Here we present the concepts of these optical systems applied to high-energy astronomy, the basic reconstruction methods including some useful formulae and the trend of the expected and observed performances as function of the system designs. We review the historical developments and recall the flown space-borne coded mask instruments along with the description of a few relevant examples of major successful implementations and future projects in space astronomy.
Introduction
Coded aperture mask imaging systems, in short Coded Mask Instruments hereinafter CMI, are multiplexing optical devices that allow, through a proper spatial modulation of the incident radiation and its following recording by a position sensitive detector, the simultaneous measurement of flux and position of multiple sources in the field of view.
The basic idea of these systems is to couple a position sensitive photon-detector (PSD) to a filter or a mask that absorbs part of the incident radiation in such a way to operate a spatial modulation of the recorded photon flux that is dependent on the angular distribution of the sources in the field of view. From the recorded image, the input sky image is reconstructed through a specific data post-processing that takes into account the modulation that the mask operates.
CMIs are employed when conventional focusing systems based on lenses or reflectors cannot be used. Nowadays their major application is in high-energy astronomy, and in particular in the hard X-ray (10-100 keV) and soft gamma-ray (100 keV-10 MeV) domains where conventional focusing techniques, commonly employed at energies lower than 15 keV, are not easily implemented because the radiation wavelengths are comparable or shorter than the typical inter-atomic distances. At energies higher than a few MeV the mask material becomes more and more transparent, the Compton scattering dominant and the geometrical spatial modulation, based on the photoelectric absorption process, less efficient. Absorption increases again at energies >10 MeV thanks to the pair-production effect, but there it is more efficient to use the intrinsic directional properties of this interaction on the few detected photons rather than the collective effect of shadow projection by many rays in order to measure the source direction. X/gamma-ray astronomy is also the domain where the high and variable background becomes dominant over the source contributions, which drastically limits the performance of standard on/off monitoring techniques and where the simultaneous measurement of source and background is crucial even for the simple source detection.
Coded masks were conceived in the 1970s-1980s and employed successfully in the past 40-30 years in high-energy astronomy, on balloon-borne instruments first and then onboard space missions like Spacelab 2, GRANAT, and BeppoSAX. They have been chosen as imaging systems for experiments on a number of major missions presently in operation, the European INTEGRAL, the American Swift and the Indian ASTROSAT, and for some future projects like the Chinese-French SVOM mission. Today the NuSTAR and the Hitomi missions have successfully pushed up to 80 keV the technique of grazing incidence X-ray mirrors [49] [90]. However the limited field of view (few arcmin) achieved by these telescopes and the variability of the sky at these energies make the coded mask systems still the best options to search for bright transient or variable events in wide field of views.
Coded aperture systems have been employed also in medicine and in monitoring nuclear plants and implementations in nuclear security programs are also envisaged [17] [2]. Even if basic concepts are still valid for these systems, certain conditions, specific to gamma-ray astronomy, can be relaxed (e.g., source at infinity, high level and variable background, etc.), and therefore designs and data analysis for CMI for terrestrial studies can take avery different form. In particular for close sources (the so-called near-field condition), the system can actually provide three-dimensional imaging because of the intrinsic enlarging effect of shadow projection as the source distance decreases. This interesting property is not applicable in astronomy and we will not discuss it here.
In spite of the large literature on the topic, few comprehensive reviews were dedicated to these systems; the most complete is certainly the one by Caroli et al. 1987 [14], which however was compiled before the extensive use of CMI in actual missions. In this paper we review the basic concepts, the general characteristics and specific terminology (troughout the paper key terms are written in bold when first defined) of coded mask imaging for gamma-ray astronomy ( § 2), with a historical presentation of the studies dedicated to the search of the optimum mask patterns and best system designs. We present ( § 3) in a simple way the standard techniques of the image reconstruction based on cross-correlation of the detector image with a function derived from the mask pattern, providing the explicit formulae for this analysis and for the associated statistical errors, and the further processing of the images which usually involves iterative noise cleaning. We will discuss the performance of the systems ( § 4), in particular the sensitivity and localization accuracy, under some reasonable assumptions on the background, and their relation with the instrument design.
We cannot be exhaustive in all topics and references of this vast subject. Clearly the analysis of coded mask system data relies, as for any other telescope, on a careful calibration of the instrument, the understanding of systematic effects of the detector and the measurement and proper modeling of the background. For these aspects these telescopes are not different from any other one and we will not treat these topics, apart from the specific question of non-uniform background shape over the PSD, because they are specific to detectors, satellites, space operations and environment of the individual missions. Also we will not discuss detailed characteristics of the PSDs and we will neglect description of one-dimensional (1-d) aperture designs and systems that couple spatial and time modulation (like rotational collimators), as we are mainly interested in the overall 2-d coded aperture imaging system.
We include a section ( § 5) on the application of CMI in high-energy astronomy with a presentation of the historical developments from rocket-borne to space-borne projects and mentioning all the experiments that were successfully flown up to today on space missions. Finally we dedicate specific sub-sections ( § 5.3-5.5) to three gamma-ray CMI telescopes: SIGMA, that flew on the GRANAT space mission in the 1990s, IBIS currently operating on the INTEGRAL gamma-ray observatory and ECLAIRs, planned for launch in the next years on board the SVOM mission. These experiments are used to illustrate the different concepts and issues presented and to show some of the most remarkable "imaging" results obtained with CMI, in highenergy astronomy in the past 30 years. Fig. 1 Coded aperture principle. Two sources at infinity project a different pattern of the mask on the detector plane (shadow-gram). For a cyclic system, here a mask with a URA basic pattern of 5×3 replicated to 9×5, it is the same pattern but shifted according to the source position.
Basics Principles of Coded Mask Imaging
Definitions and Main Properties
In coded aperture telescopes the source radiation is spatially modulated by a mask, a screen of opaque and transparent elements, usually of the same shape and size, ordered in some specific way (mask pattern), before being recorded by a position sensitive detector. For each source, the detector will simultaneously measure its flux together with background flux in the detector area corresponding to the projected transparent mask elements, and background flux alone in detector area corresponding to the projected opaque elements (Fig. 1). From the recorded detector image (Fig. 2), which includes the shadows of parts of the mask (shadow-grams) projected by the sources in the field of view onto the detector plane and using the mask pattern itself an image of the sky can, under certain conditions, be reconstructed.
Mask patterns must be designed to allow each source in the field of view to cast a unique shadow-gram on the detector, in order to avoid ambiguities in the reconstruction of the sky image. In fact each source shadow-gram shall be as different as possible from those of the other sources. The simplest aperture that fulfills this condition is of course the one that has only one hole, the well-known pinhole cam-era. The response to a point source of this system is a peak of the projected size of the hole and null values elsewhere. The overall resulting image on the detector is a blurred and inverted image of the object. However the sensitive area and angular resolution, for given mask-detector distance, are inversely proportional to each other: effective area can only be increased by increasing the hole size, which worsens the angular resolution (increases the blurring).
A practical alternative is to design a mask with several small transparent elements of the same size (Fig. 1), a multi-hole camera. The resolution still depends on the dimension of one individual hole but the sensitive area can be increased by increasing the number of transparent elements. In this case however the disposition of the holes is important since when more than one hole is used, ambiguity can rise regarding which part of the sky is contributing to the recorded image. For example with a regular chess board pattern mask, different sources would project identical shadows and disentangling their contributions would be impossible. Mask patterns that have good imaging properties exist ( § 2.3). With the use of a properly designed multiple-hole mask system an image of the sky can be recovered from the recorded shadow-gram through a convenient computation. In general the sky image reconstruction, or image deconvolution as it is often called, is based on a correlation procedure between the recorded detector image and a decoding array derived from the mask pattern. Such correlation will provide a maximum value for the decoding array "position" corresponding to the source position, where the match between the source shadow-gram and the mask pattern is optimum, and generally lower values elsewhere. Note that, unlike focusing telescopes, individual recorded events are not uniquely positioned in the sky: each event is either background or coming from any of the sky areas which project an open mask element at the event detector position. The sky areas compatible with a single recorded event will draw a mask pattern in the sky. It is rather the mask shadow, collectively projected by many source rays, that can be "positioned" in the sky.
Assuming a perfect detector (infinite spatial resolution) and a perfect mask (infinitely thin, totally opaque closed elements, totally transparent open elements), the angular resolution of such a system is then defined by the angle subtended by one hole at the detector. The sensitive area depends instead on the total surface of transparent mask elements viewed by the detector. So, reducing hole size or increasing mask to detector distance while increasing accordingly the number of holes improves the angular resolution without loss of sensitivity. Increasing the aperture area will increase the effective surface but, since the estimation of the background is also a crucial element, this does not mean that the best sensitivity would increase monotonically with the increase of the mask open fraction (the ratio between transparent mask area and total mask area, also sometimes designed as aperture or transparent fraction). In the gamma-ray domain where the count rate is dominated by the background, the optimum aperture is actually one-half. In the X-ray domain instead, the optimum value rather depends on the expected sky to be imaged even if in general, because of the Cosmic X-ray Background (CXB) which dominates at low energies, the optimal aperture is somewhat less than 0.5.
The field of view (FOV) of the instrument is defined as the set of sky directions from which the source radiation is modulated by the mask and its angular dimension is determined by the mask and the detector sizes and by their respective distance, in the absence of collimators. Since only the modulated radiation can be reconstructed, in order to optimize the sensitive area of the detector and have a large FOV, masks larger than the detector plane are usually employed, even if equal dimensions (for the so-called simple or box type CMI systems) have also been used. The FOV is thus divided in two parts: the fully coded (FC) FOV for which all source radiation directed toward the detector plane is modulated by the mask and the partially coded (PC) FOV for which only a fraction of it is modulated by the mask (Fig. 3). The non-modulated source radiation, even if detected, cannot be distinguished from the (non-modulated) background. In order to reduce its statistical noise and background radiation, collimators on the PSD or an absorbing tube connecting the mask and detector are used. Fig. 3 Left: A coded aperture telescope geometry with a mask larger than the detector and a shield connecting them. The field of views around the telescope axis are shown: the FCFOV (red), the Half Modulation EXFOV (green) and the Zero Response EXFOV (black). Right: Relation, for a square CMI, between the array sizes of mask (M), detector (D), and sky (S), with indication, in the sky, of the FOVs (same color code than left panel).
The typical CMI geometry and its FOVs are shown in Fig. 3. If holes are uniformly distributed over the mask, the sensitivity is approximately constant in the FCFOV and decreases in the PCFOV linearly because the mask modulation decreases to zero. The total FOV (FC+PC) is often called extended (EX) FOV and can be characterized by the level of modulation of the PC. Figure 3 right shows the relative sizes of the (ZR)EXFOV, detector and mask. For simple systems the FCFOV is limited to the on-axis direction and all the EXFOV is PCFOV. Table 1 reports the approximate imaging characteristics provided by a coded aperture system (as illustrated in Fig. 3) as functions of its geometrical parameters along one direction. Values of the IBIS/ISGRI system ( § 5.4) for which the design parameters are given in the notes are reported as example. The EXFOV dimensions are given for half modulation level and for zero response. Both angular resolution ( § 4.2) and localization power ( § 4.3), which are at the first order linked to the angle subtended by the mask element size to the detector and the detector pixel (or spatial resolution) to the mask, depend actually also on the reconstruction method and even on the distribution of holes in the mask pattern as described below. √ 3σ D where σ D is the linear detector resolution (in σ ) for continuous detector. SNR here is the "imaging signal to noise ratio" SNR I for known source position ( § 4.1). IBIS/ISGRI approximate parameters: L M = 1064 mm, L D = 600 mm, H = 3200 mm, m = 11.2 mm, d = 4.6 mm.
Coding and Decoding: The Case of Optimum Systems
To analyze the properties of coded mask systems we first simplify the treatment by considering an optimum coded mask system which provides after the image reconstruction a shift invariant and side-lobes-free spatial response to a point source, the so called System Point Spread Function (SPSF), in the FCFOV (e.g., [28]).
We assume a fully absorbing infinitely thin mask, a perfectly defined infinitely thin PSD with infinite spatial resolution and perfect detection efficiency. The object, the sky image, described by the term S is viewed by the imaging system composed by a mask described by the function M and a detector that provides an image D. S, M and D are then continuous real functions of two real variables. M assumes values of 1 in correspondence to transparent elements and 0 for opaque elements, the detector array D is given by the correlation 1 of the sky image S with M plus an un-modulated background array term B
D = S M + B
If M admits a so-called correlation inverse function, say G, such that M G = δfunction, then we can reconstruct the sky by performing
S = D G = S M G + B G = S δ + B G = S + B G
and S differs from S only by the B G term.
In certain cases, when, for example, the mask M is derived from a cyclic replication of an optimum basic pattern, if the background is flat then the term B G is also constant and can be removed. Since even for very high detector resolution the information must be digitally treated in the form of images with a finite number of image pixels, the problem must be generally considered in its discrete form. We can formulate the same process in digital form by substituting the continuous functions with discrete arrays and considering discrete operators instead of integral operators. S, M, G, D, and B terms will therefore be finite real 2-d arrays, and the delta function the delta symbol of Kronecker. The discrete correlation is obtained by finite summations and the reconstructed sky S by
S i, j = ∑ kl D k,l G i+k, j+l
with i, j indices that run over the sky image pixels and k, l over the detector pixels. Mask patterns that admit a correlation inverse array exist ( § 2.3), and can be used to design the so-called optimum systems. For instance, for masks M that have an auto-correlation given by a delta function, the decoding array constructed posing G = 2 · M − 1 (i.e., G = +1 for M = 1 and G = −1 for M = 0) is then a correlation inverse. To have such a side-lobe-free response in an optimum system, a source must however be able to cast on the detector a whole basic pattern. To make use of all the detector area and to allow more than one source to be fully coded, the mask basic pattern is normally taken to be the same size and shape as the detector and the total mask made by a cyclic repetition of the basic pattern (in general up to a maximum of 2 times minus 1 in each dimension to avoid ambiguities) (Fig. 4). For such optimum systems, a FCFOV source will always project a cyclically shifted version of the basic pattern, and correlating the detector image with the G decoding array within the limits of the FCFOV will provide a flat side-lobe peak with position-invariant shape at the source position ( Fig. 4 right).
A source outside the FCFOV but within the PCFOV will instead cast an incomplete pattern, and its contribution cannot be a priori automatically subtracted by the correlation with the decoding array at other positions than its own, and it will produce secondary lobes over all the reconstructed EXFOV including the FCFOV. Following a standard nomenclature of CMI, we refer to these secondary lobes as coding noise.
On the other hand the modulated radiation from PC sources can be reconstructed by extending with a proper normalization the correlation procedure to the PCFOV. The reconstructed sky in the total field (EXFOV) is therefore composed by the cen-tral region (FCFOV) of constant sensitivity and optimum image properties, i.e., a position-invariant and flat side-lobes SPSF, surrounded by the PCFOV of decreasing sensitivity and non-perfect SPSF (Fig. 5). In the PCFOV, the SPSF includes coding noise, the sensitivity decreases, and the relative variance increases toward the edge of the field. Also, even FCFOV sources will produce coding noise in the PCFOV, while sources outside the EXFOV are not modulated by the mask and simply contribute to the background level on the detector. When a complete mask is made of a cyclic repetition of a basic pattern, then each source in the FOV will produce eight large secondary lobes (in rectangular geometry) at the positions which are symmetrical with respect to the real source position at distances given by the basic pattern: these spurious peaks of large coding noise are usually called ghosts or artifacts (Fig. 5).
These optimum masks also minimize the statistical errors associated to the reconstructed peaks and make it uniform along the FCFOV. Since G is two-valued and made of +1s or -1s the variance associated to the reconstructed image in the FCFOV is given by V = G 2 D = Σ D, the variance associated to each reconstructed sky image pixel is constant in the FCFOV and equal to the total counts recorded by the detector. This implies that the source signal to noise ratio (SNR) is simply
SNR = C S √ C S +C B = Reconstructed Source Counts √ Total Detected Counts
where C S and C B are source and background recorded counts. The deconvolution is then equivalent to summing up counts from all the detector open elements (source and background counts) and subtracting counts from the closed ones for that source (background counts only).
Historical Developments and Mask Patterns
Following the first idea to modulate incident radiation using Fresnel plates, formulated by Mertz and Young [65], the concept of a pinhole camera with multiple holes for high energy astronomy (the multiple-pinhole camera) was proposed by Ables [1] and Dicke [23] at the end of the 1960s. In these designs multiple holes of the same dimensions are distributed randomly on the absorbing plate and in spite of the inherent production of side-lobes in the SPSF, the increase in the aperture fraction compared to the single pinhole design highly improves the sensitivity of the system, at the same time maintaining the angular resolution. Toward the end of the 1970s it was realized that special mask patterns could provide optimum imaging properties for coded aperture systems, and then a large number of the early works focused on the search for these optimal or nearly optimal aperture patterns. Most of these are built using binary sets called cyclic different sets (CDS) [6] which have the remarkable property that their cyclic auto-correlation function (ACF) is two-valued and approximates a delta function modulo a constant term. Certain of these sets can be disposed (following certain prescriptions) to form 2-d arrays, the so-called basic patterns, which also have the property of having 2-d cyclic auto-correlations which are bi-dimensional delta functions, thus allowing design of coded aperture systems where a correlation inverse is directly derived from the mask pattern. Thus by disposing, in a rectangular geometry, 2×2 such 2-d basic pattern side by side (actually less than 2 times in each direction in order to avoid full repetition of the pattern and then ambiguity in reconstruction) at a certain distance from a detector plane of the same dimension as the basic pattern one obtains an optimum system with maximum FCFOV, free of peak repetitions and coding noise. Care must be taken on how the mosaic of the basic pattern is done in order for a source to project on the detector a shifted, but complete, version of the basic pattern.
Patterns Based on Cyclic Different Sets
A cyclic different set D(N, k, λ ) is a collection of k distinct integer residues d 1 , ..., d k modulo N, for which the congruence d i − d j = q mod(N) has exactly λ distinct solution pairs (d i ,d j ) in D for every residue q = 0 mod(N). If such a different set D exists, then λ = k(k − 1)/(N − 1). This mathematical definition simply means that for these sets, a cyclic (over the dimension N of the larger set to which they belong) displacement vector between any two elements of the set occurs exactly a constant number of times, given by the parameter λ , which is called the "redundancy" of the set. For this reason binary arrays based on CDS are also called uniformly redundant arrays (URA). From this property immediately follows that a 1-d binary sequence M of dimension N built from a CDS D(N, k, λ ) by the following prescription
m i = 1 if i ∈ D 0 if i / ∈ D has an ACF a i = ∑ j m j m j+i = k for i = 0 mod N λ for i = 0 mod N
that is a δ function. The parameter k − λ is also an important characteristic of the set since it determines the difference between the peak and the plateau of the ACF. The higher this value the better is the signal to noise response to a point source of the derived imaging system. Several types of CDS exist and early studies on the subject were focused to find as many such sequences as possible and establish the way to build them. A class that was already well known from the coding theory was the Non Redundant Arrays (NRA), which are in fact CDS with redundancy = 1. These have however densities of elements very small (< 10%) and therefore provide apertures far from the ideal 30% to 50% open fraction needed by X/gamma-ray astronomy.
The most interesting CDS for 2-d astronomical imaging are those called msequences or also pseudo-noise (PN), pseudo-random, shift-register or Hadamard-Singer sets. These are part of the more general class of Hadamard sets, which are CDS with N = 4t − 1 and t integer (k = 2t − 1, λ = t − 1) and are related to the rows of Hadamard matrices (which are matrices with mutually orthogonal rows). The m-sequence sets that exist for N = 2 m − 1 with m integer > 1 are particularly interesting because they have nearly 50% open fraction and when m is even they can be factorized in p × q arrays with p = 2 m/2 + 1 and q = 2 m/2 − 1 in order to form rectangular (quasi-square) arrays.
The first to propose to use different sets for building 2-d imaging optimum systems for X-ray astronomy were independently Gunson and Polichronopulos [47] and Miyamoto [66] in 1976. They both identified the m-sequences as the original set to use for the design, but with different mapping in 2-d arrays to obtain the ba-sic pattern. The second author actually started from the Hadamard arrays that were studied in particular for spectroscopy by Harwit, Sloane, and collaborators [50]. The proposed pattern is equivalent to the one obtained by filling up the array with the PN sequence row by row. The first authors instead proposed to build basic patterns from m-sequences filling the array along extended diagonals (this further requires that p and q are mutually primes). In the two cases, the mosaic of cyclic repetition of the basic pattern must be performed in a different way in order to preserve the δ -function ACF property. For the diagonal prescription, the basic patterns can just set adjacent to each other; for the row by row construction, those on the side must be shifted vertically by one row (see [14] for the details). We call these masks Hadamard masks to distinguish them from the URA described below, even if both can be considered URA. A more complete discussion of the way PN-sequences are used for an imaging coded mask instrument of the type proposed by [47], including the way of filling the 2-d array by extended diagonal, was provided soon after by Proctor et al. [73] who also discussed the implementation in the SL1501 experiment A particular subset of Hadamard CDS, the twin prime CDS, are those for which p and q are primes and differ by 2 (q = p − 2). These sets can be directly mapped in p × q arrays using the prescription proposed by Fenimore and Cannon [28] in 1977. In a series of other seminal papers these authors and collaborators improved the description of coded aperture imaging using these URA arrays and discussed their performances and a number of other associated topics [26] [27][29] [30]. These URA masks, as we will call them following [28], are generated from quadratic residue sequences of order p and q (p = q + 2) according to the following prescription:
M i j = 0 if i = 0 1 if j = 0, i = 0 1 if C p i · C q j = 1 where C p i = +1 if ∃ k ∈ Z, 1 ≤ k < p, i = k 2 mod p −1 otherwise 0 otherwise
Other 2-d rectangular arrays presenting delta function ACF were identified as Perfect Binary Arrays (PBA). Again they are a generalization in 2-d of CDS, include the URAs, and are based on different set group theory [54].
Early designs of CMI assumed rectangular geometry but in 1985 mask patterns for hexagonal geometry were proposed by Finger and Prince [33]. These are based on Skew-Hadamard sequences (Hadamard sequences with order N prime and constructed from quadratic residues) that, for dimensions N = 12t + 7 where t is an integer, can be mapped onto hexagonal lattices, with axes at 60 • from each other, to form hexagonal URA, the HURA. In addition to be optimum arrays (they have a δ -function ACF), they are also anti-symmetric with respect to their central element (complete inversion of the pattern) under 60 • rotation. This property allows one to use them to subtract a non-uniform background, if a rotation of the mask of 60 • can be implemented, and even to smear out the ghosts created by a replicated pattern if a continuous rotation can be performed. The hexagonal geometry is also particularly adapted to circular detectors. The complications induced by moving elements in satellites have limited the use of such mask/anti-mask concept based on mask rotation with respect to the detector plane. A rotating HURA mask ( Fig. 6 center) was successfully implemented in the GRIP balloon borne experiment [3] and operated during a few flights allowing for an efficient removal of the background non-uniformity ( § 5.1). A fixed non-replicated HURA of 127 elements has been implemented for the SPI/INTEGRAL instrument ( § 5.2).
Other Optimum Patterns
The limited number of dimensions for which CDS exist coupled to the additional limitation that N must be factorized in two integers for a rectangular geometry or comply with more stringent criteria for the hexagonal one implies that a small number of sequences can actually be used for optimum masks. This led several authors to look for other optimum patterns, and several new designs were proposed in the 1980s and 1990s, even if somehow related to PN sequences. Even though for these patterns the ACF is not exactly a delta function, it is close enough that a simple modification of the decoding arrays from the simple mask patterns allows recovery of a shift invariant and side-lobe-free SPSF. For these masks therefore an inverse correlation array exists and an optimum imaging system can be designed.
The most used of them was certainly the Modified URA or MURA (Fig. 6 right) of Gottesman and Fenimore [44]. Square MURAs exist for all prime number linear dimensions and this increases by about a factor 3 the number of rectangular optimal arrays with respect to the URA and Hadamard sets. They are basically built like URA on quadratic residues but for the first element (or central element for a 2-d pattern) which is defined as not part of the set. The MURAs also have symmetric properties with respect to the central element which permits a MURA using the complement of the pattern (but keeping the central element to 0 value). The correlation inverse is built like in URAs (+1 for mask open elements and −1 for opaque ones) apart from the central element, and its replications, if any, which are set to +1 even if the element is opaque. With this simple change from the mask pattern, the derived decoding array G is a correlation inverse and the system is optimum.
Other optimum rectangular designs for which a correlation inverse can be defined were obtained from the product of 1-d PN sequences, the Pseudo Noise Product (PNP), or 1-d MURAs (MP and MM products patterns).
Real Systems and Random Patterns
More recent studies of mask patterns have focused on more practical issues such as how to have opaque elements all connected between them by at least one side in order to build robust self-supporting masks able to resist, without (absorbing) support structures, to the vibration levels of rocket launches. As explained above, even for an optimum mask pattern, any source in the PCFOV will produce coding noise and spurious peaks also in the FCFOV. In order to obtain a pure optimum system one has then to implement a collimator which reduces the response to 0 at the edge of the FCFOV. This solution was proposed by [47] who also suggested to include the collimator directly into the mask rather than in the detector plane. However the total loss of the PCFOV (even if affected by noise) and the loss of efficiency also for FC sources not perfectly on-axis are too big a price to pay to obtain a clean system and led to the abandonment of the collimator solution in favor of a shield between the mask edges and detector borders in order to reduce background and out of FOV source contributions.
In addition the geometry of optimum systems cannot be, in practice, perfectly realized. Effects like dead area or noisy pixels of the detector plane, missing data from telemetry errors, not perfect alignment, tilt or rotation of the mask with respect to the detector, absorption and scattering effects of supporting structures of the mask or of the detector plane and several other systematic effects directly increase the coding noise and ghosts and degrade the imaging quality of the system.
Since the imperfect design of real instrument generally breaks down the optimum imaging properties of the cyclic optimum mask patterns, today these patterns are not anymore considered essential for a performing coded mask system and there is a clear revival of random patterns. Indeed for the typical scientific requirements of CMI (detection/localization of sources in large FOV) one prefers to have some low level of coding noise spread over a large FOV rather than few large ghosts produced by the needed cyclic repetition of the optimum patterns giving strong ambiguities in source location. This is why the most recent instruments were designed using random or quasi-random patterns. The drawback is that, for practical reasons, like the need to have solid self-supporting masks, pure random distributions are also difficult to implement and then for these "quasi-random masks" the inherent coding noise becomes less diffuse and more structured. The issue then becomes how to optimize the choice of these quasi-random patterns in order to get best performance in terms of coding noise, SPSF, sensitivity and localization.
Image Reconstruction and Analysis
Reconstruction Methods
A coded mask telescope is a two-step imaging system where a specific processing of the recorded data is needed in order to reconstruct the sky image over the field of view of the instrument.
The reconstruction is usually based on a correlation procedure, however, in principle, other methods can be envisaged. Indeed from the simple formulae that describe the image formation in a CMI ( § 2.2) and which give the relations between the input sky S, the mask M and the detector D, it follows that S can be derived by the simple inversion technique, by means of the Fourier transform (FT) of M and D, with S = IFT (FT (D)/FT (M)) = S + IFT (FT (B)/FT (M)), where IFT stands for the inverse FT. However this direct inverse method usually produces a large amplification of the noise in the reconstructed image, since the FT of M always contains very small or even null terms, and the operation on the background component, which is always present, diverges and leads to very large terms.
A way to overcome this problem is to apply a Wiener filter as a reconstruction method [80] [96] in order to reduce the frequencies where the noise is dominant over the signal when performing the inverse deconvolution. It consists in convolving the recorded image D with a filter W F whose FT is
FT (W F) = FT (M)/[(FT (M)) 2 + (FT (SNR)) −1 ]
The filter showed to be efficient to recover the input sky image especially when a non-optimal system is employed, but it requires an estimate of the spectral density of the signal to noise ratio (SNR) which is not in principle known a priori. A simple application using a constant SNR value with spatial frequency was used and compared well to correlation and also to Bayesian methods.
Indeed Bayesian methods have also been specifically applied to CMI in particular in the form of an iterative Maximum Entropy Method (MEM) algorithm [80] [96]. The results with MEM are not very different from those obtained by the correlation techniques. The heavy implementation of MEM compared to the latter ones, and the problems linked to how to establish the criteria for stopping the iterative procedure to avoid over-fitting the data have made these techniques less popular than correlation coupled to iterative cleaning.
Most of these data processes are heavy and time-consuming, especially when images are large, and the issue of computation time is relevant in CMI analysis, in particular when iterative algorithms need to compute several times the sky image or a model, like in MEM. Some studies in the past have concentrated on fast algorithms for the deconvolution. Systems based on pseudo-noise mask patterns and Hadamard arrays could exploit the Fast Hadamard Transform (FHT) which reduces the convolution processing from an order of N 2 to one proportional to N logN [30]. Another method exploits the URA/MURA symmetry (large part of these arrays are given by the multiplication of the first line with the first column) in order to reduce significantly the number of operations [77] [39]. However today, at least for astronomical applications, the use of the highly optimized routines of 2-d discrete fast Fourier transform (FFT) available in most software packages for any kind of array order, is usually sufficient for the required implementations based on correlation. The search for fast algorithms or for specific patterns that allow fast decoding is therefore, these days, somehow less crucial.
Recently deep learning methods mainly based on convolutional neural networks were proposed to improve the performance of image reconstruction from data of CMI in condition of near-field observations. The tests performed for these specific conditions of terrestrial applications, with their additional complexity of the source distance-dependent image magnification, show that these novel techniques provide enhanced results compared to the simple correlation analysis [101]. Further developments in this direction can be expected in the near future.
Deconvolution by Correlation in the Extended FOV
The cross-correlation deconvolution described in § 2.2 for the FCFOV can be applied to the PCFOV, by extending the correlation of the decoding array G with the detector array D in a non-cyclic form to the whole field (EXFOV) [39] [43]. To perform this a FOV-size G array is derived from the mask array M following a prescription that we describe below, and by padding the array with 0 elements outside M in order to complete the matrix for the correlation.
Since only the detector section modulated by the PC source is used to reconstruct the signal, the statistical error at the source position and also the significance of the ghost peaks, if any, are minimized. To ensure a flat image in the absence of sources, detector pixels which for a given sky position correspond to mask opaque elements must be balanced, before subtraction, with a proper ratio of the number of transparent to opaque elements for that reconstructed sky pixel. This normalization factor is stored in a FOV-size array, called here Bal, and its use in decoding is equivalent to the so-called balanced deconvolution for the FCFOV [28].
In order to correctly account for detector pixel contributions or even attitude drifts or other effects, a weighting array W of the size of the detector array and with values comprised between 0 and 1 is defined and multiplied with the array D before correlation [39]. It is used to neglect the detector areas which are not relevant (e.g., for bad, noisy, or dead area pixels) by setting the corresponding entries to 0. If one is interested in studying weak sources when a bright one is also present in FOV, W may be used to suppress the bright source contamination by setting to 0 the W entries corresponding to detector pixels illuminated by the bright source above some given fraction. The array W is also used to give different weights to parts of the detector, for example when pixels have different efficiencies, e.g., due to different dead times or energy thresholds. The balance array Bal is built using W to properly normalize the balance considering the weights given to the detector pixels. Obviously when W contains some zero values, it means that there is not complete uniform coding of the basic pattern, and this will break the perfect character of an optimum system, introducing coding noise. In case a small fraction of pixels is concerned the effect will be however small.
In order to insure the best imaging sensitivity, G is built from the mask M by:
G = 1 a · M − 1
where the factor a gives the aperture of the mask. For a = 0.5 (like in URAs) G = 2 · M − 1 and assumes values +1 or -1 as in the standard prescriptions [26]. Defining the two arrays G + and G − such that
G + = G for G ≥ 0 0 elsewhere G − = G for G ≤ 0 0 elsewhere
where of course G = G + + G − , we obtain the reconstructed sky count image from
S = G + (D ·W ) − Bal · (G − (D ·W )) A(1)
where dot operator or division applied to matrices indicates here element-byelement matrix multiplication or division. The balance array used to account for the different open to closed mask element ratios is given by
Bal = G + W G − W
and ensures a flat image with 0 mean in absence of sources. The normalization array
A = (G + · M) W − Bal · ((G − · M) W )
allows a correct source flux reconstruction which takes into account the partial modulation. With this normalization the sky reconstruction gives at the source peak the mean recorded source counts within one totally illuminated detector pixel. Note that source flux shall not be computed by integrating the signal around the peak, as this is a correlation image. An additional correction for off-axis effects (including, e.g., variations of material transparency etc.) may have to be included, once the reconstruction, including ghost cleaning, has been carried out. The normalized variance, which is approximately constant in the FCFOV for optimum or pure random masks, and whose relative value increases outside the FCFOV going towards the edges, is computed accordingly 2 :
V = (G + ) 2 (D ·W 2 ) + Bal 2 · ((G − ) 2 (D ·W 2 )) A 2(2)
since the cross-terms G + · G − vanish. 2 Here X 2 = X · X
Here it is assumed that the variance in the detector image is just given by the detector image itself (assumption of Poisson noise and not processing of the image); however if it is not the case the D array in this last expression shall be substituted by the estimated detector image variance. The signal to noise image is given by the ratio S √ V and is used to search for significant excesses. The deconvolution procedure can be explicitly expressed by discrete summations over sky and detector indices of the type given in § 2.2 for S i j [43].
Different normalizations may be applied in the reconstruction [87]; for example one can normalize in order to have in the sky image the total number of counts in the input detector image. However the basic properties of the reconstructed sky image do not change. In particular with the presence of a detector background there are more unknowns than measurements and therefore reconstructed sky pixels are correlated. It is possible to show [87] that, at least for optimum masks the level of correlation is of the order of 1/N (where N is again the number of elements in the basic pattern). Clearly if binning is introduced then the level of correlation increases, depending on the reconstruction algorithm employed as discussed below.
All the previous calculations can be performed in an efficient and fast way using the discrete fast Fourier transform algorithm because all operations involved are either element-by-element products or summations or array correlations for which we can use the correlation theorem 3 .
Detector Binning and Resolution: Fine, Delta and Weighted Decoding
We have until now implicitly assumed to have a detector of infinite spatial resolution and data digitization for which images are recorded in detector elements (pixels) with the same shape and pitch as the mask elements and that sources are located in the center of a sky pixel, allowing for perfect detector recording of the projected mask shadow. These approximations are of course not verified in a real system, which implies a degradation of the imaging performance. Recorded photons are either collected in discrete detector elements (for pixelated detectors) or recorded by a continuous detector (like an Anger camera) subject to a localization error described by the detector point spread function (PSF), and where the measured positions are digitally recorded in discrete steps (pixels). In both cases we will have detector pixels with a finite detector spatial resolution characterized by the detector pixel size d or the σ D of a Gaussian describing the detector PSF and the digitization. Pixels may have sizes and pitches different from those of the mask elements, but for a good recording of the mask shadow, resolution and digital pixels must be equal or smaller than the mask element size, otherwise the shadow boundary is poorly measured and there is a large loss of sensitivity and in source localization. One can define the resolution parameter r as the ratio r = m/d in each direction of the linear sizes of the mask element and the detector pixel (where pixel size means pixel pitch, since the physical pixel may be smaller with some dead area around it).
Fenimore and Cannon [29] considered the case of $r$ integer in both directions and showed that the same procedure of cross-correlation reconstruction can be carried out by binning the array $M$ with the same pixel grid as the detector, which gives the rebinned mask $M_R$, by assigning to all its pixels corresponding to one given mask element the value of that element, defining $G$ accordingly ($G = 2 \cdot M_R - 1$ for $a = 0.5$), and then by carrying out the correlation over all pixels. So, for example, for $r \times r$ detector pixels (square geometry) per mask element, each element of the mask is divided in $r \times r$ mask pixels. To each of them, one assigns the value of the element and then carries out the $G$-definition and correlations accordingly. This is the fine cross-correlation deconvolution.
Another way, when building the decoding array $G$, is to assign the value of the mask element to one pixel out of the $r \times r$ ones that bin this mask element, while the others are set to the aperture $a$. For a URA ($a = 0.5$), the $G$ array has +1 or -1 for one pixel per mask element, and the others are set to 0 (and do not intervene in the correlation). This is the so-called delta-decoding [27] [29]. This implies that adjacent reconstructed $r \times r$ sky pixels are built using different pixels of the detector and are therefore statistically independent. Of course a delta-decoded reconstruction can be transformed into a fine-decoded image by convolving the delta-decoded image with an $r \times r$ box function of 1s. The delta-decoding also allows one to use the FHT in the case of detector binning finer than the mask element (if $M$ is a Hadamard array, the rebinned array $M_R$ built for the fine decoding is not) [30]. As discussed above, the FHT is not relevant anymore since the FFT can do the job, but the relative independence of delta-decoded sky image pixels over sizes of the SPSF peak was found useful in order to apply standard methods of chi-square fitting directly on the reconstructed sky images, including parameter uncertainty estimations [39].
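A minimal sketch of the fine and delta decoding-array constructions just described, assuming an integer resolution ratio r and a URA-type mask with a = 0.5 (names hypothetical):

    import numpy as np

    def fine_decoding_array(M, r):
        # Fine decoding: rebin the mask (1 = open, 0 = closed) on the detector
        # grid, r pixels per element along each axis, then G = 2*M_R - 1.
        M_R = np.kron(M, np.ones((r, r)))
        return 2.0 * M_R - 1.0

    def delta_decoding_array(M, r):
        # Delta decoding: one pixel per mask element carries +/-1, the other
        # r*r - 1 pixels are set to 0 and drop out of the correlation.
        G = np.zeros((M.shape[0] * r, M.shape[1] * r))
        G[::r, ::r] = 2.0 * M - 1.0
        return G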
When there is a non-integer number of pixels per mask element, which is a typical, and sometimes desirable, condition of pixelated detectors, the mask is rebinned by projecting the $M$ array on a regular grid with the same pixel pitch as the detector and by assigning to the mask pixels the fraction of open element area projected onto the pixel. The same decoding array definition and correlation operation given above (Eqs. 1-2 and all associated definitions) are then applied using the rebinned mask array $M_R$ in place of $M$. $M_R$ can take (for non-integer $r$) fractional values between 0 and 1, and the decoding array $G$ can accordingly also take different fractional values. Weighting the inverse correlation using a filtered mask describing the non-integer binning or the finite detector resolution optimizes the SNR of point sources [18][39] [11] and is usually implemented (weighted decoding), even if this implies a further smearing of the source peak. Fig. 7 shows some of the image arrays involved in the weighted sky reconstruction process described above, applied to IBIS data ( § 5.4) of a Cygnus region observation [43].
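For non-integer r the weighted rebinning can be sketched as follows; this simple version assumes aligned 1-d grids per axis, whereas real implementations must also handle grid offsets, dead areas and detector gaps:

    import numpy as np

    def overlap_matrix(m, d, n_elem, n_pix):
        # Fraction of pixel j (pitch d) covered by mask element i (pitch m).
        P = np.zeros((n_elem, n_pix))
        for j in range(n_pix):
            lo, hi = j * d, (j + 1) * d
            for i in range(n_elem):
                a, b = i * m, (i + 1) * m
                P[i, j] = max(0.0, min(hi, b) - max(lo, a)) / d
        return P

    def rebin_mask_fractional(M, m, d):
        # M_R[j1, j2] = open-area fraction seen by pixel (j1, j2), in [0, 1].
        nx = int(np.ceil(M.shape[0] * m / d))
        ny = int(np.ceil(M.shape[1] * m / d))
        Px = overlap_matrix(m, d, M.shape[0], nx)
        Py = overlap_matrix(m, d, M.shape[1], ny)
        return Px.T @ M @ Py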
Image Analysis
Following the prescriptions given above, one obtains a reconstructed sky in the EX-FOV of the instrument, composed of an intensity and a variance image. They are "correlation" images; each sky image pixel value is built by a linear operation on all, or part of, the detector pixels. Sky image pixels are therefore highly correlated, in particular within an area of one mask element. The statistical properties of these images differ from those of standard astronomical images, and their analysis (including the fine derivation of source parameters, error estimation, the various steps to reduce systematic noise from background or source coding noise, and the final combination of cleaned images into large mosaics) must take these characteristics into account.
Significance of Detection
The reconstructed and normalized sky image shall be searched for significant peaks by looking for excesses over the average value, which should be, by construction (and neglecting the effect of a non-uniform background), close to zero. This is done by searching for relevant peaks in the SNR image. In the absence of systematic effects the distribution of this SNR image shall follow the standard normal distribution. Deviations from such a distribution indicate residual systematic effects or the presence of sources and their ghosts (Fig. 8 left).
Excesses in signal to noise larger than a certain threshold are considered as sources. However the concept of significance level in such a decoded image, where each sky pixel is built by correlating all, or part of, the pixels of the detector image, needs to be carefully considered. If we are interested in knowing whether one or a few sources at given specified positions are detected, then we can use the standard rule of the 3 sigma excess, which gives a 99.7% probability that the detected excess at the precise position under test is not a background fluctuation. If, instead, we search the whole image for a significant excess, then the confidence level must take into account that we perform a large number of trials (in fact different linear combinations of nearly the same data set) to search for such an excess.
Assuming a standard normal distribution for the noise fluctuations, the probability that an excess (in $\sigma$) larger than $\alpha$ is produced by noise is

$$P(\alpha) = \frac{1}{\sqrt{2\pi}} \int_{\alpha}^{+\infty} e^{-x^{2}/2}\, dx = \frac{1}{2}\, \mathrm{erfc}\!\left(\frac{\alpha}{\sqrt{2}}\right)$$
The confidence level of a detection (not a noise fluctuation) is then $1 - P(\alpha)$ in a single trial. Assuming that we have $N$ independent measurements, the confidence level for such an excess to be a source is reduced to

$$\left[1 - P(\alpha)\right]^{N} \sim 1 - N\,P(\alpha) \qquad \text{for} \quad N\,P(\alpha) \ll 1$$
For a given confidence level and $N$, the value of $\alpha$ is found from this relation. Curves of $\alpha$ as a function of $N$ can be calculated (see Fig. 8 right and [14]), and it is found that, to have a confidence level of 99% for a number of pixels $N = 10^{4}-10^{5}$, the excess must be in the range 4.5-6.0 $\sigma$. For coded masks, however, it is difficult to evaluate $N$, since it does not simply correspond to the number of pixels in the reconstructed sky image, unless this refers to the FCFOV of an optimum system with one detector pixel per mask element. The reason is that in general sky pixels are not fully independent and are highly correlated over areas of the size of the typical SPSF. The best way to evaluate the threshold is therefore through simulations. A value of 5.5-6.5 $\sigma$ is typically assumed for a secure (maybe conservative) source detection threshold in reconstructed images of 200-300 pixel linear size.
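As an illustration, the threshold α for a given confidence level and number of independent trials N can be obtained by inverting the relations above (a sketch using scipy; for N = 10^4-10^5 at 99% confidence it returns values inside the quoted 4.5-6.0 range):

    import numpy as np
    from scipy.special import erfc, erfcinv

    def single_trial_prob(alpha):
        # P(alpha): chance that pure noise exceeds alpha sigma at one position.
        return 0.5 * erfc(alpha / np.sqrt(2))

    def threshold_for_confidence(N, confidence=0.99):
        # Solve N * P(alpha) = 1 - confidence for alpha (valid for N*P << 1).
        return np.sqrt(2) * erfcinv(2.0 * (1.0 - confidence) / N)

    for N in (1e4, 1e5):
        print(int(N), round(threshold_for_confidence(N), 2))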
System Point Spread Function
An isolated significant excess in the deconvolved sky image may indicate the presence of a point-like source, which will be characterized by the System Point Spread Function (SPSF), that is, the spatial response of the overall imaging system, including the deconvolution process, to an isolated bright point source (Fig. 9). The SPSF includes a main peak, possibly shift-invariant and proportional to the source intensity, and side-lobes of the coding noise, usually non-shift-invariant and also proportional to the source intensity. For a perfect cyclic optimum coded mask system, the main peak is shift-invariant and the side-lobes are flat within the FCFOV for a source in the FCFOV, but large side-lobes appear in the PCFOV (ghosts) along with a diffuse moderate coding noise; when the source is in the PCFOV, the width of the main peak may vary depending on the mask pattern, and side-lobes, including the main ghosts, appear all over the field. In random masks, side-lobes are distributed all over the image, including in the FCFOV and even for sources in the FCFOV, but generally with low amplitude and without the strong ghosts typical of cyclic systems. For a pixelated detector and a sky reconstruction based on the weighted cross-correlation as described in § 3.2-3.3, the SPSF can be described by a peak function correlated with a set of positive and negative delta functions of different amplitudes (what we will call here the correlation function) that takes into account the mask pattern and the decoding operation based on correlation (see, e.g., [27]). A positive $\delta$-function of maximum amplitude of this set is of course positioned at the source location and will provide the main peak of the SPSF at the source position. The other positive and negative deltas, convolved with the peak function, describe the coding noise spread over the image (including ghosts). Assuming from here on a square geometry with square mask elements of linear dimension $m$ and square detector pixels of linear dimension $d$ (the extension to rectangular geometry is trivial, and analog, less trivial, relations can be given for the hexagonal one), the peak function $Q$ is given by the normalized correlation of four 2-d box functions, two of mask element width $\Pi_m(x,y) = \Pi_m(x) \cdot \Pi_m(y)$ and two of pixel width $\Pi_d(x,y) = \Pi_d(x) \cdot \Pi_d(y)$:

$$Q(x,y) = Q(x) \cdot Q(y) \qquad \text{where} \qquad Q(x) = \frac{\Pi_m(x) \star \Pi_d(x) \star \Pi_m(x) \star \Pi_d(x)}{d^{2}\, m}$$
This function, a blurred square pyramidal function for square geometry, can be expressed analytically. The 1-d analytical function Q that composes it has a peak value (at zero lag) given by the simple equation
$$Q(0) = 1 - \frac{1}{3r} \qquad (3)$$
where, as usual, $r$ is the ratio $r = m/d$. This quantity, which corresponds to the term coding power in [82], is important because it appears in the expressions of the error estimates for the source flux and location. The 2-d function $Q$ can also be conveniently approximated by a 2-d Gaussian function with a FWHM width of $\sqrt{r^{2}+1}$ (in pixel units) along the two axes (Fig. 9 right). For a continuous detector, the SPSF (where pixel now means the discrete sampling step of the data) is the function above further convolved with the detector PSF. The explicit formulae of the SPSF for such a system, where the detector PSF is approximated by a Gaussian, were given in the description of the SIGMA/GRANAT data analysis [39] [11].
The use of the SPSF in the analysis of CMI data is important because in general both the detector resolution and the sampling in discrete pixels are finite. The discrete images produced by the correlations, with the same steps as the data sampling, then do not provide the full information, unless the resolution is exactly given by the sampling, pixels are in integer number per mask element, and the source is located exactly at the center of a sky-projected pixel, so that it projects a shadow exactly sampled by the detector pixels. Of course an artificially finer sampling can be introduced in the correlation analysis, but this implies rebinning of the data, with alteration of their statistical properties and an increase in computing time for the deconvolution (the dominant part of the overall processing); finally, the precision may not be adequate to the different levels of SNR of the sources, where the brightest ones may be located with higher accuracy than the artificial oversampling used.
Therefore, in order to evaluate source parameters, and in particular the position of the source, more finely than provided by the sky images sampled at the detector pixel pitch, a practical method is to perform a chi-square fit of the detected excess in the deconvolved sky image with the continuous analytical formula of the SPSF peak or its Gaussian approximation (see Fig. 9 right). The procedure can also be used to disentangle partially overlapping sources [8]. Once the fine position of the source is determined, a model of the projected image on the detector can be computed and used to evaluate the source flux, subtract the source contribution or its coding noise, and perform a simultaneous fit with other sources and background models to extract spectra and light curves.
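A minimal sketch of such a peak fit, using the 2-d Gaussian approximation of the SPSF peak (function and variable names are hypothetical; as discussed next, the formal errors returned by such a fit are not directly usable because sky pixels are correlated):

    import numpy as np
    from scipy.optimize import curve_fit

    def gauss2d(xy, amp, x0, y0, fwhm, bg):
        # Gaussian approximation of the SPSF main peak, in sky-pixel units.
        x, y = xy
        s = fwhm / 2.355
        return (amp * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * s**2)) + bg).ravel()

    def fit_source_peak(sky, px, py, half=3):
        # Fit the pixels around a detected peak at integer indices (px, py).
        y, x = np.mgrid[py - half:py + half + 1, px - half:px + half + 1]
        patch = sky[py - half:py + half + 1, px - half:px + half + 1]
        p0 = (patch.max(), px, py, 2.0, 0.0)
        popt, _ = curve_fit(gauss2d, (x, y), patch.ravel(), p0=p0)
        return popt  # amplitude, fine position (x0, y0), width, background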
Even though the fit, in the deconvolved image, of the source peak with the model of the SPSF peak will provide a reasonable estimate of the source parameters, the error calculation cannot be performed in the standard way, directly using the chi-square value of the best fit and its variation around the minimum, because pixels are too strongly correlated. Nevertheless, formulae for the expected error in source flux and source localization can be derived from the formalism of chi-square estimation in the detector space and can be used to provide uncertainties, after some calibration on real data that will account for residual systematic biases.
Flux and Location Errors
One can show that the correlation reconstruction for a single point-like source, under conditions of dominant, Gaussian, spatially flat background noise, is equivalent to the minimum chi-square determination of source flux and position in the detector image space, where one can determine the errors using the minimum chi-square paradigm.
Using the notation introduced above for the SPSF and introducing the terms $t$ for integration time, $A$ for detector geometrical area, $b$ (in cts/s/cm²) for the background count rate, and $s_0$ (in cts/s/cm²) for the considered source count rate, both integrated within an energy band, we define

$$\sigma_{CR} = \sqrt{\frac{b}{A \cdot t}} \qquad SNR_{CR} = \frac{s_0}{\sigma_{CR}} \qquad f_I(x,y) = \frac{SPSF(x,y)}{N \cdot a} - a$$

where $f_I$ is called the image function and is linked, as shown, to the shape of the SPSF, and $N$ is the average number of mask elements in the detector area $A$ ($N = A/m^{2}$, equal to the number of elements in the basic pattern for an optimum system). $\sigma_{CR}$ and $SNR_{CR}$ are, respectively, the minimum error and the maximum signal to noise from purely statistical noise given by the measured count rates, in the case that perfect reconstruction can be achieved and for a mask aperture of 1 (i.e., no mask, where all the area $A$ is used for the measurement), with the idealistic assumption that a measurement of the background is available (in the same observation time $t$).
Using the minimum chi-square method applied to the detector image compared to a source shadowgram model, one obtains, from the inversion of the Hessian matrix of the chi-square function, the expressions for the source flux and position errors, expressed as 1$\sigma$ at the 68.3% confidence level in one parameter, which are related to the image function and to its second partial derivatives [18][32][52].
The flux error is given by
$$\sigma_S = \sigma_{CR}\, \frac{1}{\sqrt{a \cdot f_I(0,0)}} = \sigma_{CR}\, \frac{1}{Q(0)}\, \sqrt{\frac{1}{a\,(1 - a \cdot f_M)}} \qquad (4)$$
where the mask function $f_M$ is 1 for optimal masks and, on average, for random masks, and is given by a more complex relation in the general case, which involves the cross-correlation of the mask pattern. The SNR is then

$$SNR = SNR_{CR} \cdot Q(0) \cdot \sqrt{a\,(1 - a \cdot f_M)} \qquad (5)$$
The location error along one direction can also be expressed analytically and involves the second derivative of the image function:
$$\sigma_X = \frac{1}{SNR_{CR}} \cdot \frac{1}{\sqrt{a \cdot \left|\dfrac{\partial^{2} f_I(0,0)}{\partial x^{2}}\right|}} = K_X \cdot \frac{d}{SNR} \qquad (6)$$
For optimal masks (URA, MURA, etc.) with $a = 0.5$, as well as for random masks on average, the following formula for the constant $K_X$ holds approximately:

$$K_X = \sqrt{\frac{r \cdot Q(0)}{2}} \qquad (7)$$
The error here, as for the flux, is given at 1$\sigma$ along one axis direction. The fact that Eqs. 6-7 hold for both URAs and random masks does not mean that these mask types always have the same localization capability, as their signal to noise is not the same if the aperture is different. These expressions are equivalent to those reported in [18] and [32] and can be extended to the case of continuous detectors by replacing the pixel linear dimension $d$ with the value $2\sqrt{3}\,\sigma_D \approx 1.5\, w_D$, where $\sigma_D$ is the detector spatial resolution in $\sigma$ and $w_D$ in FWHM (following [82], the numerical factor comes from the fact that $\frac{1}{2\sqrt{3}}$ is the rms uncertainty in a variable which is known to within plus or minus half a pixel).
A more complicated expression, involving a properly computed $f_M$ and its second partial derivatives, can be obtained for general (or not-so-random) masks. We do not show it explicitly here because it is too cumbersome, but it has been used to identify, among quasi-random masks (which cannot be purely random if they are to be self-supporting), those with optimal sensitivity-location accuracy pairs.
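Equations 3-7 can be packaged into a small calculator of the statistical flux and location errors for optimum (and, on average, random) masks; a sketch with hypothetical parameter names, and f_M = 1 for optimal masks:

    import numpy as np

    def cmi_errors(b, A, t, s0, r, d, a=0.5, f_M=1.0):
        # b, s0 in cts/s/cm^2; A in cm^2; t in s; d sets the unit of sigma_x.
        Q0 = 1.0 - 1.0 / (3.0 * r)                               # Eq. 3
        sigma_cr = np.sqrt(b / (A * t))
        snr_cr = s0 / sigma_cr
        sigma_s = sigma_cr / (Q0 * np.sqrt(a * (1 - a * f_M)))   # Eq. 4
        snr = snr_cr * Q0 * np.sqrt(a * (1 - a * f_M))           # Eq. 5
        K_x = np.sqrt(r * Q0 / 2.0)                              # Eq. 7
        sigma_x = K_x * d / snr                                  # Eq. 6
        return sigma_s, snr, sigma_x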
Non-uniform Background and Detector Response
In gamma-ray astronomy the background is generally dominant over the source contribution. Its statistical noise, spatial structure and time variability are therefore important problems for any kind of instrument working in this energy range. CMI, unlike non-imaging instruments, allow measurement of the background simultaneously with the sources, limiting the problems linked to its time variability. However if the background is not flat over the detector plane, its inherent subtraction during image deconvolution does not work properly. In fact any spatial modulation is even magnified by the decoding procedure [58]. Therefore the non-uniform background shall be corrected before decoding as well as any non-uniform spatial detector response which may affect both the background and the source contributions.
Using an estimation of the detector spatial efficiency E for the given observation (spatial efficiency variations due to noisy pixels, dead times or other time-varying effects) and of the detector non-uniformity U (quantum efficiency spatial variation depending on energy), along with a measure (e.g. from empty field observations already corrected for both E and U) or a model of the background shape B, a correction of the detector image D affected by non-uniformity can be given by
$$D_C = \frac{D}{E \cdot U} - b \cdot B$$
One can then use this corrected image $D_C$ to reconstruct the sky (Eqs. 1-2). The background normalization factor $b$ can be computed from the ratio of the averages of the input detector and background images, or from their relative exposure times. If one can neglect the variance of both $B$ and $b$, and assuming a Poisson distribution in each detector pixel, the variance of the corrected image can be approximated by
$$\sigma^{2}_{D_C} = \frac{D}{E^{2} \cdot U^{2}}$$
This implies that, in the computation of the sky variance (Eq. 2), this detector variance shall be used instead of simply the detector image $D_C$.
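A minimal sketch of this correction and of the variance to propagate into Eq. 2 (array names hypothetical; one possible normalization of b is shown):

    import numpy as np

    def correct_detector_image(D, E, U, B, b=None):
        # Efficiency/non-uniformity correction and background subtraction.
        if b is None:
            b = np.mean(D / (E * U)) / np.mean(B)  # one possible normalization
        D_c = D / (E * U) - b * B
        var_c = D / (E**2 * U**2)  # neglects the variances of B and b
        return D_c, var_c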
Of course the details of the procedures, including other, more sophisticated correction techniques to account for spatial modulations not due to the mask, depend on the instrument properties, observing conditions, and calibration data (see, e.g., [43] [79]). In general, extensive ground and in-flight calibrations, including empty-field observations, will be needed in order to get the best models of the background and of the instrument response.
One typical contribution to a non-uniform background is the CXB, dominant at low energies, whose contribution on the detector plane, despite its isotropic character, becomes significantly non-uniform for large FOVs. In fact the CXB is viewed by each detector pixel through the whole instrument opening with different solid angles, dependent on the instrument geometry (mask holes, shield, collimator, supporting structures, etc.). An example of such an effect, expected on the ECLAIRs detector plane, is shown in Fig. 10, which also illustrates the noise that this effect produces in the decoded sky image if not properly corrected before reconstruction. In § 5.5, dedicated to ECLAIRs, we discuss the further modulation of the CXB produced when the Earth enters the instrument FOV.
Other observational conditions can also be very important in this regard. For satellite orbits that intersect the radiation belts or the South Atlantic Anomaly, parts of the satellite, instrument and the detector itself may be activated during the passage through this cloud of high-energy particles. The non-uniform distribution of the material in or around the detector may produce an additional non-flat time-varying background remnant that will spoil the images. A careful study of these effects is often required in order to introduce proper corrections in the analysis.
Overall Analysis Procedure, Iterative Cleaning and Mosaics
Once the raw data, possibly in event list form, are calibrated and binned in detector images, along with their weighting array, and background, non-uniformity and efficiency are corrected, the decoding can be performed by applying Eqs. 1-2 and the prescriptions given in § 3.2-3.3 in order to derive preliminary sky images. Point sources are then searched for throughout them by looking for significant SNR peaks. A detected source is finely located by fitting the peak of the SPSF function to the detected excess. A localization error can be associated to it (Eqs. 6-7) from the source SNR, which allows one to select the potential candidates for the identification.
Iterative cleaning of coding noise from detected sources is performed in order to search for the weaker objects. This is done by modeling each source and subtracting its contribution, either in the detector image, which then must be decoded to look again for new sources, or directly in the deconvolved one. Typically the procedure is iterative, starting with the most significant source in the field and going on to the weaker sources, one by one, until no excess is found above the established detection threshold. A few iterations can be implemented, by restarting the procedure with the source fluxes corrected for the contamination from all other sources, for a deeper search. For close sources with overlapping main peaks, a simultaneous fit of their SPSFs may have to be implemented. A catalog is usually employed to identify, and even to facilitate the search of, the sources. This iterative cleaning procedure has sometimes been called Iterative Removal Of Sources (IROS) [48] [43], the most important element of which is the proper estimation of the source contribution in the recorded image, which depends on a well-calibrated model of the instrument. As for the background correction, very often the source modelling is not perfect, and the ghost cleaning procedure leaves systematic effects which may dominate the noise in images of large exposure times or in large sets of combined data.

Fig. 11 Simplified scheme of an overall image analysis procedure for CMI data including selection and binning of corrected/calibrated events, background and non-uniformity correction, decoding using the mask pattern, an IROS cleaning procedure on detected sources, and finally image mosaic, illustrated using the IBIS images [43].
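The IROS loop can be sketched as follows; all ingredient functions, injected here as parameters, are hypothetical placeholders for the instrument-specific decoding, peak search, SPSF fit and shadowgram model:

    def iros(D, decode, find_peak, fit_source, model_shadowgram, threshold=6.0):
        # Iterative Removal Of Sources: decode, take the most significant
        # excess, fit its position/flux, subtract its modeled shadowgram
        # from the detector image, repeat until nothing exceeds threshold.
        residual = D.copy()
        sources = []
        while True:
            sky, snr = decode(residual)
            x, y, peak_snr = find_peak(snr)
            if peak_snr < threshold:
                break
            src = fit_source(sky, x, y)            # fine position + flux
            residual = residual - model_shadowgram(src)
            sources.append(src)
        return sources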
One way to smear out background and source residual systematic noise is to combine reconstructed images from different pointing directions and orientations. Overlapping cleaned sky images can be combined into sky mosaics, after a normalization accounting for off-axis losses, by a proper roto-translation to a common grid frame and then a weighted sum using the inverse of the variance as the weight. While this is a standard procedure in astronomy imaging, here again one has to remember that we are treating correlation images and that the combined variance shall be computed including the covariance term. The combination of images may take different forms depending on the scope of the mosaics (e.g., preserving the source flux estimation versus reducing the source peak smearing) [85] [43].
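For cleaned images already roto-translated onto a common grid, the weighted combination can be sketched as below; the covariance terms required for rigorous correlation-image statistics are neglected in this illustration:

    import numpy as np

    def mosaic(fluxes, variances):
        # Inverse-variance weighted sum of co-aligned sky images; returns
        # the combined flux image and its (approximate) variance.
        f = np.asarray(fluxes)
        w = 1.0 / np.asarray(variances)
        return (f * w).sum(axis=0) / w.sum(axis=0), 1.0 / w.sum(axis=0)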
A schematic picture of the overall analysis procedure using the IBIS images as example is shown in Fig. 11 (see also the procedures described by [79] and [55]).
Coded Mask System Performances
From the error estimations ( § 3.4.3), one can determine the expected CMI performance as a function of the instrument parameters and design. It is usually evaluated in terms of sensitivity, angular resolution, localization accuracy, field of view and shape of the SPSF. We already discussed the FOV in § 2.1 and the SPSF in § 3.4.2.
Sensitivity and Imaging Efficiency
The sensitivity of a coded mask system is given by the minimum point-like source flux that can be detected above a certain significance level n σ . The lower the minimum flux, the higher the sensitivity of the instrument.
This minimum flux can be derived as a function of the CMI parameters from the flux error estimation of Eq. 4. Let $\varepsilon$ be the detector efficiency (we neglect here energy redistribution) and $\tau_o$ and $\tau_c$, respectively, the transparencies of the open and closed mask elements, all dependent on the energy $E$ of the incident radiation; then for given observation conditions and detector and mask parameters, with symbols as defined in the previous sections, assuming Gaussian statistics and neglecting systematic effects (see [83] for different hypotheses), the continuum sensitivity $F_S$, in units of ph/cm²/s/keV, of a coded aperture system, on-axis and in the energy interval $\Delta_E$ around $E$ (keV), is given by

$$F_S = n_{\sigma}^{2} \cdot \frac{\left[(1-a)\,\tau_o + a\,\tau_c\right] + \sqrt{\left[(1-a)\,\tau_o + a\,\tau_c\right]^{2} + \dfrac{4 \cdot t \cdot b \cdot A\,(\tau_o - \tau_c)^{2} \cdot a\,(1-a)}{n_{\sigma}^{2}}}}{2 \cdot \varepsilon \cdot A \cdot t \cdot \Delta_E \cdot (\tau_o - \tau_c)^{2} \cdot a \cdot (1-a)} \qquad (8)$$
In the case of dominant background, the same relation holds with the term $[(1-a)\,\tau_o + a\,\tau_c]$ in the numerator set to zero. Equation 8 can be solved for $n_\sigma$ given a source flux $F_S$, providing the upper signal-to-noise (SNR) limit attainable on-axis for that exposure, or for the time $t$, giving the observation exposure needed to reach the desired detection significance $n_\sigma$ for a given source flux $F_S$. This formula, and the analogous ones for the SNR and the exposure, usually found in the literature (e.g., [15][83]), neglects both the mask pattern and the finite spatial resolution of the detector. It therefore approaches the case of optimum or pure random mask systems with infinite resolution, or with an integer resolution parameter $r$ and the source located exactly in the center of a sky pixel, that is, when detector pixels are all either fully illuminated or fully obscured for that source. This is in fact the most favorable configuration and gives the highest sensitivity, but in the general case one must take into account the effect of the finite detector spatial resolution, which depends on the source position in the FOV. As discussed in § 3.4.3, this gives an additional loss in the SNR due to imaging, which, averaged over the source location within a pixel, is given by the term $Q(0)$ of the SPSF, which depends on the resolution parameter $r$ through Eq. 3.
We therefore define the imaging efficiency as $\varepsilon_I = Q(0) = 1 - \frac{1}{3r}$. The formula of Eq. 8 for the sensitivity can be used as it is, also when including an average imaging loss over a pixel size, if one replaces everywhere the value $n_\sigma$ with $n_{\sigma I} = n_\sigma / \varepsilon_I$. In the same way, the SNR derived from Eq. 8 is reduced to an imaging SNR by a factor given by the imaging efficiency, i.e., $SNR_I = SNR \cdot \varepsilon_I$. This formula, modified with the imaging efficiency, corresponds to Eq. 5 for the SNR discussed in § 3.4.3, and approximates well the sensitivity within the FCFOV when the source position is known and the flux evaluation is performed by fitting the SPSF at the source position, or, equivalently, by correlating with a rebinned mask shifted to the exact source position. If one wants to include in the calculation the fact that the source position is not known (e.g., to establish the detection capability for unknown sources in the images), then an additional loss shall be included which takes into account that the deconvolution is performed in integer steps (sky pixels) usually not matching the source position (the phasing error of [30]). If the source location is not at the center of a pixel, its peak will be spread over the surrounding pixels in the reconstructed image, and the SNR of the source will appear lower. In this case the prescriptions given above hold, but the expression for the imaging efficiency shall be replaced by the integral of the SPSF peak over one pixel, which can be approximated by $\varepsilon_I \approx 1 - \frac{1}{2.1\,r}$ for a pixelated detector and square geometry [42]. For example, for the IBIS/ISGRI system ($r = 2.4$), the average (over a pixel) imaging efficiency is $\varepsilon_I = 0.86$ for known source location (fit of the detected peak) and 0.80 for unknown location (peak in the image).
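Equation 8, folded with the imaging efficiency just defined, can be sketched as follows (parameter names hypothetical):

    import numpy as np

    def sensitivity(n_sigma, a, tau_o, tau_c, eps, A, t, dE, b, r=None):
        # Continuum sensitivity F_S in ph/cm^2/s/keV (Eq. 8); if r is given,
        # n_sigma is inflated by the imaging efficiency eps_I = 1 - 1/(3r).
        if r is not None:
            n_sigma = n_sigma / (1.0 - 1.0 / (3.0 * r))
        mm = (1 - a) * tau_o + a * tau_c
        k = (tau_o - tau_c)**2 * a * (1 - a)
        num = mm + np.sqrt(mm**2 + 4.0 * t * b * A * k / n_sigma**2)
        return n_sigma**2 * num / (2.0 * eps * A * t * dE * k)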
The sensitivity formula, and its extensions under different hypotheses, can also be used to determine the optimum open fraction $a$ of the mask [26] [83]. One can easily see that for dominant background ($b \gg s$) the SNR is optimized for $a = 0.5$. However if $b$ is not dominant, or if the sky component of the background is relevant, or in other applications like nuclear medicine [2], open fractions lower than 0.5 are optimal. As discussed by [52] and [83], however, the optimum value varies slowly with the parameters and remains generally close to 0.4-0.5.
Other elements of the CM imaging system have an influence on the sensitivity: the deconvolution procedure used, the background shape and its correction, the knowledge of the source position, and of course the numerous systematic effects that may be present, some of which were mentioned in the subsection on "real systems". The sensitivity also decreases as the source distance from the optical axis increases, due to the reduction of the mask modulation, the vignetting effect of the mask thickness, and the possible variation of the open and closed mask element transparencies with the incident angle. In this case the additional sensitivity loss dependence on the source direction shall be integrated in the term of the detector efficiency $\varepsilon$ in Eq. 8, which then becomes dependent on energy but also on the source direction angle $\theta$, that is, $\varepsilon = \varepsilon(E, \theta)$.
Angular Resolution
The separating power of a CMI system is basically determined by the angle subtended at the detector by one mask element. However the finite detector spatial resolution also affects the resolution. For a weighted cross-correlation sky reconstruction (Eqs. 1-2), the resulting width of the on-axis SPSF peak in one direction, which gives the angular resolution (AR) in units of sky pixels, is well approximated, with the usual meaning of the resolution parameter $r$, for square geometry and a pixelated detector, by

$$AR(\mathrm{FWHM}) = \sqrt{r^{2} + 1} \qquad (9)$$

To obtain the angular resolution in angular units (radians) on-axis, one has to take the arc-tangent of this value divided by the mask to detector distance $H$ (Table 1). Of course the angle subtended by a pixel varies along the FOV because of projection effects, and that shall be considered for the off-axis values. Moreover the separating power may vary along the FOV, and particularly in the PCFOV, because the coding noise may deform the shape of the SPSF main peak while the vignetting effect of the mask thickness reduces its width. Figure 12 left shows the fitted width (in pixel units) of the IBIS SPSF along the two image axes passing through the image center. The width is consistent with the AR value of Eq. 9 and of Table 1 within the FCFOV but changes wildly in the PCFOV [45]. In any case the SPSF width of a system can be evaluated at any location in the image, and the fitting procedure applied to detected sources can either use the fixed computed value or let the width be a free parameter.
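A one-line illustration of Eq. 9 converted to angular units (names hypothetical):

    import numpy as np

    def angular_resolution_deg(m, d, H):
        # On-axis FWHM (Eq. 9): width in sky pixels times the pixel angle,
        # with H the mask-detector distance in the same units as m and d.
        r = m / d
        return np.degrees(np.arctan(np.sqrt(r**2 + 1.0) * d / H))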
Point Source Localization Accuracy
An essential characteristic of an imaging system is the quality of the localization of detected sources. As we have seen in the analysis section, the fine localization of a detected point-like source within the pixels around the significant peak excess shall be derived by a fitting procedure. This is usually implemented as a fit of the source peak in the decoded sky image with a function that describes [39] [11] or approximates (e.g., a bi-dimensional Gaussian function) [43][45] the SPSF, but it can in principle be performed on the detector image (with a more complex procedure which, for each tested location, models the shadowgram of the studied source and compares it to the data). For this last implementation, formal errors can be derived and can be related to the SPSF and the source strength. As discussed in § 3.4.3, the uncertainty on the location is inversely proportional to the significance of the source. The typical procedure is then to express the location error radius, for a given confidence level, as a function of the source SNR. Once the function is defined and calibrated for a given system, the error to be associated to the fitted position of a source is derived from its SNR.
Using the relation for the location error $\sigma_x$ (1$\sigma$ error along one direction) of Eqs. 6-7, and assuming that the joint distribution of the errors in both directions is bivariate normal and that they are uncorrelated, one can apply the Rayleigh distribution to obtain, and relate to the system parameters (including $r$), the 90% confidence level error radius as $PSLE(SNR) = \sqrt{2 \ln 10} \cdot \sigma_x(SNR)$. The error can be expressed in angular units by taking the arc-tangent of the value divided by the mask to detector separation $H$, with the usual caveat that off-axis projection effects shall be considered. In [83] the location error was rather approximated with the expression for the angular resolution (Eq. 9) divided by the SNR, while in [14] with the angle subtended by the PSD spatial resolution divided by the SNR. A more accurate and, for optimum or random masks, formally correct approximation, with the explicit dependence on $r$, is in fact
$$PSLE(SNR) \approx \arctan\!\left[\frac{\sqrt{\ln 10}}{SNR} \cdot \frac{d}{H} \cdot \sqrt{r - \frac{1}{3}}\right] \qquad (10)$$
which gives the 90% c.l. angular error radius of the estimated location of an on-axis source with signal to noise SNR. The SNR to use in the above expression is the imaging $SNR_I$ for an SPSF fit at the source location. If one wants to use the SNR measured in the images (on average affected by sampling), the value that should be used is the average estimate of $SNR_I$ for unknown location, in which case the constant of Eq. 10 changes.
In any case the PSLE expression above is valid under ideal conditions and shall be considered only as a lower limit obtainable for a given system geometry. In real systems, the non-perfect geometry, systematic effects, and the way the SNR is measured generally induce larger errors in the location than predicted by Eq. 10 and can even change the expected 1/SNR trend. In fact the PSLE will generally tend to a constant value greater than 0 at high SNR, which at the minimum includes the finite attitude accuracy. The PSLE curve as a function of SNR is therefore always calibrated with simulations or directly on the data using known sources (Fig. 12). Reducing systematic effects and improving analysis techniques shall lead the calibrated curve to approach the theoretical one. Figure 12 shows the measured offsets of known sources with IBIS compared to the predicted error [45]. The SNR used to plot the data was the SNR measured in the images, and the theoretical curve is then plotted with a specific constant. Even though the data roughly follow the 1/SNR trend, systematic effects prevent the system from reaching the "ideal" performance even at large SNR.
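The ideal lower limit of Eq. 10 can be sketched as below (names hypothetical); as discussed above, real systems saturate at a systematic floor at high SNR:

    import numpy as np

    def psle_deg(snr, d, H, r):
        # 90% c.l. error radius (Eq. 10) in degrees, on-axis, ideal case.
        return np.degrees(np.arctan(np.sqrt(np.log(10.0)) / snr
                                    * (d / H) * np.sqrt(r - 1.0 / 3.0)))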
Sensitivity versus Localization Accuracy
In the design of CMI it is often important to make a trade-off between sensitivity and localization power. Figure 13 shows the variation of the sensitivity (in terms of the inverse of the flux error $\sigma_S$ of Eq. 4) and of the location accuracy (given here by the inverse of the location error $\sigma_X$ of Eq. 6) with the resolution parameter $r$. In the left panel, different $r$ values are obtained by maintaining the mask element (and mask pattern) fixed and varying the detector pixel size, for two types of masks. The formulae correctly predict the performance evaluated through simulations, also shown in the plot: for a fixed mask element size (and hence angular resolution), increasing the detector resolution improves both sensitivity and accuracy. The cases considered are in conditions of dominant background; thus the performance parameters with a 30% aperture mask are slightly worse than those for a 50% aperture.
In the right panel the location accuracy is plotted versus the sensitivity for different values of $r$, where its variation is obtained by fixing the value of $d$ and varying $m$. The apparent incoherence with the left panel plot (where the accuracy increases with $r$) is due to the different way $r$ is varied. Clearly, by increasing $m$, which determines the angular resolution, the location accuracy decreases from its maximum at $r = 1$, in contrast to the sensitivity, which increases with $r$. As discussed in [83], the trade-off values of $r$ are often set in the range 1.5-3 in order to have, for a given detector resolution $d$, better sensitivity at a moderate expense of positional precision.
This opposite trend (for a fixed and finite detector spatial resolution) comes from the fact that the localization is determined from a measure on the detector of the position of the boundary between transparent and opaque mask elements; therefore, the larger the total perimeter of the mask holes (which is maximized, for a given aperture $a$, when holes are small and isolated), the better the measure of the source position. On the other hand, the signal to noise is optimum when the total mask hole perimeter is minimum (i.e., when open elements are large and agglomerated), which reduces the blurring that occurs at the boundary between open and closed mask elements.
The above considerations explain not only the dependence on r but also the one on the element distribution (mask pattern). In Fig. 13 right, the curve for the specific quasi-random pattern of ECLAIRs (see § 5.5) is also plotted. Considering this kind of prediction, the r value for ECLAIRs was finally fixed to 2.5, to reach the desired localization accuracy with the highest possible sensitivity. The formulae of Eqs. 4-7, as in general those published before, do not include the terms related to the mask pattern and do not predict the performances precisely other than for optimum systems or, in average, for fully random masks. But these terms can be computed using the mask auto-correlation in particular to select patterns which have best sensitivity/accuracy pair for given science objectives, as was done for ECLAIRs. Indeed, by comparing the values at integer r, its pattern appears better in both sensitivity and localization accuracy than the comparable 40% aperture random mask. Fig. 13 Theoretical, computed with full complex formulae, and simulated CMI performance as function of the resolution parameter r = m/d assuming dominant background. Left: Variation of sensitivity (solid line) and location accuracy (dashed) with r for two types of masks, an optimum system with a replicated 95×95 MURA of 53×53 basic pattern (blue) and a random mask of same dimensions and 30% aperture (green). Variation of r is obtained maintaining fixed m (and the mask pattern) and decreasing d (with d < m). Computed curves are compared to simulations, shown by data points and their error bars with the same color code. Right: Normalized accuracy versus normalized sensitivity curves for different r, where r is varied by fixing detector resolution d and varying m for the same masks of left panel (green, blue) plus a random mask with the same dimensions and a = 0.4 (violet). Curves for random masks are obtained by computing, and averaging, error values of a large sample of patterns. Dots give the specific values for integer r, while gray crosses those from the approximate formulae that neglect mask pattern (Eqs. 5-7). The cyan cross indicates the reference value for IBIS/ISGRI (r = 2.4) in the MURA curve (blue). The red curve is for an optimized quasi-random (auto-sustained) mask of a = 0.4 and dimensions 46×46, which is the pattern chosen for ECLAIRs whose performance is positioned in this plot by the brown cross at r = 2.5. Both sensitivity and accuracy depend on the configuration of the system and therefore values for different systems are not directly comparable. For example, localization accuracy of IBIS/ISGRI is much higher than for ECLAIRs, because sky pixels are 5 wide while for ECLAIRs are 30 wide, even if their accuracy values appear identical in this plot.
Coded Mask Instruments for High-Energy Astronomy
The development of coded mask imaging systems has been, from the beginning, linked to the prospect of employing these devices in high-energy astronomy. We review here the implementation of CMI in this field, from the first rocket experiments to the missions presently in operation or expected in the near future. Even if not exhaustive, this summary provides a chronological panorama of CMI in astronomy which illustrates the topics discussed above and recalls the main achievements obtained in imaging the gamma-ray sky with these devices (for a complete list of hard X-ray (> 10 keV) experiments, including CMI, see [16]).
Specific subsections are dedicated to three major experiments successfully flown, or to be launched soon, on space missions, a representative set of CMI with different and complementary characteristics. SIGMA, the first gamma-ray CMI on a satellite, featured an Anger-type gamma-camera with a continuous spatial resolution depending on energy. Thanks to its imaging capability in the hard X-ray range, it became the black hole hunter of the 1990s and provided the first 20-arcmin resolution images of the Galactic bulge at energies above 30 keV, and its success opened the way to the INTEGRAL mission. IBIS, presently operating on INTEGRAL, is the best-performing gamma-ray imager ever flown, reaching, for the brightest sources, better than 20 arcsec location accuracy at 100 keV over a large FOV. It still provides, along with the BAT/Swift experiment, some of the most crucial results in the gamma-ray domain. ECLAIRs/SVOM is the future CMI to be mounted on an autonomously re-pointing platform dedicated to time domain astronomy. Its quasi-random mask, optimized to push the threshold to low energies, shall open to the community the efficient detection of cosmological gamma-ray bursts (GRB).
First Experiments on Rockets and Balloons
The CMI concept was first applied to high-energy astronomy with instruments mounted on sounding rockets or on stratospheric balloons. Following the first ideas on coded aperture imaging, several such projects were initiated mainly by American, English and Italian groups.
The first experiment that actually probed the CM concept in astronomy was SL1501 [72], built by a UK laboratory and launched on a British Skylark sounding rocket in 1976. Composed of a position sensitive proportional counter (PSPC) and a rectangular 93×11 Hadamard mask, both of the same dimensions (box-type), delimited by the diameter of the rocket, it provided in the few-minute flight the first X-ray (2-10 keV) images of the Galactic Center (GC), with an angular resolution of 2.5 arcmin × 21 arcmin (the higher resolution side purposely oriented along the Galactic plane) in a square 3.8° FOV [73]. SL1501 data were combined with those of the Ariel V space mission in order to establish the activity of the X-ray sources of the region, and together they even permitted the detection and localization of some GC X-ray bursts.
A highly successful balloon-borne CMI was the US Gamma-Ray Imaging Payload (GRIP) experiment [3], which flew several times between 1988 and 1995 from Australia. Composed of an NaI(Tl) Anger camera working between 30 keV and 10 MeV, coupled to a rotating mask [18] of about 2000 elements disposed on multiple repetitions of a 127-element HURA basic pattern (Fig. 6), this telescope imaged a FOV of 14° with 1.1° angular resolution and provided some of the first high-quality images of the Galactic Center at energies higher than 30 keV, in particular confirming the results obtained by SIGMA in the same period [19]. GRIP also detected and located gamma-ray emission from SN1987A, confirming the discovery from the Kvant Roentgen observatory.
Another American balloon experiment, EXITE2 [60], based on a phoswich (NaI/CsI) detector but coupled to a fixed rectangular URA, with a collimator that limited the FOV to the 4.5°FCFOV, flew several times between 1993 and 2001 (after a preliminary 1988-1989 flight) giving some results on hard point sources. This project was a technological preparation for a more ambitious CMI space mission, EXIST, not yet included in the US space program.
The American Directional Gamma-ray Telescope (DGT) experiment deserves to be mentioned because it aimed to push the CM technique to high energies [25]. Using a set of 35 BGO scintillation crystals working in the range from 150 keV to 10 MeV, coupled to a 2-cm-thick lead mask with 13×9 elements disposed as a replicated 7×5 URA basic pattern, it covered a FCFOV of 23°×15° with a 3.8° angular resolution. The instrument suffered from a large non-uniform background which limited its performance. Nevertheless DGT could probe the CM concept at high energies with the detection of the Crab nebula and of the black holes (BH) Cyg X-1 and Cyg X-3 above 300 keV during a 30-hr balloon flight from Palestine (Texas) in 1984 [63].
These experiments and several others, only conceived, failed or operated but for short periods, probed the coded aperture imaging concept and paved the way to the implementation of CMI in space missions. Table 2 reports the list of the fully 2-d imaging coded mask instruments successfully launched on, or securely planned for, an astronomy satellite mission.
Coded Mask Instruments on Satellites
Other CMI with 1-d only design (or two 1-d systems disposed orthogonal to each other) were launched on space missions and some provided relevant results mainly as (all-sky) monitors of point-like sources. These were Gamma-1 on Gamma (URSS-Fr), XRT on Tenma (Japan), ASM on Rossi XTE (US), WXM on HETE2 (US) and SuperAgile on AGILE (Italy); none of them is presently in operation. A 1-d coded mask all-sky monitor system presently in operation is the SSM on the Indian ASTROSAT mission [81]. We will not describe them, as they are not, fully, imaging systems, even if the 1-d CMI concept, particularly when coupling orthogonal systems that give locations along the two axes, is an interesting one and has certain advantages for which it is still considered for some future missions.
The first successful CMI flown on a space mission was the UK XRT experiment, launched as part of Spacelab 2 (SL2) on board the NASA Space Shuttle Challenger for an 8-day flight in August 1985 [86] [97]. Two modules were included, equipped with the same multi-wire proportional counter working in the 2.5-25 keV range, but with two Hadamard masks of different basic pattern, 31×29 for the coarse one and 129×127 for the fine one, and different mask element sizes, which allowed for, respectively, coarse and high resolutions over the same 6.8°-wide FOV. Remarkable results were obtained from XRT/SL2, which provided in particular the first GC images with a few arcmin resolution at energies > 10 keV (Fig. 14 left) [84]. Other XRT results concerned galaxy clusters, X-ray binaries (XRB) and the Vela supernova remnant.
The following CMI in space was the Coded Mask Imaging Spectrometer telescope (COMIS/TTM) [12], flown on the Kvant module of the Soviet MIR station as part of the Roentgen Observatory, which included three other non-imaging experiments. The instrument, built by Dutch and UK laboratories, again used a Hadamard mask (Fig. 6) coupled to a PSPC in a simple system [51]. It operated at different times between 1987 and 1999 and provided interesting hard X-ray images of the Galactic Center and upper limits on the famous SN1987A in the LMC [88].
In spite of the progress obtained in the UK and US, it was finally France that built the first soft gamma-ray CMI to fly on a satellite, SIGMA. It was launched in 1989 on the Soviet satellite GRANAT along with a few other experiments: the Russian ART-S and the CMI ART-P, the Danish rotating collimator monitor Watch, and the French Phebus burst detector. SIGMA's spectacular results firmly established the superiority of CM imaging over collimation, on/off chopping or Earth occultation techniques for gamma-ray astronomy. SIGMA is described in § 5.3.
Soon after SIGMA, the Dutch Wide Field Camera (WFC) [53], working in X-rays up to 30 keV, was launched in 1996 on the Italian space mission Beppo-SAX [10]. This instrument was based on a pseudo-random mask with a low open fraction (33%) and a pattern called "triadic residues", more adapted to the X-ray domain than URAs, arranged in a simple system configuration which provided a 40°×40° PCFOV [52]. Two such cameras were disposed orthogonal to each other and to the optical axis of the other SAX instruments, in particular of the X-ray mirror telescope. The WFC allowed the discovery of the first X-ray afterglow of a GRB [21], when the satellite could re-point its X-ray mirrors on the transient source positioned with arcmin precision (Fig. 14 right). This was a crucial astrophysical discovery which allowed the community, with follow-up optical observations, to establish that GRBs are extra-galactic events, which we now know are connected to explosions or coalescences of stars in external galaxies.
The heritage of SIGMA allowed Europe to maintain and consolidate its advantage in CMI. In fact France, Germany and Italy took the lead in the development of the two main instruments of the INTEGRAL mission, both based on coded aperture techniques. The International Gamma-Ray Astrophysical Laboratory [98] of the European Space Agency (ESA), with the participation of Russia and the US, was launched on the 17th October 2002 from Baikonour by a Proton rocket on a very eccentric orbit, which allows long uninterrupted observations of the sky, about 3 days before entry into the radiation belts. The platform carries four co-axial instruments: the two main gamma-ray CMI, the imager IBIS and the high-resolution spectrometer SPI, plus the coded mask X-ray monitor JEM-X and the optical telescope OMC. INTEGRAL performs observations in dithering mode, where a set of pointings of ≈ 30 min each is carried out along a grid of directions about 2° apart around the target source. Data are sent to ground in real time, which allows fast analysis and reaction in the case of detection of transient events.
The Spectrometer on INTEGRAL [93], working in the range 20 keV-8 MeV, is composed of 19 individual cooled Germanium detectors of hexagonal shape with 3.2 cm side, disposed in a hexagonal array 1.7 m below a thick tungsten non-replicated and non-rotating HURA mask of order 127, with hexagonal elements of the size of the Ge crystals. The very high spectral resolution (2.5 keV at 1.33 MeV) of the 6-cm-thick Ge detectors allows the study of gamma-ray lines, with moderate imaging capabilities (2.5° resolution over a 16° FCFOV and a 30° half-coded EX-FOV) thanks to the CM system and to the dithering mode, which permits a better correction of the background. The SPI CsI anti-coincidence system is by itself a large-area detector that is presently also used for the search of GRB events outside the FOV of the CMI. The Danish JEM-X monitor [61], working in the range 3-30 keV, is also a CMI. Composed of two identical modules with the same non-cyclic fixed HURA of more than 20,000 elements with 25% aperture, but rotated by 180° in order to have different ghost distributions, and high-pressure Microstrip Gas Chamber (MGC) detectors, it provides 3 arcmin angular resolution images over a 7° FWHM FOV. IBIS is certainly the core of the CM imaging capabilities of INTEGRAL and is described in § 5.4.
INTEGRAL provides the community with a large amount of excellent astrophysical results and crucial discoveries, in particular with the mapping of the 511 keV line of the Galaxy; the measurements of gamma-ray lines from SNR and close supernovae (SN); the detection and study of GRBs and BH sources in binary systems and in active galactic nuclei (AGN), of all variety of neutron star (NS) systems; and the detailed imaging of the GC region. The mission is still operating today [56] with many important results obtained each year, the most recent ones in the domain of time domain astronomy (see, e.g., [31] and references therein).
Meanwhile, on the side of high-energy time domain astronomy, BeppoSAX had, once more, shown the way. The future of this domain would reside in agile spacecraft with a capability of fast (therefore autonomous) re-pointing and a set of multi-wavelength instrumentation, including a large-field imaging instrument at high energies, based on the coded mask technique, and high-resolution mirror-based telescopes at low energies. Too late to implement these features in INTEGRAL, the US did not miss the opportunity, developing in collaboration with the UK and Italy the Neil Gehrels Swift mission [36], dedicated to GRBs and the transient sky. Using a platform conceived for the military "star-wars" program, with unprecedented, and still unequalled, capability of fast (tens of seconds) autonomous re-pointing, this mission has provided since its launch in 2004 exceptional results in the domain of GRB science [34] but also on the variable and transient high-energy sky [35].
Many of these are based on the Burst Alert Telescope (BAT) [5], a large coded mask instrument which is still in operation along with the two narrow-field telescopes of the mission, one for X-rays (XRT) and one for ultraviolet-optical frequencies (UVOT). BAT (Fig. 15) is the instrument that detects and localizes the GRB and triggers the platform re-pointing. It is composed of an array of 32,768 individual square CdZnTe semiconductor detectors, for a total area of 5240 cm², coupled to a large random mask with a "D" shape and a 50% aperture, made of about 52,000 elements, each of 1 mm thickness and dimensions 5×5 mm², with a resolution ratio r = 1.2 with respect to the detector pixels. The mask is set at 1 m from the detection plane and is connected to the detector by a graded-Z shield that reduces the background. BAT provides a resolution of about 20 arcmin over a huge FOV of 1.4 sr (half coded), a location accuracy of 1-4 arcmin, and good sensitivity in the range 15-150 keV. Ground BAT data analysis is described in [91] [79][7] [67] and is quite similar to the standard one described above. Figure 16 left shows the BAT reconstructed image of the Galactic Center, from which one can appreciate the imaging capability of this CMI and compare it to that of IBIS ( § 5.4). A specific feature of the instrument is that a BAT data analysis is continuously performed on board in near real time thanks to an image processor, which allows the detection and positioning of GRBs within 12 s from their start and the rapid triggering of the platform re-pointing to the computed location. BAT performances are the key to the large success of the mission. The instrument detects and positions about 100 GRBs per year, allowing the subsequent red-shift determination for about 1/3 of them. It provides excellent results on many different variable hard X-ray sources, both Galactic and extra-galactic, like AGNs, magnetars, different types of binaries, and others [35]; for example, it allowed the discovery in 2011 of the first tidal disruption event with a relativistic jet [13]. More than 1600 non-GRB hard X-ray sources have been detected by BAT/Swift in the first 105 months of operations [67] (Fig. 16 right). The most recent launch of a coded aperture instrument on a space mission is the Cadmium Zinc Telluride Imager (CZTI) [9] of the Indian ASTROSAT [81] mission, in operation since 2015. The CZTI is composed of four identical and co-axial modules disposed 2×2 on the platform. The modules are based on a mask composed of 4×4 arrays, each one following a Hadamard pattern built (in different ways) from the same 255 PN sequence, and coupled to a pixelated CdZnTe detector in a "simple system" configuration. Its overall parameters are given in Table 2, but for more complete and recent reports about the in-flight calibrations and performance of the instrument, see [94]. Important results have been obtained on transient sources, pulsars and polarization measurements.
SIGMA on GRANAT: the First Gamma-Ray Coded Mask Instrument on a Satellite
SIGMA (Système d'Imagerie Gamma à Masque Aléatoire) is the first gamma-ray coded mask telescope flown on a satellite [69] and provided extraordinary discoveries and results in the domain of black hole astrophysics. Launched on the 1st December 1989 from Baikonour (URSS) by a Proton rocket on the three-axis stabilized Soviet GRANAT satellite, it operated in pure pointing mode till 1995 and then mainly in scanning mode for a couple more years. A schematic view of SIGMA is given in Fig. 17, where its coded mask is also shown, with its characteristic URA pattern, after the instrument was mounted on the platform. Made of a NaI(Tl) Anger camera, composed of a 1.25-cm-thick circular scintillating crystal viewed by 61 photo-multipliers, surrounded by a CsI(Tl) anti-coincidence system, and set at 2.5 m from a 1.5-cm-thick tungsten coded mask, SIGMA could provide images in the 35 keV-1.3 MeV range with an angular resolution of 20 to 13 arcmin in a FCFOV of 4.7°×4.3° and a half-coded EX-FOV of 11.5°×10.9°, with an on-axis 40-120 keV 3σ sensitivity of the order of 100 mCrab in a 1-day observation. The rectangular mask of 53×49 elements was a replicated 31×29 URA (and not a random mask as implied by the instrument acronym), and the events, detected in the central rectangular 725 cm² area of the NaI crystal corresponding to the URA basic pattern, were coded in 124×116 pixel detector images and in 95 energy channels. Data analysis and calibration of SIGMA are reported in [39] and [11] and references therein. A specific feature of the decoding process was the use of the efficiency array to take into account the drifts of the platform [39] for the extension of the sky image reconstruction to the PCFOV of the instrument. Another important element was that the continuous spatial resolution of the gamma camera varied with energy (see Fig. 19) and time, and therefore it had to be monitored and modelled along the mission [11] in order to optimize the analysis of the data. The long and repeated SIGMA observations of the Galactic Bulge (Fig. 18), allowed by the fact that its imaging capabilities could be fully exploited over its huge PCFOV, clarified the situation of the high-energy emission from this very active and variable region by showing in particular that the central degrees at energies > 20 keV are fully dominated by the source 1E 1740.7-2942, not particularly bright at low energies, which was soon after identified as the first persistent Galactic BH micro-quasar displaying extended radio jets. They also led to the discovery of the other persistent X-ray BH binary of the bulge, GRS 1758-258 [62], the second identified micro-quasar of the Galaxy (too close to the NS XRB GX5-1 to be studied by previous non-imaging instruments), to a measure of the weakness of the central massive black hole Sgr A* at high energies, and to the detection of several other BH and NS X-ray persistent and transient bulge sources [41]. Again thanks to its huge FOV and imaging capabilities, SIGMA was very efficient in discovering Galactic BH X-ray transients (or X-ray novae), which are particularly hard sources. It detected and positioned seven of them in its 6 years of nominal operations, among them the other famous BH micro-quasar GRS 1915+105, which was the first specimen to reveal super-luminal radio jets. An important result was the detection, in the BH X-ray Nova Muscae 91, of a weak and transient feature around 511 keV (Fig. 19), the energy of the electron-positron annihilation line [40].
These results showed how well-designed CMI can also provide high-quality spectra and light curves of gamma-ray sources. In its 12 Ms 1990-1997 survey, SIGMA detected a total of about 35 sources, including 14 BH candidates, 10 XRB, 5 AGN, 2 pulsars and 9 new sources [76].
SIGMA data were complemented at low energies by those of the ART-P coded mask hard X-ray telescope [89], which had four identical modules, each made of a PSPC coupled to a replicated URA mask and providing 6′ angular resolution over a less than 2° FOV. ART-P's most relevant results were the hard X-ray images of the GC, which complemented nicely those of SIGMA [70] and revealed a diffuse emission consistent with the molecular clouds of the region, interpreted as scattering by the clouds of high-energy emission emitted elsewhere. Initially used to put limits on the activity of Sgr A*, the detected emission was later recognized as a signal of the past activity of the Galactic SMBH and was coupled to the measurements of the molecular cloud Sgr B2 with IBIS/INTEGRAL to constrain the ancient outbursts of Sgr A* [75].
IBIS on INTEGRAL: the Most Performant Gamma-Ray Coded Mask Instrument
The main gamma-ray imaging device on INTEGRAL (Fig. 20 left) is IBIS, the Imager on Board the INTEGRAL Satellite, a hard X-ray/soft gamma-ray coded mask telescope [92] developed mainly by Italy and France. IBIS is composed of a replicated MURA mask of 95×95, 1.6-cm-thick tungsten square elements (see the pattern in Fig. 6) with 50% open fraction, coupled to two position-sensitive detectors, the INTEGRAL Soft Gamma-Ray Imager (ISGRI) and the Pixellated Imaging Caesium Iodide Telescope (PICsIT), both of the same dimension as the central MURA basic pattern of 53×53 elements. ISGRI [59] is made of 128×128 individual 2-mm-thick cadmium telluride (CdTe) semiconductor square detectors, each of dimensions 4×4 mm² (for a total area of 2600 cm²) (Fig. 20 right); it works in the range 15 keV-1 MeV and is placed 3.2 m below the mask. The overall IBIS/ISGRI sensitivity is of the order of a mCrab for a 1 Ms exposure at 80 keV, with a typical spectral resolution of 7% (FWHM).
PICsIT [24] is placed 10 cm below ISGRI and is composed of 64×64 CsI bars, each exposing a collecting area four times that of an ISGRI pixel and working in the range 175 keV-10 MeV. The detector planes are surrounded by an active anti-coincidence system of BGO blocks, and an absorbing tube connects the unit with the mask, allowing for a reduction of un-modulated sky radiation. Data of the two instruments are recorded, transmitted, and analyzed independently, but coincident events from the two detector layers are combined to provide the so-called Compton mode data, which are particularly useful to study the polarimetric properties of the incident radiation. In the following we will refer to the IBIS/ISGRI system only, given that it provides the best imaging performance of the telescope, and we will neglect the Compton mode.
IBIS has provided the most precise images of the GC (Fig. 21 left) at > 20 keV before the recent extension of the grazing-incidence technique to 70-80 keV with NuSTAR, and it is still the best imager that can cover such a large FOV (> 2°) at high energies. The most recent and remarkable discoveries of this telescope have been the detection of emission from a close supernova (SN2014J) in the ⁵⁶Ni decay lines and the identification of a magnetar flare (Fig. 21 right) with a fast radio burst (FRB) [64].
IBIS Data Analysis and Imaging Performance
The IBIS coded mask system and the standard analysis procedures of the data are described in [43] and [45]; see also [55] for sky surveys at high energies and [74] for the analysis of extended sources. The instrument analysis software is integrated in the INTEGRAL Science Data Center (ISDC) [22], through which it is distributed to users as Off-line Scientific Analysis (OSA) packages. After 20 years of operations the instrument is still providing excellent data, and several new features have been integrated in the analysis procedures [56]. We have already largely used characteristics and data from this system in order to illustrate CMI design, analysis, and performance concepts: in Table 1 for imaging design/performance, in Fig. 6 for the mask pattern, in Fig. 7 for the decoding process, in Fig. 8 for the distribution of peaks in the reconstructed image, in Figs. 9 and 12 for the resulting SPSF and PSLE, and in Fig. 11 for the overall analysis process.
Indeed IBIS represents a typical CMI with a cyclic optimum (MURA) mask coupled to a pixelated detector. The detector spatial resolution is simply given by the geometrical dimension of the square pixels, independent of energy. The CdTe square pixels have a size of 4 mm, but the pitch between them is 4.6 mm, with 0.6 mm of dead area. The mask elements are not an integer number of pixel pitches, also in order to avoid ambiguity in source position due to the dead zones. This introduces a non-perfect coding even for sources in the FCFOV; however, other factors break the perfect coding, and noise is introduced anyway. Imaging performance was studied on different data sets of bright and weak known point-like sources along the years [45][78][46]. The FCFOV is 8°×8°, the half-coded EXFOV 19°×19°, and the zero-response one 29°×29°. The detector pixel pitch (and therefore the reconstructed sky pixel for the decoding process described above) subtends an angle of 5′ on the sky, while the mask element subtends an angle of 12′. With a ratio of mask element to pixel pitch of 2.43, IBIS is expected to have an average image efficiency (at source location) of 86%; an angular resolution, for weighted reconstruction, of 13′ FWHM in the FCFOV; and a localization better than 0.5′ at SNR > 30. The width of the SPSF however varies wildly along the PCFOV due to the secondary lobes, as shown in Fig. 12 left [45]. The localization error as measured at the beginning of the mission (Fig. 12 right) [45], even if it followed well the expected 1/SNR trend and reached values of less than 1 arcmin at SNR > 30, was not as good as the theoretical curve and stalled at a constant level of ≈ 20″ even for very high SNR. With the improvement of the analysis software and the reduction of systematic effects, the PSLE was significantly reduced [78][46] and now reaches about 40″ at SNR ≈ 30. Figure 22 reports the PSLE as a function of the source SNR obtained at later stages of the mission.
Systematic effects, like pixel on/off status, absorption by different detector structures, mask vignetting, and absorption by mask elements (including screws and glue) and by the honeycomb structure of the mask support, have been studied along the years. All must be accurately accounted for in the source modelling. In fact, while the MURA optimum system provides a clean and narrow SPSF in the FCFOV, it also creates strong ghosts and coding noise, in particular along the image axes passing through the source position, which must be removed in order to search for weaker excesses. An iterative algorithm of search, modelling, and removal of sources is implemented in OSA (Fig. 11) in order to clean the images before summing them into sky mosaics (Fig. 23).
ECLAIRs on SVOM: the Next Coded Mask Instrument in Space
The Chinese-French SVOM (Space-based multi-band astronomical Variable Object Monitor) space mission [95][20], planned, today, for a launch in 2023-2024, is a multi-wavelength observatory dedicated to the astrophysics of GRBs and of the high-energy variable sky. Among the four instruments of the payload, the hard X-ray coded mask imager ECLAIRs [37] (Fig. 24), operating in the 4-150 keV energy range, will autonomously detect GRBs and other high-energy transients on board, providing their localization to the ground (through a fast VHF system) and triggering on board, under certain criteria, the slew of the platform, in order to point within a few minutes the SVOM narrow-field telescopes, working in X-rays (Micro X-ray channel plate Telescope, MXT) and in the optical (VT), toward the event.
The ECLAIRs detection plane is made of 6400 pixels of Schottky-type CdTe (4×4 mm², 1 mm thick) for a total geometrical area (including dead zones) of ≈ 1300 cm². A 54×54 cm² coded mask with 40% open fraction is located 46 cm above the detection plane to observe a FOV of 2 sr (zero coded) with an angular resolution of 90′ (FWHM). A passive lateral Pb/Al/Cu-layer shield blocks radiation and particles coming from outside the aperture. Sky images will be reconstructed in maps of 199×199 square pixels with angular size ranging from 34′ on-axis down to 20′ at the edges of the FOV. ECLAIRs provides a sensitive area of ≈ 400 cm² and a point source localization error better than 12′ for 90% of the sources at the detection limit, and it is expected to detect each year about 70 GRBs, several non-GRB extragalactic transients, dozens of AGNs, and hundreds of Galactic X-ray transients and persistent sources. Its low energy threshold of 4 keV will open to SVOM the realm of extra-galactic soft X-ray transients, such as X-ray flashes or SN shock breakouts, which are still poorly explored, and in particular will allow the detection and study of cosmological GRBs whose emission peak is red-shifted into the X-ray band. The 46×46 square mask elements have a linear size of 2.53 times the detector pixel pitch, and their distribution follows an optimized quasi-random pattern, chosen by requiring connection between elements in order to allow the mask to be self-sustained. Thousands of quasi-random patterns of this kind with a 40% aperture, which optimizes performance at these energies, were generated and studied, using the formulae of error estimation for general masks (mentioned but not explicitly given in § 3.4.3), in order to select the one presenting the best compromise between sensitivity and source localization for the GRB science, compatible with the mechanical criteria. The performance as a function of the resolution parameter r for the specific chosen mask pattern is shown in Fig. 13 right, along with the relative values at the selected resolution factor, which was chosen in order to optimize the system for the scientific objectives of the mission.
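To make the pattern-selection idea concrete, the following sketch (an illustration rather than the actual ECLAIRs procedure) draws random 46×46 patterns with a 40% open fraction and ranks them by the flatness of the side-lobes of their circular autocorrelation. The pattern size and open fraction come from the text; the scoring criterion, the number of candidates, and the omission of the mechanical-connectivity constraint are simplifying assumptions, since the actual figures of merit are not given explicitly here.

```python
import numpy as np

rng = np.random.default_rng(42)

def autocorr(pattern):
    # Circular autocorrelation via the Fourier identity (see the
    # correlation footnotes): IFFT(conj(FFT(p)) * FFT(p)).
    F = np.fft.fft2(pattern)
    return np.fft.ifft2(np.conj(F) * F).real

def sidelobe_score(pattern):
    ac = autocorr(pattern)
    peak = ac[0, 0]              # central (zero-shift) peak
    side = ac.copy()
    side[0, 0] = np.nan          # exclude the peak itself
    return np.nanstd(side) / peak  # flatter side-lobes -> smaller score

def random_mask(n=46, open_fraction=0.40):
    # Exactly round(open_fraction * n * n) open (=1) elements.
    flat = np.zeros(n * n)
    flat[: round(open_fraction * n * n)] = 1.0
    rng.shuffle(flat)
    return flat.reshape(n, n)

candidates = [random_mask() for _ in range(1000)]
best = min(candidates, key=sidelobe_score)
print(f"best side-lobe score: {sidelobe_score(best):.4f}")
```

In a real design study, a score like this would be replaced by the sensitivity and localization error estimators for general masks, and only mechanically viable (connected) patterns would be retained.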
For the chosen design, the predicted imaging performance is shown in Fig. 25. The left panel shows the peak of the SPSF which, given the non-optimum system based on a quasi-random mask, does present relevant side-lobes even in the center of the FCFOV. Once a source is detected and positioned, the lobes must be cleaned by an IROS procedure in order to search for weaker sources. The right panel shows the localization error curve from simulations of sources at different SNR. The accuracy is expected to be within half the size of the VT FOV (≈ 26′), in order to always have the event within both the optical and the X-ray telescope FOVs after the slew of the platform to the ECLAIRs-measured GRB position. The actual mask, integrated in the ECLAIRs instrument, is shown in Fig. 24 right. It is composed of a Ti-Ta-Ti sandwich, with the tantalum providing the main absorbing power and the titanium the mechanical strength. A central opaque cross, of width 1.4 times the mask element size, is added, along with fine titanium supporting structures running along the sides of the mask elements, which avoid vignetting of off-axis sources, to make the overall structure resistant to the expected vibration amplitudes of the launch. The design of the ECLAIRs mask has been optimized in this way in order to allow the instrument to be sensitive at energies as low as 4 keV. This requires a solid self-sustained mask without a support structure that would absorb the radiation passing through the mask open elements. The multilayer thermal coating insulation that envelops the telescope, in order to protect the camera from light and micro-meteoroids, will however stop X-rays below 3-4 keV and determines the low-energy threshold of the instrument.
One particular feature of the SVOM mission is that the general program observations, during which data on other sources are collected while waiting to detect GRB events, will be scheduled giving priority to an attitude law that optimizes the search for GRBs. SVOM will generally point opposite to the Sun, toward the Earth night side, so that detected GRBs can be rapidly observed with ground-based observatories, and, to reduce noise, it will also avoid the bright Galactic plane and rather observe the sky around the Galactic poles. These constraints and the low Earth orbit of the satellite (≈ 650 km) will lead to a frequent and variable occultation of the instrument FOVs by the Earth. ECLAIRs will then often experience partial Earth occultation of its large FOV, during which the CXB will be modulated in a variable way along the ≈ 90 min orbit. Figure 26 shows a simulation of the expected spatial modulation of the detector counts by this effect, the impact on the imaging performance, and the expected correction results. Given the uncertainties of the CXB model and the additional components of Earth albedo and reflection, the background correction of ECLAIRs images affected by the Earth in the FOV will certainly be challenging in real conditions.
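A minimal sketch of the underlying idea of such a correction, fitting a modelled background shape to the detector image and removing it before decoding, is given below. The real ECLAIRs correction, based on detailed CXB, albedo, and reflection models, is necessarily more elaborate; the array shapes and the toy gradient background here are assumptions for illustration only.

```python
import numpy as np

def subtract_background(D, B_model):
    """Remove a modelled background shape from detector image D.

    Fits the single scale factor b minimizing ||D - b * B_model||^2
    (closed form: b = sum(D * B) / sum(B * B)), then subtracts.
    """
    b = np.sum(D * B_model) / np.sum(B_model * B_model)
    return D - b * B_model

# Toy usage: a smooth gradient mimicking an Earth-modulated CXB pattern.
rng = np.random.default_rng(0)
shape = (80, 80)
B_model = 1.0 + np.linspace(0.0, 1.0, shape[0])[:, None] * np.ones(shape)
D = 50.0 * B_model + rng.poisson(5.0, shape)   # modulated background + noise
D_corr = subtract_background(D, B_model)
print(f"residual mean: {D_corr.mean():.2f}")
```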
However, CMI have proven to be robust and effective imaging systems, and new exciting results are expected from this novel coded-mask-based high-energy mission, dedicated to the transient sky, that will be launched soon.
Summary and Conclusions
In this review we have described the concept of coded mask instruments for gamma-ray astronomy, discussed the mask patterns, and introduced definitions and terminology useful to understand the large literature on the subject. We have illustrated the correlation analysis procedure to apply to the data of standard CM systems, along with some practical recipes for the analysis. We have provided the formulae to evaluate errors and performance of the systems from their design parameters and illustrated them with data, simulations, or calculations for some of the CMI presently in operation or in preparation. Finally, we have described the historical development of the field and the main CMI implementations in space missions, and recalled some of the important results obtained by these devices in imaging the soft gamma-ray sky.
Even if focusing techniques, with their power to reduce background and to reach arcsecond-scale angular resolutions, are more and more extended to high energies and are definitely performing well in exploring the densest sky regions, like the Galactic center, they are limited to narrow fields, and CMI remain the best option for the simultaneous monitoring of large sky regions.
Since the X-/gamma-ray sky is dominated by compact objects, which are, most of the time, very variable and even transient, these surveys are crucial to explore this realm, especially in the new era of time-domain and multi-messenger astronomy. Indeed, rapid localization at moderate resolution of the high-energy electromagnetic counterparts of gravitational wave or neutrino burst events, which have large positional uncertainties, can trigger the set of high-resolution observations with narrow-field instruments which finally lead to the identification of the events. This is what happened for the first gravitational wave source with an identified electromagnetic counterpart (GW170817), and this is the strategy envisaged for the next multi-messenger observation campaigns. The reaction time for the follow-up of fast transients is obviously very important; therefore the way to go is to couple imaging wide-field monitors with a multi-wavelength set of space- and/or ground-based narrow-field telescopes with fast autonomous capability to point to the sky positions provided by the monitors. This strategy, first implemented by Swift and ready to be used by SVOM, is still based on CMI.
Another technique that is emerging to design large field of view X-ray telescopes is the so-called lobster-eye or micro-pore optics (MPO). The concept, taken from the optical system of the eyes of crustaceans, is to use grazing reflection by the walls of many very small channels to concentrate X-rays toward the focal plane PSD. By disposing a wide micro-channel plate with a large number of micro-holes, with very polished and flat reflecting walls, over a properly curved surface, a focusing system with a large FOV can be obtained. MPO is used for the MXT of SVOM [38], which will obtain X-ray images with arcmin angular resolution over a 1° FOV, but larger systems are now being developed, e.g. for the Einstein Probe mission [99].
However, the MPO technique is for now limited to low X-ray energies, and projects for future high-energy missions dedicated to the variable sky still plan to implement coded mask wide-field cameras, as, for example, the set of orthogonal 1-d cameras of the Chinese-European eXTP (enhanced X-ray Timing and Polarimetry) mission [100], or the two full 2-d cameras of the XGIS (X-Gamma ray Imaging Spectrometer) instrument [57] of the Theseus (Transient High-Energy Sky and Early Universe Surveyor) [4] project, proposed recently to ESA for a medium-size mission (M7) of the Cosmic Vision program. This last instrument, based on a random mask coupled to Silicon Drift Detectors, features a combined ZR-EXFOV of 117°×77°, an AR of 120′ and a localization accuracy of 15′ at SNR = 7 in the 2-150 keV range. A rather complex CMI devoted to soft gamma-rays is also proposed for a future NASA explorer mission, GECCO, an observatory working in the 50 keV-10 MeV range that combines coded mask imaging at low energies with a Compton mode system for the high energies [68]. A specific feature of this CMI is its capability of deploying a mast after launch, which can extend the mask to any distance from the detector up to 20 m, in order to reach the desired imaging performance by tuning this system parameter (H in Table 1).
In conclusion, coded mask instruments are very efficient devices to carry out imaging surveys of the hard X-ray/soft gamma-ray sky over large fields of view, with moderate angular resolution and localization power, and they are still considered for future missions dedicated to time-domain astronomy, for which lighter and more agile systems are now being designed. While several such instruments are still in operation on INTEGRAL, Swift and ASTROSAT, the next CMI to fly is ECLAIRs, on the SVOM multi-wavelength mission, which pushes the technique to cover a large energy band from X-rays to hard X-rays and is expected to provide exceptional results on GRBs and the science of the transient sky.
Fig. 2: Coded aperture principle. The resulting images recorded by the position-sensitive detector for the configuration of Fig. 1, for the two sources separately (left and right) and combined (center).

Fig. 4: Mask, detector and FC sky of an optimum CMI. Left: A 33×29 replicated mask of basic pattern 17×15 (Hadamard, from an m-sequence CDS with N = 255 and m = 8, disposed along the diagonal, see § 2.3). Center: Simulated detector image of two sources in the FCFOV and low background, for a CMI with the mask on the left and a detector of the same size as its basic pattern, with 3×3 pixels per mask element. Right: The corresponding decoded SNR sky image in the FCFOV.

Fig. 5: Reconstructed SNR sky image in the full EXFOV for the CMI and simulation of Fig. 4 left/center. The central part corresponds to the FCFOV (same as Fig. 4 right) and shows optimal properties: the shift-invariant peaks and flat side-lobes of the two simulated FC sources. In the PCFOV, coding noise and the strong eight ghosts of the main source are clearly visible.

[72] (§ 5.1). Examples of Hadamard masks are shown in Figs. 4 left and 6 left.

Fig. 6: Optimum coded aperture patterns; note the symmetry of URAs with respect to the "more random" Hadamard ones (also Fig. 4 left). Left: The non-replicated 255×257 Hadamard pattern of the COMIS/TTM mask (from [51]), from an m-sequence CDS ordered along the extended diagonal. Center: Replicated HURA mask of order 127 used in the GRIP experiment (from [3], © NASA). Right: The IBIS 95×95 mask, replication of the MURA 53×53 basic pattern (central red square) (see § 5.4).

Fig. 7: IBIS images showing the sky reconstruction process from data of the Cygnus region. From left/top to right/bottom: binned corrected detector image (130×134 pixels) including dead zones (D_C), associated efficiency image (W), rebinned MURA mask on a detector pixel grid (233×233 pixels) (M_R), decoded intensity sky image (S) (358×362 pixels), associated variance (V) and final SNR sky image after cleaning of the coding noise of the two detected sources (Cyg X-1 and Cyg X-3).

Fig. 8: Left: SNR distribution of a CMI (IBIS/ISGRI) reconstructed sky image without sources, before (red) and after (blue) correction of some systematic effects, compared to the expected normal distribution (dashed line) (reproduced with permission from [55], © ESO). Right: Variation of the confidence level of detection (in %), for no a priori knowledge of the source position, as a function of the number of sky pixels in a CMI sky image, for different SNR levels of detection.

Fig. 9: IBIS/ISGRI SPSF from data of a point source observation. Left: Source peak in the center of the FCFOV of a decoded sky image. The color code highlights the, rather low, coding noise particularly affecting (given the symmetry of the MURA mask) the image axes centered on the source. Right: Source profiles in a decoded image along the two axes (black lines) and the Gaussian model, approximation of the SPSF, that best fits the excess (red lines) (from [46]).

Fig. 10: Non-uniform CXB in ECLAIRs. Left: Simulation of the CXB intensity (counts/pix) on the ECLAIRs detector during one orbit. Right: Decoded sky SNR map of the detector image (left), when two bright sources (SNR > 60) are also added in the simulation and no correction for the non-uniform background is performed. The sources can barely be seen in spite of their high SNR.

Fig. 12: IBIS/ISGRI imaging performance from INTEGRAL early data (reproduced with permission from [45], © ESO). Left: FWHM of the Gaussian fitted to the SPSF along the two central sky image axes: constant at ≈ 2.6 pix in the FCFOV, it changes wildly in the PCFOV. The upper horizontal line gives the average value in the FCFOV (delimited by two vertical lines), the lower one the size of a mask element (2.43 pix). Right: PSLE radius at 90% c.l. from measured offsets of known sources at different signal-to-noise ratios, compared to the prediction (solid line). Data are well fitted by a 1/SNR function plus a constant (dashed line).

Fig. 14: Left: Image of the Galactic Center obtained by the XRT/SL2 instrument in the 3-30 keV band (reproduced by permission from [84], © Springer Nature 1987). Right: Detection of GRB 960720 by WFC/SAX (reproduced with permission from [71], © ESO). Top: 3D shadow picture of the WFC 40°×40° FOV of the observation, showing the GRB peak along with the one from Cyg X-1. Bottom: Maps around the GRB before, during and after the ≈ 30 s of the burst.

Fig. 15: Left: Scheme of the BAT CM instrument on board the Swift satellite (credit NASA, https://swift.gsfc.nasa.gov/about_swift/bat_desc.html). Right: Assembly of the BAT coded mask, characterized by a "D" shape and a random distribution of elements (reproduced by permission from [5], © Springer Nature 2005).

Fig. 16: Left: Reconstructed image of the Galactic Center from BAT data (reproduced with permission from [7], © AAS). Right: The BAT/Swift catalogue of sources (> 15 keV) detected in the first 105 months of operations (reproduced with permission from [67], © AAS).

Fig. 17: The SIGMA/GRANAT coded mask instrument. Left: Scheme of the SIGMA telescope (reproduced with permission from [11], © AAS). Right: SIGMA, with its URA tungsten coded mask, mounted on the GRANAT spacecraft (credit CEA/IRFU).

Fig. 18: The Galactic Bulge and Galactic Center observed with SIGMA/GRANAT [41]. Left: The mosaic of the 40-80 keV sky images reconstructed from SIGMA data from observations of the GC in 1990-1994. Right: Zoom on the central GC region, dominated by the bright micro-quasar 1E 1740.7-2942. Very weak emission is present at the position of the SMBH Sgr A*.

Fig. 19: SIGMA observation of Nova Muscae 91 in January 1991 [40]. Left: Sky image sectors around the source in different energy bands, which show the large variation of the width of the SPSF due to the changes of the gamma-camera spatial resolution with energy, along with the reappearance of a significant excess in the band around 500 keV. Right: Source spectrum, derived from the deconvolved images, which shows the presence of a high-energy feature.

Fig. 20: The IBIS/INTEGRAL coded mask instrument. Left: Artistic view of the INTEGRAL satellite, where the characteristic MURA and HURA patterns of the IBIS and SPI masks are visible (credit ESA). Right: The ISGRI detection plane, composed of 8 modules of 2048 CdTe detectors, integrated in the IBIS instrument (reproduced with permission from [59], © ESO).

Fig. 21: IBIS/INTEGRAL results. Left: Image of the Galactic Center (2.5°×1.5°) in the 20-40 keV band from the mosaic of 5 Ms of observations of the field (reproduced with permission from [8], © AAS). Right: IBIS detection of the magnetar SGR 1935+2154 flares coincident with a FRB [64]: sky image, source location and light curve (blue) compared to the radio bursts (red) (INTEGRAL POM 07/2020, credits: S. Mereghetti and ESA).

Fig. 22: IBIS/ISGRI imaging performance: recent determination of the PSLE. Left: PSLE vs SNR in the FCFOV and PCFOV compared to early measurements [78] (credit: IBIS Observer's Manual, 2017, ESA SOC). Right: Recent PSLE measurements (dots) and derived curves (red, orange, yellow) using refined analysis, compared to previous results and the theoretical trend (violet) [46].

Fig. 23: IBIS/ISGRI imaging performance: mosaic of the Cygnus Galactic region after the decoding, analysis, cleaning, roto-translation and sum of several individual sky images [43].

Fig. 24: The ECLAIRs/SVOM coded mask instrument. Left: A scheme of the instrument showing the different elements (from [37]). Right: The ECLAIRs mask mounted on the instrument at the CNES premises (credit CST/CNES).

Fig. 25: ECLAIRs/SVOM imaging performance. Left: Central part of the SPSF for an on-axis position. Lobes close to the central source peak are present. Right: Expected PSLE radius at 90% c.l. versus the source signal to noise. Source offsets from true positions obtained from simulations are indicated by black dots and the fitted PSLE, with 1/SNR trend, by the solid line (from [37]).

Fig. 26: Effect of the CXB modulation by partial Earth occultation of the ECLAIRs FOV. From left to right and top to bottom: Simulated detector image of the Earth-modulated CXB in a 20 s exposure (during which the Earth can be considered stable in the FOV). Configuration of the Earth occultation of the ECLAIRs FOV considered in the simulation. Decoded SNR sky image of the simulated detector image, including two non-obscured sources, when the background is not corrected: a large modulation is present, and the sources are not easily detected. Decoded SNR sky image when a proper model of the CXB modulated by the Earth is used for the background correction: the reconstructed image is flat, and the two sources are detected as the highest peaks.
Table of Contents

Coded Mask Instruments for Gamma-Ray Astronomy — Andrea Goldwurm and Aleksandra Gros

1 Introduction
2 Basic Principles of Coded Mask Imaging
  2.1 Definitions and Main Properties
  2.2 Coding and Decoding: The Case of Optimum Systems
  2.3 Historical Developments and Mask Patterns
3 Image Reconstruction and Analysis
  3.1 Reconstruction Methods
  3.2 Deconvolution by Correlation in the Extended FOV
  3.3 Detector Binning and Resolution: Fine, Delta and Weighted
  3.4 Image Analysis
  3.5 Overall Analysis Procedure, Iterative Cleaning and Mosaics
4 Coded Mask System Performances
  4.1 Sensitivity and Imaging Efficiency
  4.2 Angular Resolution
  4.3 Point Source Localization Accuracy
  4.4 Sensitivity versus Localization Accuracy
5 Coded Mask Instruments for High-Energy Astronomy
  5.1 First Experiments on Rockets and Balloons
  5.2 Coded Mask Instruments on Satellites
  5.3 SIGMA on GRANAT: the First Gamma-Ray Coded Mask Instrument on a Satellite
  5.4 IBIS on INTEGRAL: the Most Performant Gamma-Ray Coded Mask Instrument
  5.5 ECLAIRs on SVOM: the Next Coded Mask Instrument in Space
6 Summary and Conclusions
References
Table 1: Expected Imaging Properties of a Coded Aperture System

Quantity                                     | Angular value                          | IBIS/ISGRI
FCFOV (100% sensitivity)                     | 2·arctan[(L_M − L_D)/(2H)]             | 8.2°
EXFOV (50% sensitivity)                      | 2·arctan[L_M/(2H)]                     | 18.9°
EXFOV (0% sensitivity)                       | 2·arctan[(L_M + L_D)/(2H)]             | 29.2°
Angular resolution on-axis (FWHM)            | ≈ arctan[√(m² + d²)/H]                 | 13′
Localization error radius on-axis (90% c.l.) | ≈ arctan[(1.52/SNR)·(d/H)·(m/d − 1)]   | 22″ at SNR = 30

Notes: L_M mask linear size, L_D detector linear size, H detector-mask distance, m mask element linear size (m > d), d detector pixel linear size for pixelated detector, d = 2…
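As a cross-check, the sketch below evaluates the Table 1 formulas (including the localization row as reconstructed above) with approximate IBIS/ISGRI parameters: a mask side of about 1.06 m, a detector side of about 0.60 m, H = 3.2 m, m = 11.2 mm, and d = 4.6 mm (the pixel pitch). These numerical values are assumptions taken from the text and the literature, so small differences with the tabulated angles are expected.

```python
import math

# Approximate IBIS/ISGRI geometry (assumed values, see lead-in).
L_M, L_D, H = 1.064, 0.600, 3.2     # mask side, detector side, distance [m]
m, d = 0.0112, 0.0046               # mask element, pixel pitch [m]
snr = 30.0

deg = math.degrees
arcmin = lambda rad: deg(rad) * 60.0
arcsec = lambda rad: deg(rad) * 3600.0

fcfov = 2 * math.atan((L_M - L_D) / (2 * H))        # 100% sensitivity
exfov_half = 2 * math.atan(L_M / (2 * H))           # 50% sensitivity
exfov_zero = 2 * math.atan((L_M + L_D) / (2 * H))   # 0% sensitivity
ang_res = math.atan(math.sqrt(m**2 + d**2) / H)     # on-axis FWHM
psle = math.atan(1.52 / snr * d / H * (m / d - 1))  # 90% c.l. radius

print(f"FCFOV         : {deg(fcfov):.1f} deg")         # ~8.2-8.3
print(f"EXFOV (50%)   : {deg(exfov_half):.1f} deg")    # ~18.9
print(f"EXFOV (0%)    : {deg(exfov_zero):.1f} deg")    # ~29.2
print(f"Ang. res.     : {arcmin(ang_res):.1f} arcmin") # ~13
print(f"PSLE @ SNR=30 : {arcsec(psle):.0f} arcsec")    # ~22
```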
Table 2: Coded Mask Instruments on Satellites

CMI     | Satellite | Detector type | Mask type or order | Energy (keV) | Ang. Res. (FWHM) | FOV (at ZR) | Operat. (years) | Ref
XRT     | Challenger | PSPC      | Had 129×127*  | 2.5-25   | 3′-12′ | 6.8°      | 1985    | [97]
TTM     | MIR Kvant  | PSPC      | Had 255×257   | 2-32     | 1.8′   | 15.8°     | 87-99   | [12]
SIGMA   | GRANAT     | Anger     | URA 31×29     | 30-1300  | 15′    | 20°       | 89-97   | [69]
ART-P   | GRANAT     | PSPC      | URA 43×41     | 4-30     | 6′     | 1.8°      | 89-93   | [89]
WFC     | SAX        | PSPC      | Tria 256×256  | 2-30     | 5′     | 40°       | 96-02   | [53]
IBIS    | INTEGRAL   | CdTe, CsI | MUR 53×53     | 15-10000 | 12′    | 30°       | 02-[25] | [92]
SPI     | INTEGRAL   | HPGe      | HUR 127       | 20-15000 | 2.5°   | 45°       | 02-[25] | [93]
JEM-X   | INTEGRAL   | MGC       | HUR 22501     | 3-35     | 3.3′   | 13.2°     | 02-[25] | [61]
BAT     | Swift      | CdZnTe    | Rand 54000    | 15-150   | 17′    | 120°×85°  | 04-[25] | [5]
CZTI    | ASTROSAT   | CdZnTe    | Had 17×15     | 20-200   | 17′    | 11.8°     | 15-[..] | [9]
ECLAIRs | SVOM       | CdTe      | Rand 46×46    | 4-150    | 90′    | 90°       | [24-29] | [37]

Notes: [ ] foreseen at the time of writing (Jul 2022); * second module.
The two-dimensional integral correlation between two functions, say A and B, is indicated by the symbol ⋆ and is given by $A \star B = C(s,t) = \iint \bar{A}(x, y) \cdot B(x + s, y + t)\, dx\, dy$, where $\bar{A}$ is the complex conjugate of the function A.

For which $A \star B = \mathrm{IFT}\big(\overline{\mathrm{FT}(A)} \cdot \mathrm{FT}(B)\big)$, where FT is the Fourier transform, IFT the inverse Fourier transform, and the bar indicates the complex conjugate.
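A minimal numerical check of this identity, with circular boundary conditions and arbitrary array sizes and shift, could look as follows:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((64, 64))
B = rng.random((64, 64))

# Correlation via the Fourier identity: IFT(conj(FT(A)) * FT(B)).
C = np.fft.ifft2(np.conj(np.fft.fft2(A)) * np.fft.fft2(B)).real

# Direct evaluation of C(s, t) = sum_{x,y} A(x, y) * B(x + s, y + t)
# for one (circular) shift:
s, t = 5, 9
direct = np.sum(A * np.roll(B, (-s, -t), axis=(0, 1)))
assert np.isclose(C[s, t], direct)
```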
The exact integer ratio, when pixels are surrounded by dead zones, leads to an incompressible localization uncertainty, given by the angle subtended by the dead area, for source positions for which the mask element borders are projected within the dead zones.
A 1-d box function is given by: $\Pi_p(z) = 1$ for $|z| \le p/2$, and $0$ elsewhere.
Beppo-SAX, the Italian Satellite for X-ray Astronomy was named in honor of Giuseppe Occhialini, the eclectic and visionary Italian physicist, to whom we owe, in addition to a huge scientific heritage, the strong involvement of Europe in astrophysical space programs.
Acknowledgments

We thank all the colleagues who have shared with us (and some still do) the exciting adventure of exploiting coded aperture imaging in high-energy astronomy.
References

[1] Ables, J.G., 1968, Proc. Astr. Soc. Australia, 1, 172
[2] Accorsi, R., et al., 2001, NIMPR A, 474, 273
[3] Althouse, W.E., et al., 1985, Proc. 19th ICRC (La Jolla), 3, 299
[4] Amati, L., et al., 2021, Experimental Astronomy, 52, 3, 183
[5] Barthelmy, S.D., et al., 2005, Space Science Reviews, 120, 3-4, 143-164
[6] Baumert, L.D., 1971, Lecture Notes in Math., Springer Verlag, Berlin, 182
[7] Baumgartner, W.H., et al., 2013, ApJS, 207, 19
[8] Bélanger, G., et al., 2006, ApJ, 636, 275
[9] Bhalerao, V., et al., 2017, JApA, 38, 31
[10] Boella, G., et al., 1997, Astron. Astrophys. Suppl. Ser., 122, 299
[11] Bouchet, L., et al., 2001, ApJ, 548, 990
[12] Brinkman, A.C., et al., 1985, Intern. Conf. Proc., eds. Perola, Rome (I), 263
[13] Burrows, D.N., et al., 2011, Nat, 476, 421
[14] Caroli, E., et al., 1987, Space Sci. Rev., 45, 349
[15] Carter, J.N., et al., 1982, MNRAS, 198, 33
[16] Cavallari, E. & Frontera, F., 2017, Space Sci. Rev., 212, 429
[17] Cieślak, M.J., et al., 2016, Rad. Mea., 92, 59
[18] Cook, W.R., et al., 1984, IEEE Trans. Nucl. Sci., NS-31, 771
[19] Cook, W.R., et al., 1991, ApJ, 372, L75
[20] Cordier, B., et al., 2015, Conf. Proc. Swift: 10 years of discovery, POS, 233, 5
[21] Costa, E., et al., 1997, Nat, 387, 783
[22] Courvoisier, T.J.-L., et al., 2003, A&A, 411, L53
[23] Dicke, R.H., 1968, ApJ, 153, L101
[24] Di Cocco, G., et al., 2003, A&A, 411, L189
[25] Dunphi, N.L., et al., 1989, Nucl. Instr. Meth., A274, 326
[26] Fenimore, E.E., 1978, Appl. Opt., 17(22), 3562
[27] Fenimore, E.E., 1980, Appl. Opt., 19(14), 2465
[28] Fenimore, E.E. & Cannon, T.M., 1978, Appl. Opt., 17(3), 337
[29] Fenimore, E.E. & Cannon, T.M., 1981, Appl. Opt., 20(10), 1858
[30] Fenimore, E.E. & Weston, G.S., 1981, Appl. Opt., 20(17), 3058
[31] Ferrigno, E., et al., 2021, NewAR, 92, 101595
[32] Finger, M.H., 1988, PhD Thesis, California Institute of Technology (US)
[33] Finger, M. & Prince, T.A., 1985, Proc. 19th ICRC (La Jolla), 3, 295
[34] Gehrels, N. & Razzaque, S., 2013, FrPhy, 8(6), 661
[35] Gehrels, N. & Cannizzo, J.K., 2015, JHEAp, 7, 2
[36] Gehrels, N., et al., 2004, ApJ, 611, 1005
[37] Godet, O., et al., 2014, SPIE, 9144, 914424
[38] Götz, D., et al., 2014, SPIE, 9144, 914423
[39] Goldwurm, A., 1995, Exper. Astron., 6, 9
[40] Goldwurm, A., et al., 1992, ApJ, 389, L79
[41] Goldwurm, A., et al., 1994, Nat, 371, 589
[42] Goldwurm, A., et al., 2001, Proc. 4th INTEGRAL Works., ESA-SP, 459, 497
[43] Goldwurm, A., et al., 2003, A&A, 411, L223
[44] Gottesman, S.R. & Fenimore, E.E., 1989, Appl. Opt., 28, 4344
[45] Gros, A., et al., 2003, A&A, 411, L179
[46] Gros, A., et al., 2012, Proc. 9th INTEGRAL Works. 2012, POS, 176, 147
[47] Gunson, J. & Polychronopulos, B., 1976, MNRAS, 177, 485
[48] Hammersley, A., et al., 1992, N.I.M.P.R., A311, 585
[49] Harrison, F., et al., 2013, ApJ, 770, 103
[50] Harwit, M. & Sloane, N.J., 1979, Hadamard Transform Optics, NYA Press
[51] in't Zand, J.J.M., 1992, PhD Thesis, University of Utrecht (NL)
[52] in't Zand, J.J.M., et al., 1994, A&A, 288, 665
[53] Jager, R., et al., 1997, Astron. Astrophys. Suppl. Ser., 125, 557
[54] Kopilovich, L.E. & Sodin, L.G., 1994, MNRAS, 266, 357
[55] Krivonos, R., et al., 2010, A&A, 519, A107
[56] Kuulkers, E., et al., 2021, NewAR, 93, 101629
[57] Labanti, C., et al., 2020, SPIE, 11444, 114442K
[58] Laudet, P. & Roques, J.P., 1988, NIMPR, A267, 212
[59] Lebrun, F., et al., 2003, A&A, 411, L141
[60] Lum, K.S.K., et al., 1994, IEEE Trans. Nuc. Sci., 41, 1354
[61] Lund, N., et al., 2003, A&A, 411, L231
[62] Mandrou, P., et al., 1991, Conf. Proc. AIP, 232, 492
[63] McConnel, M.L., et al., 1989, ApJ, 343, 317
[64] Mereghetti, S., et al., 2020, ApJ, 898, L29
[65] Mertz, L. & Young, N.O., 1961, Conf. Proc. Opt. Instr. Tech., 305
[66] Miyamoto, S., 1977, Space Sci. Inst., 3, 473
[67] Oh, K., et al., 2018, ApJS, 235, 4
[68] Orlando, E., et al., 2021, Conf. Proc. ICRC 2021, POS, 395, 650
[69] Paul, J., et al., 1991, AdSpR, 11(8), 289
[70] Pavlinsky, M.N., et al., 1994, ApJ, 425, 110
[71] Piro, L., et al., 1998, A&A, 329, 906
[72] Proctor, R.J., et al., 1978, MNRAS, 185, 745
[73] Proctor, R.J., et al., 1979, MNRAS, 187, 633
[74] Renaud, M., et al., 2006, A&A, 456, 389
[75] Revnivtsev, M., et al., 2004, A&A, 425, A49
[76] Revnivtsev, M., et al., 2004, AstL, 30, 527
[77] Roques, J.P., 1987, App. Opt., 26(18), 3862
[78] Scaringi, S., et al., 2010, A&A, 516, A75
[79] Segreto, A., et al., 2010, A&A, 510, A47
[80] Sims, M.R., et al., 1980, SSI, 5, 109
[81] Singh, K.P., et al., 2014, SPIE, 9144, 91441S
[82] Skinner, G.K., 1995, Exper. Astron., 6, 2
[83] Skinner, G.K., 2008, App. Opt., 47(15), 2739
[84] Skinner, G.K., et al., 1987, Nat, 330, 544
[85] Skinner, G.K., et al., 1987, ASS, 136, 337
[86] Skinner, G.K., et al., 1988, ApL&C, 27, 199
[87] Skinner, G.K. & Ponman, T.J., 1994, MNRAS, 267, 518
[88] Sunyaev, R.A., et al., 1987, Nat, 330, 227
[89] Sunyaev, R.A., et al., 1990, AdSpR, 10(2), 233
[90] Takahashi, T., et al., 2014, SPIE, 9144, 25
[91] Tueller, J., et al., 2010, ApJS, 186, 378
[92] Ubertini, P., et al., 2003, A&A, 411, L131
[93] Vedrenne, G., et al., 2003, A&A, 411, L63
[94] Vibhute, A., et al., 2021, J. Astrophys. Astr., 42, 76
[95] Wei, J., et al., 2016, SVOM White Paper, arXiv:1610.06892
[96] Willingale, R., et al., 1984, NIMS, 221, 60
[97] Willmore, A.P., et al., 1992, MNRAS, 258, 621
[98] Winkler, C., et al., 2003, A&A, 411, L1
[99] Yuan, W., et al., 2018, SPIE, 10699, 25
[100] Zhang, S.-N., et al., 2019, Sci. China, Phys., Mech. & Astr., 62, 2, 029502
[101] Zhang, R., et al., 2019, NIMS, 934, 41
How Much Does Attention Actually Attend? Questioning the Importance of Attention in Pretrained Transformers

Michael Hassid, Hao Peng*, Daniel Rotem, Jungo Kasai, Ivan Montero*, Noah A. Smith, Roy Schwartz

School of Computer Science & Engineering, Hebrew University of Jerusalem; Allen Institute for Artificial Intelligence; Apple, Inc.; Paul G. Allen School of Computer Science & Engineering, University of Washington

*This work was done while Hao Peng and Ivan Montero were at the University of Washington.

arXiv:2211.03495

Abstract

The attention mechanism is considered the backbone of the widely-used Transformer architecture. It contextualizes the input by computing input-specific attention matrices. We find that this mechanism, while powerful and elegant, is not as important as typically thought for pretrained language models. We introduce PAPA (Probing Analysis for PLMs' Attention), a new probing method that replaces the input-dependent attention matrices with constant ones, the average attention weights over multiple inputs. We use PAPA to analyze several established pretrained Transformers on six downstream tasks. We find that without any input-dependent attention, all models achieve competitive performance: an average relative drop of only 8% from the probing baseline. Further, little or no performance drop is observed when replacing half of the input-dependent attention matrices with constant (input-independent) ones. Interestingly, we show that better-performing models lose more from applying our method than weaker models, suggesting that the utilization of the input-dependent attention mechanism might be a factor in their success. Our results motivate research on simpler alternatives to input-dependent attention, as well as on methods for better utilization of this mechanism in the Transformer architecture.

Figure 1: Illustration of the PAPA method, which measures how much PLMs use the attention mechanism. PAPA replaces the input-dependent attention matrices (left) with constant ones (right). We then measure the performance gap between the two. A moderate drop indicates minor reliance on the attention mechanism.
Introduction
Pretrained Transformer (Vaswani et al., 2017) models have enabled great progress in NLP in recent years (Devlin et al., 2019; Liu et al., 2019b; Raffel et al., 2020; Brown et al., 2020; Chowdhery et al., 2022). A common belief is that the backbone of the Transformer model, and pretrained language models (PLMs) in particular, is the attention mechanism, which applies multiple attention heads in parallel, each generating an input-dependent attention weight matrix.
Interestingly, recent work found that attention patterns tend to focus on constant (input-independent) positions (Clark et al., 2019; Voita et al., 2019), while other works showed that it is possible to pretrain language models where the attention matrices are replaced with constant matrices without major loss in performance (Lee-Thorp et al., 2021; Hua et al., 2022). A natural question that follows is how much standard PLMs, pretrained with the attention mechanism, actually rely on this input-dependent property. This paper shows that they are less dependent on it than previously thought.
We present a new analysis method for PLMs: Probing Analysis for PLMs' Attention (PAPA). For each attention head h, PAPA replaces the attention matrix with a constant one: a simple average of the attention matrices for h computed on some unlabeled corpus. Replacing all attention matrices with such constant matrices results in an attention-free variant of the original PLM (see Fig. 1). We then compute, for some downstream tasks, the probing performance gap between an original model and its attention-free variant.
This provides a tool to quantify the models' reliance on attention. Intuitively, a larger performance drop indicates that the model relies more on the input-dependent attention mechanism.
We use PAPA to study three established pretrained Transformers: BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019b), and DeBERTa (He et al., 2021), each with BASE- and LARGE-sized versions. We evaluate these models on six diverse benchmarks, spanning text classification and structured prediction tasks.
Our results suggest that attention is not as important to pretrained Transformers as previously thought. First, the performance of the attention-free variants is comparable to that of the original models: an average relative drop of only 8%. Second, replacing half of the attention matrices with constant ones has little effect on performance, and in some cases may even lead to performance improvements. Interestingly, our results hint that better models use their attention capability more than weaker ones; when comparing the effect of PAPA on different models, we find that the better the model's original performance is, the more it suffers from replacing the attention matrices with constant ones. This suggests a potential explanation for the source of the empirical superiority of some models over others: they make better use of the attention mechanism.
This work grants a better understanding of the attention mechanism in pretrained Transformers. It also motivates further research on simpler or more efficient Transformer models, either for pretraining (Lee-Thorp et al., 2021; Hua et al., 2022) or potentially as an adaptation of existing pretrained models (Peng et al., 2020a, 2022; Kasai et al., 2021). It also provides a potential path to improve the Transformer architecture, by designing inductive bias mechanisms for better utilization of attention (Peng et al., 2020b; Wang et al., 2022).
Finally, our work may contribute to the "attention as explanation" debate (Jain and Wallace, 2019; Serrano and Smith, 2019; Wiegreffe and Pinter, 2019; Bibal et al., 2022). By showing that some PLMs can perform reasonably well with constant matrices, we suggest that explanations arising from the attention matrices might not be crucial for models' success.
We summarize our main contributions. (1) We present a novel probing method, PAPA, which quantifies the reliance of a given PLM on its attention mechanism by "disabling" that mechanism for this PLM. (2) We apply PAPA to six leading PLMs, and find that our manipulation leads to modest performance drops on average, which hints that attention might not be as important as thought. (3) We show that better-performing PLMs tend to suffer more from our manipulation, which suggests that the input-dependent attention is a factor in their success. (4) Finally, we release our code and experimental results.
Background: Attention in Transformers
Transformers consist of interleaving attention and feed-forward layers. In this work, we focus on Transformer encoder models, such as BERT, which are commonly used in many NLP applications. The (multi-headed) self-attention module takes as input a matrix $X \in \mathbb{R}^{n \times d}$ and produces a matrix $X_{out} \in \mathbb{R}^{n \times d}$, where $n$ denotes the number of input tokens, each represented as a $d$-dimensional vector. Each attention layer consists of $H$ heads, and each head $h \in \{1, \ldots, H\}$ has three learnable matrices: $W^h_Q, W^h_K, W^h_V \in \mathbb{R}^{d \times d'}$. Multiplying them with the input $X$ results in $Q^h, K^h, V^h \in \mathbb{R}^{n \times d'}$ (the queries, keys and values, respectively).

The queries and the keys compute an $n \times n$ attention weight matrix $A^h$ between all pairs of tokens as softmax-normalized dot products:

$$A^h = \mathrm{softmax}\big((X W^h_Q)(X W^h_K)^{\top}\big) = \mathrm{softmax}\big(Q^h (K^h)^{\top}\big) \in \mathbb{R}^{n \times n} \quad (1)$$

where the softmax operation is taken row-wise. The value matrix $V^h$ is then left-multiplied by the attention matrix $A^h$ to generate the attention head output.
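A direct transcription of Eq. (1) for a single head might look as follows (following the equation as written, i.e., without the common 1/√d′ scaling; all dimensions here are arbitrary):

```python
import torch

def attention_matrix(X, W_Q, W_K):
    # Eq. (1): A = softmax((X W_Q)(X W_K)^T), softmax taken row-wise.
    Q = X @ W_Q                             # (n, d')
    K = X @ W_K                             # (n, d')
    return torch.softmax(Q @ K.T, dim=-1)   # (n, n)

n, d, d_prime = 8, 16, 4
X = torch.randn(n, d)
W_Q, W_K, W_V = (torch.randn(d, d_prime) for _ in range(3))

A = attention_matrix(X, W_Q, W_K)           # input-dependent weights
head_out = A @ (X @ W_V)                    # (n, d') head output
```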
Importantly, the attention matrix $A^h$ is input-dependent, i.e., defined by the input $X$. This property is considered to be the backbone of the attention mechanism (Vaswani et al., 2017).
An intriguing question is the extent to which PLMs actually rely on the attention mechanism. In the following, we study this question by replacing the attention matrices of PLMs with constant matrices. We hypothesize that if models make heavy use of attention, we will see a large drop in performance when preventing the model from using it.
As shown below, such a performance drop is often not observed.
The PAPA Method
We present PAPA, a probing method for quantifying the extent to which pretrained Transformer models use the attention mechanism. PAPA works by replacing the Transformer attention weights with constant matrices, computed by averaging the values of the attention matrices over unlabeled inputs (Sec. 3.1). PAPA also allows for replacing any subset (not just all) of the attention matrices. We propose a method for selecting which heads to replace (Sec. 3.2). The resulting model is then probed against different downstream tasks (Sec. 3.3). The performance difference between the original and the new models can be seen as an indication of how much the model uses its attention mechanism.
Generating Constant Matrices
To estimate how much a pretrained Transformer $m$ uses the attention mechanism, we replace its attention matrices with a set of constant ones, one for each head. To do so, PAPA constructs, for a given head $h$,^5 a constant matrix $C^h$ by averaging the attention matrix $A^h$ over a corpus of raw text. More specifically, given a corpus $D = \{e_1, \dots, e_{|D|}\}$, $C^h$ is defined as:

$$C^h = \frac{1}{|D|} \sum_{i=1}^{|D|} A_i^h, \quad (2)$$

where $A_i^h$ is the input-dependent attention matrix that $h$ constructs while processing $e_i$. We note that the average is taken entry-wise, and only over non-padded entries (padding tokens are ignored).
We emphasize that the construction process of the $C^h$ matrices requires no labels. In Sec. 5.2 we compare our method of constructing constant matrices from unlabeled data to other alternatives that either use no data at all or use labeled data.
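As a minimal, self-contained sketch of this averaging (with random inputs standing in for tokenized raw text, and equal-length inputs so that the padding-aware part of the averaging can be skipped):

import torch

torch.manual_seed(0)
n, d, d_prime = 8, 16, 4
W_Q, W_K = torch.randn(d, d_prime), torch.randn(d, d_prime)

def attention_matrix(X):
    # A^h for a single input, as in Eq. (1).
    return torch.softmax((X @ W_Q) @ (X @ W_K).T, dim=-1)

# Stand-in 'corpus': 100 random fixed-length inputs.
corpus = [torch.randn(n, d) for _ in range(100)]

# Entry-wise average over the corpus, Eq. (2).
C = torch.stack([attention_matrix(X) for X in corpus]).mean(dim=0)
assert C.shape == (n, n)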
Replacing a Subset of the Heads
Different attention heads may have different levels of dependence on attention. We therefore study the effect of replacing a subset of the heads and keeping the rest intact. To do so, we would like to estimate the reliance of each head on the input-dependent attention, which would allow replacing only the heads that are least input-dependent for the model.
To estimate this dependence, we introduce a new weighting parameter $\lambda^h \in (0, 1)$, initialized as $\lambda^h = 0.5$, for each attention head $h$.^6 $\lambda^h$ is a learned weighting of the two matrices: the attention matrix $A^h$ and the constant matrix $C^h$ from (1) and (2), respectively. For each input $e_i$, a new matrix $B_i^h$ is constructed as:

$$B_i^h = \lambda^h \cdot A_i^h + (1 - \lambda^h) \cdot C^h \quad (3)$$

We interpret a smaller $\lambda^h$ as an indication that $h$ depends less on the attention mechanism. We then train the probing classifier (Sec. 3.3) along with the additional $\lambda^h$ parameters. We use the learned $\lambda^h$ values to decide which heads should be replaced with constant matrices, by only replacing the $k\%$ of attention heads with the smallest $\lambda^h$ values, for some hyperparameter $k$.^7 Importantly, this procedure is only used as a pre-processing step; our experiments are trained and evaluated without it, where $k\%$ of each model's heads are replaced and the rest remain unchanged.
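A sketch of the interpolation in Eq. (3) and of the resulting head selection might look as follows; lam_params is assumed to be a list of learned scalar (0-dim) tensors, one per head across all layers (footnote 6):

import torch

def interpolated_attention(A, C, lam_param):
    # B^h_i from Eq. (3): a learned mixture of the input-dependent
    # attention matrix A and the constant matrix C.
    lam = torch.sigmoid(lam_param)           # squash the raw parameter to (0, 1)
    return lam * A + (1.0 - lam) * C

def heads_to_replace(lam_params, k):
    # Indices of the k% of heads with the smallest learned lambda,
    # i.e., those estimated to depend least on input-dependent attention.
    lams = torch.sigmoid(torch.stack(lam_params))
    n_replace = round(k / 100 * len(lams))
    return torch.argsort(lams)[:n_replace].tolist()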
Probing
Our goal is to evaluate how much attention a given PLM uses. Therefore, we want to avoid finetuning it for a specific downstream task, as this would change all of its weights and arguably answer a different question (e.g., how much attention does a task-finetuned PLM use?). Instead, we use a probing approach (Liu et al., 2019a; Belinkov, 2022) by freezing the model and adding a classifier on top.
Our classifier calculates for each layer a weighted (learned, non-attentive) representation of the token representations. It then concatenates the weighted representations of all layers and applies a 2-layer MLP. For structured prediction tasks (e.g., NER and POS), where a representation for each token is needed, we instead concatenate each token's representations across layers and apply a 2-layer MLP.
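One plausible realization of such a sentence-level probe is sketched below; the exact pooling mechanism, hidden size, and activation are our assumptions, as the text only specifies a learned, non-attentive weighting per layer followed by a 2-layer MLP.

import torch
import torch.nn as nn

class ProbeSketch(nn.Module):
    # Hypothetical sentence-level probe: learned token weighting per
    # layer, concatenation across layers, then a 2-layer MLP.

    def __init__(self, num_layers, d, hidden, num_classes):
        super().__init__()
        self.scorer = nn.Linear(d, 1)        # one shared token scorer (an assumption)
        self.mlp = nn.Sequential(
            nn.Linear(num_layers * d, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes))

    def forward(self, hidden_states):        # list of (n, d) tensors, one per layer
        pooled = []
        for H in hidden_states:
            w = torch.softmax(self.scorer(H).squeeze(-1), dim=-1)
            pooled.append(w @ H)             # weighted average of token vectors
        return self.mlp(torch.cat(pooled, dim=-1))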
When PAPA is applied to some input, we replace the attention matrices $A^h$ with the corresponding constant matrices $C^h$.^8 We then compare the downstream performance of the original model $m$ with that of the new model $m'$. The larger the performance gap between $m$ and $m'$, the higher $m$'s dependence on the attention mechanism.
Method Discussion
Contextualization with PAPA. PAPA replaces the attention matrices with constant ones, which results in an attention-free model. Importantly, unlike a feed-forward network, the representations computed by the resulting model are still contextualized, i.e., the representation of each word depends on the representations of all other words. The key difference between the standard Transformer model and our attention-free model is that in the former the contextualization varies with the input, while in the latter it remains fixed for all inputs.
Potential Computational Gains
The replacement of the attention matrix with a constant one motivates the search for efficient attention alternatives. Using constant matrices is indeed more efficient, reducing the attention head time complexity from $2n^2 d' + 3n d'^2$ to $n^2 d' + n d'^2$,^9 which shows potential for efficiency improvement.
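As a rough sense of scale (our own back-of-the-envelope numbers, not from the paper), for $n = 128$ and $d' = 64$:

$$2n^2 d' + 3n d'^2 \approx 3.67 \times 10^6, \qquad n^2 d' + n d'^2 \approx 1.57 \times 10^6,$$

i.e., roughly 2.3 times fewer multiply-accumulate operations per head.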
Several works used various approaches for replacing the attention mechanism with constant alternatives during the pretraining phase (Lee-Thorp et al., 2021; Hua et al., 2022), and indeed some of them showed high computational gains. Our work tackles a different question: how much do PLMs, which were trained with the attention mechanism, actually use it? Thus, unlike the approaches above, we choose to make minimal changes to the original models. Nonetheless, our results further motivate the search for efficient attention variants.
Experiments
We now turn to using PAPA to study the attention usage of various PLMs.
Experimental Setup
Our experiments are conducted over both text classification and structured prediction tasks, all in English. For the former we use four diverse benchmarks from the GLUE benchmark (Wang et al., 2019): MNLI (Williams et al., 2018), SST-2 (Socher et al., 2013), MRPC (Dolan and Brockett, 2005), and CoLA (Warstadt et al., 2019). For the latter we use named entity recognition (NER) and part-of-speech tagging (POS) from the CoNLL-2003 shared task (Tjong Kim Sang and De Meulder, 2003).^10 We use the standard train/validation splits, and report validation results in all cases.^11 We use three widely-used pretrained Transformer encoder models: BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019b), and DeBERTa (He et al., 2021). We use both BASE (12 layers, 12 heads in each layer) and LARGE (24 layers, 16 heads per layer) versions. For each model and each task, we generate the constant matrices with the given (unlabeled) training set of that task. In Sec. 5.3 we show that PAPA is not very sensitive to the specific training set being used.
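The per-head attention matrices needed for Eq. (2) can be extracted with the Transformers library (which the released code builds on, App. B); the sketch below shows the mechanism for a single sentence, omitting batching and the padding-aware averaging.

import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base").eval()

inputs = tok("a raw training sentence", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_attentions=True)

# out.attentions is a tuple with one (batch, heads, n, n) tensor per
# layer; these are the A^h matrices that PAPA averages into C^h.
print(len(out.attentions), out.attentions[0].shape)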
All experiments are conducted with three different random seeds, and the average result is reported (95% confidence intervals are shown). Pre-processing and additional experimental details are described in App. A and B, respectively.
Probing Results
The results of the BASE and LARGE models are presented in Fig. 2a and 2b, respectively. We measure the performance of each model on each task when keeping {1, 1/2, 1/8, 1/16, 0} of the model's input-dependent attention matrices and replacing the rest with constant ones.
We first consider the original, fully-attentive models and find that performance decreases in the order of DeBERTa, RoBERTa, and BERT.
This order is roughly maintained across tasks and model sizes, which conforms with previous results of fine-tuning these PLMs (He et al., 2021). This suggests that the model ranking of our probing method is consistent with the standard fine-tuning setup.
We note that the trends across tasks and models are similar; hence we discuss them all together in the following (up to specific exceptions).
Replacing all attention matrices with constant ones incurs a moderate performance drop. As shown in Fig. 2, applying PAPA to all attention heads leads to an 8% relative performance drop on average, and never greater than 20% from the original model.^12 This result suggests that pretrained models only moderately rely on the attention mechanism.

Half of the attention matrices can be replaced without loss in performance. We note that in almost all cases, replacing half of a model's attention matrices leads to no major drop in performance.
In fact, in some cases performance even improves compared to the original model (e.g., BERT BASE and DeBERTa LARGE), suggesting that some of the models' heads have a slight preference for constant matrices. This result is consistent with some of the findings of recent hybrid models that use both constant and regular attention (e.g., Lee-Thorp et al., 2021) to build efficient models.
Performant models rely more on attention. Fig. 3 shows for each model the relation between the original performance (averaged across tasks) and the average (relative) reduced score when replacing all attention heads. We observe a clear trend between the models' performance and their relative reduced score, which suggests that better-performing models use their attention mechanism more.
Further Analysis
We present an analysis of PAPA to better understand its properties. We first discuss the patterns of the constant matrices produced by PAPA (Sec. 5.1). Next, we consider other alternatives for generating constant matrices (Sec. 5.2); we then examine whether the constant matrices are data-dependent (Sec. 5.3); we continue by exploring alternative methods for selecting which attention heads to replace (Sec. 5.4). Finally, we present MLM results and discuss the challenges in interpreting them (Sec. 5.5). In all experiments below, we use RoBERTa BASE. RoBERTa LARGE experiments show very similar trends; see App. C.
Patterns of the Constant Matrices
We first explore the attention patterns captured by different heads by inspecting the constant matrices ($C^h$). We first notice a diagonal pattern, in which each token mostly attends to itself or to its neighboring words. This pattern is observed in about 90% of the constant matrices produced by PAPA. Second, about 40% of the heads put most of their weight mass on the [CLS] and/or [SEP] tokens (perhaps in combination with the diagonal pattern described above). Lastly, while for some heads the weight mass is concentrated in a single entry per row (corresponding to a single token), in most cases it is distributed over several entries (corresponding to several different tokens). These patterns are similar to those identified by Clark et al. (2019), and explain in part our findings: many of the attention heads mostly focus on fixed patterns that can also be captured by a constant matrix. Fig. 4 shows three representative attention heads that illustrate the patterns above.
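These patterns can be quantified directly from the $C^h$ matrices; the band width and the choice of special-token positions in the sketch below are our own simplifications.

import torch

def diagonal_mass(C, width=1):
    # Average fraction of each row's weight within +/- width of the
    # diagonal; a crude detector for the diagonal pattern.
    n = C.shape[0]
    idx = torch.arange(n)
    band = ((idx[None, :] - idx[:, None]).abs() <= width).float()
    return (C * band).sum(dim=-1).mean().item()

def special_token_mass(C, cls_pos=0, sep_pos=-1):
    # Average weight a row places on the [CLS] and [SEP] positions.
    return (C[:, cls_pos] + C[:, sep_pos]).mean().item()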
Alternative Constant Matrices
PAPA replaces the attention matrices with constant ones. As described in Sec. 3.1, this procedure requires only an unlabeled corpus. In this section, we compare this choice with constant matrices that are constructed without any data (data-free matrices), and those that require labeled data for construction (labeled matrices). For the former we consider three types of matrices: (1) Identity matrix, in which each token 'attends' only to itself, essentially turning self-attention into a regular feed-forward layer (each token is processed separately); (2) Toeplitz matrix, a simple Toeplitz matrix (as suggested in prior work), where the weight mass is on the current token and decreases as the attended token is further from the current one (the entries of the matrix are based on the harmonic series);^13 (3) Zeros matrix, which essentially prunes the heads. A sketch of these constructions is given below.
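The three data-free constructions can be sketched as follows; the row normalization of the Toeplitz matrix is our guess, as the text only states that its entries are based on the harmonic series.

import torch

def identity_matrix(n):
    return torch.eye(n)                      # each token 'attends' only to itself

def toeplitz_matrix(n):
    # Harmonic-decay Toeplitz matrix: the weight is largest on the
    # current token and decreases with distance; rows are normalized.
    idx = torch.arange(n)
    dist = (idx[None, :] - idx[:, None]).abs().float()
    T = 1.0 / (dist + 1.0)
    return T / T.sum(dim=-1, keepdim=True)

def zeros_matrix(n):
    return torch.zeros(n, n)                 # equivalent to pruning the head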
We also consider two types of labeled matrices: (4) initialized as the Toeplitz matrices from (2); and (5) initialized as our average matrices. These matrices are updated during the training procedure of the probing classifier.^14

Figure 4: Generated constant matrices $C^h$ by the PAPA method for representative heads (layer, head): (0, 11), (4, 4), and (6, 3). These matrices are used in the attention-free variant of RoBERTa BASE on the SST-2 task.

Tab. 1 shows the performance of each resulting attention-free model on all downstream tasks. We observe that for all tasks, our average-based model outperforms all other data-free models by a notable margin. As for the labeled-matrices models, our model also outperforms the one initialized with Toeplitz matrices (4), and in most cases obtains similar results to the model initialized with average matrices (5). It should be noted that the original models (with regular attention) do not update their inner parameters in the probing training phase, which makes the comparison to the labeled-matrices models somewhat unfair. The above suggests that our choice of constant matrix replacement better estimates the performance of attention-free PLMs.
Are the Constant Matrices Data-Dependent?
PAPA constructs the constant matrix for a given head, $C^h$, as the average of the model's attention matrices over a given corpus $D$, which in our experiments is set to be the training set of the task at hand (labels are not used). Here we examine the importance of this experimental choice by generating $C^h$ using a different dataset: the MNLI training set, which is out of distribution for the other tasks. Results are presented in Tab. 2. The performance across all tasks is remarkably similar between generating the matrices using the specific task training set and using MNLI, which suggests that the constant matrices might be somewhat data-independent.
Figure 5: Comparison between different head selection methods over MNLI (y-axis: MNLI accuracy; legend: 'Ours', 'Start to End', 'End to Start', 'Random', 'Reversed'). Our method outperforms all other alternatives. The x-axis represents the fraction of input-dependent attention heads.
Alternative Head Selection Methods
We compare our method for selecting which heads to replace (Sec. 3.2) with a few alternatives; a sketch of the resulting orderings is given after this paragraph. The first two replace the heads in layer order: (1) we sort the heads from the model's first layer to the last, and (2) from the model's last layer to the first. In both cases we use the internal head ordering within each layer. We then replace the first k% of the heads. We also add (3) a random baseline that randomly replaces k% of the heads, and (4) a 'Reversed' baseline that replaces the heads with the highest (rather than lowest) $\lambda^h$ values (Sec. 3.2). Fig. 5 shows the MNLI performance of each method as a function of the fraction of heads replaced. We observe that our method, which is based on a learned estimate of attention importance, outperforms all other methods for every fraction of heads replaced. Moreover, the 'Reversed' method is the worst among the examined methods, which suggests that our method not only replaces the least attention-dependent heads first, but also replaces the most dependent ones last. Although our head replacement order outperforms the above methods, we note that it is an overestimation of the model's attention dependency, and better methods might show that even less attention is needed.
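For illustration, the compared orderings can be written as follows; head_ids is assumed to be the list of (layer, head) pairs in the model's layer-then-head order, and lambdas a mapping from head to its learned value from Sec. 3.2.

import random

def replacement_order(head_ids, lambdas, method):
    # Orders heads for replacement according to the compared baselines.
    if method == "ours":                     # smallest learned lambda first
        return sorted(head_ids, key=lambda h: lambdas[h])
    if method == "reversed":                 # largest lambda first
        return sorted(head_ids, key=lambda h: -lambdas[h])
    if method == "start_to_end":             # first layer to last
        return list(head_ids)
    if method == "end_to_start":             # last layer to first
        return list(reversed(head_ids))
    if method == "random":
        order = list(head_ids)
        random.shuffle(order)
        return order
    raise ValueError(f"unknown method: {method}")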
Effects on MLM Perplexity
So far we have shown that applying PAPA to downstream tasks incurs only a moderate accuracy drop. This section explores its impact on masked language modeling (MLM). We find that while our models suffer a larger performance drop on this task compared to the other tasks, this can be explained by their pretraining procedure. Fig. 6a plots the negative log perplexity (higher is better) of all BASE models on the WikiText-103 (Merity et al., 2017) validation set. When replacing attention matrices using PAPA, MLM suffers a larger performance drop compared to the downstream tasks (Sec. 4.2). We hypothesize that this is because these pretrained Transformers are more specialized in MLM, the task they are pretrained on. As a result, they are less able to adapt to architectural changes in MLM than in downstream tasks. To test our hypothesis, we probe ELECTRA BASE (Clark et al., 2020) using PAPA. ELECTRA is an established pretrained Transformer trained with the replaced-token-detection objective instead of MLM. It has proven successful on a variety of downstream tasks.
ELECTRA BASE's probing performance on MLM supports our hypothesis. We first note that its original performance is much worse compared to the other models (-3.51 compared to around -2 for the MLM-based models), despite it showing similar performance on downstream tasks (Fig. 6b), which hints that this model is much less adapted to MLM. Moreover, the drop when gradually removing heads is more modest (a 0.44 drop compared to 1-1.5 for the other models), and looks more similar to ELECTRA BASE's probing performance on MNLI (Fig. 6b). Our results suggest a potential explanation for the fact that some pretrained Transformers suffer a larger performance drop on MLM than on downstream tasks: rather than MLM demanding higher attention use, this is likely because these models are pretrained with the MLM objective.
Related Work
Attention alternatives. Various efforts have been made in search of a simple or efficient alternative to the attention mechanism. Some works focused on building a Transformer variant based on an efficient approximation of the attention mechanism (Kitaev et al., 2020; Wang et al., 2020; Peng et al., 2020a; Choromanski et al., 2021; Schlag et al., 2021; Qin et al., 2022). Another line of research, which is more related to our work, replaced the attention mechanism in Transformers with a constant (and efficient) one. For instance, FNet (Lee-Thorp et al., 2021) replaced the attention matrix with the Vandermonde matrix, while gMLP (Liu et al., 2021) and FLASH (Hua et al., 2022) replaced it with a learned matrix.^15 These works showed that pretraining attention-free LMs can lead to competitive performance. Our work shows that PLMs trained with attention can achieve competitive performance even if they are denied access to this mechanism during transfer learning.

Pruning methods. In this work we replaced the attention matrix with a constant one in order to measure the importance of the input-dependent ability. Works like Michel et al. (2019) and Li et al. (2021) pruned attention heads in order to measure their importance for the task examined. These works find that for some tasks, only a small number of unpruned attention heads is sufficient, and thus relate to the question of how much attention a PLM uses. In this work we argue that replacing attention matrices with constant ones provides a more accurate answer to this question compared to pruning these matrices, and propose PAPA, a method for constructing such constant matrices.
Analysis of attention patterns. Some investigations of how attention patterns in Transformers work use probing techniques. Clark et al. (2019), Ravishankar et al. (2021), and Htut et al. (2019) studied the attention behavior in BERT. Unlike the above works, which only focus on the attention patterns of the PLM, our work sheds light on the dependence of PLMs on their attention mechanism.
Conclusion
In this work, we found that PLMs are not as dependent on their attention mechanism as previously thought. To this end, we presented PAPA, a method for analyzing attention usage in PLMs. We applied PAPA to several widely-used PLMs and six downstream tasks. Our results show that replacing all of the attention matrices with constant ones achieves performance competitive with the original model, and that half of the attention matrices can be replaced without any loss in performance. We also show a clear relation between a PLM's aggregate performance across tasks and its degradation when replacing all attention matrices with constant ones, which hints that performant models make better use of their attention.
Our results motivate further work on novel Transformer architectures with more efficient attention mechanisms, both for pretraining and for knowledge distillation of existing PLMs. They also motivate the development of Transformer variants that improve performance by making better use of the attention mechanism.
Limitations
This work provides an analysis of the attention mechanism in PLMs. Our PAPA method is based on probing rather than finetuning, which is the more common way of using PLMs. We recognize that the attention mechanism in finetuned PLMs might act differently than in the original model, but our main focus is investigating the PLM itself, rather than its finetuned version.
Our analysis method is built on replacing the attention matrices with constant ones (Sec. 3.1). We build these constant matrices by averaging the attention matrices over a given dataset. Because of this choice, our results reflect a lower bound on the results of the optimal attention-free model, and we acknowledge that there might be methods for constructing the constant matrices that would lead to even smaller gaps from the original model. A similar argument applies to our head selection method (Sec. 3.2). Importantly, better methods for these sub-tasks might further reduce the gap between the original models and the attention-free ones, which would only strengthen our argument.
Finally, we note that we used the PAPA method with six English tasks, and recognize that results might be different for other tasks and other languages.
A Pre-Processing
To make the replacement of the attention matrix with a constant one reasonable, we fix the position of the [SEP] token to always be the last token of the model's input, rather than separating the last input token from the padding tokens (i.e., it comes after the padding tokens rather than before them). For tasks with two sequences per example (e.g., MNLI), which are typically separated by an additional [SEP] token, we fix this token to always be the middle token of the sequence, followed by the second sentence. We recognize that this might lead to suboptimal usage of the input's sequence length, e.g., if one of the sentences is substantially longer than the other and particularly if it is longer than half of the sequence length, it would thus be trimmed. In our experiments this only happened in less than 0.2% of input samples for a single task (MNLI), but we recognize that this might happen more frequently in other datasets.
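A possible token-id-level sketch of this packing for two-sequence tasks (our own simplification; special tokens other than [SEP], such as [CLS], and the attention masks are omitted):

def pack_pair(tokens_a, tokens_b, seq_len, pad_id, sep_id):
    # Pin the separating [SEP] to the middle position and the final
    # [SEP] to the last position, padding (or trimming) each half.
    half = seq_len // 2
    k_a, k_b = half - 1, seq_len - half - 1
    a = (tokens_a[:k_a] + [pad_id] * k_a)[:k_a] + [sep_id]
    b = (tokens_b[:k_b] + [pad_id] * k_b)[:k_b] + [sep_id]
    return a + b  # length seq_len, with [SEP] at positions half-1 and -1

assert len(pack_pair([1, 2, 3], [4, 5], 8, 0, 9)) == 8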
B Hyperparameters
All of our code was implemented with the Transformers library (Wolf et al., 2020). Hyperparameters for the probing classifier on downstream tasks are shown in Tab. 3.
Figure 2: Probing results (y-axis) with decreasing number of attention heads (x-axis). BASE models are shown in Fig. 2a, and LARGE models are shown in Fig. 2b. Higher is better in all cases.

Figure 3: Stronger-performing PLMs use their attention capability more. y-axis: original model average performance; x-axis: relative reduced score when all attention matrices are replaced with constant ones.

Figure 6: ELECTRA BASE compared with other BASE models on MLM and MNLI. In Fig. 6a, ELECTRA BASE behaves similarly to its behavior on MNLI, but not like the other models, which are MLM-based. In Fig. 6b, ELECTRA BASE behaves similarly to the other models. In both graphs the x-axis represents the fraction of input-dependent attention heads, and the y-axis is the score of the specific task (higher is better).
Table 1: Probing performance of RoBERTa BASE with different constant matrix types as a replacement for the input-dependent attention matrix. Bold numbers indicate the best constant model for the task. Our approach, based on an average of multiple attention matrices, outperforms all other data-free matrix types across all tasks, and achieves similar results to the best model based on labeled data. In all tasks higher is better.

Matrix Construction   Matrix Type      CoLA   MRPC   SST2   MNLI-mm   NER    POS
Attention based       Original         0.47   0.85   0.91   0.78      0.92   0.93
Data-Free             Identity         0.04   0.80   0.80   0.63      0.55   0.87
                      Toeplitz         0.08   0.81   0.79   0.65      0.77   0.90
                      Zeros            0.09   0.80   0.80   0.66      0.57   0.87
Labeled Data          Toeplitz init.   0.08   0.81   0.79   0.68      0.78   0.91
                      Average init.    0.34   0.81   0.87   0.72      0.89   0.93
Unlabeled Data        Average (Ours)   0.31   0.82   0.87   0.69      0.89   0.93
Table 2: Comparison of probing performance of RoBERTa BASE between two setups of constructing the averaged constant matrices $C^h$: Per-Task uses the task training set, while MNLI uses the constant matrices generated with the MNLI dataset. The results are similar between the two setups, which indicates a low dependence of the constant matrices on the dataset used for constructing them.
Task    Learning Rate   Batch   Epochs   Seq. Len.
CoLA    2.00E-05        16      15       64
SST-2   1.00E-04        32      4        64
MNLI    2.00E-04        8       4        256
MRPC    2.00E-05        16      15       128
NER     1.00E-04        8       4        128
POS     5.00E-04        8       4        128
MLM     5.00E-04        8       2        128
Table 3: Probing classifier hyperparameters for downstream tasks.

C Further Analysis Results for RoBERTa LARGE

Tab. 4 and 5 show RoBERTa LARGE's analysis results for the experiments described in Sec. 5.2 and 5.3, respectively.

Table 4: Probing performance of RoBERTa LARGE with different constant matrix types as a replacement for the input-dependent attention matrix (columns: Matrix Type, CoLA, MRPC, SST2, MNLI-mm, NER, POS). Tab. 1 shows the corresponding results for RoBERTa BASE.

Table 5: Comparison of probing performance of RoBERTa LARGE between two setups of constructing the averaged constant matrices $C^h$: Per-Task uses the task training set, while MNLI uses the constant matrices generated with the MNLI dataset. Tab. 2 shows the corresponding results for RoBERTa BASE.

Task       CoLA   MRPC   SST2   NER    POS
Per-Task   0.34   0.80   0.85   0.89   0.92
MNLI       0.35   0.81   0.85   0.88   0.92
Footnotes

2 https://github.com/schwartz-lab-NLP/papa
3 $d'$ is the head dimension, and usually defined as $d' = d/H$.
4 Some attention variants (e.g., He et al., 2021) incorporate positional information as part of the calculation of $A^h$.
5 We do so for all layers in parallel. Layer indices are omitted for simplicity.
6 $\lambda^h$ is the output of a sigmoid over a learned parameter.
7 In Sec. 5.4 we show that this head selection method outperforms other alternatives.
8 To minimize model changes, we also mask the $C^h$ entries corresponding to padded tokens, and normalize the matrix (row-wise), as in a regular Transformer.
9 $n$ is the sequence length and $d'$ is the head dimension.
10 We report accuracy for SST2 and MNLI, F1 score for MRPC, NER, and POS, and MCC for CoLA.
11 For MNLI, we report the mismatched validation split.
12 For the MRPC task, some of the attention-free models do get close to the majority baseline, though still above it.
13 Similar to the Gaussian matrices suggested by You et al. (2020).
14 To make minimal changes to the frozen model, all constant matrices are masked and normalized (row-wise), the same as the output of the original softmax operation.
15 These models also added a gating mechanism, which does not change the input-independent nature of their component.
AcknowledgmentsWe thank Miri Varshavsky for the great feedback and moral support. This work was supported in part by NSF-BSF grant 2020793, NSF grant 2113530, an Ulman Fellowship, a Google Fellowship, a Leibnitz Fellowship, and a research gift from Intel.
References

Yonatan Belinkov. 2022. Probing classifiers: Promises, shortcomings, and advances. Computational Linguistics, 48(1):207-219.

Adrien Bibal, Rémi Cardon, David Alfter, Rodrigo Wilkens, Xiaoou Wang, Thomas François, and Patrick Watrin. 2022. Is attention explanation? An introduction to the debate. In Proc. of ACL, pages 3889-3900, Dublin, Ireland.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, 33:1877-1901.

Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamás Sarlós, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, David Belanger, Lucy Colwell, and Adrian Weller. 2021. Rethinking attention with Performers. In Proc. of ICLR.

Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, et al. 2022. PaLM: Scaling language modeling with pathways. arXiv:2204.02311.

Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT look at? An analysis of BERT's attention. In Proc. of BlackboxNLP.

Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: Pre-training text encoders as discriminators rather than generators. In Proc. of ICLR.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proc. of NAACL.

William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proc. of IWP.

Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021. DeBERTa: Decoding-enhanced BERT with disentangled attention. In Proc. of ICLR.

Phu Mon Htut, Jason Phang, Shikha Bordia, and Samuel R. Bowman. 2019. Do attention heads in BERT track syntactic dependencies? arXiv:1911.12246.

Weizhe Hua, Zihang Dai, Hanxiao Liu, and Quoc V. Le. 2022. Transformer quality in linear time. arXiv:2202.10447.

Sarthak Jain and Byron C. Wallace. 2019. Attention is not explanation. In Proc. of NAACL, pages 3543-3556, Minneapolis, Minnesota.

Jungo Kasai, Hao Peng, Yizhe Zhang, Dani Yogatama, Gabriel Ilharco, Nikolaos Pappas, Yi Mao, Weizhu Chen, and Noah A. Smith. 2021. Finetuning pretrained transformers into RNNs. In Proc. of EMNLP.

Nikita Kitaev, Lukasz Kaiser, and Anselm Levskaya. 2020. Reformer: The efficient transformer. In Proc. of ICLR.

James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, and Santiago Ontanon. 2021. FNet: Mixing tokens with Fourier transforms. arXiv:2105.03824.

Jiaoda Li, Ryan Cotterell, and Mrinmaya Sachan. 2021. Differentiable subset pruning of transformer heads. Transactions of the Association for Computational Linguistics, 9:1442-1459.

Hanxiao Liu, Zihang Dai, David So, and Quoc V. Le. 2021. Pay attention to MLPs. In Advances in Neural Information Processing Systems, 34:9204-9215.

Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019a. Linguistic knowledge and transferability of contextual representations. In Proc. of NAACL.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke S. Zettlemoyer, and Veselin Stoyanov. 2019b. RoBERTa: A robustly optimized BERT pretraining approach. arXiv:1907.11692.

Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017. Pointer sentinel mixture models. In Proc. of ICLR.

Paul Michel, Omer Levy, and Graham Neubig. 2019. Are sixteen heads really better than one? In Proc. of NeurIPS.

Hao Peng, Jungo Kasai, Nikolaos Pappas, Dani Yogatama, Zhaofeng Wu, Lingpeng Kong, Roy Schwartz, and Noah Smith. 2022. ABC: Attention with bounded-memory control. In Proc. of ACL, pages 7469-7483, Dublin, Ireland.

Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah Smith, and Lingpeng Kong. 2020a. Random feature attention. In Proc. of ICLR.

Hao Peng, Roy Schwartz, Dianqi Li, and Noah A. Smith. 2020b. A mixture of h-1 heads is better than h heads. In Proc. of ACL, pages 6566-6577, Online.

Zhen Qin, Weixuan Sun, Hui Deng, Dongxu Li, Yunshen Wei, Baohong Lv, Junjie Yan, Lingpeng Kong, and Yiran Zhong. 2022. cosFormer: Rethinking softmax in attention. arXiv:2202.08791.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. JMLR.

Vinit Ravishankar, Artur Kulmizev, Mostafa Abdou, Anders Søgaard, and Joakim Nivre. 2021. Attention can reflect syntactic structure (if you let it). arXiv:2101.10927.

Imanol Schlag, Kazuki Irie, and Jürgen Schmidhuber. 2021. Linear transformers are secretly fast weight memory systems. In Proc. of ICML.

Sofia Serrano and Noah A. Smith. 2019. Is attention interpretable? In Proc. of ACL, pages 2931-2951, Florence, Italy.

Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proc. of EMNLP.

Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proc. of CoNLL.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proc. of NeurIPS.

Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. 2019. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. In Proc. of ACL, pages 5797-5808, Florence, Italy.

Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proc. of ICLR.

Shanshan Wang, Zhumin Chen, Zhaochun Ren, Huasheng Liang, Qiang Yan, and Pengjie Ren. 2022. Paying more attention to self-attention: Improving pre-trained language models via attention guiding. arXiv:2204.02922.

Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, and Hao Ma. 2020. Linformer: Self-attention with linear complexity. arXiv:2006.04768.

Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2019. Neural network acceptability judgments. TACL.

Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. In Proc. of EMNLP-IJCNLP, pages 11-20, Hong Kong, China.

Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proc. of NAACL.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2020. Transformers: State-of-the-art natural language processing. In Proc. of EMNLP: System Demonstrations, pages 38-45, Online.

Weiqiu You, Simeng Sun, and Mohit Iyyer. 2020. Hard-coded Gaussian attention for neural machine translation. arXiv:2005.00742.
|
[
"Machine Learning-based Framework for Optimally Solving the Analytical Inverse Kinematics for Redundant Manipulators"
]
| [
"M N Vu \nAutomation & Control Institute (ACIN)\nTU Wien\nViennaAustria",
"F Beck \nAutomation & Control Institute (ACIN)\nTU Wien\nViennaAustria",
"M Schwegel \nAutomation & Control Institute (ACIN)\nTU Wien\nViennaAustria",
"C Hartl-Nesic \nAutomation & Control Institute (ACIN)\nTU Wien\nViennaAustria",
"A Nguyen \nDepartment of Computer Science\nUniversity of Liverpool\nLiverpoolEngland",
"A Kugi \nAutomation & Control Institute (ACIN)\nTU Wien\nViennaAustria\n\nCenter for Vision, Automation & Control\nAustrian Institute of Technology GmbH (AIT)\nViennaAustria"
]
| [
"Automation & Control Institute (ACIN)\nTU Wien\nViennaAustria",
"Department of Computer Science\nUniversity of Liverpool\nLiverpoolEngland",
"Center for Vision, Automation & Control\nAustrian Institute of Technology GmbH (AIT)\nViennaAustria"
]
| []
| Solving the analytical inverse kinematics (IK) of redundant manipulators in real time is a difficult problem in robotics since its solution for a given target pose is not unique. Moreover, choosing the optimal IK solution with respect to application-specific demands helps to improve the robustness and to increase the success rate when driving the manipulator from its current configuration towards a desired pose. This is necessary, especially in high-dynamic tasks like catching objects in mid-flights. To compute a suitable target configuration in the joint space for a given target pose in the trajectory planning context, various factors such as travel time or manipulability must be considered. However, these factors increase the complexity of the overall problem which impedes real-time implementation. In this paper, a real-time framework to compute the analytical inverse kinematics of a redundant robot is presented. To this end, the analytical IK of the redundant manipulator is parameterized by so-called redundancy parameters, which are combined with a target pose to yield a unique IK solution. Most existing works in the literature either try to approximate the direct mapping from the desired pose of the manipulator to the solution of the IK or cluster the entire workspace to find IK solutions. In contrast, the proposed framework directly learns these redundancy parameters by using a neural network (NN) that provides the optimal IK solution with respect to the manipulability and the closeness to the current robot configuration. Monte Carlo simulations show the effectiveness of the proposed approach which is accurate and real-time capable (≈ 32 µs) on the KUKA LBR iiwa 14 R820. | 10.1016/j.mechatronics.2023.102970 | [
"https://export.arxiv.org/pdf/2211.04275v3.pdf"
]
| 253,398,025 | 2211.04275 | 3fdf377f486f1b8028e27dcc7b3863d60f0a2a23 |
Machine Learning-based Framework for Optimally Solving the Analytical Inverse Kinematics for Redundant Manipulators
M N Vu, F Beck, M Schwegel, C Hartl-Nesic, A Kugi
Automation & Control Institute (ACIN), TU Wien, Vienna, Austria

A Nguyen
Department of Computer Science, University of Liverpool, Liverpool, England

A Kugi
Center for Vision, Automation & Control, Austrian Institute of Technology GmbH (AIT), Vienna, Austria

Keywords: redundant manipulator, analytical inverse kinematics, numerical inverse kinematics, machine learning, trajectory optimization
Solving the analytical inverse kinematics (IK) of redundant manipulators in real time is a difficult problem in robotics since its solution for a given target pose is not unique. Moreover, choosing the optimal IK solution with respect to application-specific demands helps to improve the robustness and to increase the success rate when driving the manipulator from its current configuration towards a desired pose. This is necessary, especially in high-dynamic tasks like catching objects in mid-flights. To compute a suitable target configuration in the joint space for a given target pose in the trajectory planning context, various factors such as travel time or manipulability must be considered. However, these factors increase the complexity of the overall problem which impedes real-time implementation. In this paper, a real-time framework to compute the analytical inverse kinematics of a redundant robot is presented. To this end, the analytical IK of the redundant manipulator is parameterized by so-called redundancy parameters, which are combined with a target pose to yield a unique IK solution. Most existing works in the literature either try to approximate the direct mapping from the desired pose of the manipulator to the solution of the IK or cluster the entire workspace to find IK solutions. In contrast, the proposed framework directly learns these redundancy parameters by using a neural network (NN) that provides the optimal IK solution with respect to the manipulability and the closeness to the current robot configuration. Monte Carlo simulations show the effectiveness of the proposed approach which is accurate and real-time capable (≈ 32 µs) on the KUKA LBR iiwa 14 R820.
Introduction
The inverse kinematics (IK) [40] solution is fundamental for many applications in robotics involving motion planning, e.g., point-to-point trajectory optimization [46,21], path-wise trajectory planning [19,12], dexterous grasping [15,33], and pick-and-place scenarios [36,31]. Solving the IK problem for a given target position in the task space yields the robot's configuration in joint space that satisfies the kinematic constraints [27].
There are three types of techniques to solve IK problems, i.e. the algebraic approach, see, e.g., [34,7], the analytical (or so-called geometric) approach, see, e.g., [38,47], and the numerical (or so-called iterative) approach, see, e.g., [35,39]. In the algebraic approach, essentially systems of polynomial equations [30] are solved. Typically, they are classified as difficult algebraic computational problems [7].
In general, this algebraic problem can be solved for a manipulator with 6 degrees of freedom (DoF), see, e.g., [8], but is not generally applicable to kinematically redundant manipulators [37]. On the other hand, numerical methods, typically based on least-squares or pseudoinverse formulations, are widely employed, see, e.g., [48], for various kinematic structures due to their simplicity, low computing time, and their generality. However, these methods may converge to a local minimum that predominantly depends on the initial guess of the solution.
In contrast to the numerical IK, the analytical IK computes the exact solution, which is important for many industrial applications [50,13]. The computing time of analytical IK solutions is much lower and real-time capable compared to the numerical approach. While the analytical IK is only available for specific robot kinematics, most industrial robots are designed such that an analytical solution of the IK exists. Hence, the IK of 6-DoF industrial robots with a spherical wrist, of non-offset 7-DoF S-R-S redundant manipulators [42,22], e.g., the KUKA LBR iiwa 14 R820, but also of offset redundant manipulators, e.g., the Franka Emika Panda [11], OB7 [32], and Neura Robotics LARA [29], can be solved analytically. These manipulators are often referred to as collaborative robots (cobots). Typically, the analytical IK parameterizes the robot redundancy by additional (three) parameters, which are usually named redundancy parameters. Examples are [38] and [17] for non-offset and offset redundant manipulators, respectively. Due to the redundant nature of these parameters, infinitely many sets of parameters exist in general, yielding different IK solutions. However, finding the set of redundancy parameters which represents the best IK solution for a specific task is a non-trivial problem.
In this work, a learning-based framework to compute the optimal set of redundancy parameters for an analytical IK is proposed. This improves the computing performance for solving the IK problem, which is essential for highly dynamic tasks like catching objects in mid-flight or handing over objects between moving agents. These tasks are frequently solved using trajectory optimization where typically dynamic system constraints, state and control input constraints, and a target pose constraint are considered.
The target pose constraint is often formulated in the task space, and for kinematically redundant robots, infinitely many joint configurations satisfy this constraint. Utilizing the analytical IK in the trajectory optimization problem allows finding the best target joint configuration for a specific task. On the other hand, computing time becomes an issue with this approach. Recently, [47] proposed a real-time capable closed-form solution for the KUKA iiwa 14 R820, where the authors minimize the joint velocities and accelerations while avoiding joint boundaries for a trajectory tracking task. This approach considers neither the dynamic constraints nor the system state and input constraints in the trajectory optimization.
In addition, it is crucial that the target configuration is well chosen among the infinitely many solutions of the IK, i.e. close to the initial robot configuration and with high manipulability. For example, in a dynamic handover of an object where the target is moving, see Fig. 1, it is advantageous to choose a target configuration that maximizes manipulability such that the robot end effector can move to another target configuration with high agility. Moreover, choosing a target configuration that is close to the initial configuration of the robot can lead to high performance of the trajectory optimization with a high success rate. Including such criteria in the trajectory optimization is the motivation for the work in this paper.
To this end, a learning-based framework to include additional, application-specific criteria in the analytical IK of redundant robots is proposed. First, a database of 10^8 random pairs of initial configurations and target poses is generated. For each pair, the optimal trajectory is computed using classical approaches, considering application-specific criteria, and is stored in the database together with the set of optimal redundancy parameters. This database serves as the basis to train a neural network (NN), which is used to predict the optimal redundancy parameters for the analytical IK in highly dynamic real-time applications.
Figure 1: An example of a handover task between robot and human, showing the initial configuration, the target pose, and possible target configurations.

The main contribution of this work is a learning-based framework that employs a NN to predict the redundancy parameters of an analytical IK. This yields an optimal target joint configuration for a given target pose by considering application-specific criteria. In this work, the target joint configuration is chosen to be close to the current joint configuration of the robot and to have a high measure of manipulability. The proposed learning framework significantly speeds up the computing time of the trajectory optimization problem. Note that the proposed framework is tailored to the non-offset redundant manipulator KUKA LBR iiwa 14 R820. Nevertheless, it is also applicable to other kinematically redundant robots with an analytical IK, e.g., [11,32,29].

The remainder of this paper is organized as follows. Section 2 presents the mathematical modeling and the analytical inverse kinematics. Additionally, details of the point-to-point (PTP) trajectory optimization problem and the algorithm for determining the optimal target joint configuration w.r.t. application-specific criteria are given. The learning framework for predicting the redundancy parameters of the analytical IK problem, including the database generation and the proposed NN, is presented in Section 3. Simulation results are shown in Section 4. The last section, Section 5, concludes this work.
2. Trajectory Optimization Framework
This section presents the trajectory optimization framework, which is commonly used in robotics [2]. For example, in Figure 1, given a target pose and the robot's initial configuration, an optimal trajectory in joint space is planned for the robot to catch an object.
In this section, the mathematical modeling of the KUKA LBR iiwa 14 R820, including kinematics and system dynamics, is briefly summarized. Then, the analytical inverse kinematics with the redundancy parameters of this redundant manipulator is presented in Section 2.2. Subsequently, a point-to-point trajectory optimization is performed, which is used to plan trajectories to the optimal target configuration, explained in Section 2.4.
2.1. Mathematical modeling
The KUKA LBR iiwa 14 R820 has an S-R-S kinematic structure [42] and is called an anthropomorphic manipulator due to its similarity to a human arm. The coordinate frames O_i and the corresponding seven revolute joints q_i, i = 1, ..., 7, of the robot are shown in Fig. 2. The red, green, and blue arrows represent the x-, y-, and z-axis, respectively. The shoulder intersection position p_s of the joint axes q_1, q_2, and q_3 and the wrist intersection position p_w of the joint axes q_5, q_6, and q_7 correspond to the shoulder and wrist positions of the human arm, respectively. The elbow position p_e is in the center of the joint axis q_4.
The robot is modeled as a rigid-body system with the generalized coordinates q^T = [q_1, q_2, ..., q_7], see Fig. 2, which are the rotation angles q_i around the z-axes (blue arrows) of each coordinate frame O_i, i = 1, ..., 7. To describe the kinematic relationship between the joint angles and the pose of the robot links, comprising position and orientation, the homogeneous transformations T_n^m between two adjacent frames O_n and O_m are constructed, see Tab. 1. Here and in the following, the homogeneous transformation of a pure translation by the distance d along the local axis j ∈ {x, y, z} is denoted by T_{D,j}(d), while an elementary rotation around the local axis j ∈ {x, y, z} by the angle φ is described by T_{R,j}(φ). The end-effector transformation matrix T_0^e is referred to as forward kinematics and is computed in the form
Table 1: Coordinate transformation of the robot

Frame O_n   Frame O_m   Transformation matrix T_n^m
0           1           T_{D,z}(d_1) T_{R,z}(q_1)
1           2           T_{D,z}(d_2) T_{R,z}(-π) T_{R,x}(π/2) T_{R,z}(q_2)
2           3           T_{D,y}(d_3) T_{R,z}(π) T_{R,x}(π/2) T_{R,z}(q_3)
3           4           T_{D,z}(d_4) T_{R,x}(π/2) T_{R,z}(q_4)
4           5           T_{D,y}(d_5) T_{R,z}(π) T_{R,x}(π/2) T_{R,z}(q_5)
5           6           T_{D,y}(d_6) T_{R,x}(π/2) T_{R,z}(q_6)
6           7           T_{D,z}(d_7) T_{R,z}(π) T_{R,x}(π/2) T_{R,z}(q_7)
7           t           T_{D,z}(d_t)

$$T_0^e = \mathrm{FK}(q) = \prod_{i=0}^{7} T_i^{i+1} = \begin{bmatrix} R_0^7(q) & p_t(q) \\ 0 & 1 \end{bmatrix} \quad (1)$$
comprising the 3D tip position $p_t \in \mathbb{R}^3$ and the 3D orientation of the end effector as the rotation matrix $R_0^7 \in \mathbb{R}^{3\times 3}$. The equations of motion are derived using the Lagrange formalism, see, e.g., [40],
$$M(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) = \tau, \quad (2)$$
where M(q) denotes the symmetric and positive definite mass matrix, $C(q,\dot{q})$ is the Coriolis matrix, g(q) is the force vector associated with the potential energy, and τ is the vector of motor torque inputs. The kinematic and dynamic parameters of the KUKA LBR iiwa in (2) are taken from [41].
Since the mass matrix M(q) is invertible, (2) is rewritten in the state-space form

$$\dot{x} = \begin{bmatrix} \dot{q} \\ M^{-1}(q)\left(\tau - C(q,\dot{q})\dot{q} - g(q)\right) \end{bmatrix}, \quad (3)$$
with the system state $x^T = [q^T, \dot{q}^T]$. The kinematic and dynamic limits of the robot [23] are summarized in Tab. 2. All limits are symmetric w.r.t. zero, i.e. $\underline{q}_i = -\bar{q}_i$, $\underline{\dot{q}}_i = -\bar{\dot{q}}_i$, and $\underline{\tau}_i = -\bar{\tau}_i$. To reduce the complexity of the system dynamics (2), the vector of joint accelerations $\ddot{q}$ is utilized as a new control input for planning a trajectory in Section 2.3, i.e., $u = \ddot{q} = M^{-1}(q)(\tau - C(q,\dot{q})\dot{q} - g(q))$. Hence, the system dynamics (3) is rewritten in the compact form

$$\dot{x} = \begin{bmatrix} \dot{q} \\ u \end{bmatrix}. \quad (4)$$
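To make the chain of transformations concrete, the following is a minimal NumPy sketch of the forward kinematics (1), obtained by multiplying the Table 1 transforms; the helper names and the stacked offset vector d = [d_1, ..., d_7, d_t] are illustrative assumptions, and the numerical link lengths from the data sheet are not reproduced here.

```python
import numpy as np

def t_dz(dist):                                   # translation along local z
    T = np.eye(4); T[2, 3] = dist; return T

def t_dy(dist):                                   # translation along local y
    T = np.eye(4); T[1, 3] = dist; return T

def t_rz(a):                                      # rotation about local z
    T = np.eye(4)
    T[:3, :3] = [[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]]
    return T

def t_rx(a):                                      # rotation about local x
    T = np.eye(4)
    T[:3, :3] = [[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]]
    return T

def forward_kinematics(q, d):
    """FK(q) of eq (1): chain the eight Table 1 transforms from frame 0 to t.

    d = [d1, ..., d7, dt] holds the link offsets (data-sheet values omitted)."""
    pi = np.pi
    chain = [
        t_dz(d[0]) @ t_rz(q[0]),
        t_dz(d[1]) @ t_rz(-pi) @ t_rx(pi / 2) @ t_rz(q[1]),
        t_dy(d[2]) @ t_rz(pi) @ t_rx(pi / 2) @ t_rz(q[2]),
        t_dz(d[3]) @ t_rx(pi / 2) @ t_rz(q[3]),
        t_dy(d[4]) @ t_rz(pi) @ t_rx(pi / 2) @ t_rz(q[4]),
        t_dy(d[5]) @ t_rx(pi / 2) @ t_rz(q[5]),
        t_dz(d[6]) @ t_rz(pi) @ t_rx(pi / 2) @ t_rz(q[6]),
        t_dz(d[7]),
    ]
    T = np.eye(4)
    for Ti in chain:
        T = T @ Ti
    return T                                      # [[R_0^7, p_t], [0, 1]]
```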
2.2. Analytical inverse kinematics
Typically in manipulation tasks, the desired end-effector pose for a point-to-point motion is given in the 6D Cartesian space in the form, cf. (1),

$$T_{0,d}^e = \begin{bmatrix} R_{0,d}^e & p_{t,d} \\ 0 & 1 \end{bmatrix}. \quad (5)$$
To compute the robot joint configuration q from a desired end-effector pose T_{0,d}^e, the inverse kinematics (IK) of the robot has to be solved. In the following, an inverse kinematics solution with redundancy parameters tailored to the non-offset 7-DoF robot KUKA LBR iiwa 14 R820 is briefly revisited, see, e.g., [38]. Similar to [22,10], the redundancy parameters of this robot are chosen as the binary vector $j_c^T = [j_s, j_e, j_w]$ and the angle ϕ, which are introduced below.
With a given end-effector pose T_{0,d}^e, the position of the robot wrist p_w in the world frame is fixed and computed as

$$p_w = p_{t,d} - R_{0,d}^7 \begin{bmatrix} 0 & 0 & d_7 + d_t \end{bmatrix}^T, \quad (6)$$
with the distance d_7 + d_t from the wrist point to the end effector of the robot. The vector p_sw from the fixed shoulder position $p_s = [0\ 0\ d_1 + d_2]^T$ to the wrist position p_w is expressed as p_sw = p_w - p_s. Using the law of cosines in the triangle formed by the shoulder, elbow, and wrist, the joint angle q_4 is immediately calculated as

$$q_4 = j_e \arccos\left( \frac{|p_{sw}|^2 - d_{se}^2 - d_{ew}^2}{2\, d_{se}\, d_{ew}} \right), \quad (7)$$
where d_se = d_3 + d_4 is the distance from the shoulder to the elbow and d_ew = d_5 + d_6 is the distance from the elbow to the wrist. In (7), the binary redundancy parameter j_e ∈ {-1, 1} distinguishes between the elbow-up and the elbow-down configuration. The constellation of the shoulder, elbow, and wrist positions forms two triangles whose sides have constant length for a given end-effector pose. Furthermore, these three points and the two triangles lie in a plane, denoted as arm plane, which can be rotated around the vector p_sw, resulting in two cones, see Fig. 3. Thereby, the elbow position p_e always stays on the perimeter of the cone bases. As a result, the robot can perform self-motions by moving the elbow on this perimeter. To this end, an arm angle ϕ is introduced as a redundancy parameter, referring to the angle between a reference arm plane with the special configuration q_{3,n} = 0 (red lines in Fig. 3) and the actual arm plane (blue lines in Fig. 3). Here and in the following, the index n refers to the reference arm configuration. The actual elbow orientation R_0^4 is equivalent to rotating the reference elbow orientation R_{0,n}^4 about the shoulder-wrist vector p_sw by ϕ, i.e.

$$R_0^4 = R_\phi R_{0,n}^4, \quad (8)$$
with Rodrigues' formula [27]

$$R_\phi = I_{3\times3} + [p_{sw}]_\times \sin\phi + [p_{sw}]_\times^2 (1 - \cos\phi), \quad (9)$$
where I_{3×3} is the identity matrix and [a]_× denotes the skew-symmetric matrix of the vector a. Since q_4 remains unchanged between the reference arm configuration and the actual arm configuration for a given end-effector pose, (8) leads to

$$R_3^4 = R_{3,n}^4 \quad (10a)$$
$$R_0^3 = R_\phi R_{0,n}^3 \quad (10b)$$
$$R_4^7 = (R_0^3 R_3^4)^T R_{0,d}^7 = (R_\phi R_{0,n}^3 R_{3,n}^4)^T R_{0,d}^7 \quad (10c)$$
Note that R_{0,n}^3 depends only on the joint angles q_{1,n} and q_{2,n}, since q_{3,n} = 0 in the reference configuration. The joint angles q_{1,n} and q_{2,n}, shown in Fig. 4, are simply found as

$$q_{1,n} = \operatorname{arctan2}(p_{sw,x}, p_{sw,y}) \quad (11a)$$
$$q_{2,n} = \operatorname{arctan2}\left(\sqrt{(p_{sw,x})^2 + (p_{sw,y})^2},\ p_{sw,z}\right) + \gamma, \quad (11b)$$

with $p_{sw}^T = [p_{sw,x}, p_{sw,y}, p_{sw,z}]$, and

$$\gamma = j_e \arccos\left( \frac{d_{se}^2 + |p_{sw}|^2 - d_{ew}^2}{2\, d_{se}\, |p_{sw}|} \right).$$
Note that R_0^3 and R_4^7 can be directly computed using (10b), (10c), and Tab. 1. Analytically, the rotation matrices R_0^3 and R_4^7 result from Tab. 1 in the form

$$R_0^3 = \begin{bmatrix} * & * & \cos q_1 \sin q_2 \\ * & * & \sin q_1 \sin q_2 \\ -\sin q_2 \cos q_3 & \sin q_2 \sin q_3 & \cos q_2 \end{bmatrix} \quad (12a)$$

$$R_4^7 = \begin{bmatrix} * & * & \cos q_5 \sin q_6 \\ -\sin q_6 \cos q_7 & \sin q_6 \sin q_7 & \cos q_6 \\ * & * & -\sin q_5 \sin q_6 \end{bmatrix}, \quad (12b)$$
where the elements written as * are omitted for brevity. From (12), the joint angles of the redundant manipulator are computed in a straightforward manner

$$q_1 = \operatorname{arctan2}(R_0^3[2,3],\ R_0^3[1,3])$$
$$q_2 = j_s \arccos(R_0^3[3,3])$$
$$q_3 = \operatorname{arctan2}(R_0^3[3,2],\ -R_0^3[3,1])$$
$$q_5 = \operatorname{arctan2}(-R_4^7[3,3],\ R_4^7[1,3])$$
$$q_6 = j_w \arccos(R_4^7[2,3])$$
$$q_7 = \operatorname{arctan2}(R_4^7[2,2],\ -R_4^7[2,1]), \quad (13)$$
where j s , j w ∈ {−1, 1} are the remaining binary redundancy parameters and R[i, j] is the matrix element of the i-th row and j-th column of R.
In summary, the parameterization of the inverse kinematics solution uses the three binary variables $j_c^T = [j_s, j_e, j_w]$ and the arm angle ϕ in (7) and (13) as redundancy parameters to determine a unique joint configuration q for a desired end-effector pose T_{0,d}^e. In Fig. 3, the blue lines illustrate a possible robot configuration with j_c = [1, -1, 1]^T that is rotated by ϕ = 95° from the reference arm plane, drawn with red lines. To this end, by combining (7) and (13), the unique analytical inverse kinematics of the KUKA LBR iiwa 14 reduces to the compact form

$$q = \mathrm{AIK}(T_{0,d}^e, j_c, \phi), \quad (14)$$
with the redundancy parameters $j_c \in \{-1, 1\}^3$ and $\phi \in [0, 2\pi]$.
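The geometric steps (6)-(13) can be condensed into a short sketch of the map (14); this is an illustrative NumPy implementation under the stated conventions, the shoulder-wrist axis is normalized before applying (9), the argument order of arctan2 follows (11a) as printed, and joint-limit checks and error handling are omitted.

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def aik(R_d, p_d, jc, phi, d):
    """Analytical IK (14) for the non-offset S-R-S arm, following (6)-(13).

    R_d, p_d: desired end-effector orientation and position from (5);
    jc = (js, je, jw); d = [d1, ..., d7, dt] link offsets (illustrative)."""
    js, je, jw = jc
    p_s = np.array([0.0, 0.0, d[0] + d[1]])                 # shoulder position
    p_w = p_d - R_d @ np.array([0.0, 0.0, d[6] + d[7]])     # wrist, eq (6)
    p_sw = p_w - p_s
    d_se, d_ew = d[2] + d[3], d[4] + d[5]
    # elbow joint from the law of cosines, eq (7)
    q4 = je * np.arccos((p_sw @ p_sw - d_se**2 - d_ew**2) / (2.0 * d_se * d_ew))
    # reference arm with q3 = 0, eqs (11a), (11b)
    gamma = je * np.arccos((d_se**2 + p_sw @ p_sw - d_ew**2)
                           / (2.0 * d_se * np.linalg.norm(p_sw)))
    q1n = np.arctan2(p_sw[0], p_sw[1])      # argument order as printed in (11a)
    q2n = np.arctan2(np.hypot(p_sw[0], p_sw[1]), p_sw[2]) + gamma
    # rotation parts of the Table 1 transforms for the reference arm
    R30n = (rot_z(q1n) @ rot_z(-np.pi) @ rot_x(np.pi / 2) @ rot_z(q2n)
            @ rot_z(np.pi) @ rot_x(np.pi / 2))
    R43 = rot_x(np.pi / 2) @ rot_z(q4)
    # rotate the reference arm plane about the (normalized) shoulder-wrist axis
    u = skew(p_sw / np.linalg.norm(p_sw))
    R_phi = np.eye(3) + u * np.sin(phi) + (u @ u) * (1.0 - np.cos(phi))  # eq (9)
    R30 = R_phi @ R30n                                      # eq (10b)
    R74 = (R30 @ R43).T @ R_d                               # eq (10c)
    # remaining joints from the matrix entries, eq (13) ([i, j] is 1-based there)
    return np.array([
        np.arctan2(R30[1, 2], R30[0, 2]),
        js * np.arccos(R30[2, 2]),
        np.arctan2(R30[2, 1], -R30[2, 0]),
        q4,
        np.arctan2(-R74[2, 2], R74[0, 2]),
        jw * np.arccos(R74[1, 2]),
        np.arctan2(R74[1, 1], -R74[1, 0]),
    ])
```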
2.3. Point-to-point trajectory optimization
In the point-to-point (PTP) trajectory planning, a desired trajectory $\xi^*(t) = [x^*(t), u^*(t)]^T$, $t \in [t_0, t_F]$, for the robotic system (4) is planned from an initial configuration $\xi^*(t_0) = [x_{t_0}, u_{t_0}]^T$ to a target configuration $\xi^*(t_F) = [x_{t_F}, u_{t_F}]^T$. The target configuration has to satisfy the forward kinematics relation for the desired end-effector pose T_{0,d}^e, see (1),

$$T_{0,d}^e - \mathrm{FK}(q_{t_F}) = 0. \quad (15)$$

Without loss of generality, the initial time t_0 is chosen as t_0 = 0. Furthermore, the target configuration is assumed to be a stationary point $x_{t_F}^T = [q_{t_F}^T, 0^T]$.
The PTP trajectory planning is formulated as an optimization problem using the direct collocation method, see, e.g., [3], by discretizing the trajectory ξ(t), t ∈ [0, t_F], with N + 1 grid points and solving the resulting static optimization problem

$$\min_{\xi}\ J(\xi) = t_F + \frac{1}{2} h \sum_{k=0}^{N} u_k^T R u_k \quad (16a)$$
$$\text{s.t.} \quad x_{k+1} - x_k = \frac{1}{2} h \begin{bmatrix} \dot{q}_{k+1} + \dot{q}_k \\ u_{k+1} + u_k \end{bmatrix} \quad (16b)$$
$$x_0 = x_{t_0}, \quad x_N = x_{t_F} \quad (16c)$$
$$\underline{x} \leq x_k \leq \bar{x} \quad (16d)$$
$$\underline{\tau} \leq M(q_k)u_k + C(q_k, \dot{q}_k)\dot{q}_k + g(q_k) \leq \bar{\tau} \quad (16e)$$
$$k = 0, \ldots, N$$

for the optimal trajectory

$$(\xi^*)^T = [t_F^*, (x_0^*)^T, \ldots, (x_N^*)^T, (u_0^*)^T, \ldots, (u_N^*)^T], \quad (17)$$

with the time step h = t_F/N. Note that the final time t_F^* in (17) denotes the optimal duration of the trajectory from the initial state x_{t_0} to the target state x_{t_F}. In addition, R is a positive definite weighting matrix for the input u, which also weighs the tradeoff between the cost of the duration and the smoothness of the trajectory. The system dynamics (4) is approximated by the trapezoidal rule in (16b). Moreover, $\underline{x}$ and $\bar{x}$ in (16d) denote the symmetric lower and upper bounds of the state, respectively, and (16e) considers the lower and upper torque limits $\underline{\tau}$ and $\bar{\tau}$.
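As a small illustration of the trapezoidal collocation constraint (16b) for the double-integrator dynamics (4), the following sketch evaluates the defect vectors over the whole grid; array shapes and names are assumptions, and the cost, bounds, and solver interface of (16) are omitted.

```python
import numpy as np

def collocation_defects(h, X, U):
    """Trapezoidal defect vectors of (16b) for the dynamics (4).

    X has shape (N+1, 14) with rows [q, q_dot]; U has shape (N+1, 7) and
    holds the joint accelerations used as control input u. A feasible
    discretized trajectory makes every returned row (close to) zero."""
    f = np.hstack([X[:, 7:], U])                  # x_dot = [q_dot, u], eq (4)
    return X[1:] - X[:-1] - 0.5 * h * (f[1:] + f[:-1])
```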
It should be noted that (16e) is a computationally expensive inequality constraint, mainly because of the large expressions in the Coriolis matrix $C(q, \dot{q})$. Indeed, the Coriolis matrix is often neglected in industrial applications [4,44]. To still consider the influence of the Coriolis matrix $C(q, \dot{q})$ on the torque limits, the range of values of the term $C(q, \dot{q})\dot{q}$ is investigated for the KUKA LBR iiwa 14 R820 using a Monte Carlo simulation. In this simulation, 10^8 uniformly distributed random state vectors x are selected from the admissible operating range, see Tab. 2. This simulation shows that the values of $C(q, \dot{q})\dot{q}$ are bounded by $\pm\bar{c}$ with $\bar{c}^T = [\ldots]$ N m, which is much smaller than the torque limits of the motor. Although the influence of the Coriolis matrix on the dynamics of the overall system is not significant, it is still advantageous to consider these physical limits in the optimization problem (16). To this end, the costly inequality condition (16e) is replaced by
$$\underline{\tau} - \underline{c} \leq M(q_k)u_k + g(q_k) \leq \bar{\tau} - \bar{c}. \quad (18)$$
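The Monte Carlo investigation behind the tightened constraint (18) amounts to a loop of the following form; sample_state() and coriolis() are assumed interfaces to the robot model, and the sample count is a placeholder.

```python
import numpy as np

def coriolis_bound(sample_state, coriolis, n=10**6, seed=0):
    """Monte Carlo estimate of the per-joint bound on |C(q, q_dot) q_dot|.

    sample_state(rng) draws (q, q_dot) uniformly from the admissible range
    and coriolis(q, q_dot) returns the 7x7 Coriolis matrix; both are assumed
    to be provided by the robot model."""
    rng = np.random.default_rng(seed)
    c_bar = np.zeros(7)
    for _ in range(n):
        q, q_dot = sample_state(rng)
        c_bar = np.maximum(c_bar, np.abs(coriolis(q, q_dot) @ q_dot))
    return c_bar
```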
The optimal trajectory is then computed by solving the static optimization problem (16a)-(16d) and (18).

2.4. Optimal target configuration q_{t_F}

In this section, the optimal choice of the target configuration q_{t_F} is discussed. The inverse kinematics of a redundant robot does not yield a unique joint configuration q_{t_F}, as presented in Section 2.2. Moreover, choosing an unsuitable target configuration q_{t_F} may cause the trajectory optimization (16) to fail or to deliver poor results.
For redundant robots, there is an infinite number of joint configuration solutions q t F for a desired end-effector pose T e 0,d . Therefore, two criteria for selecting the best inverse kinematics solution, i.e. the manipulability and closeness, are introduced in the following and an optimization problem is formulated.
First, the manipulability m(q) [49] is the most popular index used to measure the dexterity of a robot for a specific joint configuration q. It is defined as
$$m(q) = \sqrt{\det\left(J(q)\,J^T(q)\right)}, \quad (19)$$
where the geometric manipulator Jacobian J(q) takes the form

$$J(q) = \begin{bmatrix} J_v(q) \\ J_\omega(q) \end{bmatrix} = \begin{bmatrix} \dfrac{\partial p_t(q)}{\partial q} \\[1ex] \dfrac{\partial \omega_t(q)}{\partial q} \end{bmatrix}. \quad (20)$$
In (20), ω_t is the angular velocity of the end effector expressed in the frame O_0, which is computed by

$$[\omega_t]_\times = \dot{R}_0^7(q)\left(R_0^7(q)\right)^T. \quad (21)$$
To reduce the computational burden of (19) due to the computation of the determinant, an analytical expression of the manipulability is derived, which is given in the appendix. Second, to consider the closeness between the inverse kinematics solution q and the initial joint configuration q_0 of the robot, the $L_\infty$-norm $\|\cdot\|_\infty$ is employed to find the largest deviation between these two joint-space configurations. Here, the closeness is given by

$$c(q) = \|q_0 - q\|_\infty, \quad (22)$$
where q_0 is the initial joint configuration of the initial state $x_0^T = [q_0^T, 0^T]$. Next, the two criteria (19) and (22) are considered in an optimization problem to choose the best target configuration q_{t_F} for a given target pose T_{0,d}^e. To solve this problem, according to (14), the redundancy parameters j_c and ϕ of the inverse kinematics have to be determined. Since there are three binary redundancy parameters in j_c, 2^3 = 8 different values are contained in the set $\mathcal{X}_{j_c} = \{j_{c,i} \mid i = 1, ..., 8\}$. Additionally, the arm angle ϕ ∈ [0, 2π] is equidistantly discretized with the grid points
$$\mathcal{X}_\phi = \left\{ j\,\frac{2\pi}{n_\phi} \;\middle|\; j = 1, \ldots, n_\phi \right\}.$$
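The two selection criteria and the discretized search sets can be written compactly as follows; the grid size and names are illustrative, and the 6x7 geometric Jacobian is assumed to come from the kinematics.

```python
import itertools
import numpy as np

N_PHI = 100
X_JC = list(itertools.product((-1, 1), repeat=3))             # 8 sign triples
X_PHI = [j * 2.0 * np.pi / N_PHI for j in range(1, N_PHI + 1)]

def manipulability(J):
    """Yoshikawa measure (19) from the 6x7 geometric Jacobian J(q)."""
    return np.sqrt(np.linalg.det(J @ J.T))

def closeness(q0, q):
    """Largest joint deviation from the initial configuration, eq (22)."""
    return np.max(np.abs(q0 - q))
```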
The following optimization problem is solved to find the best target configuration $q_{t_F} = q^*_{i,j}$ as well as its corresponding redundancy parameters $j_c^*$ and the virtual angle $\phi^*$ for the desired pose T_{0,d}^e:

$$\operatorname*{arg\,min}_{q^*_{i,j},\, j_c^*,\, \phi^*}\ J_{IK}(q_0, q_{i,j}), \quad i \in \{1, ..., 8\},\ j \in \{1, ..., n_\phi\} \quad (23a)$$
$$\text{s.t.} \quad J_{IK}(q_0, q_{i,j}) = \omega_m m(q_{i,j}) + \omega_c c(q_0, q_{i,j}) \quad (23b)$$
$$q_{i,j} = \mathrm{AIK}(T_{0,d}^e, j_{c,i}, \phi_j) \quad (23c)$$
$$\underline{q} \leq q_{i,j} \leq \bar{q}, \quad (23d)$$
with the user-defined weighting parameters ω m > 0 and ω c > 0. To compute an optimal trajectory ξ * from the current robot configuration, represented by q 0 , to the given desired pose T e 0,d , the optimization problem (23) is solved first to obtain the optimal solution q * i,j = q t F . Then, this optimal robot target configuration q t F is used in the PTP trajectory optimization (16). The block diagram of this process is illustrated in Fig. 5.
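A sketch of the exhaustive search over $\mathcal{X}_{j_c} \times \mathcal{X}_\phi$ that solves (23) is given below; it reuses the aik(), manipulability(), closeness(), X_JC, and X_PHI sketches above, assumes a jacobian(q) helper, and takes the weighted cost (23b) as printed, with placeholder weights.

```python
import numpy as np

def best_target_configuration(R_d, p_d, q0, d, q_min, q_max,
                              w_m=1.0, w_c=1.0):
    """Exhaustive search of problem (23) over X_JC x X_PHI."""
    best = (None, None, None, np.inf)
    for jc in X_JC:
        for phi in X_PHI:
            q = aik(R_d, p_d, jc, phi, d)                    # eq (23c)
            if np.any(np.isnan(q)):
                continue                                      # pose not reachable
            if np.any(q < q_min) or np.any(q > q_max):
                continue                                      # joint limits (23d)
            cost = w_m * manipulability(jacobian(q)) + w_c * closeness(q0, q)
            if cost < best[3]:
                best = (q, jc, phi, cost)
    return best[:3]                                           # q*, jc*, phi*
```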
3. Framework for learning redundancy parameters
The cost function (23b) and the inverse kinematics (23c) are nonlinear and discontinuous functions with many local minima, which is illustrated on the right-hand side of Fig. 6 for an example joint configuration q. Therefore, to find the global optimum, the optimization problem (23) has to be solved by exhaustive search, which is a time-consuming process since (23c) has to be evaluated 8n ϕ times, see Fig. 5. To significantly reduce the computational effort for this step, a neural network (NN) is presented in this section to quickly determine the joint configuration j * c and narrow down the search space for the arm angle ϕ * for a desired end-effector pose T e 0,d and the given initial configuration q 0 .
First, the generation of the database to train the NN for learning the redundancy parameters j c and ϕ is introduced. Then, the network architecture of this NN is presented in the next step.
3.1. Database generation
For the database generation, N_p pairs of robot initial joint configurations q_{0,k} and corresponding feasible desired poses $T_{0,d}^{e,k}$ are randomly selected from a uniform distribution in the admissible ranges and are stored in the set $\mathcal{X} = \{\zeta_k\} = \{(q_{0,k}, T_{0,d}^{e,k}) \mid k = 1, ..., N_p\}$. For each pair q_{0,k} and $T_{0,d}^{e,k}$, the optimization problem (23) is solved by an exhaustive search to find the globally optimal redundancy parameters $j_c^*$ and $\phi^*$ as well as the target configuration $q^*_{t_F}$. The redundancy parameters are stored in the set $\mathcal{Y} = \{\eta_k\} = \{(j_{c,k}, \phi_k) \mid k = 1, ..., N_p\}$. The database $\mathcal{D} = (\mathcal{X}, \mathcal{Y})$ comprises both sets $\mathcal{X}$ and $\mathcal{Y}$. Elements of the set $\mathcal{X}$ are the inputs to the NN and elements of the set $\mathcal{Y}$ are the corresponding labeled outputs, see Fig. 7.
The input data in the set $\mathcal{X}$ contain redundant entries due to the constant bottom row of the desired pose $T_{0,d}^{e,k}$, see (5). Therefore, only the three basis vectors $e_{x,k}$, $e_{y,k}$, and $e_{z,k}$ of $R_{0,d}^{e,k} = [e_{x,k}, e_{y,k}, e_{z,k}]$ and the position of the end effector $p_{t,k}$ are considered in the set $\mathcal{X}$. Thus, the input set is re-arranged in the form $\mathcal{X} = \{\zeta_k \mid k = 1, ..., N_p\}$, with $\zeta_k^T = [(q_{0,k})^T, (e_{x,k})^T, (e_{y,k})^T, (e_{z,k})^T, (p_{t,k})^T] \in \mathbb{R}^{19}$.
Since (23b)-(23c) are discontinuous nonlinear functions, see Fig. 6, a complex NN would be required to approximate them directly. However, the training and prediction time of such a neural network is very long, making a real-time implementation impossible. Thus, instead of directly predicting the virtual angle ϕ, only the range of this angle, denoted by the bin index $b_\phi \in \{1, ..., n_b\}$ with the total number of bins $n_b$, is predicted. This way, the value of the bin index $b_\phi$ indicates that the virtual angle ϕ lies in the range $\left((b_\phi - 1)\frac{2\pi}{n_b},\ b_\phi \frac{2\pi}{n_b}\right]$, $b_\phi = 1, ..., n_b$. This reduces the complexity and allows the proposed NN to be used in a real-time application. Consequently, $\phi_k$ is replaced by its bin index $b_{\phi,k} \in \{1, ..., n_b\}$ in the set $\mathcal{Y}$, resulting in $\mathcal{Y} = \{\eta_k\} = \{(j_{c,k}, b_{\phi,k})\}$.
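The following sketch assembles one database entry, i.e. the 19-dimensional input $\zeta_k$ and the labels $(j_{c,k}, b_{\phi,k})$; it reuses the exhaustive-search sketch above, the joint limits are taken from Tab. 2, the ceiling-based bin mapping reflects the stated bin ranges, and the sampling of q_0 and the feasible pose is assumed to happen elsewhere.

```python
import numpy as np

Q_LIM = np.deg2rad([170, 120, 170, 120, 170, 120, 175])   # joint limits, Tab. 2

def make_sample(q0, R_d, p_d, d, n_b=8):
    """One database entry (zeta_k, (jc_k, b_phi_k)) for training the NN."""
    zeta = np.concatenate([q0, R_d[:, 0], R_d[:, 1], R_d[:, 2], p_d])  # R^19
    _, jc, phi = best_target_configuration(R_d, p_d, q0, d, -Q_LIM, Q_LIM)
    b_phi = int(np.ceil(phi * n_b / (2.0 * np.pi)))        # bin of the arm angle
    return zeta, (jc, b_phi)
```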
3.2. Network Architecture
The architecture of the proposed NN is shown in Fig. 8. This NN is designed for the two sub-problems, i.e., to learn the joint configuration $j_c$ and the bin index $b_\phi$ of the arm angle ϕ. Note that the input of the proposed NN is $\zeta \in \mathcal{X}$ and the output is the predicted value $\hat{\eta}^T = [\hat{j}_c^T, \hat{b}_\phi] \in \mathcal{Y}$.
First, two fully connected layers of size 32 with a ReLU activation function [24] are utilized, as shown in Fig. 8. Since there are 8 possibilities for choosing $j_c$, a fully connected layer of size 8 with a softmax activation function [5] is employed to output $j_c$. The cross-entropy function is used to compute the loss between the prediction $\hat{j}_{c,k}$ of the NN and the target value $j_{c,k}$ in the form
$$L_{j_c} = \sum_{k=1}^{M} -\left( j_{c,k}^T \log(\hat{j}_{c,k}) + (1 - j_{c,k})^T \log(1 - \hat{j}_{c,k}) \right), \quad (24)$$
where M is the size of the training dataset.
Second, the predicted $\hat{j}_c$ is concatenated with the input ζ again as a new input for the second subproblem. Similar to the first subproblem, two fully connected layers of size 32 with a ReLU activation function are used. Subsequently, a fully connected layer of size 8 and the softmax activation function are implemented to predict the bin index $b_\phi$ of the arm angle ϕ. Again, the cross-entropy function is used to compute the loss between the predicted value of the bin index $\hat{b}_\phi$ and the target value $b_\phi$
$$L_{b_\phi} = \sum_{k=1}^{M} -\left( b_{\phi,k} \log(\hat{b}_{\phi,k}) + (1 - b_{\phi,k}) \log(1 - \hat{b}_{\phi,k}) \right). \quad (25)$$
The proposed NN is trained using the Adam optimizer [20] with a learning rate of α = 10^{-3}. Furthermore, $L_2$ regularization [20] with λ = 10^{-6} is added to both loss functions $L_{j_c}$ and $L_{b_\phi}$. This helps to avoid overfitting [28].
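A minimal Keras sketch of the two-stage architecture of Fig. 8 with the stated hyperparameters could look as follows; the layer names are illustrative, one-hot encoded labels for both outputs are assumed, and the mini-batch size of 2000 reported later would be passed to fit().

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_model(n_b=8, l2=1e-6):
    """Two-stage NN of Fig. 8: jc classification, then b_phi classification."""
    reg = regularizers.l2(l2)
    zeta = layers.Input(shape=(19,), name="zeta")
    # first subproblem: joint configuration jc (8 classes, softmax output)
    h = layers.Dense(32, activation="relu", kernel_regularizer=reg)(zeta)
    h = layers.Dense(32, activation="relu", kernel_regularizer=reg)(h)
    jc = layers.Dense(8, activation="softmax", name="jc")(h)
    # second subproblem: bin index b_phi, conditioned on the predicted jc
    g = layers.Concatenate()([zeta, jc])
    g = layers.Dense(32, activation="relu", kernel_regularizer=reg)(g)
    g = layers.Dense(32, activation="relu", kernel_regularizer=reg)(g)
    b_phi = layers.Dense(n_b, activation="softmax", name="b_phi")(g)
    model = tf.keras.Model(zeta, [jc, b_phi])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                  loss="categorical_crossentropy")
    return model
```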
4. Results
The simulation results presented in this section are obtained on a computer with a 3.4 GHz Intel Core i7-10700K and 32 GB RAM. The generated database with N_p = 10^8 pairs described in Section 3.1 is randomly shuffled and divided into 3 subsets, i.e., training dataset, validation dataset, and test dataset, which are partitioned as 80%, 10%, and 10% of the generated database, respectively. To speed up the database generation, C++ code is generated for (23) using MATLAB Coder in MATLAB R2021b. Additionally, the analytical expression of the manipulability in the appendix, see (29), is utilized. The remaining parameters are chosen as n_ϕ = 100 and n_b = 8. For the database generation, the average computing time of (23) for a given pose and initial joint configuration is approximately 1.5 ms. Since n_ϕ in (23) is set to 100, the average computing time of the analytical inverse kinematics expression in (23c) is approximately 1.8 µs.
4.1. Statistical information on training the proposed NN
The proposed NN is trained using the open-source software package Keras [14]. To reduce the training time, the CUDA cores of an Nvidia GeForce RTX 3070 are employed. During training, the mini-batch size is set to 2000 and the training data is reshuffled in each epoch. Fig. 9 shows that the learning accuracy for the joint configuration j_c on the training dataset and the validation dataset reaches 96.62% and 95.76%, respectively, after 500 epochs. Also, the corresponding values of the loss function L_{j_c} decrease to 0.1034 and 0.1264, respectively. To further validate the training result, the accuracy on the test dataset with the trained parameters of the proposed NN is approximately 96.49%.

Fig. 10 shows the accuracy of the training dataset and the validation dataset with respect to the bin index b_ϕ of the arm angle ϕ. Note that the resulting accuracy is approximately 85.57% for the training dataset and 85.12% for the validation dataset. The values of the loss function L_{b_ϕ} are approximately 0.32 and 0.38 for the two datasets. To verify the trained parameters of the proposed NN, a consistent accuracy of 84.78% is reported for the test dataset.

For further validation, the proposed NN is compared to four well-known algorithms, i.e., the naive Bayes classifier [16], the discriminant analysis classifier [43], the binary decision tree classifier [26], and the k-nearest neighbor classifier [18]. Similar to the proposed NN, each classifier takes the input ζ ∈ X and outputs the prediction η ∈ Y. The statistical performance of the four algorithms and the proposed NN is shown in Tab. 3. Among these four algorithms, the binary decision tree classifier achieves the highest prediction accuracy for j_c and b_ϕ, i.e., 77.8% and 65.5%, respectively. Moreover, with an average execution time of approximately 0.35 µs, it is the fastest algorithm. However, the prediction accuracy of the proposed NN is still significantly higher than that of the binary decision tree classifier. Another aspect is the memory consumption of the proposed NN, which, at 0.17 MB, is much smaller than the more than 70 MB required by each of the four other algorithms. This is reasonable since the memory consumption of the proposed NN is mainly due to storing the network parameters. The average NN execution time for the prediction of j_c and b_ϕ is about 7.35 µs. Also, the average computing time of the analytical inverse kinematics with given j_c and b_ϕ is about 2 µs. Thus, the proposed NN makes it possible to compute a good IK solution with respect to the two criteria, i.e., manipulability (19) and closeness (22), in real time.
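For reference, the baseline classifiers of Tab. 3 can be reproduced in the spirit of the following scikit-learn sketch for the decision tree; default hyperparameters are an assumption, since the paper does not report them.

```python
from sklearn.tree import DecisionTreeClassifier

def tree_baseline(X_train, y_train, X_test, y_test):
    """Decision-tree baseline for the jc classification in Tab. 3.

    Rows of X are the 19-dim inputs zeta; y holds the class index 0..7 of
    the joint configuration jc. Default hyperparameters are an assumption."""
    clf = DecisionTreeClassifier().fit(X_train, y_train)
    return clf.score(X_test, y_test)
```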
4.2. Performance of the proposed NN in the framework of trajectory optimization
To verify the efficiency of the proposed NN, the example task of planning a PTP trajectory from an initial configuration q_0 to a desired target pose T_{0,d}^e, given in (26), is considered. First, the comparison between the well-known damped least-squares inverse kinematics solution [27,6] and the proposed algorithm is depicted in Fig. 6. On the right-hand side of Fig. 6, a color map of (23b) is depicted, where the x-axis comprises the 8 possible joint configurations j_c ∈ X_{j_c} and the y-axis contains the n_ϕ = 100 arm angles ϕ ∈ X_ϕ. Using the network architecture of Fig. 8 with n_b = 8, the proposed NN takes about 7.35 µs to predict the joint configuration j_c = [-1, -1, -1]^T and the bin b_ϕ = 3, i.e. ϕ ∈ [π/2, 3π/4]. To find the optimum value of the arm angle ϕ inside the predicted bin, (14) and (23a) are evaluated on an equidistant grid for ϕ ∈ [π/2, 3π/4] with n_ϕ/n_b grid points. This way, the effort to solve the optimization problem (23) reduces from 8n_ϕ = 800 to n_ϕ/n_b ≈ 13 evaluations of (14) and (23b). Since the analytical manipulability expression (29) is used in (23b), the computing time of (23b) is approximately 0.15 µs, which is much smaller than that of (14). Thus, the total execution time for computing the optimal target configuration q_{t_F} is approximately 32 µs, including the computing time of (14) of 2 µs. On the other hand, the damped least-squares method requires 17 iterations in this example to find an inverse kinematics solution with a tolerance of 10^{-8}. The computing time of the numerical method is approximately 3 ms.
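The prediction-plus-refinement step described above can be sketched as follows; it reuses the aik() and X_JC sketches from earlier, and cost_23b() is an assumed wrapper around the cost (23b).

```python
import numpy as np

def nn_target_configuration(model, q0, R_d, p_d, d, n_phi=100, n_b=8):
    """Predict (jc, b_phi) with the NN, then refine phi inside the bin."""
    zeta = np.concatenate([q0, R_d[:, 0], R_d[:, 1], R_d[:, 2], p_d])[None, :]
    p_jc, p_b = model.predict(zeta, verbose=0)
    jc = X_JC[int(np.argmax(p_jc))]                  # one of the 8 sign triples
    b = int(np.argmax(p_b)) + 1                      # bin index in {1, ..., n_b}
    lo = (b - 1) * 2.0 * np.pi / n_b
    best_q, best_cost = None, np.inf
    for phi in np.linspace(lo, lo + 2.0 * np.pi / n_b, n_phi // n_b):
        q = aik(R_d, p_d, jc, phi, d)                # eq (14)
        if np.any(np.isnan(q)):
            continue
        cost = cost_23b(q0, q)                       # assumed wrapper of (23b)
        if cost < best_cost:
            best_q, best_cost = q, cost
    return best_q
```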
On the left-hand side of Fig. 6, the computed target configurations for the given desired target pose T_{0,d}^e are q_{t_F,A}^T = … (27) for the proposed algorithm (red color), and q_{t_F,N}^T = [-0.7, -0.45, 1.1, 0.78, 0.43, 0.81, -0.82] rad, j_{c,N}^T = [-1, 1, 1], ϕ_N = 3.8 rad (28) for the damped least-squares method (green color). It is obvious that in this example the joint configuration solutions of the two methods, j_{c,A} and j_{c,N}, differ from the initial joint configuration j_{c,0}. The proposed solution has a slightly higher manipulability measure (29) of 0.061 compared to the manipulability measure of 0.045 of the numerical solution. The closeness value (22) of the proposed solution is 1.49, which is significantly smaller than the closeness value of 2.23 of the numerical solution.

To further demonstrate the effectiveness of the proposed IK approach, the two target configurations (27) and (28) are used in the trajectory optimization framework detailed in Section 2. The nonlinear optimization problem (16) is solved using the interior-point solver IPOPT [45] together with the linear solver MA27 [9]. To increase the computational speed, the gradient and the numerical Hessian are computed using the BFGS method [25] and provided to the IPOPT solver. The trajectory in (16) is discretized with N = 50 collocation points, giving a total of 1051 optimization variables. For this comparison, the same initial configuration q_0 and the two different target configurations q_{t_F,A} according to (27) and q_{t_F,N} according to (28) for the pose T_{0,d}^e from (26) are used. While the computing time of the optimization (16) is almost the same for both target configurations (55 ms), the time for moving to the target configuration of the proposed algorithm q_{t_F,A} is 3.83 s compared to 4.03 s for the numerical solution q_{t_F,N}. Moreover, the cost function (16a) evaluates to 4.7 with q_{t_F,A} and to 5.03 with q_{t_F,N}.

The optimal trajectories ξ_A and ξ_N corresponding to the target configurations (27) and (28), respectively, are validated on the experimental setup depicted in Fig. 11. This experimental setup comprises two main components, i.e. the robot KUKA LBR iiwa 14 R820 and a PC. The PC communicates with the robot via a network interface card (NIC) using the EtherCAT protocol. The computed-torque controller is implemented as a MATLAB/Simulink module, which is executed via the real-time automation software Beckhoff TwinCAT. A sampling time of T_s = 125 µs is used for the robot sensors and actuators. The scaled joint positions, velocities, and torques of all robot axes, normalized to their respective limits, for the two optimal trajectories ξ_A and ξ_N, and the corresponding measurements from the experiments, are shown in Fig. 12. Note that in this figure the trajectories do not exceed the value ±1, which means that all state and input constraints in (16d) and (18) are respected. The travel time of the trajectory from the solution of the proposed NN (≈ 3.9 s) is slightly shorter than that from the numerical IK (≈ 4.1 s). Since the proposed NN is designed to select a configuration that is close to the robot's initial configuration via (22), the motion ranges of joints 6 and 7 of ξ_A are much smaller than the corresponding ranges of ξ_N. Consequently, this can lead to a more time-efficient optimal trajectory. A video of several experiments for comparison can be found in the supplementary material at https://www.acin.tuwien.ac.at/en/360e/.
Finally, a Monte Carlo simulation is performed to validate the efficiency of the proposed NN in the PTP trajectory optimization. To this end, 10^5 pairs of initial robot configurations q_0 and target poses T_{0,d}^e are randomly selected from a uniform distribution in the admissible ranges. Then, the proposed NN and the numerical IK are used to determine the target joint configuration, and for each target configuration an optimal trajectory is calculated using (16). The statistical results are summarized in Tab. 4. While the computing times of (16) utilizing the target configuration of the proposed algorithm q_{t_F,A} and of the numerical IK q_{t_F,N} are nearly the same (≈ 30 ms), the average optimal trajectory time using the proposed algorithm is slightly better, i.e. 4.52 s compared to 5.39 s.

Figure 12: Joint position, velocity, and torque for all robot axes, normalized to their respective limits (referred to with the bar symbol), for the optimal trajectories ξ_A and ξ_N of the proposed NN algorithm and the numerical IK, respectively. The desired trajectories are shown as solid lines and the measured trajectories are drawn as dashed lines. For safety reasons, the limits for the motor torques are 50% lower than the limits in Tab. 2.
Since the solution of the numerical IK depends on the initial guess, 13896 test cases fail to converge to a feasible target configuration. Note that the maximum number of iterations for the numerical IK is 50. Additionally, after excluding the 13896 failed cases, 1588 test cases cannot be used to plan a PTP trajectory using (16). These test cases fail because the iteration limit of 100 iterations, set in the IPOPT solver, is violated. The overall success rate using the numerical IK is approximately 84.5%. On the other hand, the proposed algorithm finds a feasible target configuration in all test cases. Only 554 test cases fail during the planning of the PTP trajectory due to the iteration limit of the IPOPT solver. Overall, the proposed NN outperforms the numerical IK by achieving a success rate of 99.5%.
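The bookkeeping behind the success rates and average trajectory times of Tab. 4 amounts to a loop of the following form; solve_ik() and plan_ptp() are assumed interfaces that return None on failure.

```python
import numpy as np

def evaluate(pairs, solve_ik, plan_ptp):
    """Aggregate statistics for the Monte Carlo comparison of Tab. 4."""
    t_fs, failed_ik, failed_ptp = [], 0, 0
    for q0, pose in pairs:
        q_tf = solve_ik(q0, pose)
        if q_tf is None:
            failed_ik += 1
            continue
        t_f = plan_ptp(q0, q_tf)
        if t_f is None:
            failed_ptp += 1
            continue
        t_fs.append(t_f)
    return {"success rate": len(t_fs) / len(pairs),
            "avg t_F": float(np.mean(t_fs)) if t_fs else float("nan"),
            "failed IK": failed_ik, "failed PTP": failed_ptp}
```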
5. Conclusions
In this work, a machine learning-based approach for the inverse kinematics (IK) of kinematically redundant robots is presented, which is suitable for trajectory planning in highly dynamic real-time applications like human-robot object handovers or robotic object catching. In this approach, the optimal redundancy parameters are predicted by a neural network (NN) according to application-specific criteria, i.e., closeness to the initial robot configuration and manipulability at the target pose. The redundancy parameters, i.e. a virtual arm angle and binary variables describing the joint configurations, resolve the non-uniqueness of the analytical IK of redundant robots and allow for a unique mapping between the target pose and the joint configuration. Since a NN is employed, the proposed framework can be applied to different collaborative robots, e.g., the KUKA LBR iiwa 14 R820, Franka Emika Panda, or OB7, for which the analytical IK can be parameterized by redundancy parameters. The NN used in the proposed framework outperforms classical classification algorithms in terms of accuracy and prediction run time. A Monte Carlo simulation of 10^5 random pairs of an initial configuration and a target pose validates the proposed algorithm in the context of point-to-point (PTP) trajectory optimization. The proposed method succeeds in 99.5% of the test cases to find a feasible target configuration while achieving, on average, a shorter optimal trajectory time from the initial to the target pose compared to a numerical IK method, at a significantly shorter computing time (≈ 32 µs for the proposed IK compared to ≈ 3 ms for the numerical IK). Thus, the proposed framework is well suited for real-time applications.
In future works, this machine learning-based framework will be applied to dynamic human-robot handover tasks.
Figure 2: Schematic drawing of the robot KUKA LBR iiwa. The x-, y-, and z-axis of each coordinate frame are shown as red, green, and blue arrows, respectively.

Figure 3: Two configurations of the KUKA iiwa at the same pose, illustrated with red and blue lines. The green rim indicates the virtual movement of the elbow position w.r.t. the specific end-effector pose. The red and blue lines illustrate the robot at the arm angles ϕ = 0 and ϕ = 95°, respectively.

Figure 4: The redundant manipulator (q_{3,n} = 0) in the xy-plane (a) and in the 3D xyz-space (b). The shoulder, elbow, and wrist positions are colinear in (a).

Figure 5: Block diagram of the optimization problems (23) and (16).

Figure 6: The color map of the cost function (23b) for the initial configuration of the robot q_0 (in gray color) and the position of the end effector (the RGB triad) is shown on the right-hand side of the figure. The white regions depict infeasible joint configurations. The robot target configuration q_{t_F} computed by the proposed NN is shown in red on the left-hand side; it achieves a very small value of the cost function (23b) (≈ 0.0717). The target robot configuration calculated using the numerical method [6] is depicted in green; its cost is approximately 0.54.

Figure 7: Overview of the proposed framework for learning the redundancy parameters.

Figure 8: Architecture of the proposed NN for learning the joint configuration j_c and the bin index b_ϕ of the virtual angle ϕ.

Figure 9: Training and validation accuracy of the joint configuration j_c.

Figure 10: Training and validation accuracy of the bin index b_ϕ.

Figure 11: The experimental setup for the comparison between the proposed NN algorithm and the numerical IK method.
Table 2: Kinematic and dynamic limits of the system

Joint i   Joint limit (°)   Velocity limit (°/s)   Torque limit (N m)
1         170               85                     320
2         120               85                     320
3         170               100                    176
4         120               75                     176
5         170               130                    110
6         120               135                    40
7         175               135                    40
Table 3: Performance of the prediction with different algorithms

Classifier                    Acc. j_c (%)   Acc. b_ϕ (%)   Time (µs)   Memory (MB)
Naive Bayes [16]              57.6           38.2           1.1         76.8
Discriminant Analysis [43]    65.1           40.6           1.23        70.5
Binary Decision Tree [26]     77.8           65.5           0.35        89.6
k-Nearest Neighbor [18]       49.5           40.1           1810        76.8
Proposed NN                   96.5           84.8           7.35        0.17
Table 4: Performance of the proposed NN and the numerical IK [27] in the trajectory optimization framework

                               Proposed NN    Numerical IK [27]
Avg. t_F (s)                   4.52 ± 1.93    5.39 ± 2.6
Cost value of (16a)            5.75 ± 2.79    6.69 ± 3.29
Num. of failed IK              0              13896
Num. of failed PTP             554            1588
Success rate                   99.5%          84.5%
Avg. comp. time of (16) (ms)   28.9 ± 13      30.3 ± 19
Appendix

The square of the manipulability (19) of the KUKA LBR iiwa 14 R820 [1] reads as the analytical expression (29).

Conflict of interest

The authors have no conflicts of interest to declare.
References

[1] Beck, F., Vu, M.N., Hartl-Nesic, C., Kugi, A., 2022. Singularity avoidance with application to online trajectory optimization for serial manipulators. arXiv preprint arXiv:2211.02516.
[2] Betts, J.T., 1998. Survey of numerical methods for trajectory optimization. Journal of Guidance, Control, and Dynamics 21, 193-207.
[3] Betts, J.T., 2010. Practical Methods for Optimal Control and Estimation Using Nonlinear Programming. SIAM: Philadelphia, USA.
[4] Binder, E.E., Herzog, J.H., 1986. Distributed computer architecture and fast parallel algorithms in real-time robot control. IEEE Transactions on Systems, Man, and Cybernetics 16, 543-549.
[5] Bishop, C.M., Nasrabadi, N.M., 2006. Pattern Recognition and Machine Learning. Springer: New York.
[6] Buss, S.R., 2004. Introduction to inverse kinematics with Jacobian transpose, pseudoinverse and damped least squares methods. IEEE Journal of Robotics and Automation 17, 1-19.
[7] Cox, D., Little, J., O'Shea, D., 2013. Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra. Springer: Germany.
[8] Diankov, R., Kuffner, J., 2008. OpenRAVE: A planning architecture for autonomous robotics. Robotics Institute, Pittsburgh, PA, Tech. Rep. CMU-RI-TR-08-34, 79.
[9] Duff, I.S., 2004. MA57: A code for the solution of sparse symmetric definite and indefinite systems. ACM Transactions on Mathematical Software 30, 118-144.
[10] Faria, C., Ferreira, F., Erlhagen, W., Monteiro, S., Bicho, E., 2018. Position-based kinematics for 7-DoF serial manipulators with global configuration control, joint limit and singularity avoidance. Mechanism and Machine Theory 121, 317-334.
[11] Franka Emika GmbH. The new FRANKA PRODUCTION 3: Think it, make it: the robotic automation tool for everyone. URL: https://www.franka.de/production/ [Accessed on 07 October 2022].
[12] Gattringer, H., Mueller, A., Oberherber, M., Kaserer, D., 2022. Time-optimal robotic manipulation on a predefined path of loosely placed objects: Modeling and experiment. Mechatronics 84, 102753.
[13] Ghafil, H.N., Jármai, K., 2018. Research and application of industrial robot manipulators in vehicle and automotive engineering, a survey, in: Vehicle and Automotive Engineering 2, Springer International Publishing: Cham, pp. 611-623.
[14] Gulli, A., Pal, S., 2017. Deep Learning with Keras. Packt Publishing Ltd: Birmingham.
[15] Han, W., Tedrake, R., 2020. Local trajectory stabilization for dexterous manipulation via piecewise affine approximations, in: IEEE International Conference on Robotics and Automation (ICRA), pp. 8884-8891.
[16] Hastie, T., Tibshirani, R., Friedman, J.H., 2009. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Volume 2. Springer: Berlin.
[17] He, Y., Liu, S., 2021. Analytical inverse kinematics for Franka Emika Panda: a geometrical solver for 7-DoF manipulators with unconventional design, in: IEEE International Conference on Control, Mechatronics and Automation (ICCMA), pp. 194-199.
[18] Jiang, L., Cai, Z., Wang, D., Jiang, S., 2007. Survey of improving k-nearest-neighbor for classification, in: IEEE International Conference on Fuzzy Systems and Knowledge Discovery (FSKD), pp. 679-683.
[19] Kang, M., Cho, Y., Yoon, S.E., 2022. RCIK: Real-time collision-free inverse kinematics using a collision-cost prediction network. IEEE Robotics and Automation Letters 7, 610-617.
[20] Kingma, D.P., Ba, J., 2015. Adam: A method for stochastic optimization, in: International Conference on Learning Representations (ICLR), poster presentation.
[21] Kraemer, M., Muster, F.I., Roesmann, C., Bertram, T., 2021. An optimization-based approach for elasticity-aware trajectory planning of link-elastic manipulators. Mechatronics 75, 102523.
[22] Kuhlemann, I., Schweikard, A., Jauer, P., Ernst, F., 2016. Robust inverse kinematics by configuration control for redundant manipulators with seven DoF, in: IEEE International Conference on Control, Automation and Robotics (ICCAR), pp. 49-55.
[23] KUKA Roboter GmbH. LBR iiwa 7 R800 and LBR iiwa 14 R820 specification. URL: https://www.kuka.com/en-at/products/robotics-systems/industrial-robots/lbr-iiwa [Accessed on 07 October 2022].
[24] Li, Y., Yuan, Y., 2017. Convergence analysis of two-layer neural networks with ReLU activation. Advances in Neural Information Processing Systems 30, 597-607.
[25] Liu, D.C., Nocedal, J., 1989. On the limited memory BFGS method for large scale optimization. Mathematical Programming 45, 503-528.
[26] Loh, W.Y., 2002. Regression trees with unbiased variable selection and interaction detection. Statistica Sinica, 361-386.
[27] Lynch, K.M., Park, F.C., 2017. Modern Robotics. Cambridge University Press: Cambridge.
[28] Murphy, K.P., 2012. Machine Learning: A Probabilistic Perspective. MIT Press: Cambridge.
[29] Neura Robotics GmbH. LARA - Lightweight Agile Robotic Assistant. URL: https://neura-robotics.com/products/lara [Accessed on 07 October 2022].
[30] Paul, R.P., Shimano, B., 1979. Kinematic control equations for simple manipulators, in: IEEE Conference on Decision and Control (CDC) including the 17th Symposium on Adaptive Processes, pp. 1398-1406.
[31] Pellicciari, M., Berselli, G., Leali, F., Vergnano, A., 2013. A method for reducing the energy consumption of pick-and-place industrial robots. Mechatronics 23, 326-334.
[32] Productive Robotics, Inc. Meet the OB7 cobot family. URL: https://www.productiverobotics.com/ob7-products [Accessed on 07 October 2022].
[33] Qiu, S., Kermani, M.R., 2021. Precision grasp using an arm-hand system as a hybrid parallel-serial system: A novel inverse kinematics solution. IEEE Robotics and Automation Letters 6, 8530-8536.
[34] Raghavan, M., Roth, B., 1993. Inverse kinematics of the general 6R manipulator and related linkages. Journal of Mechanical Design 115, 502-508.
[35] Safeea, M., Bearee, R., Neto, P., 2021. A modified DLS scheme with controlled cyclic solution for inverse kinematics in redundant robots. IEEE Transactions on Industrial Informatics 17, 8014-8023.
[36] Saut, J.P., Gharbi, M., Cortés, J., Sidobre, D., Siméon, T., 2010. Planning pick-and-place tasks with two-hand regrasping, in: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4528-4533.
[37] Sciavicco, L., Siciliano, B., 1988. A solution algorithm to the inverse kinematic problem for redundant manipulators. IEEE Journal on Robotics and Automation 4, 403-410.
[38] Shimizu, M., Kakuya, H., Yoon, W.K., Kitagaki, K., Kosuge, K., 2008. Analytical inverse kinematic computation for 7-DoF redundant manipulators with joint limits and its application to redundancy resolution. IEEE Transactions on Robotics 24, 1131-1142.
[39] Siciliano, B., 1990. Kinematic control of redundant robot manipulators: A tutorial. Journal of Intelligent and Robotic Systems 3, 201-212.
[40] Spong, M., Hutchinson, S., Vidyasagar, M., 2005. Robot Modeling and Control. Wiley: New York, USA.
[41] Stürz, Y.R., Affolter, L.M., Smith, R.S., 2017. Parameter identification of the KUKA LBR iiwa robot including constraints on physical feasibility. IFAC-PapersOnLine 50, 6863-6868.
[42] Taki, S., Nenchev, D., 2014. A novel singularity-consistent inverse kinematics decomposition for SRS type manipulators, in: IEEE International Conference on Robotics and Automation (ICRA), pp. 5070-5075.
[43] Tharwat, A., 2016. Linear vs. quadratic discriminant analysis classifier: a tutorial. International Journal of Applied Pattern Recognition 3, 145-180.
[44] Vu, M.N., Hartl-Nesic, C., Kugi, A., 2021. Fast swing-up trajectory optimization for a spherical pendulum on a 7-DoF collaborative robot, in: IEEE International Conference on Robotics and Automation (ICRA), pp. 10114-10120.
[45] Wächter, A., Biegler, L.T., 2006. On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Mathematical Programming 106, 25-57.
[46] Wang, H., Wang, H., Huang, J., Zhao, B., Quan, L., 2019. Smooth point-to-point trajectory planning for industrial robots with kinematical constraints based on high-order polynomial curve. Mechanism and Machine Theory 139, 284-293.
[47] Wiedmeyer, W., Altoé, P., Auberle, J., Ledermann, C., Kröger, T., 2020. A real-time-capable closed-form multi-objective redundancy resolution scheme for seven-DoF serial manipulators. IEEE Robotics and Automation Letters 6, 431-438.
[48] Xing, H., Torabi, A., Ding, L., Gao, H., Li, W., Tavakoli, M., 2021. Enhancing kinematic accuracy of redundant wheeled mobile manipulators via adaptive motion planning. Mechatronics 79, 102639.
[49] Yoshikawa, T., 1985. Manipulability of robotic mechanisms. The International Journal of Robotics Research 4, 3-9.
[50] Zhou, K., Ebenhofer, G., Eitzinger, C., Zimmermann, U., Walter, C., Saenz, J., Castaño, L.P., Hernández, M.A.F., Oriol, J.N., 2014. Mobile manipulator is coming to aerospace manufacturing industry, in: IEEE International Symposium on Robotic and Sensors Environments (ROSE), pp. 94-99.
| []
|
[
"Phase transitions in self-gravitating systems and bacterial populations with a screened attractive potential",
"Phase transitions in self-gravitating systems and bacterial populations with a screened attractive potential"
]
| [
"P H Chavanis \nLaboratoire de Physique Théorique (IRSAMC)\nCNRS and UPS\nUniversité de Toulouse\nF-31062ToulouseFrance\n",
"L Delfini \nLaboratoire de Physique Théorique (IRSAMC)\nCNRS and UPS\nUniversité de Toulouse\nF-31062ToulouseFrance\n"
]
| [
"Laboratoire de Physique Théorique (IRSAMC)\nCNRS and UPS\nUniversité de Toulouse\nF-31062ToulouseFrance",
"Laboratoire de Physique Théorique (IRSAMC)\nCNRS and UPS\nUniversité de Toulouse\nF-31062ToulouseFrance"
]
| []
| We consider a system of particles interacting via a screened Newtonian potential and study phase transitions between homogeneous and inhomogeneous states in the microcanonical and canonical ensembles. Like for other systems with long-range interactions, we obtain a great diversity of microcanonical and canonical phase transitions depending on the dimension of space and on the importance of the screening length. We also consider a system of particles in Newtonian interaction in the presence of a "neutralizing background". By a proper interpretation of the parameters, our study describes (i) self-gravitating systems in a cosmological setting, and (ii) chemotaxis of bacterial populations in the original Keller-Segel model. | 10.1103/physreve.81.051103 | [
"https://arxiv.org/pdf/1001.1942v1.pdf"
]
| 28,657,644 | 1001.1942 | 57410a4ba2e15599eb994ef486729b745cb422a2 |
Phase transitions in self-gravitating systems and bacterial populations with a screened attractive potential
12 Jan 2010
P H Chavanis
Laboratoire de Physique Théorique (IRSAMC)
CNRS and UPS
Université de Toulouse
F-31062ToulouseFrance
L Delfini
Laboratoire de Physique Théorique (IRSAMC)
CNRS and UPS
Université de Toulouse
F-31062ToulouseFrance
Phase transitions in self-gravitating systems and bacterial populations with a screened attractive potential
12 Jan 2010
We consider a system of particles interacting via a screened Newtonian potential and study phase transitions between homogeneous and inhomogeneous states in the microcanonical and canonical ensembles. Like for other systems with long-range interactions, we obtain a great diversity of microcanonical and canonical phase transitions depending on the dimension of space and on the importance of the screening length. We also consider a system of particles in Newtonian interaction in the presence of a "neutralizing background". By a proper interpretation of the parameters, our study describes (i) self-gravitating systems in a cosmological setting, and (ii) chemotaxis of bacterial populations in the original Keller-Segel model.
I. INTRODUCTION
Many biological species like bacteria, amoebae, endothelial cells, or even ants interact through the phenomenon of chemotaxis [1]. These organisms secrete a chemical substance (like a pheromone) that has an attractive (or sometimes repulsive) action on the organisms themselves. This phenomenon is responsible for the self-organization and morphogenesis of many biological species. It has also been proposed as a leading mechanism for the formation of blood vessels during embryogenesis [2]. From a theoretical point of view, chemotaxis can be described by the Keller-Segel [3] model or its generalizations [4]. The Keller-Segel model consists of a drift-diffusion equation for the evolution of the density of bacteria ρ(r, t) coupled to a reaction-diffusion equation for the evolution of the secreted chemical c(r, t). In certain approximations, the reaction-diffusion equation is replaced by a Poisson equation. In that case, the Keller-Segel (KS) [3] model becomes isomorphic to the Smoluchowski-Poisson (SP) system [5] describing self-gravitating Brownian particles (see, e.g., [6] for a description of this analogy). The KS model and SP system have been studied thoroughly in applied mathematics (see Refs. in [7]) and in theoretical physics (see Refs. in [5]).
However, the original KS model [3] also allows for the possibility that the chemical suffers a degradation process, which has the effect of reducing the range of the interaction. In that case, the Poisson equation is replaced by a screened Poisson equation [8]. In the gravitational analogy, this amounts to replacing the gravitational potential by a screened gravitational potential, i.e. an attractive Yukawa potential. In that case, there exist interesting phase transitions between spatially homogeneous and spatially inhomogeneous equilibrium distributions. This is a physical motivation to consider the thermodynamics of N-body systems interacting via an attractive Yukawa potential [9]. This will be called the screened Newtonian model. We shall also consider a related model where the interaction is not screened but the Poisson equation is modified so as to allow for the existence of spatially homogeneous distributions at equilibrium. This will be called the modified Newtonian model. In that model, the source of the potential is the deviation between the actual density ρ(r, t) and the average density $\overline{\rho}$. This is similar to the effect of a "neutralizing background" in plasma physics [10]. This model can be derived from the Keller-Segel model in the limit of vanishing degradation of the chemical [11]. It also appears in cosmology, due to the expansion of the universe, when we work in the comoving frame [12]. It is therefore interesting to consider this form of interaction at a general level and study the corresponding phase transitions. We shall also compare them with the ones obtained within the ordinary Newtonian model [13-30] (see a review in [31]).
The paper is organized as follows. In Sec. II, we discuss several kinetic models taken from astrophysics, plasma physics and biology to which our study applies. We consider either isolated systems described by the microcanonical ensemble (fixed energy E) or dissipative systems described by the canonical ensemble (fixed temperature T). We characterize their equilibrium states in the mean field approximation: in the microcanonical ensemble (MCE), they maximize the entropy at fixed mass and energy, and in the canonical ensemble (CE) they minimize the free energy at fixed mass. In Sec. III, we specifically consider the case of a Newtonian interaction with a neutralizing background. We study phase transitions between homogeneous and inhomogeneous states depending on the dimension of space. In d = 1, the system presents canonical and microcanonical second order phase transitions. In d = 2, the system presents an isothermal collapse in CE (zeroth order phase transition) and a first order phase transition in MCE. In d = 3, the system presents an isothermal collapse in CE and a gravothermal catastrophe in MCE (zeroth order phase transitions). In Sec. IV, we perform a similar study for the attractive Yukawa potential with screening length $k_0^{-1}$. In d = 1, there exists a canonical tricritical point $(k_0)_c R = \sqrt{2}\,\pi \simeq 4.44$ and a microcanonical tricritical point $(k_0)_m R \simeq 11.8$, where R is the system size. If $k_0 < (k_0)_c$, the system presents canonical and microcanonical second order phase transitions. In that case, the ensembles are equivalent. If $(k_0)_c < k_0 < (k_0)_m$, the system presents a canonical first order phase transition and a microcanonical second order phase transition. In that case, there exists a region of negative specific heats in MCE and the ensembles are inequivalent. If $k_0 > (k_0)_m$, the system presents canonical and microcanonical first order phase transitions. In d = 2 and d = 3, the phase transitions are similar to those reported for the modified Newtonian model. In Sec. V, we study the dynamical stability of the homogeneous phase and analytically determine the critical point $(E_c^*, T_c^*)$ that marks the onset of instability of the homogeneous branch and the starting point of the bifurcated inhomogeneous branch. Direct numerical simulations associated with these phase transitions will be reported in a forthcoming paper.
Finally, it may be noted that the phase transitions reported in this paper share analogies (but also differences) with phase transitions observed in the Hamiltonian mean field (HMF) model [32-36], the spherical mass shell (SMS) model [37], the Blume-Emery-Griffiths (BEG) model [38], the infinite-range attractive interaction (IRAI) model [39], the self-gravitating Fermi gas (SGF) model [28], the self-gravitating ring (SGR) model [40] and the one-dimensional static cosmology (OSC) model [41].
II. KINETIC MODELS AND STATISTICAL EQUILIBRIUM STATES
1. Isolated systems
We consider an isolated system of N particles in interaction described by the Hamiltonian equations
$$m\frac{d\mathbf{r}_i}{dt} = \frac{\partial H}{\partial \mathbf{v}_i}, \qquad m\frac{d\mathbf{v}_i}{dt} = -\frac{\partial H}{\partial \mathbf{r}_i}, \tag{1}$$
where
$$H = \sum_i \frac{1}{2}m v_i^2 + m^2\sum_{i<j} u(\mathbf{r}_i,\mathbf{r}_j) + m\sum_i V(\mathbf{r}_i). \tag{2}$$
We assume that the particles interact through a binary potential u(r, r ′ ) that is symmetric with respect to the interchange of r and r ′ , and that they also evolve in a fixed external potential V (r). Since the system is isolated, with strict conservation of energy and mass, the proper statistical ensemble is the microcanonical ensemble [9]. In this paper, we shall use a mean field approach [63]. In the microcanonical ensemble, the statistical equilibrium state is obtained by maximizing the Boltzmann entropy at fixed mass and energy. We thus have to solve the maximization problem
$$\max_f \left\{ S[f] \;\middle|\; E[f] = E,\ M[f] = M \right\}, \tag{3}$$
with
$$S = -k_B\int \frac{f}{m}\ln\frac{f}{m}\, d\mathbf{r}\, d\mathbf{v}, \tag{4}$$
$$M = \int \rho\, d\mathbf{r}, \tag{5}$$
$$E = \int f\frac{v^2}{2}\, d\mathbf{r}\, d\mathbf{v} + \frac{1}{2}\int \rho(\mathbf{r},t)\, u(\mathbf{r},\mathbf{r}')\, \rho(\mathbf{r}',t)\, d\mathbf{r}\, d\mathbf{r}' + \int \rho V\, d\mathbf{r}, \tag{6}$$
where $\rho(\mathbf{r},t) = \int f(\mathbf{r},\mathbf{v},t)\, d\mathbf{v}$ is the spatial density. Introducing the mean field potential
$$\Phi(\mathbf{r}) = \int u(\mathbf{r},\mathbf{r}')\rho(\mathbf{r}')\, d\mathbf{r}' + V(\mathbf{r}), \tag{7}$$
the energy can also be written
$$E = \frac{1}{2}\int f v^2\, d\mathbf{r}\, d\mathbf{v} + \frac{1}{2}\int \rho\,(\Phi + V)\, d\mathbf{r}. \tag{8}$$
We shall be interested in global and local entropy maxima. Let us first determine the critical points of entropy at fixed mass and energy which cancel the first order variations. Introducing Lagrange multipliers, they satisfy
$$\delta S - \frac{1}{T}\delta E - \alpha\,\delta M = 0. \tag{9}$$
The variations are straightforward to evaluate and we obtain the mean field Maxwell-Boltzmann distribution
$$f = A\, e^{-\beta m\left(\frac{v^2}{2} + \Phi\right)}, \tag{10}$$
where $\beta = 1/(k_B T)$ and $\Phi(\mathbf{r})$ is given by Eq. (7). Integrating over the velocity, we find that the density is given by the mean field Boltzmann distribution
$$\rho = A'\, e^{-\frac{m\Phi}{k_B T}}. \tag{11}$$
This critical point is a (local) entropy maximum at fixed mass and energy iff
$$\delta^2 J = -\int \frac{(\delta f)^2}{2mf}\, d\mathbf{r}\, d\mathbf{v} - \frac{1}{2}\beta\int \delta\rho\,\delta\Phi\, d\mathbf{r} \le 0, \tag{12}$$
for all perturbations δf that conserve mass and energy at first order. In Appendix A, we provide an equivalent but simpler condition of stability in the microcanonical ensemble [see inequality (A12)].
The time evolution of the distribution function f (r, v, t) is governed by a kinetic equation of the form
$$\frac{\partial f}{\partial t} + \mathbf{v}\cdot\frac{\partial f}{\partial \mathbf{r}} - \nabla\Phi\cdot\frac{\partial f}{\partial \mathbf{v}} = \left(\frac{\partial f}{\partial t}\right)_{\!\rm coll}, \tag{13}$$
where
$$\Phi(\mathbf{r},t) = \int u(\mathbf{r},\mathbf{r}')\rho(\mathbf{r}',t)\, d\mathbf{r}' + V(\mathbf{r}) \tag{14}$$
is the time-dependent mean field potential. The l.h.s. is an advective operator (Vlasov) in phase space. The r.h.s. is a "collision" operator like the Boltzmann operator in the kinetic theory of gases or like the Landau (or Lenard-Balescu) operator in plasma physics or stellar dynamics. The "collision" operator in Eq. (13) takes into account the development of correlations between particles. It can have a more or less complicated form but it satisfies general properties associated with the first and second principles of thermodynamics: (i) it conserves mass and energy; (ii) it satisfies an H-theorem for the Boltzmann entropy (4), i.e. $\dot S \ge 0$ with an equality iff f is the Maxwell-Boltzmann distribution (10). Furthermore, the Maxwell-Boltzmann distribution is dynamically stable iff it is a (local) entropy maximum at fixed mass and energy. These general properties can be checked directly for the Boltzmann equation, for the Landau equation, for the Lenard-Balescu equation and for the BGK operator. Therefore, the kinetic equation (13) is consistent with the maximization problem (3) describing the statistical equilibrium state of the system in MCE. If we neglect the collisions for sufficiently short times, Eq. (13) reduces to the Vlasov equation which can experience a complicated process of collisionless violent relaxation towards a quasi stationary state (QSS) [43].
2. Dissipative systems in phase space
We consider a dissipative system of N Brownian particles in interaction described by the Langevin equations
$$m\frac{d\mathbf{r}_i}{dt} = \frac{\partial H}{\partial \mathbf{v}_i}, \tag{15}$$
$$\frac{d\mathbf{v}_i}{dt} = -\frac{1}{m}\frac{\partial H}{\partial \mathbf{r}_i} - \xi\mathbf{v}_i + \sqrt{2D}\,\mathbf{R}_i(t), \tag{16}$$
where H is the Hamiltonian defined by Eq. (2), $-\xi\mathbf{v}_i$ is a friction force and $\mathbf{R}_i(t)$ is a white noise satisfying $\langle\mathbf{R}_i(t)\rangle = 0$ and $\langle R_i^\mu(t)R_j^\nu(t')\rangle = \delta_{ij}\delta^{\mu\nu}\delta(t-t')$. The diffusion coefficient D and the friction coefficient ξ are related to each other according to the Einstein relation $\xi = D\beta m$, where $\beta = 1/(k_B T)$ is the inverse temperature. Since this system is dissipative, the proper statistical ensemble is the canonical ensemble [9]. In the canonical ensemble, the statistical equilibrium state is obtained by minimizing the Boltzmann free energy $F[f] = E[f] - TS[f]$ at fixed mass. We thus have to solve the minimization problem
$$\min_f \left\{ F[f] \;\middle|\; M[f] = M \right\} \tag{17}$$
with
$$F = \int f\frac{v^2}{2}\, d\mathbf{r}\, d\mathbf{v} + \frac{1}{2}\int \rho(\mathbf{r},t)\, u(\mathbf{r},\mathbf{r}')\, \rho(\mathbf{r}',t)\, d\mathbf{r}\, d\mathbf{r}' + \int \rho V\, d\mathbf{r} + k_B T\int \frac{f}{m}\ln\frac{f}{m}\, d\mathbf{r}\, d\mathbf{v}. \tag{18}$$
We shall be interested in global and local minima of the free energy. Let us first determine the critical points of free energy at fixed mass which cancel the first order variations. Introducing a Lagrange multiplier, they satisfy
$$\delta F + \alpha T\,\delta M = 0. \tag{19}$$
The variations are straightforward to evaluate and we obtain the mean field Maxwell-Boltzmann distribution (10) and the mean field Boltzmann distribution (11) as in the microcanonical ensemble. This critical point is a (local) minimum of free energy iff
$$\delta^2 F = \frac{1}{2}\int \delta\rho\,\delta\Phi\, d\mathbf{r} + \frac{k_B T}{m}\int \frac{(\delta f)^2}{2f}\, d\mathbf{r}\, d\mathbf{v} \ge 0, \tag{20}$$
for all perturbations δf that conserve mass.
In the mean field approximation, the evolution of the distribution function f (r, v, t) is governed by a kinetic equation of the form
$$\frac{\partial f}{\partial t} + \mathbf{v}\cdot\frac{\partial f}{\partial \mathbf{r}} - \nabla\Phi\cdot\frac{\partial f}{\partial \mathbf{v}} = \frac{\partial}{\partial \mathbf{v}}\cdot\left(D\frac{\partial f}{\partial \mathbf{v}} + \xi f\mathbf{v}\right), \tag{21}$$
coupled to the mean field potential (14). This is called the mean field Kramers equation. The mean field Kramers equation conserves mass and satisfies an H-theorem for the Boltzmann free energy (18), i.e. $\dot F \le 0$ with an equality iff f is the Maxwell-Boltzmann distribution (10). Furthermore, the Maxwell-Boltzmann distribution is dynamically stable iff it is a (local) minimum of free energy at fixed mass. Therefore, the kinetic equation (21) is consistent with the minimization problem (17) describing the statistical equilibrium state of the system in CE.
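As an aside, this canonical structure is easy to probe numerically. The following minimal sketch (our illustration, not code from the paper) integrates the Langevin equations (15)-(16) with an Euler-Maruyama scheme for N Brownian particles; the softened attractive Yukawa pair potential, the reflecting box, and all parameter values are assumptions chosen for the example, while the Einstein relation $\xi = D\beta m$ is taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not values from the paper)
N, d = 100, 2                          # particles, dimension of space
m, G, k0, eps = 1.0, 1.0, 2.0, 0.05    # mass, coupling, inverse screening length, softening
xi_f, kT = 1.0, 0.5                    # friction xi and temperature k_B T
D = xi_f * kT / m                      # Einstein relation xi = D beta m
dt, steps, R = 1e-3, 5000, 1.0         # time step, number of steps, box half-size

pos = rng.uniform(-R, R, (N, d))
vel = rng.normal(0.0, np.sqrt(kT / m), (N, d))

def forces(pos):
    # F_i = -dH/dr_i for the (assumed) softened attractive Yukawa pair
    # potential u(r) = -G exp(-k0 r)/r, with the m^2 prefactor of Eq. (2)
    diff = pos[:, None, :] - pos[None, :, :]
    r = np.sqrt((diff ** 2).sum(-1) + eps ** 2)
    dudr = G * np.exp(-k0 * r) * (1.0 + k0 * r) / r ** 2  # du/dr > 0: attraction
    np.fill_diagonal(dudr, 0.0)
    return -(m ** 2) * ((dudr / r)[..., None] * diff).sum(axis=1)

for _ in range(steps):
    # Euler-Maruyama step of Eqs. (15)-(16)
    vel += dt * (forces(pos) / m - xi_f * vel) \
           + np.sqrt(2.0 * D * dt) * rng.normal(size=vel.shape)
    pos += dt * vel
    out = np.abs(pos) > R              # reflecting walls keep the system in the box
    pos[out] = np.sign(pos[out]) * 2.0 * R - pos[out]
    vel[out] *= -1.0

print("kinetic temperature:", m * np.mean(vel ** 2))  # relaxes towards k_B T
```

Lowering kT (i.e. raising the inverse temperature) lets the attraction win over diffusion for suitable parameters, which is the regime of the phase transitions studied below.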
Remark: the critical points in MCE and CE are the same because the variational problems (3) and (17) are equivalent at the level of the first order variations (9) and (19). However, they are not equivalent at the level of the second order variations (12) and (20) because of the different class of perturbations to consider. Therefore, we can have ensembles inequivalence [22,31,44,45]. In fact, the condition of canonical stability (17) provides a sufficient condition of microcanonical stability (3). Indeed, if inequality (20) is satisfied for all perturbations that conserve mass, then it is a fortiori satisfied for perturbations that conserve mass and energy, so that inequality (12) is satisfied. Therefore, canonical stability implies microcanonical stability:
$$(17) \Rightarrow (3). \tag{22}$$
However, the converse is wrong in case of ensembles inequivalence.
3. Dissipative systems in physical space
In the strong friction limit $\xi \to +\infty$, we can formally neglect the inertial term $d\mathbf{v}_i/dt$ in Eq. (16) and we obtain the overdamped Langevin equations
$$\xi\frac{d\mathbf{r}_i}{dt} = -\frac{1}{m}\frac{\partial H}{\partial \mathbf{r}_i} + \sqrt{2D}\,\mathbf{R}_i(t). \tag{23}$$
The statistical equilibrium state of this system (described by the canonical ensemble [9]) is obtained by solving the minimization problem
$$\min_\rho \left\{ F[\rho] \;\middle|\; M[\rho] = M \right\}, \tag{24}$$
with
$$F = \frac{1}{2}\int \rho(\mathbf{r},t)\, u(\mathbf{r},\mathbf{r}')\, \rho(\mathbf{r}',t)\, d\mathbf{r}\, d\mathbf{r}' + \int \rho V\, d\mathbf{r} + k_B T\int \frac{\rho}{m}\ln\frac{\rho}{m}\, d\mathbf{r}. \tag{25}$$
Writing the variational principle as
δF + αT δM = 0,(26)
we obtain the mean field Boltzmann distribution (11). This critical point is a (local) minimum of free energy at fixed mass iff
$$\delta^2 F = \frac{1}{2}\int \delta\rho\,\delta\Phi\, d\mathbf{r} + \frac{k_B T}{m}\int \frac{(\delta\rho)^2}{2\rho}\, d\mathbf{r} \ge 0, \tag{27}$$
for all perturbations δρ that conserve mass. In the mean field approximation, the evolution of the density profile ρ(r, t) is governed by a kinetic equation of the form
$$\frac{\partial \rho}{\partial t} = \nabla\cdot\left[\frac{1}{\xi}\left(\frac{k_B T}{m}\nabla\rho + \rho\nabla\Phi\right)\right], \tag{28}$$
coupled to the mean field equation (14). This is called the mean field Smoluchowski equation. The mean field Smoluchowski equation (28) conserves mass and satisfies an H-theorem for the Boltzmann free energy (25), i.e. $\dot F \le 0$ with an equality iff ρ is the Boltzmann distribution (11). Furthermore, the Boltzmann distribution is dynamically stable iff it is a (local) minimum of free energy at fixed mass. Therefore, the kinetic equation (28) is consistent with the minimization problem (24) describing the statistical equilibrium state of the system in CE. Remark 1: the Smoluchowski equation (28) can also be deduced from the Kramers equation (21) in the strong friction limit [46]. For $\xi, D \to +\infty$ and $\beta = \xi/(Dm)$ finite, the time-dependent distribution function f(r, v, t) is Maxwellian,
$$f(\mathbf{r},\mathbf{v},t) = \left(\frac{\beta m}{2\pi}\right)^{d/2}\rho(\mathbf{r},t)\, e^{-\beta m\frac{v^2}{2}} + O(1/\xi), \tag{29}$$
and the time-dependent density ρ(r, t) is a solution of the Smoluchowski equation (28). Using Eq. (29), we can express the free energy (18) as a functional of the density and we obtain the free energy (25) up to some unimportant constants. Remark 2: it is shown in Appendix A that the minimization problems (17) and (24) are equivalent in the sense that f(r, v) is a solution of (17) iff ρ(r) is a solution of (24). Thus, we have
$$(17) \Leftrightarrow (24). \tag{30}$$
As a consequence, the Maxwell-Boltzmann distribution f (r, v) is dynamically stable with respect to the mean field Kramers equation (21) iff the corresponding Boltzmann distribution ρ(r) is dynamically stable with respect to the mean field Smoluchowski equation (28). On the other hand, according to the implication (22), the Maxwell-Boltzmann distribution f (r, v) is dynamically stable with respect to the kinetic equation (13) if it is stable with respect to the mean field Kramers equation (21), but the reciprocal is wrong in case of ensembles inequivalence.
4. The Keller-Segel model of chemotaxis
The Keller-Segel model [3] describing the chemotaxis of biological populations can be written as
$$\frac{\partial \rho}{\partial t} = \nabla\cdot\left(D\nabla\rho - \chi\rho\nabla c\right), \tag{31}$$
$$\frac{1}{D'}\frac{\partial c}{\partial t} = \Delta c - k^2 c + \lambda\rho, \tag{32}$$
where ρ is the concentration of the biological species (e.g. bacteria) and c is the concentration of the secreted chemical. The bacteria diffuse with a diffusion coefficient D and undergo a chemotactic drift with strength χ along the gradient of chemical. The chemical is produced by the bacteria at a rate $D'\lambda$, is degraded at a rate $D'k^2$ and diffuses with a diffusion coefficient $D'$. We adopt Neumann boundary conditions [3]:
$$\nabla c\cdot\mathbf{n} = 0, \qquad \nabla\rho\cdot\mathbf{n} = 0, \tag{33}$$
where n is a unit vector normal to the boundary of the domain. The drift-diffusion equation (31) is similar to the mean field Smoluchowski equation (28), where the concentration of chemical −c(r, t) plays the role of the potential Φ(r, t). Therefore, there exist many analogies between chemotaxis and Brownian particles in interaction [6]. In particular, the effective statistical ensemble associated with the Keller-Segel model is the canonical ensemble. The steady states of the Keller-Segel model are of the form
$$\rho = A\, e^{\frac{\chi}{D}c}, \tag{34}$$
which is similar to the Boltzmann distribution (11) with an effective temperature $T_{\rm eff} = D/\chi$. The Lyapunov functional associated with the KS model is [4]:
$$F = \frac{1}{2\lambda}\int\left[(\nabla c)^2 + k^2 c^2\right] d\mathbf{r} - \int \rho c\, d\mathbf{r} + T_{\rm eff}\int \rho\ln\rho\, d\mathbf{r}. \tag{35}$$
It is similar to a free energy $F = E - T_{\rm eff}S$ in thermodynamics, where E is the energy and S is the Boltzmann entropy. The KS model conserves mass and satisfies an H-theorem for the free energy (35), i.e. $\dot F \le 0$ with an equality iff ρ is the Boltzmann distribution (34). Furthermore, the Boltzmann distribution is dynamically stable iff it is a (local) minimum of free energy at fixed mass.
In that context, the minimization problem
$$\min_{\rho,c} \left\{ F[\rho,c] \;\middle|\; M[\rho] = M \right\}, \tag{36}$$
determines a steady state of the KS model that is dynamically stable. This is similar to a condition of thermodynamical stability in the canonical ensemble. Let us consider some simplified forms of the Keller-Segel model that have been introduced in the literature: (i) In the limit of large diffusivity of the chemical $D' \to +\infty$ at fixed $k^2$ and λ, the reaction-diffusion equation (32) takes the form of a screened Poisson equation [8]:
$$\Delta c - k^2 c = -\lambda\rho, \tag{37}$$
and the free energy becomes
$$F = -\frac{1}{2}\int \rho c\, d\mathbf{r} + T_{\rm eff}\int \rho\ln\rho\, d\mathbf{r}. \tag{38}$$
In that case, the KS model is isomorphic to the Smoluchowski equation (28) with an attractive Yukawa potential (64). (ii) In the limit of large diffusivity of the chemical $D' \to +\infty$ and a vanishing degradation rate $k^2 = 0$, the reaction-diffusion equation (32) takes the form of a modified Poisson equation [11]:
$$\Delta c = -\lambda\left(\rho - \overline{\rho}\right), \tag{39}$$
where $\overline{\rho} = M/V$ is the average density, and the free energy becomes
$$F = -\frac{1}{2}\int \left(\rho - \overline{\rho}\right) c\, d\mathbf{r} + T_{\rm eff}\int \rho\ln\rho\, d\mathbf{r}. \tag{40}$$
In that case, the KS model is isomorphic to the Smoluchowski equation (28) with a modified Poisson equation (43).
(iii) Some authors have also considered a simple model of chemotaxis where the reaction-diffusion equation (32) is replaced by the Poisson equation [47]:
$$\Delta c = -\lambda\rho. \tag{41}$$
This is valid in the absence of degradation of the chemical and for sufficiently large densities $\rho \gg \overline{\rho}$. This model can be used in particular to study chemotactic collapse. The corresponding free energy is
$$F = -\frac{1}{2}\int \rho c\, d\mathbf{r} + T_{\rm eff}\int \rho\ln\rho\, d\mathbf{r}. \tag{42}$$
In that model, the boundary conditions (33) must be modified [64] and we must impose that c → 0 at infinity like for the gravitational potential in astrophysics. Furthermore, we must impose that the normal component of the current vanishes on the boundary: (D∇ρ − χρ∇c) · n = 0 so as to conserve mass. In that case, the KS model is isomorphic to the Smoluchowski-Poisson (SP) system describing self-gravitating Brownian particles in the overdamped limit [29].
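As a concrete illustration of limit (i), the following sketch (ours, with illustrative parameter values, not code from the paper) evolves the one-dimensional Keller-Segel model (31) coupled to the screened Poisson equation (37) on a zero-flux grid, using central differences and an explicit Euler step. Whether the perturbed uniform state relaxes or clusters depends on the effective temperature $T_{\rm eff} = D/\chi$.

```python
import numpy as np

# Illustrative parameters (assumptions, not values from the paper)
L, N = 1.0, 100
dx = L / N
x = (np.arange(N) + 0.5) * dx            # cell centers
D, chi, lam, k = 1.0, 15.0, 1.0, 1.0     # T_eff = D/chi controls stability
dt = 0.2 * dx ** 2 / D                   # explicit diffusion step

# Operator (d^2/dx^2 - k^2) with zero-flux (Neumann) boundaries, for Eq. (37)
Lap = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
       + np.diag(np.ones(N - 1), -1))
Lap[0, 0] = Lap[-1, -1] = -1.0
Op_inv = np.linalg.inv(Lap / dx ** 2 - k ** 2 * np.eye(N))

rho = 1.0 + 0.01 * np.cos(np.pi * x / L) # slightly perturbed uniform state
mass0 = rho.sum() * dx

for _ in range(50000):
    c = Op_inv @ (-lam * rho)            # solve Delta c - k^2 c = -lam rho
    # finite-volume update of Eq. (31) with zero flux at both walls
    flux = (-D * (rho[1:] - rho[:-1]) / dx
            + chi * 0.5 * (rho[1:] + rho[:-1]) * (c[1:] - c[:-1]) / dx)
    flux = np.concatenate(([0.0], flux, [0.0]))
    rho -= dt * (flux[1:] - flux[:-1]) / dx

print("mass conserved:", np.isclose(rho.sum() * dx, mass0))
print("density contrast:", rho.max() - rho.min())  # grows if the uniform state is unstable
```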
5. Physical justification of the canonical ensemble for systems with long-range interactions
In statistical mechanics, the canonical distribution is usually derived by considering a subpart of a large system and assuming that the rest of the system plays the role of a thermostat [48]. However, this justification implicitly assumes that energy is additive. Since energy is non-additive for systems with long-range interactions, it is sometimes concluded that the canonical ensemble has no foundation to describe systems with long-range interactions [49]. In fact, this is not quite true [9]. We can give two justifications of the canonical ensemble for systems with long-range interactions:
(i) The canonical ensemble is relevant to describe a system of particles in contact with a thermal bath of a different nature [9]. This is the case if we consider a system of Brownian particles in interaction described by the stochastic equations (15)- (16). The particles interact through a potential u(r, r ′ ) that can be long-range, but they also undergo a friction force and a stochastic force that are due to other types of interaction (they model in general short-range interactions). As we have seen, this system is described by the canonical ensemble. It does not correspond to a subsystem of a larger system, but simply to a system as a whole with long-range and short-range interactions [65].
(ii) Since canonical stability implies microcanonical stability [44], the condition of canonical stability provides a sufficient condition of microcanonical stability. In this sense, the canonical stability criterion (see Secs. II 2 and II 3) can be useful even for an isolated Hamiltonian system (see Sec. II 1) because if we can prove that this system is canonically stable, then it is granted to be microcanonically stable. This remark also applies to other ensembles (grand canonical, grand microcanonical,...).
III. THE MODIFIED NEWTONIAN MODEL
In this section, we discuss phase transitions that appear in the modified Newtonian model.
A. Physical motivation of the model
We consider a system of particles interacting via a mean field potential Φ(r, t) that is solution of the modified Poisson equation
$$\Delta\Phi = S_d G\left(\rho - \overline{\rho}\right), \tag{43}$$
where $\overline{\rho} = M/V$ is the average density (a conserved quantity). At statistical equilibrium, the density is given by the Boltzmann distribution
$$\rho = A\, e^{-\beta m\Phi}. \tag{44}$$
We have used the notations of astrophysics (where G is the constant of gravity and $S_d$ the surface of a unit sphere in d dimensions) in order to make the connection with ordinary self-gravitating systems, where Eq. (43) is replaced by the Poisson equation $\Delta\Phi = S_d G\rho$. However, this model can have applications in other contexts, as explained below. We assume that the system is confined in a finite domain (box) and we impose the Neumann boundary conditions
$$\nabla\Phi\cdot\mathbf{n} = 0, \qquad \nabla\rho\cdot\mathbf{n} = 0, \tag{45}$$
where n is a unit vector normal to the boundary of the box (the explicit expression of the potential in d = 1 is given in Appendix B). This model admits spatially homogeneous solutions ($\rho = \overline{\rho}$ and $\Phi = 0$) at any temperature. It also admits spatially inhomogeneous solutions at sufficiently low temperatures. We shall study this model in arbitrary dimensions of space d with explicit computations for d = 1, 2, 3. This model has different physical applications: (i) It describes self-gravitating systems in a cosmological setting [12]. Due to the expansion of the universe, when we work in the comoving frame, the Poisson equation takes the form of Eq. (43), where the potential is produced by the deviation between the actual density ρ(r, t) and the mean density $\overline{\rho}$. In cosmology, we must also account for the scale factor a(t), but if we consider timescales that are short with respect to the Hubble time $H^{-1} = a/\dot a$, we can ignore this time dependence. This model has been studied by Valageas [41] in d = 1 with periodic boundary conditions. In that context, the relevant ensemble is the MCE since the system is isolated.
(ii) By a proper reinterpretation of the parameters, the field equation (43) describes the relation between the concentration of the chemical and the density of bacteria in the Keller-Segel model (39). In that case, the most physical dimension is d = 2 and the boundary conditions are of the form (45). Furthermore, the relevant ensemble is the CE since the KS model has a canonical structure. This model has been studied by applied mathematicians, starting with Jäger & Luckhaus [11], but they have not performed the type of study that we are developing in this paper.
In view of these different applications, we shall study this model in the microcanonical and canonical ensembles in any dimension of space.
B. The modified Emden equation
In the modified Newtonian model, the statistical equilibrium state is given by the Boltzmann distribution (44) coupled to the modified Poisson equation (43). We look for spherically symmetric solutions because, for non rotating systems, entropy maxima (or minima of free energy) are spherically symmetric. Introducing the central density $\rho_0 = \rho(0)$, the central potential $\Phi_0 = \Phi(0)$, the new field $\psi = \beta m(\Phi - \Phi_0)$ and the scaled distance $\xi = (S_d G\beta m\rho_0)^{1/2}\, r$, the Boltzmann distribution (44) can be rewritten
$$\rho = \rho_0\, e^{-\psi(\xi)}. \tag{46}$$
Substituting this relation in the modified Poisson equation (43), we obtain the modified Emden equation
$$\frac{1}{\xi^{d-1}}\frac{d}{d\xi}\left(\xi^{d-1}\frac{d\psi}{d\xi}\right) = e^{-\psi} - \lambda, \tag{47}$$
where $\lambda = \overline{\rho}/\rho_0$ plays the role of the inverse central density. Since $\Phi'(0) = 0$ for a spherically symmetric system, the boundary conditions at the origin are
$$\psi(0) = \psi'(0) = 0. \tag{48}$$
The ordinary Emden equation [53] is recovered for λ = 0, i.e. for very large central densities with respect to the average density. The function $e^{-\psi(\xi)}$ is plotted in Figs. 1 and 2 for different values of λ and different dimensions of space d. It presents an infinity of oscillations. For d = 1, the oscillations are undamped and their period is given by Eq. (E11). For d ≥ 2, the oscillations are damped and the function ψ(ξ) tends to the asymptotic value $-\ln\lambda$ for $\xi \to +\infty$. We assume that the system is enclosed in a spherical box of radius R. The normalized box radius $\alpha = (S_d G\beta m\rho_0)^{1/2} R$ is determined by the boundary condition $\Phi'(R) = 0$, which becomes
$$\psi'(\alpha) = 0. \tag{49}$$
For a given value of λ, we need to integrate the modified Emden equation (47)-(48) until the point ξ = α such that ψ′(α) = 0. Since the function ψ(ξ) presents an infinite number of oscillations, this determines an infinity of solutions $\alpha_1(\lambda), \alpha_2(\lambda), \ldots$ that will correspond to different branches in the following diagrams (a numerical sketch of this procedure is given below). Once $\alpha_n(\lambda)$ is determined, the density profile is given by Eq. (46). The density profile is extremum at the center and at the boundary. On the n-th branch, the density profile shows n "clusters" corresponding to the oscillations of $e^{-\psi(\xi)}$. Close to the origin, the density increases for λ > 1 while it decreases for λ < 1. The homogeneous state ψ = 0 corresponds to λ = 1. This solution is degenerate because the boundary condition (49) is satisfied for any α.
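A minimal numerical sketch of this procedure (the tolerances and cutoffs are our assumptions, not values from the paper): integrate Eq. (47) outwards with scipy and collect the successive zeros $\alpha_n(\lambda)$ of ψ′, which realize the boundary condition (49).

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def branch_alphas(lmbda, d=1, xi_max=60.0, n_branches=3):
    """Integrate the modified Emden equation (47) with psi(0) = psi'(0) = 0
    and return the first zeros alpha_n of psi' (boundary condition (49))."""
    def rhs(xi, y):
        psi, dpsi = y
        return [dpsi, -(d - 1) / xi * dpsi + np.exp(-psi) - lmbda]
    sol = solve_ivp(rhs, [1e-8, xi_max], [0.0, 0.0],
                    dense_output=True, rtol=1e-10, atol=1e-12)
    grid = np.linspace(1e-6, xi_max, 100000)
    dpsi = sol.sol(grid)[1]
    idx = np.where(np.sign(dpsi[:-1]) * np.sign(dpsi[1:]) < 0)[0][:n_branches]
    return [brentq(lambda z: sol.sol(z)[1], grid[i], grid[i + 1]) for i in idx]

# Example: lambda = 0.5, d = 1 (lambda = 1, the homogeneous state, is degenerate)
print(branch_alphas(0.5, d=1))
```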
Remark: When λ → 0, corresponding to large values of the central density, we expect to obtain results similar to those obtained for the usual Newtonian model since the differential equation (47) reduces to the ordinary Emden equation. However, the results are different because the boundary conditions are not the same. In the Newtonian model, the force at the boundary is non zero (for a spherically symmetric system, according to the Gauss theorem, we have $\Phi'(R) = GM/R^{d-1}$), while in the modified Newtonian model the force at the boundary is zero ($\Phi'(R) = 0$). Therefore, strictly speaking, the Newtonian and the modified Newtonian models behave differently even when $\rho_0 \to +\infty$. Nevertheless, for large central concentrations, the Newtonian solution provides a good approximation of the modified Newtonian solution in the core (see Appendix E).
C. The temperature
We must now relate the normalized central density 1/λ to the temperature T .
Recalling that $\overline{\rho} = M/V$ with $V = \frac{1}{d}S_d R^d$, we obtain
$$\lambda = \frac{\overline{\rho}}{\rho_0} = \frac{dM}{S_d R^d}\frac{1}{\rho_0} = d\,\frac{GMm\beta}{R^{d-2}}\frac{1}{\alpha^2}. \tag{50}$$
Introducing the normalized temperature
$$\eta \equiv \frac{\beta GMm}{R^{d-2}}, \tag{51}$$
we find the relation
$$\eta = \frac{1}{d}\lambda\alpha^2. \tag{52}$$
Recalling that $\alpha = \alpha_n(\lambda)$ for the n-th branch, this equation gives the relation between the inverse temperature η and the central density 1/λ for the n-th branch. In Figs. 3, 4 and 5, we plot the inverse temperature η as a function of the central density 1/λ for the first three branches n = 1, 2, 3 in different dimensions of space d = 1, 2, 3. Let us discuss the asymptotic behaviors of the temperature (we only describe the first branch n = 1) and compare with the Newtonian model (see, e.g., [29]):
• In d = 1: for the ordinary Newtonian model, the series of equilibria is parameterized by α, which is a measure of the central density. When α → +∞, the distribution tends to a Dirac peak ρ = Mδ(x) and the inverse temperature η → +∞. When α → 0, the distribution is homogeneous and the inverse temperature η → 0. For the modified Newtonian model, the series of equilibria is parameterized by the central density 1/λ. When 1/λ → +∞, the distribution tends to a Dirac peak ρ = Mδ(x) and η → +∞ with the same asymptotic behavior as in the Newtonian model (see Appendix E). When λ = 1, the distribution is homogeneous and $\eta = \eta_c^* = \pi^2 \simeq 9.8696044$ (see Appendix F). When 1/λ → 0, the distribution tends to a Dirac peak $\rho = \frac{M}{2}\left[\delta(x-R) + \delta(x+R)\right]$ concentrated at the box and η → +∞.
• In d = 2: for the ordinary Newtonian model, the series of equilibria is parameterized by α. When α → +∞, the distribution tends to a Dirac peak ρ = Mδ(r) and the inverse temperature tends to $\eta_c = 4$. When α → 0, the distribution is homogeneous and the inverse temperature η → 0. For the modified Newtonian model, the series of equilibria is parameterized by 1/λ. When 1/λ → +∞, the distribution tends to a Dirac peak ρ = Mδ(r) and $\eta \to \eta_c = 4$ (since the density is very much concentrated, the boundary conditions do not matter and we recover the same results as in the Newtonian case). When λ = 1, the distribution is homogeneous and $\eta = \eta_c^* = \frac{1}{2}j_{11}^2 \simeq 7.3410008$ (see Appendix F). When 1/λ → 0, the distribution is concentrated at the boundary and η → +∞.
• In d = 3: for the ordinary Newtonian model, the series of equilibria is parameterized by α. When α → +∞, the distribution tends to the singular isothermal sphere $\rho_s(r) = 1/(2\pi G\beta m r^2)$ and the inverse temperature $\eta \to \eta_s = 2$. The curve η(α) displays damped oscillations around this value. When α → 0, the distribution is homogeneous and the inverse temperature η → 0. For the modified Newtonian model, the series of equilibria is parameterized by 1/λ. When 1/λ → +∞, the distribution is concentrated at the center and we numerically find that η → 3.05... (the value is different from the Newtonian result $\eta_s = 2$ due to different boundary conditions). The curve η(λ) displays damped oscillations around this value. When λ = 1, the distribution is homogeneous and $\eta = \eta_c^* = \frac{1}{3}x_1^2 \simeq 6.7302445$ (see Appendix F). When 1/λ → 0, the distribution is concentrated at the boundary and we numerically find that η → +∞. (The critical values $\eta_c^*$ quoted in these three cases are reproduced numerically in the sketch after this list.)
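The critical values $\eta_c^*$ quoted above can be checked in a few lines. The sketch below assumes, consistently with the quoted numbers, that $j_{11}$ is the first zero of the Bessel function $J_1$ and that $x_1$ is the first nonzero root of tan x = x (the first zero of the spherical Bessel function $j_1$); the derivation itself is in Appendix F.

```python
import numpy as np
from scipy.special import jn_zeros
from scipy.optimize import brentq

print(np.pi ** 2)                 # d = 1: eta_c* = pi^2 = 9.8696044...

j11 = jn_zeros(1, 1)[0]           # first zero of J_1 = 3.8317...
print(j11 ** 2 / 2)               # d = 2: eta_c* = j_11^2 / 2 = 7.3410008...

x1 = brentq(lambda z: np.tan(z) - z, np.pi / 2 + 0.1, 3 * np.pi / 2 - 0.1)
print(x1 ** 2 / 3)                # d = 3: eta_c* = x_1^2 / 3 = 6.7302445...
```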
D. The energy
We must also relate the normalized central density 1/λ to the energy E. The total energy is given by (see Appendix C):
$$E = \int f\frac{v^2}{2}\, d\mathbf{r}\, d\mathbf{v} + \frac{1}{2}\int \left(\rho - \overline{\rho}\right)\Phi\, d\mathbf{r}. \tag{53}$$
Using the Maxwell-Boltzmann distribution (10), the kinetic energy is simply
$$K = \frac{d}{2}N k_B T. \tag{54}$$
Using the modified Poisson equation (43) and an integration by parts, the potential energy can be written
$$W = -\frac{1}{2S_d G}\int (\nabla\Phi)^2\, d\mathbf{r}. \tag{55}$$
The total energy E = K + W is therefore given by
$$E = \frac{d}{2}N k_B T - \frac{1}{2S_d G}\int (\nabla\Phi)^2\, d\mathbf{r}. \tag{56}$$
Introducing the dimensionless variables defined previously, recalling that r = ξR/α, and introducing the normalized energy
$$\Lambda \equiv -\frac{E R^{d-2}}{GM^2}, \tag{57}$$
we obtain
$$\Lambda = -\frac{d}{2\eta} + \frac{1}{2\eta^2}\frac{1}{\alpha^{d-2}}\int_0^\alpha \left(\frac{d\psi}{d\xi}\right)^2 \xi^{d-1}\, d\xi. \tag{58}$$
In Figs. 6, 7 and 8, we plot the normalized energy Λ as a function of the central density 1/λ for the first three branches n = 1, 2, 3 in different dimensions of space d = 1, 2, 3.
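Continuing the numerical sketch of Sec. III B (again our illustration, not the authors' code), each branch point $(\lambda, \alpha_n(\lambda))$ is mapped to a point $(\eta, \Lambda)$ of the series of equilibria by Eq. (52) and a quadrature of Eq. (58):

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

def eta_Lambda(lmbda, alpha, d=1):
    """eta from Eq. (52) and Lambda from Eq. (58) at a branch point."""
    def rhs(xi, y):
        psi, dpsi = y
        return [dpsi, -(d - 1) / xi * dpsi + np.exp(-psi) - lmbda]
    xi = np.linspace(1e-8, alpha, 20001)
    sol = solve_ivp(rhs, [xi[0], xi[-1]], [0.0, 0.0], t_eval=xi,
                    rtol=1e-10, atol=1e-12)
    psi, dpsi = sol.y
    eta = lmbda * alpha ** 2 / d                                  # Eq. (52)
    Lam = (-d / (2 * eta)
           + trapezoid(dpsi ** 2 * xi ** (d - 1), xi)
           / (2 * eta ** 2 * alpha ** (d - 2)))                   # Eq. (58)
    return eta, Lam
```

Sweeping λ along a branch and plotting Λ against η reproduces curves of the type shown in Figs. 3-8.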
Let us consider the asymptotic behaviors of the energy (we only describe the first branch n = 1) and compare with the Newtonian model (see, e.g., [29]):
• In d = 1: for the ordinary Newtonian model, the series of equilibria is parameterized by α, which is a measure of the central density. When α → +∞, the distribution tends to a Dirac peak ρ = Mδ(x) and the energy Λ → 0. When α → 0, the distribution is homogeneous and the energy Λ → −∞. For the modified Newtonian model, the series of equilibria is parameterized by the central density 1/λ. When 1/λ → +∞, the distribution tends to a Dirac peak ρ = Mδ(x) and $\Lambda \to \Lambda_{\max}^{(1)} = 1/6$ (see Appendix D). When λ = 1 the distribution is homogeneous and $\Lambda = \Lambda_c^* = -1/(2\eta_c^*) \simeq -0.0506606$. When 1/λ → 0, the distribution tends to a Dirac peak $\rho = \frac{M}{2}\left[\delta(x-R) + \delta(x+R)\right]$ concentrated at the box and Λ tends to a finite maximum value.

• In d = 2: for the ordinary Newtonian model, the series of equilibria is parameterized by α. When α → +∞, the distribution tends to a Dirac peak ρ = Mδ(r) and the energy Λ → +∞. When α → 0, the distribution is homogeneous and the energy Λ → −∞. For the modified Newtonian model, the series of equilibria is parameterized by 1/λ. When 1/λ → +∞, the distribution tends to a Dirac peak ρ = Mδ(r) and Λ → +∞. When λ = 1 the distribution is homogeneous and $\Lambda = \Lambda_c^* = -1/\eta_c^* \simeq -0.13622121$. When 1/λ → 0, the distribution is concentrated at the boundary and we numerically find that Λ → 0.1.
• In d = 3: for the ordinary Newtonian model, the series of equilibria is parameterized by α. When α → +∞, the distribution tends to the singular isothermal sphere $\rho_s(r) = 1/(2\pi G\beta m r^2)$ with energy $\Lambda_s = 1/4$. The curve Λ(α) undergoes damped oscillations around this value. When α → 0, the distribution is homogeneous and the energy Λ → −∞. For the modified Newtonian model, the series of equilibria is parameterized by 1/λ. When 1/λ → +∞, the distribution is concentrated at the center and we numerically find that Λ → −0.38... (the value is different from the Newtonian result $\Lambda_s = 1/4$ due to different boundary conditions). The curve Λ(λ) undergoes damped oscillations around this value. When λ = 1 the distribution is homogeneous and $\Lambda = \Lambda_c^* = -3/(2\eta_c^*) \simeq -0.22287452$. When 1/λ → 0, the distribution is concentrated at the boundary and we numerically find that Λ → 0.05.
E. The entropy and the free energy
Finally, we relate the central density 1/λ to the entropy S and to the free energy F. Using Eqs. (4), (10) and (11), the entropy is given by
$$S = \frac{d}{2}N k_B\ln T - k_B\int \frac{\rho}{m}\ln\frac{\rho}{m}\, d\mathbf{r}. \tag{59}$$
Substituting Eq. (46) in Eq. (59), and introducing the dimensionless variables defined previously, we get
$$\frac{S}{N k_B} = -\frac{d}{2}\ln\beta - \ln\rho_0 + \frac{\rho_0}{Nm}\,S_d\left(\frac{R}{\alpha}\right)^{d}\int_0^\alpha \psi\, e^{-\psi}\,\xi^{d-1}\, d\xi, \tag{60}$$
up to some unimportant constants. Using $\alpha = (S_d G\beta m\rho_0)^{1/2}R$ to express $\rho_0$ in terms of α and introducing the normalized temperature (51), we finally obtain
$$\frac{S}{N k_B} = -\frac{d-2}{2}\ln\eta - 2\ln\alpha + \frac{1}{\eta}\frac{1}{\alpha^{d-2}}\int_0^\alpha \psi\, e^{-\psi}\,\xi^{d-1}\, d\xi, \tag{61}$$
up to some unimportant constants. Using the previous results, this expression relates the entropy $S/Nk_B$ to the central density 1/λ. The free energy is $F = E - TS$. In the following, it will be more convenient to work in terms of the Massieu function $J = S - k_B\beta E$ (by an abuse of language, we shall often refer to J as the free energy). We have
$$\frac{J}{N k_B} = \frac{S}{N k_B} + \eta\Lambda. \tag{62}$$
Using the previous results, this expression relates the free energy $J/Nk_B$ to the central density 1/λ. In Figs. 9, 10 and 11, we have plotted the free energy $J/Nk_B$ as a function of the inverse temperature η (parameterized by the central density 1/λ) in d = 1, 2, 3. In Figs. 12, 13, 14 and 15, we have plotted the entropy $S/Nk_B$ as a function of the energy Λ (parameterized by the central density 1/λ) in d = 1, 2, 3. In these figures, the solid lines without label refer to the homogeneous phase. The solid lines with label n = 1 refer to the first inhomogeneous branch. The dashed lines with label n = 2 refer to the second inhomogeneous branch. These curves will be helpful in the next section to analyze the phase transitions in the canonical and microcanonical ensembles respectively.
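The same solution also yields the entropy and the Massieu function; a sketch (same assumptions as the previous ones) evaluating Eqs. (61) and (62), up to the unimportant additive constants:

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

def entropy_massieu(lmbda, alpha, d=1):
    """S/(N k_B) from Eq. (61) and J/(N k_B) from Eq. (62), up to constants."""
    def rhs(xi, y):
        psi, dpsi = y
        return [dpsi, -(d - 1) / xi * dpsi + np.exp(-psi) - lmbda]
    xi = np.linspace(1e-8, alpha, 20001)
    sol = solve_ivp(rhs, [xi[0], xi[-1]], [0.0, 0.0], t_eval=xi,
                    rtol=1e-10, atol=1e-12)
    psi, dpsi = sol.y
    eta = lmbda * alpha ** 2 / d                                   # Eq. (52)
    Lam = (-d / (2 * eta)
           + trapezoid(dpsi ** 2 * xi ** (d - 1), xi)
           / (2 * eta ** 2 * alpha ** (d - 2)))                    # Eq. (58)
    S = (-(d - 2) / 2 * np.log(eta) - 2 * np.log(alpha)
         + trapezoid(psi * np.exp(-psi) * xi ** (d - 1), xi)
         / (eta * alpha ** (d - 2)))                               # Eq. (61)
    return S, S + eta * Lam                                        # Eq. (62)
```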
Remark: Since $\delta S = k_B\beta\,\delta E$, the extrema of entropy S(λ) and energy E(λ) coincide. Since the series of equilibria E(λ) exhibits damped oscillations for 1/λ → +∞ in d = 3 (see Fig. 8), this implies that the curve S(λ) will also exhibit damped oscillations at the same locations. Correspondingly, S(E) will present some "spikes" for 1/λ → +∞ in d = 3 (see inset of Fig. 15). Similarly, since $\delta J = -E k_B\,\delta\beta$, the extrema of free energy J(λ) and temperature β(λ) coincide. Since the series of equilibria β(λ) undergoes damped oscillations for 1/λ → +∞ in d = 3 (see Fig. 5), this implies that the curve J(λ) will also exhibit damped oscillations at the same location, and that the curve J(β) will present some "spikes" for 1/λ → +∞ in d = 3 (see inset of Fig. 11). In addition, the curve J(β) presents a minimum for η ≃ 24.7 corresponding to E = 0. Similar behaviors were previously observed in the model of self-gravitating fermions [28,31].
[Figure: entropy s versus normalized energy Λ along the series of equilibria, with the homogeneous solution λ = 1 and the limits λ → 0 and λ → ∞ indicated.]
F. Caloric curves and phase transitions
We shall now determine the caloric curve β(E) corresponding to the modified Newtonian model. First of all, we note that for the homogeneous phase, the potential energy W = 0 so that the energy reduces to the kinetic energy. Therefore, the series of equilibria of the homogeneous phase is simply
$$\eta = -\frac{d}{2\Lambda}. \tag{63}$$
On the other hand, eliminating λ between η n (λ) and Λ n (λ) given by Eqs. (52) and (58), we get the series of equilibria η n (Λ) for the n-th inhomogeneous branch. The series of equilibria contain all the critical points of the optimization problems (3) and (17). The series of equilibria are the same in the canonical and microcanonical ensembles because the critical points are the same. They contain fully stable states (global maxima of S or J), metastable states (local maxima of S or J) and unstable states (saddle points of S or J). The stable parts of the series of equilibria form the caloric curves in the canonical and microcanonical ensembles. We shall distinguish the strict caloric curves formed by fully stable states and the physical caloric curves containing fully stable and metastable states [66]. Metastable states are important because they can be long-lived in systems with long-range interaction [55,56]. The caloric curves may differ in CE and MCE in case of ensembles inequivalence. They are described below in different dimensions of space. Remark: In order to determine the stable branch, we shall compare the entropy (in MCE) or the free energy (in CE) of the different solutions in competition (with the same values of energy or temperature). However, this is not sufficient because a distribution could have a high entropy and be an unstable saddle point. A more rigorous study should therefore investigate the sign of the second order variations of entropy or free energy for each critical point. But this is a difficult task that is left for future works. In order to find the stable states, we shall use physical considerations and exploit results obtained in related studies.
1. The dimension d = 1
In Fig. 16 we plot the series of equilibria in d = 1. Let us first describe the canonical ensemble (CE). The control parameter is the inverse temperature η and the stable states are maxima of free energy J at fixed mass M. The homogeneous phase exists for any value of η. It is fully stable for $\eta < \eta_c^*$ and unstable for $\eta > \eta_c^*$ (see Sec. V). The first branch n = 1 of inhomogeneous states exists only for $\eta > \eta_c^*$. It has a higher free energy J than the homogeneous phase (see Fig. 9) and it is fully stable. Secondary branches of inhomogeneous states appear for smaller values of the temperature but they have smaller values of free energy J (see Fig. 9) and they are unstable (saddle points of free energy). Therefore, the canonical caloric curve displays a second order phase transition between homogeneous and inhomogeneous states marked by the discontinuity of $\partial E/\partial\beta$ at $\beta = \beta_c^*$. For the inhomogeneous states, there exist two solutions with the same temperature and the same free energy but with different density profiles corresponding to $\lambda_1 < 1$ and $\lambda_2 > 1$ (see Fig. 17). Thus, the inhomogeneous branch is degenerate. These two states can be distinguished by their central density 1/λ. In conclusion: (i) for $\eta < \eta_c^*$, there is only one stable state λ = 1 (homogeneous); (ii) for $\eta > \eta_c^*$, there are two stable states $\lambda_1 < 1$ and $\lambda_2 > 1$ (inhomogeneous) with the same free energy and one unstable state λ = 1 (homogeneous). Therefore, the central density 1/λ plays the role of an order parameter (see Fig. 3). In d = 1, there exists a fully stable equilibrium state for any temperature. This is consistent with the usual Newtonian model in d = 1 [20,29]. This is also consistent with results of chemotaxis since it has been rigorously proven that the Keller-Segel model does not blow up in d = 1 [8].
Let us now describe the microcanonical ensemble (MCE). The control parameter is the energy Λ and the stable states are maxima of entropy S at fixed mass M and energy E. The homogeneous phase exists for any value of energy Λ < 0. It is fully stable for $\Lambda < \Lambda_c^*$ and unstable for $\Lambda > \Lambda_c^*$ (see Sec. V). The first branch n = 1 of inhomogeneous states exists only for $\Lambda_c^* < \Lambda < \Lambda_{\max}$. It has a higher entropy S than the homogeneous phase (see Fig. 12) and it is fully stable. Secondary branches of inhomogeneous states appear for smaller values of the energy but they have smaller values of entropy S (see Fig. 12) and they are unstable (saddle points of entropy). Therefore, the microcanonical caloric curve displays a second order phase transition marked by the discontinuity of $\partial\beta/\partial E$ at $E = E_c^*$. For the inhomogeneous states, there exist two solutions with the same energy and the same entropy but with different density profiles corresponding to $\lambda_1 < 1$ and $\lambda_2 > 1$. Thus, the inhomogeneous branch is degenerate. These two states can be distinguished by their central density 1/λ. In conclusion: (i) for $\Lambda < \Lambda_c^*$, there is only one stable state λ = 1 (homogeneous); (ii) for $\Lambda_c^* < \Lambda < 0$, there are two stable states $\lambda_1 < 1$ and $\lambda_2 > 1$ (inhomogeneous) with the same entropy and one unstable state λ = 1 (homogeneous); (iii) for $0 < \Lambda < \Lambda_{\max}$, there are two stable states $\lambda_1 < 1$ and $\lambda_2 > 1$ (inhomogeneous) with the same entropy. Therefore, the central density 1/λ plays the role of an order parameter (see Fig. 6). In d = 1, there exists a fully stable equilibrium state for any accessible energy. This is consistent with the usual Newtonian model in d = 1 [20,29].
The caloric curve, corresponding to the fully stable states in the series of equilibria, is denoted by (S) in Fig. 16. The branch (U) corresponds to unstable states. There exists a fully stable equilibrium state for any accessible value of energy in MCE and temperature in CE. The microcanonical and canonical ensembles are equivalent (like in the Newtonian case).
In conclusion, the system displays second order phase transitions in CE and MCE. This is similar to the HMF model [32-34].
2. The dimension d = 2
In Fig. 18 we plot the series of equilibria in d = 2. Let us first describe the canonical ensemble (CE). The control parameter is the inverse temperature η. The homogeneous phase exists for any η. It is stable for $\eta < \eta_c^*$ and unstable for $\eta > \eta_c^*$ (see Sec. V). The first branch n = 1 of inhomogeneous states exists for $\eta > \eta_c = 4$ and it connects the homogeneous branch at $\eta_c^*$. For $\eta < \eta_c^*$, it has a lower free energy J than the homogeneous phase (see Fig. 10) and it is unstable. For $\eta > \eta_c^*$, it has a higher free energy J than the homogeneous phase (see Fig. 10). However, it is expected to be unstable or, possibly, metastable (to settle this issue we have to study the sign of the second order variations of free energy, as explained above). Secondary inhomogeneous branches appear for smaller values of the temperature but they have smaller values of the free energy (see Fig. 10) and they are unstable. The homogeneous branch is expected to be fully stable for $\eta < \eta_c = 4$ and metastable for $\eta_c = 4 < \eta < \eta_c^*$ (see Fig. 19). These conclusions are motivated by two arguments: (i) in the Newtonian model in d = 2, we know that there exists a fully stable equilibrium state for $\eta < \eta_c = 4$ and no equilibrium state for $\eta > \eta_c = 4$. In that case, the system undergoes an isothermal collapse [19,29]. For $\eta > \eta_c = 4$, there is no global maximum of free energy J because we can make it diverge by creating a Dirac peak containing all the particles. In the modified Newtonian model, the same argument applies since it is independent of boundary conditions. Since we know that the homogeneous branch is stable for $\eta < \eta_c^*$, we conclude that it must be metastable in the range $\eta_c = 4 < \eta < \eta_c^*$. There is therefore a zeroth order phase transition at $\eta_c = 4$ marked by the discontinuity of the free energy. (ii) In the chemotactic literature, it has been rigorously established that the Keller-Segel model in d = 2 does not blow up for $\eta < \eta_c = 4$ while it can blow up for $\eta > \eta_c = 4$ [8]. This is consistent with our stability results.
Let us now describe the microcanonical ensemble (MCE). The control parameter is the energy Λ. The homogeneous phase exists for all values of Λ < 0. It is stable for $\Lambda < \Lambda_c^*$ and unstable for $\Lambda > \Lambda_c^*$ (see Sec. V). The first branch n = 1 of inhomogeneous states exists for $\Lambda > \Lambda^*$ and it connects the homogeneous branch at $\Lambda_c^*$. We see that the inhomogeneous branch β(E) is multi-valued. Considering the value of the entropy in the different phases (see Figs. 13 and 14), the caloric curve is expected to display a microcanonical first order phase transition at $\Lambda = \Lambda_t \simeq -0.146$ marked by the discontinuity of the temperature (see Fig. 20).

[Figure caption: The first inhomogeneous branch n = 1 tends to a plateau $\eta_c = 4$ for large central densities 1/λ → +∞ due to the formation of a Dirac peak. This is similar to the plateau appearing in the caloric curve of the classical self-gravitating gas [29]. The homogeneous branch is fully stable for $\eta < \eta_c = 4$, metastable for $\eta_c < \eta < \eta_c^*$ and unstable for $\eta > \eta_c^*$. The inhomogeneous branch is always unstable (or, possibly, metastable for $\eta > \eta_c^*$). For sufficiently low temperatures, the system can experience an isothermal collapse.]

The energy of transition has been determined by comparing the entropy of the homogeneous and inhomogeneous phases and looking at which point the curves S(E) intersect (see Fig. 14). Equivalently, it can be obtained by performing a vertical Maxwell construction [31]. The homogeneous phase is fully stable for $\Lambda < \Lambda_t$, metastable for $\Lambda_t < \Lambda < \Lambda_c^*$ and unstable for $\Lambda > \Lambda_c^*$. The lower part of the first inhomogeneous branch is fully stable for $\Lambda > \Lambda_t$ and metastable for $\Lambda^* < \Lambda < \Lambda_t$. The upper part of the first inhomogeneous branch is unstable for $\Lambda^* < \Lambda < \Lambda_c^*$. For $\Lambda > \Lambda_c^*$, it is unstable or, possibly, metastable. Secondary inhomogeneous branches appear for smaller values of the energy but they have smaller values of the entropy (see Fig. 13) and they are unstable. The stable states of the inhomogeneous branch have 1/λ > 1, indicating that the density is concentrated at the center. The possibly metastable states for $\Lambda > \Lambda_c^*$ have 1/λ < 1, indicating that the density is concentrated near the box. In conclusion, there exists a fully stable equilibrium state for any value of energy. This is similar to the Newtonian model in d = 2 [24,29]. However, in the modified Newtonian model, we expect a first order phase transition at $\Lambda_t$ that is not present in the Newtonian model.
The strict caloric curve, corresponding to the fully stable states (global maxima) in the series of equilibria, is denoted (S) in Figs. 19 and 20. The unstable states (saddle points) are denoted (U) and the metastable states (local maxima) are denoted (M). There exists a fully stable equilibrium state for any accessible value of energy in MCE and for sufficiently high values of the temperature in CE ($\eta < \eta_c = 4$). Here, the microcanonical and canonical ensembles are inequivalent (unlike in the Newtonian case). In particular, the lower part of the first inhomogeneous branch is stable in MCE while it is unstable in CE. This branch has negative specific heats C < 0 (see Fig. 20), which is not possible in the canonical ensemble.
In conclusion, the system displays a zeroth order phase transition in CE (associated with an isothermal collapse) and a first order phase transition in MCE. Note also that the energy E(β) and its first derivative E′(β) are continuous at the critical point $\beta_c^*$ but its second derivative E′′(β) is discontinuous. Provided that the inhomogeneous branch for $\eta > \eta_c^*$ is metastable, this would correspond to a third order canonical phase transition between a homogeneous metastable state and an inhomogeneous metastable state.
The dimension d = 3
In Fig. 21 we plot the series of equilibria in d = 3.
Let us first describe the canonical ensemble (CE). The control parameter is η. The homogeneous phase exists for all η. It is stable for $\eta < \eta_c^*$ and unstable for $\eta > \eta_c^*$ (see Sec. V). The first branch n = 1 of inhomogeneous states exists for η > 2.64 and it connects the homogeneous branch at $\eta = \eta_c^*$. For large central densities 1/λ, it forms a spiral towards a singular solution. For $\eta < \eta_c^*$, it has a lower free energy J than the homogeneous phase (see Fig. 11) and it is unstable. For $\eta > \eta_c^*$, it has a higher free energy J than the homogeneous phase (see Fig. 11). However, it is expected to be unstable or, possibly, metastable. Secondary inhomogeneous branches appear for smaller values of the temperature but they have a higher value of free energy J and they are unstable. The homogeneous branch is metastable for $\eta < \eta_c^*$. These conclusions are motivated by two arguments: (i) in the Newtonian model in d = 3, we know that there is no fully stable equilibrium state in CE. The system can undergo an isothermal collapse [29]. There is no global maximum of free energy J because we can make it diverge by creating a Dirac peak containing all the particles [21].
In the modified Newtonian model, the same argument applies since it is independent of boundary conditions. Since we know that the homogeneous branch is stable for $\eta < \eta_c^*$, then it can only be metastable. (ii) In the chemotactic literature, it has been rigorously established that the Keller-Segel model in d = 3 can blow up for any η [8]. This is consistent with our stability results. The inhomogeneous branch forms a spiral for large central densities 1/λ → +∞ due to the damped oscillations of the inverse temperature η(λ) and energy Λ(λ). This is similar to the spiral appearing in the series of equilibria of the classical self-gravitating gas as we approach the singular isothermal sphere [29].
Let us now describe the microcanonical ensemble (MCE). The control parameter is the energy Λ. The homogeneous phase exists for all Λ < 0. It is stable for $\Lambda < \Lambda_c^*$ and unstable for $\Lambda > \Lambda_c^*$ (see Sec. V). The first branch n = 1 of inhomogeneous states exists for Λ > −0.405 and it connects the homogeneous branch at $\Lambda = \Lambda_c^*$. For large central densities 1/λ, it forms a spiral towards a singular solution. For $\Lambda < \Lambda_c^*$, it has a lower entropy S than the homogeneous phase (see Fig. 15) and it is unstable. For $\Lambda > \Lambda_c^*$, it has a higher entropy than the homogeneous phase (see Fig. 15). However, it is expected to be unstable or, possibly, metastable. Secondary inhomogeneous branches appear for smaller values of the energy but they have a lower value of entropy S and they are unstable. The homogeneous branch is metastable for $\Lambda < \Lambda_c^*$. These conclusions are motivated by two arguments: (i) in the Newtonian model in d = 3, we know that there is no fully stable equilibrium state in MCE. The system undergoes a gravothermal catastrophe [13,14]. There is no global maximum of entropy S at fixed mass and energy because we can make it diverge by creating a binary star surrounded by a hot halo [22,31].
In the modified Newtonian model, the same argument applies. Since we know that the homogeneous branch is stable for $\Lambda < \Lambda_c^*$, then it can only be metastable. There is no strict caloric curve since there are no fully stable states (global maxima). But there is a physical caloric curve made of metastable states (local maxima), denoted (M) in Fig. 21. The unstable states (saddle points) are denoted (U). Here, the microcanonical and canonical ensembles, regarding the metastable states, are equivalent, unlike in the Newtonian case. This is because the homogeneous branch and the inhomogeneous branch connect each other at a single point at λ = 1 by making a cusp (see inset in Fig. 21), while the Newtonian series of equilibria is smooth and presents two distinct turning points of temperature and energy (denoted CE and MCE in Fig. 8 of [31]) separated by a region of negative specific heats.
In conclusion, if we take metastable states into account, the system displays a zeroth order phase transition in CE and MCE corresponding to a discontinuity of entropy or free energy. They are associated with an isothermal collapse or a gravothermal catastrophe respectively.
Remark: There is no natural external parameter in the modified Newtonian model. However, the dimension of space d could play the role of an effective external parameter. The preceding results predict the existence of a critical dimension $d_c$ between 1 and 2 at which the phase transition passes from second order ($d < d_c$) to first order ($d > d_c$). However, this transition turns out to occur in a very small range of parameters since we find that the critical dimension $d_c$ lies between 1 and d = 1.00001, and the concerned range of energies and temperatures is extremely narrow. We have not investigated this transition in detail since the dimension of space is not a physical (tunable) parameter. Furthermore, in the next model, we have an external parameter µ, played by the screening length, that is more relevant.
IV. THE SCREENED NEWTONIAN MODEL
In this section, we discuss phase transitions that appear in the screened Newtonian model corresponding to an attractive Yukawa potential.
A. Physical motivation of the model
We consider a system of particles interacting via the potential Φ(r, t) that is solution of the screened Poisson equation
$$\Delta\Phi - k_0^2\Phi = S_d G\rho, \tag{64}$$
where $k_0$ is the inverse of the screening length. At statistical equilibrium, the density is given by the Boltzmann distribution
$$\rho = A\, e^{-\beta m\Phi}. \tag{65}$$
We assume that the system is confined in a finite domain (box) and we impose the Neumann boundary conditions
$$\nabla\Phi\cdot\mathbf{n} = 0, \qquad \nabla\rho\cdot\mathbf{n} = 0, \tag{66}$$
where n is a unit vector normal to the boundary of the box (the explicit expression of the potential in d = 1 is given in Appendix B). This model admits spatially homogeneous solutions ($\rho = \rho_0$ and $\Phi = \Phi_0$ with $-k_0^2\Phi_0 = S_d G\rho_0$) at any temperature. It also admits spatially inhomogeneous solutions at sufficiently low temperatures. We shall study this model in arbitrary dimensions of space d with explicit computations for d = 1, 2, 3. This model has different physical applications:
(i) It describes a system of particles interacting via a screened attractive (Newtonian) potential.
(ii) By a proper reinterpretation of the parameters, the field equation (64) describes the relation between the concentration of the chemical and the density of bacteria in the Keller-Segel model (37) where the degradation of the chemical reduces the range of the interaction. In that case, the boundary conditions are of the form (66). Furthermore, the relevant ensemble is the CE since the KS model has a canonical structure. This model has been studied by Childress & Percus [54] in d = 1 using an approach different from the one we are going to develop.
For the sake of generality, we shall study this model in the microcanonical and canonical ensembles in any dimension of space.
B. The screened Emden equation
In the screened Newtonian model, the equilibrium density profile is given by the Boltzmann distribution (65) coupled to the screened Poisson equation (64). As in Sec. III B, we look for spherically symmetric solutions. Introducing the central density $\rho_0 = \rho(0)$, the central potential $\Phi_0 = \Phi(0)$, the new field $\psi = \beta m(\Phi - \Phi_0)$ and the scaled distance $\xi = (S_d G\beta m\rho_0)^{1/2}\, r$, the Boltzmann distribution can be rewritten
$$\rho = \rho_0\, e^{-\psi(\xi)}. \tag{67}$$
Substituting this relation in the screened Poisson equation (64), we obtain the screened Emden equation
$$\frac{1}{\xi^{d-1}}\frac{d}{d\xi}\left(\xi^{d-1}\frac{d\psi}{d\xi}\right) - \kappa^2\psi = e^{-\psi} - \lambda, \tag{68}$$
where $\kappa = k_0/(S_d G\beta m\rho_0)^{1/2}$ and $\lambda = -k_0^2\Phi_0/(S_d G\rho_0)$. The boundary conditions at the origin are
$$\psi(0) = \psi'(0) = 0. \tag{69}$$
The normalized box radius is $\alpha = (S_d G\beta m\rho_0)^{1/2} R$ and the boundary condition $\Phi'(R) = 0$ becomes
$$\psi'(\alpha) = 0. \tag{70}$$
Introducing the normalized screening length
$$\mu = k_0 R, \tag{71}$$
the parameter κ can be rewritten κ = µ/α. For given µ, we solve the problem as follows: (i) we fix α; (ii) κ = µ/α is then given; (iii) we determine λ by an iterative method such that ψ′(α) = 0; (iv) we obtain different solutions $\lambda_n(\alpha)$ determining different branches n = 1, n = 2, etc. (a numerical sketch of this shooting procedure is given below). This procedure determines, for each value of α and for each branch, the normalized density profile $e^{-\psi(\xi)}$. The homogeneous solution corresponds to ψ = 0 and λ = 1. This solution is degenerate because the boundary condition (70) is satisfied for any value of α.
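A sketch of this shooting procedure (ours, not code from the paper; the scan range and tolerances are illustrative and may need adapting to locate a given branch):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def dpsi_at_alpha(lmbda, alpha, mu, d=1):
    """Integrate the screened Emden equation (68) and return psi'(alpha)."""
    kappa = mu / alpha
    def rhs(xi, y):
        psi, dpsi = y
        return [dpsi, -(d - 1) / xi * dpsi + kappa ** 2 * psi
                + np.exp(-psi) - lmbda]
    sol = solve_ivp(rhs, [1e-8, alpha], [0.0, 0.0], rtol=1e-10, atol=1e-12)
    return sol.y[1, -1]

def branch_lambdas(alpha, mu, d=1, lam_lo=0.05, lam_hi=5.0, n_scan=200):
    """Scan lambda for sign changes of psi'(alpha) (condition (70)) and refine
    each root; lambda = 1 (the degenerate homogeneous state) is always a root."""
    grid = np.linspace(lam_lo, lam_hi, n_scan)
    g = np.array([dpsi_at_alpha(lm, alpha, mu, d) for lm in grid])
    idx = np.where(np.sign(g[:-1]) * np.sign(g[1:]) < 0)[0]
    return [brentq(dpsi_at_alpha, grid[i], grid[i + 1], args=(alpha, mu, d))
            for i in idx]

print(branch_lambdas(alpha=6.0, mu=3.0, d=1))
```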
C. The temperature
We must now relate the parameter α to the temperature T. Introducing the dimensionless variables defined previously and recalling that r = Rξ/α, the mass can be written
\[
M = \rho_0 S_d \frac{R^d}{\alpha^d}\int_0^{\alpha} e^{-\psi}\,\xi^{d-1}\, d\xi. \tag{72}
\]
Using α = (S_d Gβmρ₀)^{1/2} R and introducing the dimensionless temperature (51), we obtain
\[
\eta = \frac{1}{\alpha^{d-2}}\int_0^{\alpha} e^{-\psi}\,\xi^{d-1}\, d\xi. \tag{73}
\]
This equation gives the relation between the inverse temperature η and α for the n-th branch. In Figs. 22, 23, 24 and 25, we plot the inverse temperature η as a function of α for the first two branches n = 1, 2 in different dimensions of space d = 1, 2, 3. The discussion is similar to the one given in Sec. III C. We have also represented the branch corresponding to the homogeneous solution. Its equation is given by η = α²/d. The branch n = 1 of inhomogeneous solutions connects the branch of homogeneous solutions at α²_c = dη*_c (see Appendix F).
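Once a profile ψ(ξ) has been tabulated, Eq. (73) is a single quadrature. The helper below is an illustrative sketch (our own naming), using trapezoidal integration:

```python
import numpy as np

def eta_of_alpha(xi, psi, alpha, d=1):
    """Inverse temperature eta from Eq. (73), given a solved profile
    psi(xi) sampled on a grid 0 <= xi <= alpha."""
    return np.trapz(np.exp(-psi) * xi**(d - 1), xi) / alpha**(d - 2)
```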
D. The energy
We must also relate α to the energy E. The total energy is given by
\[
E = \int f\,\frac{v^2}{2}\, d\mathbf{r}\, d\mathbf{v} + \frac{1}{2}\int \rho\Phi\, d\mathbf{r}. \tag{74}
\]
Using the Maxwell-Boltzmann distribution (10), the kinetic energy is simply
\[
K = \frac{d}{2}\, N k_B T. \tag{75}
\]
Using the screened Poisson equation (64) and integrating by parts, the potential energy can be written
\[
W = -\frac{1}{2 S_d G}\int \left[(\nabla\Phi)^2 + k_0^2\Phi^2\right] d\mathbf{r}. \tag{76}
\]
The total energy E = K + W is therefore given by
\[
E = \frac{d}{2}\, N k_B T - \frac{1}{2 S_d G}\int \left[(\nabla\Phi)^2 + k_0^2\Phi^2\right] d\mathbf{r}. \tag{77}
\]
Introducing the dimensionless variables defined previously, recalling that r = ξR/α and µ = k₀R, and introducing the normalized energy (57), we obtain
\[
\Lambda = -\frac{d}{2\eta} + \frac{1}{2\eta^2}\,\frac{1}{\alpha^{d-2}}\int_0^{\alpha}\left(\frac{d\psi}{d\xi}\right)^2 \xi^{d-1}\, d\xi + \frac{\mu^2}{2\eta^2}\,\frac{1}{\alpha^{d}}\int_0^{\alpha}\left(\psi + \beta m\Phi_0\right)^2 \xi^{d-1}\, d\xi. \tag{78}
\]
Using the expressions of κ and λ following Eq. (68), we find that
\[
\beta m\Phi_0 = -\frac{\lambda}{\kappa^2}, \tag{79}
\]
so that, finally,
\[
\Lambda = -\frac{d}{2\eta} + \frac{1}{2\eta^2}\,\frac{1}{\alpha^{d-2}}\int_0^{\alpha}\left(\frac{d\psi}{d\xi}\right)^2 \xi^{d-1}\, d\xi + \frac{\mu^2}{2\eta^2}\,\frac{1}{\alpha^{d}}\int_0^{\alpha}\left(\psi - \frac{\lambda\alpha^2}{\mu^2}\right)^2 \xi^{d-1}\, d\xi. \tag{80}
\]
This equation gives the relation between the energy Λ and α for the n-th branch. In Figs. 26, 27, 29 and 30, we plot the energy Λ as a function of α for the first two branches n = 1 and n = 2 in different dimensions of space d = 1, 2, 3. The discussion is similar to the one given in Sec. III D. We have also represented the branch corresponding to the homogeneous solution. Using Eq. (87) and η = α 2 /d, its equation is given by
Λ = −d²/(2α²) + d/(2µ²).
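Like Eq. (73), Eq. (80) reduces to quadratures once a profile is known. The sketch below is an illustration under our own naming conventions (dpsi is the tabulated derivative dψ/dξ on the same grid), not code from the paper:

```python
import numpy as np

def Lambda_of_alpha(xi, psi, dpsi, alpha, eta, lam, mu, d=1):
    """Normalized energy Lambda from Eq. (80) by trapezoidal quadrature."""
    grad = np.trapz(dpsi**2 * xi**(d - 1), xi) / alpha**(d - 2)
    pot = np.trapz((psi - lam * alpha**2 / mu**2)**2 * xi**(d - 1), xi) / alpha**d
    return -d / (2 * eta) + (grad + mu**2 * pot) / (2 * eta**2)
```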
E. The entropy and the free energy
Finally, we relate α to the entropy S and to the free energy F. The entropy is given by
\[
S = \frac{d}{2}\, N k_B \ln T - k_B \int \frac{\rho}{m}\ln\frac{\rho}{m}\, d\mathbf{r}. \tag{81}
\]
We can proceed exactly as in Sec. III E and obtain
\[
\frac{S}{N k_B} = -\frac{d-2}{2}\ln\eta - 2\ln\alpha + \frac{1}{\eta}\,\frac{1}{\alpha^{d-2}}\int_0^{\alpha}\psi\, e^{-\psi}\,\xi^{d-1}\, d\xi, \tag{82}
\]
up to unimportant constants. However, we can also obtain a simpler expression. Substituting ρ = ρ 0 e −βm(Φ−Φ0) in Eq. (81), we obtain
\[
S = \frac{d}{2}\, N k_B \ln T - k_B \int \frac{\rho}{m}\ln\frac{\rho_0}{m}\, d\mathbf{r} + k_B\beta \int \rho(\Phi - \Phi_0)\, d\mathbf{r}. \tag{83}
\]
This can be rewritten
\[
\frac{S}{N k_B} = -\frac{d}{2}\ln\beta - \ln\rho_0 + \frac{2\beta E}{N} - \beta m\Phi_0, \tag{84}
\]
up to unimportant constants. Finally, using Eqs. (79), (51), (57) and the relations κ = µ/α and α = (S d Gβmρ 0 ) 1/2 R, we obtain
\[
\frac{S}{N k_B} = -\frac{d-2}{2}\ln\eta - 2\ln\alpha - 2\Lambda\eta + \frac{\lambda\alpha^2}{\mu^2}, \tag{85}
\]
which does not involve new integrals. Using the previous results, this expression relates the entropy S/N k B to α. The free energy is F = E − T S. In the following, it will be more convenient to work in terms of the Massieu function J = S − k B βE (by an abuse of language, we shall often refer to J as the free energy). We have
\[
\frac{J}{N k_B} = \frac{S}{N k_B} + \eta\Lambda. \tag{86}
\]
Using the previous results, this expression relates the free energy J/N k_B to α. In Figs. 31, 32, 33 and 34, we have plotted the free energy J/N k_B as a function of the inverse temperature η (parameterized by α) in d = 1, 2, 3. In Figs. 35, 36, 37 and 38, we have plotted the entropy S/N k_B as a function of the energy Λ (parameterized by α) in d = 1, 2, 3.
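Equations (85)-(86) involve no new integrals, so the entropy and the Massieu function follow directly from quantities already computed along a branch. The helpers below are an illustrative sketch (our naming), not the authors' code:

```python
import numpy as np

def entropy_per_particle(eta, alpha, Lam, lam, mu, d=1):
    """S/(N k_B) from Eq. (85), up to an additive constant."""
    return (-(d - 2) / 2 * np.log(eta) - 2 * np.log(alpha)
            - 2 * Lam * eta + lam * alpha**2 / mu**2)

def massieu_per_particle(eta, alpha, Lam, lam, mu, d=1):
    """J/(N k_B) = S/(N k_B) + eta * Lambda from Eq. (86)."""
    return entropy_per_particle(eta, alpha, Lam, lam, mu, d) + eta * Lam
```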
F. Caloric curve
We shall now determine the caloric curve β(E) corresponding to the screened Newtonian model. First of all, we note that, for the homogeneous phase, we have ρ = ρ 0 and Φ = Φ 0 with −k 2 0 Φ 0 = S d Gρ 0 (or equivalently ψ = 0, λ = 1 and α 2 = dη). Therefore, the relationship between the energy and the temperature can be written
\[
\Lambda = -\frac{d}{2\eta} + \frac{d}{2\mu^2}. \tag{87}
\]
This shows that η → +∞ for Λ → Λ max = d/(2µ 2 ). On the other hand, eliminating α between η n (α) and Λ n (α) given by Eqs. (73) and (80), we get the series of equilibria for the n-th inhomogeneous branch. The series of equilibria (critical points) and the caloric curves (stable states) in CE and MCE are described below for different dimensions of space.
The dimension d = 1
In Figs. 39 and 40, we plot the series of equilibria in d = 1 for different values of the screening parameter µ.
Let us first describe the canonical ensemble (CE). The control parameter is the inverse temperature η and the stable states are maxima of free energy J at fixed mass M . The homogeneous phase exists for any value of η. It is stable for η < η * c and unstable for η > η * c (see Sec. V). Comparing Figs. 39 and 40, we see that the screened Newtonian model is characterized by a pitchfork bifurcation at η = η * c . The pitchfork bifurcation is supercritical if µ < µ c = √ 2π ≃ 4.4428829 and sub-critical if µ > µ c . This interesting transition was first evidenced by Childress & Percus [54] using a different approach. In our thermodynamical approach, this implies the existence of a canonical tricritical point at µ c = √ 2π. For µ < µ c the phase transition is second order and for µ > µ c the phase transition is first order.
Let us first consider µ < µ_c (see Fig. 39). The discussion is similar to that given for the modified Newtonian model. The first branch n = 1 of inhomogeneous states exists only for η > η*_c. It has a higher free energy J than the homogeneous phase (see Fig. 31) and it is fully stable. Secondary branches appear for smaller values of the temperature but they have smaller values of free energy J (see Fig. 31) and they are unstable (saddle points of free energy). Therefore, the canonical caloric curve displays a second order phase transition between homogeneous and inhomogeneous states marked by the discontinuity of ∂E/∂β at β = β*_c. We note that, for the inhomogeneous states, there exist two solutions with the same temperature and the same free energy but with different density profiles corresponding to α₁ < α_c and α₂ > α_c, where α_c is the value of α at the point of contact with the homogeneous branch. Thus, the inhomogeneous branch is degenerate. These two states can be distinguished by their central density α. Since ρ̄/ρ₀ = dη/α², the solution α₁ < α_c corresponds to ρ₀ < ρ̄ and the solution α₂ > α_c corresponds to ρ₀ > ρ̄. The density profiles are similar to those represented in Fig. 17 for the modified Newtonian model. In conclusion: (i) for η < η*_c, there is only one stable state (homogeneous); (ii) for η > η*_c, there are two stable states α₁ < α_c and α₂ > α_c (inhomogeneous) with the same free energy and one unstable state (homogeneous). Therefore, the central density α plays the role of an order parameter (see Fig. 22). The caloric curve displays a second order phase transition in CE and MCE taking place at η = η*_c and Λ = Λ*_c. It is marked by the discontinuity of ∂E/∂β in CE or ∂β/∂E in MCE. Note that the branches α < α_c and α > α_c coincide. For µ < µ_c, the CE and MCE ensembles are equivalent.
Let us now consider µ > µ_c (see Fig. 40). The first branch n = 1 of inhomogeneous states exists only for η > η*(µ). The caloric curve displays a canonical first order phase transition at η_t(µ) marked by the discontinuity of the energy E (see Fig. 41). The temperature of transition η_t(µ) can be obtained by plotting the free energy of the two phases as a function of the temperature and determining at which temperature they become equal (see Fig. 32). Equivalently, it can be obtained by performing a horizontal Maxwell construction [31]. The homogeneous phase is fully stable for η < η_t, metastable for η_t < η < η*_c and unstable for η > η*_c. The right branch of the inhomogeneous phase is fully stable for η > η_t and metastable for η* < η < η_t. The left branch is unstable. Note that this branch has negative specific heats, which is not permitted in the canonical ensemble. Secondary branches appear for smaller values of the temperature but they have smaller values of free energy J and they are unstable. We also note that the branch of inhomogeneous states is degenerate since the curves α < α_c and α > α_c coincide. In conclusion: (i) for η < η*, there is only one stable state (homogeneous); (ii) for η* < η < η*_c, there are three stable states (one homogeneous and two inhomogeneous) and two unstable states (inhomogeneous); (iii) for η > η*_c, there are two stable states (inhomogeneous) and one unstable state (homogeneous). The pairs of inhomogeneous states have the same free energy. Therefore, the central density α plays the role of an order parameter (see Fig. 23).
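The equal-free-energy condition defining η_t(µ) is easy to locate numerically once J/(N k_B) is known along each phase. The sketch below is a hedged illustration with hypothetical callables J_hom and J_inhom (our own names), not the authors' code:

```python
from scipy.optimize import brentq

def transition_temperature(J_hom, J_inhom, eta_lo, eta_hi):
    """Locate eta_t where the free energies of the homogeneous and
    inhomogeneous phases cross (canonical first order transition).
    J_hom and J_inhom map eta -> J/(N k_B); a sign-changing bracket
    [eta_lo, eta_hi] is assumed to have been found beforehand."""
    return brentq(lambda eta: J_hom(eta) - J_inhom(eta), eta_lo, eta_hi)
```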
The canonical phase diagram is represented in Fig. 42 where we have plotted η * c , η * and η t as a function of µ. The three temperatures coincide at the tricritical point µ = µ c . At that point, the phase transition goes from second order (µ < µ c ) to first order (µ > µ c ).
The strict caloric curve (see Figs. 39 and 41), corresponding to the fully stable states, is denoted (S). The physical caloric curve should take into account the metastable states (M) because they are long-lived. The states (U) are unstable. We see that there exists a fully stable equilibrium state for any temperature and any screening length. This is consistent with the usual Newtonian model in d = 1 [20,29]. This is also consistent with the results of chemotaxis since it has been established rigorously that there is no blow up in d = 1 [8].
Let us finally describe the microcanonical ensemble (MCE). The control parameter is the energy Λ and the stable states are maxima of entropy S at fixed mass M and energy E. The homogeneous phase exists for any Λ < Λ_max = d/(2µ²). It is stable for Λ < Λ*_c and unstable for Λ > Λ*_c (see Sec. V). Comparing Figs. 43 and 44, we see that there exists a microcanonical tricritical point at µ_m ≃ 11.8 and Λ ≃ 2.37 × 10⁻⁴ (corresponding to η ≃ 149.1096). For µ < µ_m the phase transition is second order and for µ > µ_m the phase transition is first order.
Let us first consider µ < µ_m (see Figs. 39 and 43). The first branch n = 1 of inhomogeneous states exists for Λ*_c < Λ < Λ^{(1)}_max = 1/(2µ tanh µ) (see Appendix D). It has a higher entropy S than the homogeneous phase and it is stable. Secondary branches appear for smaller values of the energy but they have smaller values of entropy and are unstable. The microcanonical caloric curve displays a second order phase transition marked by the discontinuity of ∂β/∂E at E = E*_c. For µ < µ_c, the specific heat is always positive. In that case, the microcanonical and canonical ensembles are equivalent. For µ > µ_c, a region of negative specific heats appears. This leads to a convex dip in the entropic curve S(E) (see Fig. 36). In that case, the microcanonical and canonical ensembles are inequivalent: the states with negative specific heats are stable in MCE while they are unstable in CE (compare Figs. 41 and 43). Therefore, these energies cannot be achieved in a canonical description.
Let us now consider µ > µ_m (see Fig. 44). The first branch n = 1 of inhomogeneous states exists only for Λ > Λ*(µ). The caloric curve displays a microcanonical first order phase transition at Λ_t(µ) marked by the discontinuity of the temperature T and the existence of metastable states. The energy of transition Λ_t(µ) can be obtained by plotting the entropy of the two phases as a function of the energy and determining at which energy they become equal. Equivalently, it can be obtained by performing a vertical Maxwell construction [31]. The discussion is similar to that given in the canonical ensemble except that the axes are reversed. In conclusion: (i) for Λ < Λ*, there is only one stable state (homogeneous); (ii) for Λ* < Λ < Λ*_c, there are three stable states (one homogeneous and two inhomogeneous) and two unstable states (inhomogeneous); (iii) for Λ > Λ*_c, there are two stable states (inhomogeneous) and one unstable state (homogeneous). The pairs of inhomogeneous states have the same entropy. Therefore, the central density α plays the role of an order parameter (see Fig. 28). The microcanonical phase diagram is represented in Figs. 45 and 46 where we have plotted Λ*_c, Λ* and Λ_t as a function of µ. The three energies coincide at the microcanonical tricritical point µ = µ_m. At that point, the phase transition goes from second order (µ < µ_m) to first order (µ > µ_m). We have also represented the region of negative specific heats which appears at the canonical tricritical point µ = µ_c. For µ_c < µ < µ_m, it is delimited by Λ*_c and Λ′ and for µ > µ_m, it is delimited by Λ* and Λ′. This region of negative specific heats also defines the physical region of ensembles inequivalence, i.e. the states that are stable in MCE but unstable in CE (metastable states are considered here as stable states). Finally, we have represented the strict region of ensembles inequivalence, i.e. the states that are stable in MCE but unstable or metastable in CE. It is delimited by Λ₁ and Λ₂ and, of course, contains the negative specific heats region.
The strict caloric curve (see Figs. 39, 43 and 44), corresponding to the fully stable states, is denoted (S). The states (U) are unstable. The states (M) are metastable but they are long-lived. We see that there exists a fully stable equilibrium state for any accessible energy and any screening length. This is consistent with the usual Newtonian model in d = 1 [20,29].
In conclusion, for µ < µ_c, the system displays canonical and microcanonical second order phase transitions. For µ_c < µ < µ_m (canonical tricritical point), the system displays canonical first order phase transitions and microcanonical second order phase transitions. For µ > µ_m (microcanonical tricritical point), the system displays canonical and microcanonical first order phase transitions. Note that the canonical and microcanonical tricritical points do not coincide, as also observed in other models [28,38,40].
The dimensions d = 2 and d = 3
In Figs. 47 and 48, we plot the series of equilibria in d = 2 and d = 3. We have considered different values of µ but only the case µ = 1 is shown. We have observed that the shape of the diagrams does not significantly depend on the value of the screening parameter µ. Therefore, the description of these diagrams is similar to the one given in Secs. III F 2 and III F 3 for the modified Newtonian model.
V. STABILITY OF THE HOMOGENEOUS PHASE
In this section, we study the stability of the homogeneous phase in the case where the potential satisfies the modified Poisson equation (43) or the screened Poisson equation (64). We first consider the spectral stability of the homogeneous phase with respect to the Smoluchowski equation or, equivalently, with respect to the Keller-Segel model. This will allow us to determine the growth rate (unstable case) or the damping rate (stable case) of the perturbation. Then, we investigate the dynamical and thermodynamical stability of a larger class of systems by determining whether the homogeneous phase is a maximum of entropy at fixed mass and energy in MCE or a minimum of free energy at fixed mass in CE.
A. Spectral stability
We consider the mean field Smoluchowski equation
\[
\frac{\partial\rho}{\partial t} = \nabla\cdot\left[\frac{1}{\xi}\left(\frac{k_B T}{m}\nabla\rho + \rho\nabla\Phi\right)\right], \tag{88}
\]
coupled to the modified Poisson equation (43) or to the screened Poisson equation (64). The boundary conditions are given by Eq. (45). Up to a change of notation, these equations also describe the Keller-Segel model (31)- (37) or (31)- (39). In the modified Newtonian model, the homogeneous steady state satisfies
\[
\rho = \overline{\rho}, \qquad \Phi = 0. \tag{89}
\]
In the screened Newtonian model, it satisfies
\[
-k_0^2\Phi = S_d G\overline{\rho}, \tag{90}
\]
where ρ and Φ are uniform. In both models, the linearized equations can be written
\[
\xi\,\frac{\partial\delta\rho}{\partial t} = \frac{k_B T}{m}\Delta\delta\rho + \overline{\rho}\,\Delta\delta\Phi, \tag{91}
\]
\[
\Delta\delta\Phi - k_0^2\,\delta\Phi = S_d G\,\delta\rho, \tag{92}
\]
where k₀ = 0 in the modified Newtonian model and k₀ ≠ 0 in the screened Newtonian model. In an infinite domain, the spectral stability of the homogeneous solutions of the mean field Smoluchowski equation (and its generalizations) coupled to Eqs. (43) and (64) has been studied by Chavanis & Sire [57] who stressed the analogy with the Jeans instability in astrophysics [58]. Here, we describe how the results are modified in a bounded domain with the boundary conditions (45). This problem was considered by Keller & Segel [3] in d = 2. We shall perform the stability analysis in d dimensions. Let us call ψ_k(r) the eigenfunctions of the Laplacian and −k² the corresponding eigenvalues. They are solution of
\[
\Delta\psi_k = -k^2\psi_k, \tag{93}
\]
with
\[
\nabla\psi_k\cdot\mathbf{n} = 0, \tag{94}
\]
on the boundary. It is easy to check that the eigenvalues are necessarily negative (hence the notation −k²). Indeed, multiplying Eq. (93) by ψ_k, integrating on the whole domain, and using an integration by parts, we get ∫(∇ψ_k)² dr = k² ∫ψ_k² dr, which proves the result. In a bounded domain, their values are "quantized" (see below). The lowest non-zero value of k will play a particular role as it determines the critical temperature below which the homogeneous phase becomes unstable. The expression of the eigenfunctions and eigenvalues depends on the domain shape and on the dimension of space. In the following, we shall work in a spherical box in d = 1, 2, 3 dimensions.
• In d = 1, we have
\[
\psi_n = \cos(k_n x), \tag{95}
\]
with
\[
k_n = n\,\frac{\pi}{R}, \tag{96}
\]
where n is an integer. The smallest non zero eigenvalue is k 1 = π/R.
• In d = 2, we have
\[
\psi_{ni} = J_n(k_{ni}\, r)\cos(n\theta), \tag{97}
\]
with
\[
k_{ni} = \frac{\gamma_{ni}}{R}, \tag{98}
\]
where n is an integer and γ_{ni} is the i-th zero of J′_n(x). The smallest non-zero eigenvalue is k₀₁ = γ₀₁/R where γ₀₁ = j₁₁ = 3.83171... is the first zero of J′₀(x) = −J₁(x). The axisymmetric mode (n = 0) is
\[
\psi_{0i} = J_0(k_{0i}\, r). \tag{99}
\]
• In d = 3, we have
\[
\psi_{lmi} = \frac{1}{\sqrt{r}}\, J_{l+\frac{1}{2}}(k_{li}\, r)\, Y_{lm}(\theta,\varphi), \tag{100}
\]
with
\[
k_{li} = \frac{\gamma_{li}}{R}, \tag{101}
\]
where l, m are integers with l ≥ |m| and γ li is the i-th zero of
\[
\frac{x\,J'_{l+1/2}(x)}{J_{l+1/2}(x)} - \frac{1}{2} = 0. \tag{102}
\]
The smallest non zero eigenvalue is k 01 = γ 01 /R where γ 01 = x 1 = 4.49341... is the first root of tan(x) = x. The spherically symmetric mode (l, m = 0) is
\[
\psi_{00i} = \frac{\sin(k_{0i}\, r)}{r}. \tag{103}
\]
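The quoted eigenvalues γ₀₁ can be reproduced with standard root finders. The snippet below is an illustration assuming SciPy is available (scipy.special.jnp_zeros returns zeros of the derivative J′_n); it is not code from the paper:

```python
import numpy as np
from scipy.special import jnp_zeros
from scipy.optimize import brentq

# d = 2: gamma_01 is the first zero of J0'(x) = -J1(x)
gamma_01_d2 = jnp_zeros(0, 1)[0]                          # 3.8317...

# d = 3: gamma_01 is the first positive root of tan(x) = x,
# bracketed inside (pi, 3*pi/2) where tan(x) - x changes sign
gamma_01_d3 = brentq(lambda x: np.tan(x) - x,
                     np.pi + 1e-9, 1.5 * np.pi - 1e-9)    # 4.4934...
```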
The solutions of the linearized equations (91) and (92) can be expanded on the eigenmodes, writing
\[
\delta\rho(\mathbf{r},t) = \sum_k A_k\, e^{\sigma_k t/\xi}\,\psi_k(\mathbf{r}), \tag{104}
\]
\[
\delta\Phi(\mathbf{r},t) = \sum_k B_k\, e^{\sigma_k t/\xi}\,\psi_k(\mathbf{r}), \tag{105}
\]
where the sum runs on the (quantized) eigenvalues. Substituting Eqs. (104) and (105) in Eqs. (91) and (92), we obtain the algebraic equations
\[
\left(\sigma_k + \frac{k_B T}{m}\, k^2\right) A_k + \overline{\rho}\, k^2 B_k = 0, \tag{106}
\]
\[
S_d G A_k + (k^2 + k_0^2)\, B_k = 0. \tag{107}
\]
There will be non-trivial solutions only if the determinant of this system of equations is zero. This yields the dispersion relation
\[
\sigma_k = \left(\frac{S_d G\overline{\rho}}{k^2 + k_0^2} - \frac{k_B T}{m}\right) k^2, \tag{108}
\]
relating σ k to the wavenumber k. The amplitudes A k and B k are determined by the initial condition. We see that σ k is real so that the perturbation either grows or decays exponentially rapidly. The homogeneous phase will be spectrally stable if σ k < 0 for all k and it will be spectrally unstable if there exists one or several modes for which σ k > 0. We note that the dispersion relation (108) is the same as in an infinite domain [57]. However, in a finite domain, the allowed wavenumbers k are quantized while they are continuous in an infinite domain. According to Eq. (108), the system will be unstable if there exists k = 0 such that
\[
\frac{S_d G\overline{\rho}}{k^2 + k_0^2} > \frac{k_B T}{m}. \tag{109}
\]
Therefore, a necessary condition of instability is that
\[
\frac{k_B T}{m} < \frac{S_d G\overline{\rho}}{k_f^2 + k_0^2} \equiv \frac{k_B T_c^*}{m}, \tag{110}
\]
where k f is the smallest non-zero wavenumber. For T > T * c , the homogeneous distribution is stable for perturbations with arbitrary wavenumbers. For T < T * c , the homogeneous distribution is unstable for perturbations with wavenumbers
\[
k^2 < \frac{S_d G m\overline{\rho}}{k_B T} - k_0^2 \equiv k_m^2. \tag{111}
\]
For k 0 = 0, the critical temperature is
\[
\frac{k_B T_c^*}{m} = \frac{S_d G\overline{\rho}}{k_f^2}, \tag{112}
\]
and we recover the Jeans criterion
\[
k^2 < \frac{S_d G m\overline{\rho}}{k_B T} \equiv k_J^2. \tag{113}
\]
In the general case, the instability criterion can be written
\[
k^2 < k_J^2 - k_0^2 \equiv k_m^2. \tag{114}
\]
We see that, for the screened Newtonian potential (k₀ ≠ 0), the instability occurs for larger wavelengths as compared to the Newtonian model (k₀ = 0). Let us introduce the notation
\[
k_B T_c = \frac{S_d G m\overline{\rho}}{k_0^2}, \tag{115}
\]
which corresponds to the critical temperature in an infinite domain (k f = 0). Since the dispersion relation (108) does not explicitly depend on k f , it is convenient to introduce the notation (115). We have
\[
T_c^* = \frac{T_c}{1 + (k_f/k_0)^2}. \tag{116}
\]
When T < T * c , the system is unstable for the modes such that
\[
k < k_0\left(\frac{T_c}{T} - 1\right)^{1/2} \equiv k_m(T). \tag{117}
\]
The growth rate can be written
\[
\sigma_k = \frac{k_B T}{m}\,\frac{k^2\left(k_m(T)^2 - k^2\right)}{k_0^2 + k^2}, \tag{118}
\]
where
\[
k_m(T) = k_0\left(\frac{T_c}{T} - 1\right)^{1/2}. \tag{119}
\]
It achieves its maximum value for k = k * (T ) where
\[
k_*(T) = k_0\left[\left(\frac{T_c}{T}\right)^{1/2} - 1\right]^{1/2}. \tag{120}
\]
The corresponding value of the growth rate is
\[
\sigma_*(T) = \frac{k_B T_c}{m}\, k_0^2\left[1 - \left(\frac{T}{T_c}\right)^{1/2}\right]^2. \tag{121}
\]
The number of clusters that is expected to form in the linear regime is N(T) = R/(2π/k*(T)). For a fixed value of k₀, this number increases as the temperature decreases. The behaviour of the different quantities defined above is represented in Figs. 49 and 50.

FIG. 49: Growth (σ > 0) or decay (σ < 0) rate as a function of the wavenumber k. The system is unstable for k < k_m(T) and the maximum growth rate is reached for k = k*(T). The parameters have been scaled such that k₀ = 1, T_c = 1, T = 1/2.
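For reference, the dispersion relation (118) and the most unstable wavenumber (120) are straightforward to evaluate numerically. The sketch below uses the scaled units of Fig. 49 (k₀ = 1, T_c = 1) and our own function names; it is an illustration, not the authors' code:

```python
import numpy as np

def growth_rate(k, T, k0=1.0, Tc=1.0, kB_over_m=1.0):
    """Dispersion relation (118): sigma_k as a function of k."""
    km2 = k0**2 * (Tc / T - 1.0)                 # k_m(T)^2 from Eq. (119)
    return kB_over_m * T * k**2 * (km2 - k**2) / (k0**2 + k**2)

def k_star(T, k0=1.0, Tc=1.0):
    """Most unstable wavenumber from Eq. (120); valid for T < Tc."""
    return k0 * np.sqrt(np.sqrt(Tc / T) - 1.0)
```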
Let us consider some particular cases:
• For T = T_c, we have k_m = 0, k* = 0, σ* = 0 and
\[
\sigma_k = -\frac{k_B T_c}{m}\,\frac{k^4}{k_0^2 + k^2} < 0. \tag{122}
\]
Therefore, the system is stable. More generally, for T ≥ T_c, the system is stable. For T → +∞, we have σ_k = −(k_B T/m) k².
• For T = 0, we have k_m → +∞, k* → +∞, σ* → k_B T_c k₀²/m and
\[
\sigma_k = \frac{k_B T_c}{m}\,\frac{k_0^2 k^2}{k_0^2 + k^2}. \tag{123}
\]
The growth rate is maximum for k * → +∞, i.e. for very small wavelengths λ * → 0. In that case, we expect a very large number of clusters in the linear regime.
• For k 0 = 0 (modified Newtonian model), we have
\[
\sigma_k = S_d G\overline{\rho} - \frac{k_B T}{m}\, k^2. \tag{124}
\]
The system is unstable for T < T*_c where the critical temperature is given by Eq. (112). Furthermore, the unstable wavenumbers correspond to k < k_J where the Jeans wavenumber is given by Eq. (113). The growth rate is maximum for k* = k_f, i.e., for the maximum wavelength λ_f = 2π/k_f. In that case, we have only one cluster. The corresponding value of the growth rate is σ* = S_d Gρ̄ − k_B T k_f²/m. The two limit cases discussed above are illustrated in Fig. 51.
B. Thermodynamical stability
We now analyze the thermodynamical stability of the homogeneous phase by using variational principles. Basically, we have to solve the maximization problem (3) in MCE and the minimization problem (17) in CE. However, for spatially homogeneous systems, it is shown in Appendix A that they are both equivalent to the minimization problem (24). Therefore, the system is stable iff the second order variations of free energy (27) are positive definite for any perturbations δρ that conserve mass, i.e. δρ dr = 0. We are led therefore to considering the eigenvalue problem
\[
\delta\Phi_\lambda + \frac{k_B T}{\overline{\rho}\, m}\,\delta\rho_\lambda = \lambda\,\delta\rho_\lambda, \tag{125}
\]
\[
\Delta\delta\Phi_\lambda - k_0^2\,\delta\Phi_\lambda = S_d G\,\delta\rho_\lambda. \tag{126}
\]
If all the eigenvalues λ are positive, then the system is stable since δ²F = ½ Σ_λ λ a_λ² > 0, where the perturbation has been decomposed under the form δρ = Σ_λ a_λ δρ_λ. If at least one eigenvalue is negative, the system is unstable since δ²F = ½ λ ∫ (δρ_λ)² dr < 0 for that perturbation. It is easy to see that the eigenfunctions are
\[
\delta\rho(\mathbf{r}) = A_k\,\psi_k(\mathbf{r}), \qquad \delta\Phi(\mathbf{r}) = -\frac{S_d G A_k}{k^2 + k_0^2}\,\psi_k(\mathbf{r}), \tag{127}
\]
and that the corresponding eigenvalues are
\[
\lambda(k) = -\frac{S_d G}{k^2 + k_0^2} + \frac{k_B T}{\overline{\rho}\, m}, \tag{128}
\]
for all quantized k (see Sec. V A). We note that
ψ k dr = − 1 k 2 ∆ψ k dr = − 1 k 2
∇ψ k · dS = 0, so that δρ dr = 0 as required. Regrouping all these results, we conclude that the system is stable iff
\[
\frac{S_d G}{k^2 + k_0^2} - \frac{k_B T}{\overline{\rho}\, m} < 0, \tag{129}
\]
for all (quantized) k. This returns the stability condition obtained in Sec. V A. Therefore, the system is stable iff T > T * c . If T < T * c , the homogeneous phase is an unstable saddle point of free energy at fixed mass. This method proves the thermodynamical stability of the homogeneous phase in the canonical and microcanonical ensembles. This implies the stability with respect to the mean field Kramers equation (21), with respect to the Smoluchowski equation (28), with respect to the Keller-Segel model (31) and with respect to the kinetic equation (13).
We can now determine the values of the normalized inverse temperature η * c and normalized energy Λ * c above which the homogeneous phase becomes unstable. Using Eqs. (51) and (110), we get
\[
\eta_c^* = \frac{1}{d}\,(k_f^2 + k_0^2)\, R^2. \tag{130}
\]
We obtain
\[
\eta_c^* = \pi^2 + \mu^2 = 9.8696044... + \mu^2 \quad (d = 1), \tag{131}
\]
\[
\eta_c^* = \tfrac{1}{2}\,(j_{11}^2 + \mu^2) = 7.3410008... + \tfrac{\mu^2}{2} \quad (d = 2), \tag{132}
\]
\[
\eta_c^* = \tfrac{1}{3}\,(x_1^2 + \mu^2) = 6.7302445... + \tfrac{\mu^2}{3} \quad (d = 3). \tag{133}
\]
The corresponding critical energy is given by Eq. (63) for the modified Newtonian model and by (87) for the screened Newtonian model.
VI. CONCLUSION
In this paper, we have completed the description of phase transitions in self-gravitating systems and bacterial populations. We have introduced generalized models in which the ordinary Poisson equation is modified so as to allow for the existence of a spatially homogeneous phase. This avoids the Jeans swindle and leads to a great variety of microcanonical and canonical phase transitions between homogeneous and inhomogeneous states. These generalized models can have applications in chemotaxis, where the degradation of the chemical leads to a shielding of the interaction, and in cosmology, where the expansion of the universe creates a sort of "neutralizing background". In this paper, we have only considered equilibrium states. In future works, we shall study the dynamics of some simple models for which the present study can be a useful guide.
Our study also allows us to explore the link between cosmology, where one studies the evolution of the universe as a whole [12], and stellar dynamics, where one studies the structure of individual galaxies [59]. The description of phase transitions in these two disciplines is usually very different [60]. However, our study allows us to make some basic connections. In cosmology, one usually starts from an infinite homogeneous distribution and studies the appearance of clusters representing galaxies. Our thermodynamical approach shows that, indeed, the homogeneous phase is unstable for sufficiently low temperatures and energies and leads to clusters. The formation of these clusters can be studied by making a linear stability analysis of the Vlasov or Euler equations. Then, in the nonlinear regime, the system is expected to achieve a statistical equilibrium state due to violent relaxation or collisional relaxation (finite N effects) [67]. This corresponds to the inhomogeneous phase. In d = 1, there exists an equilibrium state for any value of energy and temperature. For low energies and temperatures, it is spatially inhomogeneous. In the core of the cluster, the density is so high that we can disregard the effect of the neutralizing background. In that case, the statistical equilibrium state (representing a "galaxy") is described by the Camm solution, like in 1D stellar dynamics. In d = 3, there is no inhomogeneous equilibrium state and, for sufficiently small energies and temperatures, the system undergoes a gravothermal catastrophe or an isothermal collapse. In d = 2, the situation is intermediate. There exists an equilibrium state in the microcanonical ensemble for all energies, while in the canonical ensemble no equilibrium state exists at low temperatures. Similar behaviours occur in chemotaxis and will be investigated in future papers. Note that for self-gravitating systems the proper statistical ensemble is the microcanonical ensemble, while in chemotaxis (or for the academic model of self-gravitating Brownian particles) the proper statistical ensemble is the canonical one. It is therefore interesting to study these two systems in parallel to describe the analogies and differences between statistical ensembles.
Appendix A: Equivalent but simpler optimization problems

In this Appendix, following the approach of Padmanabhan [22] and Chavanis [27], we shall reduce the optimization problems (3) and (17) to simpler forms. In particular, we shall show that these optimization problems for f(r, v) are equivalent to optimization problems for ρ(r).

Microcanonical ensemble

To solve the maximization problem (3) we can proceed in two steps. We first maximize the entropy at fixed energy, mass and density profile ρ(r). Since the specification of ρ(r) determines the mass and the potential energy, this is equivalent to maximizing the entropy at fixed kinetic energy and density profile. Writing the corresponding variational principle, this leads to the Maxwellian distribution function (A2), which is the global entropy maximum with the previous constraints since δ²S = −∫ (δf)²/(2fm) dr dv < 0 (the constraints are linear in f so that their second variations vanish). We can now express the mass, the entropy and the energy in terms of ρ(r) and T. Up to unimportant constants, we obtain
\[
S = \frac{d}{2}\, N k_B \ln T - k_B \int \frac{\rho}{m}\ln\frac{\rho}{m}\, d\mathbf{r}, \tag{A3}
\]
\[
E = \frac{d}{2}\, N k_B T + \frac{1}{2}\int \rho(\mathbf{r},t)\, u(\mathbf{r},\mathbf{r}')\,\rho(\mathbf{r}',t)\, d\mathbf{r}\, d\mathbf{r}' + \int \rho V\, d\mathbf{r}. \tag{A4}
\]
We now have to solve the maximization problem
\[
\max_{\rho}\ \left\{ S[\rho]\ \middle|\ E[\rho] = E,\ M[\rho] = M \right\}. \tag{A5}
\]
Finally, the solution of (3) is given by the distribution function (A2) with the density profile that is solution of (A5). Let us compute the variations of entropy and energy up to second order. We have
\[
\Delta S = \frac{d}{2}\, N k_B \frac{\delta T}{T} - k_B \int \left(\ln\frac{\rho}{m}+1\right)\frac{\delta\rho}{m}\, d\mathbf{r} - \frac{d}{4}\, N k_B \frac{(\delta T)^2}{T^2} - k_B \int \frac{(\delta\rho)^2}{2\rho m}\, d\mathbf{r}, \tag{A6}
\]
\[
\Delta E = \frac{d}{2}\, N k_B\,\delta T + \int \Phi\,\delta\rho\, d\mathbf{r} + \frac{1}{2}\int \delta\rho\,\delta\Phi\, d\mathbf{r}. \tag{A7}
\]
Using the conservation of energy ∆E = 0 to eliminate δT, we obtain
\[
\Delta S = -\frac{1}{T}\int \Phi\,\delta\rho\, d\mathbf{r} - k_B \int \left(\ln\frac{\rho}{m}+1\right)\frac{\delta\rho}{m}\, d\mathbf{r} - \frac{1}{2T}\int \delta\rho\,\delta\Phi\, d\mathbf{r} - \frac{1}{d N k_B T^2}\left(\int \Phi\,\delta\rho\, d\mathbf{r}\right)^2 - k_B \int \frac{(\delta\rho)^2}{2\rho m}\, d\mathbf{r}. \tag{A8}
\]
Let us consider the first order variations. At first order, we have
\[
\delta S = -\frac{1}{T}\int \Phi\,\delta\rho\, d\mathbf{r} - k_B \int \left(\ln\frac{\rho}{m}+1\right)\frac{\delta\rho}{m}\, d\mathbf{r}. \tag{A9}
\]
The conservation of mass can be taken into account by introducing a Lagrange multiplier. Writing the variational principle as
\[
\delta S - \alpha\,\delta M = 0, \tag{A10}
\]
we obtain the mean field Boltzmann distribution
\[
\rho = A'\, e^{-\frac{m\Phi}{k_B T}}, \tag{A11}
\]
where Φ(r) is given by Eq. (7). Combining Eq. (A11) with Eq. (A2), we recover the mean field Maxwell-Boltzmann distribution (10). However, the present approach allows us to simplify the condition of thermodynamical stability. Indeed, the system is stable in the microcanonical ensemble iff the second order variations of entropy (A8) are negative definite
\[
-k_B \int \frac{(\delta\rho)^2}{2\rho m}\, d\mathbf{r} - \frac{1}{2T}\int \delta\rho\,\delta\Phi\, d\mathbf{r} - \frac{1}{d N k_B T^2}\left(\int \Phi\,\delta\rho\, d\mathbf{r}\right)^2 \le 0, \tag{A12}
\]
for any perturbation δρ that conserves mass at first order, i.e. δρ dr = 0 (the conservation of energy has automatically been taken into account in the previous derivation). This stability criterion is equivalent to the stability criterion (12) but it is simpler because it is expressed in terms of the density instead of the distribution function.
Canonical ensemble
To solve the minimization problem (17) we can proceed in two steps. We first minimize the free energy at fixed mass and density profile ρ(r). This is equivalent to minimizing the free energy at fixed density profile. Writing
\[
\delta F + T\int \lambda(\mathbf{r})\,\delta\!\left(\int f\, d\mathbf{v}\right) d\mathbf{r} = 0, \tag{A13}
\]
this leads to the Maxwellian distribution function
\[
f(\mathbf{r},\mathbf{v}) = \left(\frac{m}{2\pi k_B T}\right)^{d/2} \rho(\mathbf{r})\, e^{-\frac{m v^2}{2 k_B T}}, \tag{A14}
\]
which is the global minimum of free energy with the previous constraint since δ²F = T ∫ (δf)²/(2fm) dr dv > 0 (the constraints are linear in f so that their second variations vanish). We can now express the free energy in terms of ρ(r). Up to unimportant constants, we get
\[
F = \frac{1}{2}\int \rho(\mathbf{r},t)\, u(\mathbf{r},\mathbf{r}')\,\rho(\mathbf{r}',t)\, d\mathbf{r}\, d\mathbf{r}' + \int \rho V\, d\mathbf{r} + k_B T \int \frac{\rho}{m}\ln\frac{\rho}{m}\, d\mathbf{r}. \tag{A15}
\]
We now have to solve the minimization problem
\[
\min_{\rho}\ \left\{ F[\rho]\ \middle|\ M[\rho] = M \right\}. \tag{A16}
\]
Finally, the solution of (17) is given by the distribution function (A14) with the density profile that is solution of (A16). The first variations
\[
\delta F + \alpha T\,\delta M = 0, \tag{A17}
\]
lead to the mean field Boltzmann distribution
\[
\rho = A'\, e^{-\frac{m\Phi}{k_B T}}, \tag{A18}
\]
where Φ(r) is given by Eq. (7). Combining Eq. (A18) with Eq. (A14), we recover the mean field Maxwell-Boltzmann distribution (10). However, the present approach allows us to simplify the condition of thermodynamical stability. Indeed, the system is stable in the canonical ensemble iff the second order variations of free energy (A15) are positive definite
\[
\frac{1}{2}\int \delta\rho\,\delta\Phi\, d\mathbf{r} + \frac{k_B T}{m}\int \frac{(\delta\rho)^2}{2\rho}\, d\mathbf{r} \ge 0, \tag{A19}
\]
for any perturbation δρ that conserves mass at first order, i.e. δρ dr = 0. This stability criterion is equivalent to the stability criterion (20) but it is simpler because it is expressed in terms of the density instead of the distribution function.
Remark: From the stability criteria (A12) and (A19), we clearly see that canonical stability implies microcanonical stability (but not the converse). Indeed, since the last term in Eq. (A12) is negative, it is clear that if inequality (A19) is satisfied then inequality (A12) is automatically satisfied. In general, this is not reciprocal and we may have ensembles inequivalence. However, if we consider a spatially homogeneous system for which Φ is uniform, the last term in Eq. (A12) vanishes (since the mass is conserved) and the stability criteria (A12) and (A19) coincide. Therefore, for spatially homogeneous systems, we have ensembles equivalence.
Appendix B: Explicit expressions of the potential

In this Appendix, we limit ourselves to the case d = 1 although the results can be easily generalized to any dimension. Using standard methods, we can obtain the Green function associated with the screened Poisson equation (64) in a box with Neumann boundary conditions. Then, we find that the potential is explicitly given by
\[
\Phi(x) = \int_{-R}^{R} \rho(x')\, u(x,x')\, dx', \tag{B1}
\]
with
\[
u(x,x') = -\frac{G}{k_0}\,\frac{1}{\sinh(2k_0 R)}\left(\cosh\left[k_0(2R - |x-x'|)\right] + \cosh\left[k_0(x+x')\right]\right). \tag{B2}
\]
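For numerical work with the d = 1 screened model, the Green function (B2) can be coded directly. The following is an illustrative helper with our own naming and G = 1 by default; it is not code from the paper:

```python
import numpy as np

def green_screened_1d(x, xp, R, k0, G=1.0):
    """Green function (B2) of the screened Poisson equation in d = 1
    with Neumann boundary conditions on [-R, R]."""
    return -(G / k0) / np.sinh(2 * k0 * R) * (
        np.cosh(k0 * (2 * R - np.abs(x - xp))) + np.cosh(k0 * (x + xp))
    )
```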
In an infinite domain (R → +∞), we obtain
\[
u(|x-x'|) = -\frac{G}{k_0}\, e^{-k_0 |x-x'|}. \tag{B3}
\]
Similarly, for the modified Newtonian model (43) with Neumann boundary conditions, the potential is explicitly given by
\[
\Phi(x) = G\int_{-R}^{R} (\rho - \overline{\rho})(x')\, |x-x'|\, dx', \tag{B4}
\]
and this expression remains valid in an infinite domain (R → +∞).

Appendix C: External potential for the modified Newtonian model

For the modified Newtonian model (43), the potential can be written
\[
\Phi(\mathbf{r}) = \int (\rho - \overline{\rho})(\mathbf{r}')\, u(\mathbf{r},\mathbf{r}')\, d\mathbf{r}', \tag{C1}
\]
where u(r, r ′ ) is the Green function of the Poisson equation with Neumann boundary conditions. Comparing Eq. (C1) with Eq. (7), we find that the external potential is
\[
V(\mathbf{r}) = -\overline{\rho}\int u(\mathbf{r},\mathbf{r}')\, d\mathbf{r}'. \tag{C2}
\]
Using Eq. (C2), the potential energy (8) can be written
\[
W = \frac{1}{2}\int \rho\Phi\, d\mathbf{r} - \frac{1}{2}\,\overline{\rho}\int \rho(\mathbf{r})\, u(\mathbf{r},\mathbf{r}')\, d\mathbf{r}\, d\mathbf{r}'. \tag{C3}
\]
Interchanging the dummy variables r and r ′ and using the symmetry u(r ′ , r) = u(r, r ′ ), we get
\[
W = \frac{1}{2}\int \rho\Phi\, d\mathbf{r} - \frac{1}{2}\,\overline{\rho}\int \rho(\mathbf{r}')\, u(\mathbf{r},\mathbf{r}')\, d\mathbf{r}\, d\mathbf{r}'. \tag{C4}
\]
Finally, using Eq. (7), we obtain
\[
W = \frac{1}{2}\int (\rho - \overline{\rho})\,\Phi\, d\mathbf{r} + \frac{1}{2}\,\overline{\rho}\int V(\mathbf{r})\, d\mathbf{r}. \tag{C5}
\]
Therefore, the potential energy is given by Eq. (53) up to an unimportant additive constant ½ρ̄∫V(r) dr = −½ρ̄² ∫∫ u(r,r′) dr dr′. In d = 1, according to Eq. (B4), the potential of interaction is u = G|x − x′|. Therefore, the external potential is explicitly given by
\[
V(x) = -\overline{\rho}\, G\,(x^2 + R^2). \tag{C6}
\]
The additive constant in the energy is
\[
\frac{1}{2}\,\overline{\rho}\int_{-R}^{R} V(x)\, dx = -\frac{4}{3}\, G\overline{\rho}^2 R^3. \tag{C7}
\]
Appendix D: The minimum energy

Let us consider the modified Newtonian model (43) in d = 1. At T = 0, the density profile is a Dirac peak ρ = Mδ(x) and the energy (56) is
\[
E = -\frac{1}{4G}\int_{-R}^{R}\left(\frac{d\Phi}{dx}\right)^2 dx. \tag{D1}
\]
For a symmetric density profile, the modified Poisson equation can be integrated into
\[
\Phi'(x) = 2G\int_0^x \rho(x')\, dx' - 2G\overline{\rho}\, x, \tag{D2}
\]
which is the appropriate Gauss theorem. If all the mass is concentrated at x = 0, we obtain
\[
\Phi'(x) = GM\left[\mathrm{sgn}(x) - \frac{x}{R}\right].
\]
Substituting this expression in Eq. (D1), we obtain E = −GM²R/6, so that the total normalized energy is Λ = 1/6. This corresponds to the minimum energy of the branch n = 1.

Let us consider the screened Newtonian model (64) in d = 1. At T = 0, the density profile is a Dirac peak ρ = Mδ(x) and the energy (74) reduces, using the Green function (B2), to E = ½M²u(0,0), where we have used elementary trigonometric identities to simplify the expression. This leads to
\[
E = -\frac{GM^2}{2k_0}\,\coth(k_0 R).
\]
The total normalized energy is therefore
\[
\Lambda^{(1)}_{\rm max} = \frac{1}{2\mu\tanh\mu}. \tag{D6}
\]
This corresponds to the minimum energy of the branch n = 1.

Appendix E: Approximate expressions of the density profile

In d = 1, the screened Emden equation (68) can be written
\[
\frac{d^2\psi}{d\xi^2} = -\frac{dV}{d\psi}, \tag{E1}
\]
with
\[
V(\psi) = e^{-\psi} + \lambda\psi - \frac{1}{2}\kappa^2\psi^2. \tag{E2}
\]
This is similar to the equation of motion of a particle of unit mass in a potential V(ψ) where ψ plays the role of position and ξ the role of time. Using the boundary condition ψ = ψ′ = 0 at ξ = 0, we find that the first integral (pseudo energy) is
\[
\frac{1}{2}\left(\frac{d\psi}{d\xi}\right)^2 + V(\psi) = 1. \tag{E3}
\]
This first order differential equation can be easily integrated until ξ = α, which formally solves the problem. Let us consider the limit ρ₀ → +∞ corresponding to α → +∞. In the inner region, the term e^{−ψ} dominates and Eq. (E1) reduces to the ordinary Emden equation whose solution is the Camm profile [29,61]:
\[
\psi(\xi) = 2\ln\left[\cosh\left(\frac{\xi}{\sqrt{2}}\right)\right]. \tag{E4}
\]
In the outer region, the term e^{−ψ} can be neglected and Eqs. (E1) and (E3) reduce to
\[
\frac{d^2\psi}{d\xi^2} = \kappa^2\psi - \lambda, \tag{E5}
\]
\[
\frac{1}{2}\left(\frac{d\psi}{d\xi}\right)^2 + \lambda\psi - \frac{1}{2}\kappa^2\psi^2 = 1. \tag{E6}
\]
The boundary condition at the wall is ψ′(α) = 0. Substituting this result in Eq. (E6), we get λψ(α) − ½κ²ψ(α)² = 1. The physical solution of this equation is ψ(α) = (λ − √(λ² − 2κ²))/κ². Solving Eq. (E5) with these boundary conditions, we find that
\[
\psi(\xi) = \frac{\lambda}{\kappa^2} + \left[\psi(\alpha) - \frac{\lambda}{\kappa^2}\right]\cosh\left[\kappa(\xi - \alpha)\right]. \tag{E7}
\]
The matching of the outer solution with the inner solution implies that ψ_outer(0) = 0. Using κα = µ, we obtain
\[
\lambda = \frac{\sqrt{2}\,\mu}{\alpha\tanh\mu}. \tag{E8}
\]
Finally, substituting the inner profile (E4) in Eq. (73), we obtain at leading order
\[
\eta \sim \sqrt{2}\,\alpha, \qquad (\alpha\to+\infty). \tag{E9}
\]
For α → +∞, the density profile tends to a Dirac peak ρ = Mδ(x). The potential energy reduces to W = ½MΦ₀. Using Eqs. (79), (E8) and κ = µ/α, we recover Eq. (D6).

The modified Emden equation (47) can be studied similarly. In fact, most of the preceding results remain valid by taking κ = 0. The potential is V(ψ) = e^{−ψ} + λψ. It has a minimum at ψ₀ = −ln λ so that the solution ψ(ξ) of the Emden equation (with energy E = 1) oscillates around this value. Integrating Eq. (E3), the density profiles of the solutions of the branch n = 1 are given implicitly by
\[
\xi(\psi) = \int_0^{\psi}\frac{d\psi'}{\sqrt{2\left[1 - e^{-\psi'} - \lambda\psi'\right]}}, \tag{E10}
\]
with ξ ≤ α. The half-period of the oscillations of the function ψ(ξ) is
\[
\xi_{1/2} = \int_0^{\psi(\alpha)}\frac{d\psi'}{\sqrt{2\left[1 - e^{-\psi'} - \lambda\psi'\right]}}, \tag{E11}
\]
where ψ(α) is solution of e^{−ψ(α)} + λψ(α) = 1 obtained from Eq. (E3) with ψ′(α) = 0. Let us now consider the limit ρ₀ → +∞. The inner solution is given by the Camm profile (E4) and the outer solution is
\[
\psi(\xi) = \frac{1}{\lambda} - \frac{1}{2}\lambda(\xi - \alpha)^2, \tag{E12}
\]
which is consistent with Eq. (E7) when κ → 0. The matching condition ψ outer (0) = 0 then yields
\[
\lambda \sim \frac{\sqrt{2}}{\alpha}, \qquad (\alpha\to+\infty). \tag{E13}
\]
Using Eq. (52), we obtain at leading order
\[
\eta \sim \sqrt{2}\,\alpha, \qquad (\alpha\to+\infty). \tag{E14}
\]
Appendix F: The bifurcation point
In this Appendix, we shall determine the point at which the spatially homogeneous branch bifurcates to the spatially inhomogeneous branch and show that it coincides with the point at which the spatially homogeneous branch becomes unstable (see Sec. V). For a more detailed theory of bifurcations, we refer to the paper of Schaaf [62].
For the modified Newtonian model, the differential equation determining the field Φ(r) at statistical equilibrium can be written
\[
\Delta\Phi = S_d G\left(A e^{-\beta m\Phi} - \overline{\rho}\right). \tag{F1}
\]
The homogeneous solution corresponds to ρ = ρ̄, Φ = 0 and A = ρ̄. Considering a small perturbation Φ = 0 + φ(r) with φ ≪ 1 around the homogeneous solution and linearizing the differential equation (F1), we obtain
\[
\Delta\phi + S_d G\beta m\overline{\rho}\,\phi = 0, \tag{F2}
\]
with the boundary conditions ∇φ · n = 0 on the boundary. The boundary conditions determine the allowable wavenumbers k² ≡ S_d Gβmρ̄. They take discrete values k = k_n (see Sec. V) which in turn determine discrete values of the temperature T_n. The first point of bifurcation corresponds to the smallest wavenumber k_f. This is associated with the critical temperature (112) at which the homogeneous branch becomes unstable. Other branches of bifurcations appear at smaller temperatures. They correspond to successive quantized values k_n of the wavenumber.
For the screened Newtonian model, the differential equation determining the field Φ(r) at statistical equilibrium can be written
\[
\Delta\Phi - k_0^2\Phi = S_d G A e^{-\beta m\Phi}. \tag{F3}
\]
The homogeneous solution corresponds to ρ = const., Φ = const. with −k₀²Φ = S_dGρ. Considering a small perturbation Φ = const. + φ(r) with φ ≪ 1 around the homogeneous solution and linearizing the differential equation (F3), we obtain
\[
\Delta\phi + \left(S_d G\beta m\rho - k_0^2\right)\phi = 0, \tag{F4}
\]
with the boundary conditions ∇φ · n = 0 on the boundary. The boundary conditions determine the allowable wavenumbers k² ≡ S_dGβmρ − k₀². The first point of bifurcation corresponds to the smallest wavenumber k_f (see Sec. V). This is associated with the critical temperature (110) at which the homogeneous branch becomes unstable. Other branches of bifurcations appear at smaller temperatures. They correspond to the successive quantized values k_n of the wavenumber.
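In d = 1, the successive bifurcation temperatures of the screened model follow from the quantized wavenumbers k_n = nπ/R. The helper below is an illustrative sketch (our own naming) of the analogue of Eq. (130) with k_f replaced by k_n; for n = 1 and d = 1 it reproduces Eq. (131):

```python
import numpy as np

def bifurcation_eta(n, mu, d=1):
    """Inverse temperatures of the successive bifurcation points:
    k_n = n*pi/R inserted into eta = ((k R)^2 + mu^2) / d."""
    return ((n * np.pi)**2 + mu**2) / d
```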
FIG. 1: The function e^{−ψ} for d = 1 and λ = 0.5 < 1 (bottom) or λ = 2 > 1 (top). In d = 1, the oscillations are undamped.

FIG. 2: The function e^{−ψ} for d = 2 and λ = 0.5 < 1 (bottom) or λ = 2 > 1 (top). In d ≥ 2, the oscillations are damped. The case d = 3 (not represented) is similar.

FIG. 3: Inverse temperature η as a function of the central density 1/λ for the first three branches in d = 1.

FIG. 4: Inverse temperature η as a function of the central density 1/λ for the first three branches in d = 2.

FIG. 5: Inverse temperature η as a function of the central density 1/λ for the first three branches in d = 3.

FIG. 6: Energy Λ as a function of the central density 1/λ for the first three branches in d = 1.

FIG. 7: Energy Λ as a function of the central density 1/λ for the first three branches in d = 2.

FIG. 8: Energy Λ as a function of the central density 1/λ for the first three branches in d = 3.

FIG. 9: Free energy J/(N k_B) as a function of the inverse temperature η in d = 1. Note that the branches λ < 1 and λ > 1 coincide.

FIG. 10: Free energy J/(N k_B) as a function of the inverse temperature η in d = 2.

FIG. 11: Free energy J/(N k_B) as a function of the inverse temperature η in d = 3.

FIG. 12: Entropy S/(N k_B) as a function of energy Λ in d = 1. Note that the branches λ < 1 and λ > 1 coincide.

FIG. 13: Entropy S/(N k_B) as a function of energy Λ in d = 2.

FIG. 14: Enlargement of Fig. 13. The entropies of the homogeneous phase and inhomogeneous phase become equal at Λ = Λ_t ≃ −0.146. This corresponds to a first-order phase transition in the microcanonical ensemble marked by the discontinuity of the slope S′(E) = 1/T.

FIG. 15: Entropy S/(N k_B) as a function of energy Λ in d = 3.

FIG. 16: Series of equilibria in d = 1. The caloric curve displays a second order phase transition in CE and MCE taking place at η = η*_c and Λ = Λ*_c (corresponding to λ = 1). It is marked by the discontinuity of ∂β/∂E in MCE or ∂E/∂β in CE. Note that the branches λ < 1 and λ > 1 coincide. The corresponding density profiles are plotted in Fig. 17.

FIG. 17: Density profiles of the two stable inhomogeneous solutions λ₁ = 0.69 < 1 and λ₂ = 1.54 > 1 corresponding to η = 10 in d = 1. We have also represented the unstable homogeneous solution.

FIG. 18: Series of equilibria in d = 2.

FIG. 19: Caloric curve in the canonical ensemble in d = 2.

FIG. 20: Caloric curve in the microcanonical ensemble in d = 2. A first-order phase transition is expected to take place in the microcanonical ensemble at Λ = Λ_t. Note that the lower branch has negative specific heats. Λ* and possibly Λ*_c represent microcanonical spinodal points marking the end of the metastable phase.

FIG. 21: Series of equilibria in d = 3.

FIG. 22: Inverse temperature η as a function of α for the first two branches in d = 1. We have taken µ = 1 < µ_c.

FIG. 23: Inverse temperature η as a function of α for the first two branches in d = 1. We have taken µ = 10 > µ_c.

FIG. 24: Inverse temperature η as a function of α for the first two branches in d = 2. For α → +∞, the inverse temperature of the first branch tends to η_c = 4. We have taken µ = 1.

FIG. 25: Inverse temperature η as a function of α for the first two branches in d = 3. For α → +∞, the inverse temperature of the first branch undergoes damped oscillations around the value η_s ≃ 3.25. We have taken µ = 1.

FIG. 26: Energy Λ as a function of α for the first two branches in d = 1. We have taken µ = 1 < µ_c.

FIG. 27: Energy Λ as a function of α for the first two branches in d = 1. We have taken µ = 10 > µ_c.

FIG. 28: Energy Λ as a function of α for the first two branches in d = 1. We have taken µ = 15 > µ_m.

FIG. 29: Energy Λ as a function of α for the first two branches in d = 2. We have taken µ = 1.

FIG. 30: Energy Λ as a function of α for the first two branches in d = 3. For α → +∞, the energy of the first branch undergoes damped oscillations around the value Λ_s ≃ 1.13. We have taken µ = 1.

FIG. 31: Free energy J/(N k_B) as a function of the inverse temperature η for d = 1. We have taken µ = 1 < µ_c.

FIG. 32: Free energy J/(N k_B) as a function of the inverse temperature η for d = 1. We have taken µ = 10 > µ_c. The free energies of the homogeneous phase and inhomogeneous phase become equal at η = η_t(µ). This corresponds to a first order phase transition in the canonical ensemble marked by the discontinuity of the slope J′(β) = −E.

FIG. 33: Free energy J/(N k_B) as a function of the inverse temperature η for d = 2. We have taken µ = 1.

FIG. 34: Free energy J/(N k_B) as a function of the inverse temperature η for d = 3. We have taken µ = 1.

FIG. 35: Entropy S/(N k_B) as a function of the energy Λ for d = 1. We have taken µ = 1 < µ_c.

FIG. 36: Entropy S/(N k_B) as a function of the energy Λ for d = 1. We have taken µ = 10 > µ_c. There is a (small) convex dip associated with the region of negative specific heats in the microcanonical ensemble.

FIG. 37: Entropy S/(N k_B) as a function of the energy Λ for d = 2. We have taken µ = 1.

FIG. 39: Series of equilibria in d = 1 for µ = 1 < µ_c.

FIG. 41: Canonical caloric curve in d = 1 for µ = 10 > µ_c. It displays a canonical first-order phase transition marked by the discontinuity of the energy at η = η_t(µ). The region of negative specific heats is unstable in the canonical ensemble and replaced by a phase transition (Maxwell plateau). The temperatures η*_c and η* represent canonical spinodal points marking the end of the metastable phase.

FIG. 42: Canonical phase diagram in d = 1 exhibiting a tricritical point at µ_c = √2π ≃ 4.44 and η ≃ 29.6. We have represented η_c, η* and η_t as a function of µ. The region between η* and η*_c contains stable and metastable states.

FIG. 43: Microcanonical caloric curve in d = 1 for µ = 10 > µ_c. It displays a microcanonical second order phase transition marked by the discontinuity of ∂β/∂E at E = E*_c. For µ > µ_c, there exists a region of negative specific heats that is stable in the microcanonical ensemble.

FIG. 44: Microcanonical caloric curve in d = 1 for µ = 15 > µ_m. It displays a microcanonical first order phase transition marked by the discontinuity of T at E = E_t. The energies E*_c and E* are spinodal points marking the end of the metastable branches. Note that this first order phase transition occurs in an extremely small range of energies.

FIG. 45: Microcanonical phase diagram in d = 1 exhibiting a tricritical point at µ_m ≃ 11.8 and Λ ≃ 2.37 × 10⁻⁴. We have represented Λ_c, Λ* and Λ_t as a function of µ. The region between Λ* and Λ*_c contains stable and metastable states. We again emphasize the small range of energies where this first order phase transition takes place.

FIG. 46: Phase diagram in d = 1. We have represented Λ*_c, Λ′, Λ₁ and Λ₂ as a function of µ. These energies coincide for µ_c = √2π ≃ 4.44 and Λ ≃ 0.0084. These energies delimitate respectively the region of negative specific heats and the region of strict ensembles inequivalence (see main text): the energies in these regions cannot be reached in the canonical ensemble.

FIG. 47: Caloric curve in d = 2 for µ = 1.

FIG. 48: Caloric curve in d = 3 for µ = 1.

FIG. 50: Evolution of k_m, k* and σ* as a function of the temperature. The parameters have been scaled such that k₀ = 1 and T_c = 1.

FIG. 51: Growth (σ > 0) rate as a function of the wavenumber k in two limits: (i) the Newtonian limit k₀ = 0 (and T ≠ 0) for which the maximum growth rate corresponds to k* = k_f ≪ 1 (large scales), and (ii) the cold limit T = 0 (and k₀ ≠ 0) for which the maximum growth rate corresponds to k* → +∞ (small scales).
[1] J.D. Murray, Mathematical Biology (Springer, Berlin, 1991)
[2] A. Gamba, D. Ambrosi, A. Coniglio, A. de Candia, S. di Talia, E. Giraudo, G. Serini, L. Preziosi, F.A. Bussolino, Phys. Rev. Lett. 90, 118101 (2003)
[3] E. Keller, L.A. Segel, J. theor. Biol. 26, 399 (1970)
[4] P.H. Chavanis, Eur. Phys. J. B 62, 179 (2008)
[5] C. Sire, P.H. Chavanis, Phys. Rev. E 78, 061111 (2008)
[6] P.H. Chavanis, M. Ribot, C. Rosier, C. Sire, Banach Center Publ. 66, 103 (2004)
[7] B. Perthame, Appl. Math. 49, 539 (2004)
[8] T. Nagai, Adv. Math. Sci. Appl. 5, 581 (1995)
[9] P.H. Chavanis, Physica A 361, 55 (2006)
[10] D.R. Nicholson, Introduction to Plasma Theory (Krieger Publishing Company, Florida, 1992)
[11] W. Jäger, S. Luckhaus, Trans. Am. Math. Soc. 329, 819 (1992)
[12] J. Peebles, Large-Scale Structures of the Universe (Princeton University Press, 1980)
[13] V.A. Antonov, Vest. Leningr. Gos. Univ. 7, 135 (1962)
[14] D. Lynden-Bell, R. Wood, Mon. Not. R. Astron. Soc. 138, 495 (1968)
[15] E.B. Aronson, C.J. Hansen, Astrophys. J. 177, 145 (1972)
[16] G. Horwitz, J. Katz, Astrophys. J. 211, 226 (1977)
[17] G. Horwitz, J. Katz, Astrophys. J. 222, 941 (1978)
[18] J. Katz, Mon. Not. R. astr. Soc. 183, 765 (1978)
[19] J. Katz, D. Lynden-Bell, Mon. Not. R. Astron. Soc. 184, 709 (1978)
[20] J. Katz, M. Lecar, Astrophys. Space Sci. 68, 495 (1980)
[21] M. Kiessling, J. Stat. Phys. 55, 203 (1989)
[22] T. Padmanabhan, Phys. Rep. 188, 287 (1990)
[23] B. Stahl, M. Kiessling, K. Schindler, Planet. Space Sci. 43, 271 (1994)
[24] J.J. Aly, J. Perez, Phys. Rev. E 60, 5185 (1999)
[25] H.J. de Vega, N. Sanchez, Nucl. Phys. B 625, 409 (2002)
[26] H.J. de Vega, N. Sanchez, Nucl. Phys. B 625, 460 (2002)
[27] P.H. Chavanis, Astron. Astrophys. 381, 340 (2002)
[28] P.H. Chavanis, Phys. Rev. E 65, 056123 (2002)
[29] C. Sire, P.H. Chavanis, Phys. Rev. E 66, 046133 (2002)
[30] J. Katz, Found. Phys. 33, 223 (2003)
[31] P.H. Chavanis, Int. J. Mod. Phys. B 20, 3113 (2006)
[32] S. Inagaki, Prog. Theor. Phys. 90, 577 (1993)
[33] M. Antoni, S. Ruffo, Phys. Rev. E 52, 2361 (1995)
[34] P.H. Chavanis, J. Vatteville, F. Bouchet, Eur. Phys. J. B 46, 61 (2005)
[35] A. Antoniazzi, D. Fanelli, S. Ruffo, Y. Yamaguchi, Phys. Rev. Lett. 99, 040601 (2007)
[36] F. Staniscia, P.H. Chavanis, G. de Ninno, D. Fanelli, Phys. Rev. E 80, 021138 (2009)
[37] B. Miller, P. Youngkins, Phys. Rev. Lett. 81, 4794 (1998)
[38] J. Barré, D. Mukamel, S. Ruffo, Phys. Rev. Lett. 87, 030601 (2001)
[39] M. Antoni, S. Ruffo, A. Torcini, Phys. Rev. E 66, 025103(R) (2002)
[40] T. Tatekawa, F. Bouchet, T. Dauxois, S. Ruffo, Phys. Rev. E 71, 056111 (2005)
[41] P. Valageas, Astron. Astrophys. 450, 445 (2006)
[42] J. Messer, H. Spohn, J. Stat. Phys. 29, 561 (1982)
[43] D. Lynden-Bell, Mon. Not. R. Astron. Soc. 136, 101 (1967)
[44] R. Ellis, K. Haven, B. Turkington, J. Stat. Phys. 101, 999 (2000)
[45] F. Bouchet, J. Barré, J. Stat. Phys. 118, 1073 (2005)
[46] H. Risken, The Fokker-Planck equation (Springer, 1989)
[47] M.A. Herrero, E. Medina, J.J.L. Velazquez, J. Comput. Appl. Math. 97, 99 (1998)
[48] K. Huang, Statistical Mechanics (Wiley, 1963)
[49] D.H.E. Gross, Microcanonical Thermodynamics: Phase Transitions in "Small" Systems, Lecture Notes in Physics 66 (World Scientific, Singapore, 2001)
[50] A. Campa, T. Dauxois, S. Ruffo, Physics Reports 480, 57 (2009)
[51] T.J. Newman, R. Grima, Phys. Rev. E 70, 051916 (2004)
[52] P.H. Chavanis, C. Sire, Physica A 384, 199 (2007)
[53] S. Chandrasekhar, Principles of Stellar Dynamics (University of Chicago Press, 1942)
[54] S. Childress, J.K. Percus, Math. Biosci. 56, 217 (1981)
[55] M. Antoni, S. Ruffo, A. Torcini, Europhys. Lett. 66, 645 (2004)
[56] P.H. Chavanis, Astron. Astrophys. 432, 117 (2005)
[57] P.H. Chavanis, C. Sire, Physica A 387, 4033 (2008)
[58] J.H. Jeans, Astronomy and Cosmogony (Cambridge Univ. Press, 1929)
[59] J. Binney, S. Tremaine, Galactic Dynamics (Princeton Series in Astrophysics, 1987)
[60] T. Padmanabhan, Statistical mechanics of gravitating systems in static and cosmological backgrounds, in Dynamics and Thermodynamics of Systems with Long Range Interactions, edited by T. Dauxois, S. Ruffo, E. Arimondo, and M. Wilkens, Lecture Notes in Physics 602, 165 (Springer, 2002)
[61] G.L. Camm, Mon. Not. R. Astron. Soc. 110, 305 (1950)
[62] R. Schaaf, Trans. Amer. Math. Soc. 292, 531 (1985)
[63] It is known that this approximation becomes exact for systems with long-range interactions in a proper thermodynamic limit N → +∞ [42]. For systems with short-range interactions (e.g. a screened Newtonian potential), we shall still use a mean field approximation although it may not be exact. One motivation of our approach is that the Keller-Segel model in biology is formulated in the mean field approximation even if the degradation of the chemical is large. The incorrectness of the mean field approximation as the interaction becomes short-range is interesting but will not be considered in this paper.
[64] We cannot impose the boundary conditions (33) for the Poisson equation (41) since the integration of this equation, ∮∇c · dS = −λM ≠ 0, implies that ∇c · n ≠ 0 on the boundary of the domain.
[65] This interpretation also holds for the chemotactic problem. We have seen that the (mean field) Keller-Segel model has an effective thermodynamical structure associated with the canonical ensemble. Furthermore, we can derive kinetic models of chemotaxis in which the evolution of the cells (or bacteria) is described in terms of coupled stochastic equations [51,52]. In that sense, the cells behave as Brownian particles in interaction as in Secs. II 2 and II 3, and the canonical structure of this model is clear.
[66] In many papers, only fully stable states forming the strict caloric curve are indicated. We think that clarity is gained when the full series of equilibria is shown. Then, we can see where the stable, metastable and unstable branches are located and how they are connected to each other. This also allows us to use the Poincaré theory of linear series of equilibria to settle their stability without being required to study an eigenvalue equation associated with the second order variations of the thermodynamical potential [18,31].
[67] This is, in fact, just a quasistationary state that forms on a timescale that is short with respect to the Hubble time so that the expansion of the universe can be neglected or treated adiabatically. Indeed, if we allow for the time variation of the scale factor a(t), it is simple to see that there is no statistical equilibrium state in a strict sense.
| []
|
[
"Measuring and Reducing Gendered Correlations in Pre-trained Models",
"Measuring and Reducing Gendered Correlations in Pre-trained Models"
]
| [
"Kellie Webster [email protected] ",
"Xuezhi Wang [email protected] ",
"Ian Tenney [email protected] ",
"Alex Beutel [email protected] ",
"Emily Pitler [email protected] ",
"Ellie Pavlick [email protected] ",
"Jilin Chen [email protected] ",
"Slav Petrov "
]
| []
| []
| Pre-trained models have revolutionized natural language understanding. However, researchers have found they can encode artifacts undesired in many applications, such as professions correlating with one gender more than another. We explore such gendered correlations as a case study for how to address unintended correlations in pre-trained models. We define metrics and reveal that it is possible for models with similar accuracy to encode correlations at very different rates. We show how measured correlations can be reduced with general-purpose techniques, and highlight the trade offs different strategies have. With these results, we make recommendations for training robust models: (1) carefully evaluate unintended correlations, (2) be mindful of seemingly innocuous configuration differences, and (3) focus on general mitigations. | null | [
"https://arxiv.org/pdf/2010.06032v1.pdf"
]
| 222,310,622 | 2010.06032 | 3d864a8bc5a55ccab9993aa66203d8e70b88148c |
Measuring and Reducing Gendered Correlations in Pre-trained Models
Kellie Webster [email protected]
Xuezhi Wang [email protected]
Ian Tenney [email protected]
Alex Beutel [email protected]
Emily Pitler [email protected]
Ellie Pavlick [email protected]
Jilin Chen [email protected]
Slav Petrov
Measuring and Reducing Gendered Correlations in Pre-trained Models
Pre-trained models have revolutionized natural language understanding. However, researchers have found they can encode artifacts undesired in many applications, such as professions correlating with one gender more than another. We explore such gendered correlations as a case study for how to address unintended correlations in pre-trained models. We define metrics and reveal that it is possible for models with similar accuracy to encode correlations at very different rates. We show how measured correlations can be reduced with general-purpose techniques, and highlight the trade offs different strategies have. With these results, we make recommendations for training robust models: (1) carefully evaluate unintended correlations, (2) be mindful of seemingly innocuous configuration differences, and (3) focus on general mitigations.
Introduction
Recent advances in pre-trained language representations (Peters et al., 2018; Devlin et al., 2018; Radford et al., 2018; Yang et al., 2019; Lan et al., 2019; Raffel et al., 2019) have resulted in tremendous accuracy improvements across longstanding challenges in NLP. Improvements derive from increases in model capacity and training data size, which enable models to capture increasingly fine-grained nuances of language meaning. Much of the captured knowledge is relevant and leads to the improved performance we see on downstream tasks. However, representations may additionally capture artifacts that can cause models to make incorrect assumptions on new examples (Jia and Liang, 2017; Poliak et al., 2018; McCoy et al., 2019).
This paper explores what we refer to as model correlations: associations between words or concepts that may be exposed by probing or via brittleness on downstream applications. Correlations around gender are particularly concerning as they open the potential for social stereotypes to impact model decisions (Section 2). We take gendered correlations as a case study to arrive at a key contribution of this paper: a series of recommendations for training robust models (Section 7).

* Equal contribution.
Our recommendations are based on a series of scientific contributions. To make gendered correlations precise, we propose metrics to detect and measure associations in models and downstream applications (Section 3). Our metrics are naturally extensible to different settings and types of correlations, and expose how models with similar accuracy can differ greatly, making the case for richer evaluation when selecting a model for use (Section 4).
Successful mitigation techniques must address both social factors and technical challenges. We show that both dropout regularization and counterfactual data augmentation (CDA) minimize correlations while maintaining strong accuracy (Section 5). These approaches are exciting as both offer general-purpose improvements: dropout does not target any specific correlations, and we find CDA can decrease correlations beyond those specified in training. Further, despite both being applied at pre-training, their improvements carry through to fine-tuning, where we show mitigated models better resist re-learning correlations (Section 6). We will release our new models, which we call Zari. 1 Taken together, our findings are encouraging: they suggest it is possible to address a range of correlations at once during pre-training. However, dropout regularization and CDA each have their trade offs, and we highlight these to motivate future research. Framing the problem in terms of multiple precise metrics opens the door to research techniques which broadly address model artifacts.

1 Zari is an Afghan Muppet designed to show that "a little girl could do as much as everybody else," https://muppet.fandom.com/wiki/Zari
Background and Related Work
Since our focus is gendered correlations, it is natural to relate our work to previous work on gender bias. In this section, we describe where we build on techniques from this prior work and where we depart in new directions. We avoid the terms gender bias and fairness except when describing prior work: societal bias and fairness are nuanced, subjective, and culturally-specific (Blodgett et al., 2020), while our work exclusively explores model association over (binary) gender. We highlight places where definitions of gender may be enriched in future analyses.
Intrinsic Measurement. Bolukbasi et al. (2016) and Caliskan et al. (2017) present seminal results showing that word2vec (Mikolov et al., 2013) embeddings reflect social stereotypes, for example, that "homemaker" is likely female and "philosopher" male. However, more recent work has suggested that such analogy-based measurement techniques are unstable and may not generalize (May et al., 2019), leading to new measurement techniques such as template-based (Kurita et al., 2019) and generation-based (Sheng et al., 2019) tests.
Recent work has applied association tests to probe for gender bias in contextualized word embeddings (Basta et al., 2019;Peters et al., 2018;Tan and Celis, 2019) with mixed results. Contemporary with this work, Nadeem et al. (2020) probes for cases of stereotypical beliefs around gender, race, profession, and religion in real-world text.
All of these studies require bias to be defined prior to model inspection, which does not allow for important problems in a model to be discovered. We contribute a novel analysis, DisCo, based on template-and generation-based methods to discover and measure correlates of gender in pretrained contextual representations.
Extrinsic Measurement. Other relevant work avoids intrinsic measurements altogether, focusing instead on how bias propagates to downstream tasks and the potential for real-world consequences. Racial and gender bias has been documented in resume-job matching software, sentiment analysis (Kiritchenko and Mohammad, 2018), coreference resolution (Zhao et al., 2018a; Webster et al., 2018), image captioning, and machine translation (Stanovsky et al., 2019; Prates et al., 2018).
We follow this line of work and sample three tasks for our evaluation framework, to give an overview of concerns for NLU.
Mitigation. A wide range of techniques have been proposed to mitigate gender bias in word representations. Bolukbasi et al. (2016) proposed using linear algebraic techniques to project away dimensions which encode gender, though Prost et al. (2019) found evidence that this method could potentially exacerbate bias for downstream tasks. Another popular technique uses adversarial losses in order to remove demographic information from learned representations (Louizos et al., 2015; Edwards and Storkey, 2015; Beutel et al., 2017; Zhang et al., 2018; Elazar and Goldberg, 2018; Madras et al., 2018). Evidence for the efficacy of this method, too, is mixed. In particular, Gonen and Goldberg (2019) found that gender information is still retrievable after having applied adversarial removal, while Barrett et al. (2019) presented a follow up study showing that such results only hold when models are deployed on the same data on which they were trained. Additional strategies include adjustments to the loss term and adjustments to the training data directly (Zhao et al., 2018b; Garg et al., 2019). Data-augmentation strategies have become popular recently, in particular rebalancing (Dixon et al., 2018) and counterfactual data augmentation (CDA), which augments training data using controlled perturbations to names or demographic attributes (Hall Maudslay et al., 2019; Zhao et al., 2019; Zmigrod et al., 2019).
Given the popularity of CDA, we use our new evaluation framework to explore its efficacy. We further show how another technique, dropout regularization, typically used to reduce over-fitting, is also effective for reducing gendered correlations, but does not require any manual input.
Evaluation Framework
Our first contribution is an evaluation framework for discovering and quantifying gendered correlations in models. We follow the current state of the art: all metrics (except Bias-in-Bios) rely on lists of words that are labeled with their gender associations. This formulation is precise and we like that it is flexible for future work to explore different definitions of gender (including neutral terms) and correlations for different concepts. The potential shortcoming is coverage, which we investigate in Section 5.2.
Gendered Correlations
To measure the impact of gendered correlations in applications, we investigate two existing task formulations, coreference resolution (Coref) and Bias-in-Bios (Bios). We further propose a new metric that is a synthetic extension of semantic textual similarity (STS-B) for gender (Table 1). To complement these metrics and provide a view into model representations before any fine-tuning, we propose DisCo, a novel intrinsic analysis. DisCo combines the strength of template- and generation-based methods to discover correlations emergent in generated text, potentially including some that have been unmeasured so far in the literature.
Coreference Resolution
We measure gendered correlations in coreference resolution using the WinoGender evaluation dataset (Rudinger et al., 2018), with models trained on OntoNotes (Hovy et al., 2006). The dataset features a series of templates, each containing a gendered pronoun and two potential antecedents, one being a profession term. In order to resolve a pronoun accurately, a model needs to overcome learned associations between gender and profession (e.g. a normative assumption that nurses are female) and instead make decisions based on the available linguistic cues. We report the Pearson coefficient (r) of a linear trend between the likelihood of a model to corefer "she" pronouns to a given profession term in antecedent position, against the proportion of females in that profession in the US Bureau of Labor Statistics (BLS; Caliskan et al., 2017). Without gendered correlations, we expect r to be close to zero.
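Once per-profession coreference scores are collected, the metric reduces to a single correlation. The following is a minimal sketch of that computation; the two dictionaries are illustrative placeholders for model outputs and BLS statistics, not released data:

    from scipy.stats import pearsonr

    # Hypothetical per-profession inputs: the model's likelihood of
    # resolving "she" to the profession antecedent, and the proportion
    # of women in that profession according to the BLS.
    she_coref_prob = {"nurse": 0.91, "engineer": 0.34, "librarian": 0.78}
    bls_pct_female = {"nurse": 0.90, "engineer": 0.13, "librarian": 0.79}

    professions = sorted(she_coref_prob)
    r, _ = pearsonr([she_coref_prob[p] for p in professions],
                    [bls_pct_female[p] for p in professions])
    # A model free of gendered correlations should give r close to zero.
    print(f"Coref correlation metric: r = {r:.2f}")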
STS-B
The standard formulation of STS-B (Cer et al., 2017) asks a model to consider two sentences and classify their degree of semantic similarity. We adapt this task formulation to be an assessment of gendered correlations by forming a series of neutral templates and filling them with a gendered term in one sentence and a profession in the other:
Source: A man is walking

    Sentence 1            Sentence 2
    A man is walking      A nurse is walking
    A woman is walking    A nurse is walking

To serve as our templates, we collect the 276 sentences from the STS-B test set 2 which start with "A man" or "A woman", and discard sentences with multiple gendered words, including pronouns. For each template, we formed two sentence pairs per profession term, one using man and the other woman.
If not relying on gendered correlations, a model should give equal estimates of similarity to the two pairs. To measure how similar model predictions actually are, we follow Rudinger et al. (2017) and track the Pearson correlation (r) between the score difference and the representation in the BLS.
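As a minimal sketch of this pair construction, the code below enumerates the sentence pairs; the template and profession lists are small illustrative stand-ins for the 276 templates and the full profession list:

    # Each source template yields one (man, woman) pair of sentence pairs
    # per profession; a model without gendered correlations should assign
    # both members of a pair (near-)identical similarity scores.
    templates = ["A {} is walking.", "A {} is sitting in a park."]
    professions = ["nurse", "engineer", "librarian"]

    pairs = []
    for template in templates:
        for profession in professions:
            hypothesis = template.format(profession)
            pairs.append((template.format("man"), hypothesis))
            pairs.append((template.format("woman"), hypothesis))

The metric is then the Pearson r between the per-profession score difference (man minus woman) and the BLS statistics, computed exactly as in the coreference sketch above.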
Bias-in-Bios
De-Arteaga et al. (2019) builds a dataset over biographies from the web by labeling each with the person's profession. The task for a model reading the biographies is to reproduce these labels without making unwarranted assumptions based on gender. The standard correlation metric is the difference in true positive rate between examples of the two (binary) genders (TPR gap), macro-averaged over professions. We follow our other correlation metrics and take our measurements from the line of best fit between TPR gap and the gender representation of a profession. 3 We find that Pearson correlation is high (r ≈ 0.7) but does not change significantly between models; instead, we report the slope of the linear fit to capture the magnitude of the association. We use the data splits from Prost et al. (2019).
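To make the slope metric concrete, the sketch below computes a per-slice TPR and the linear fit; all numbers are hypothetical and the helper is not part of the released evaluation code:

    import numpy as np

    def tpr(y_true, y_pred):
        # True positive rate over one (profession, gender) slice.
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        return ((y_true == 1) & (y_pred == 1)).sum() / (y_true == 1).sum()

    # TPR gap for one profession: one gender slice minus the other.
    gap = tpr([1, 1, 1, 0], [1, 1, 0, 0]) - tpr([1, 1, 1, 0], [1, 0, 0, 0])

    # One gap per profession, fit against the estimated fraction of
    # biographies of that profession about women; the slope is the metric.
    gaps = np.array([gap, -0.03, 0.15])
    pct_female = np.array([0.40, 0.55, 0.85])
    slope, intercept = np.polyfit(pct_female, gaps, deg=1)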
Discovery of Correlations (DisCo)

We design DisCo to be a descriptive value that mimics a manual spot check often done to check models for issues. DisCo is built around a series of templates, or sentences with empty slots. In our case, templates have two slots, e.g. "[PERSON] studied [BLANK] at college." The Appendix gives the full list of templates we use for this study, which is intended as a proof of concept for future work to expand. To improve robustness, we include multiple related variants of each template (e.g. by inserting "often" and "always"). In our templates, the [PERSON] slot is filled manually, via a word list which is labeled with the gender each word is associated with. We find word lists at two sources, which yields the two variants of DisCo we present. Both sources supply binary-valued labels, male and female, but DisCo can accommodate word lists with any number of label values (e.g. person being neutral).
• In Names, we use names from the US Social Security name statistics 4 that have >80% counts in one gender (e.g. "Maria (female) studied [BLANK] at college");

• In Terms, we form simple noun phrases of the form "the NOUN" using the list of gendered nouns released by Zhao et al. (2018a) (e.g. "The poetess (female) likes to [BLANK]").
The second, [BLANK] slot is what we ask a pre-trained model to fill in. What we would like DisCo to reflect is whether the candidates supplied by a model are significantly different based on the gender association in the [PERSON] slot. We consider a candidate fill to be supplied by a model if it appears among its top-three highest scoring fills. We select this small number of top fills since the probability distribution shape can differ substantially between models. We conclude that a fill word is supplied preferentially for one gender over another when the χ² metric rejects a null hypothesis of equal prediction rate. We apply a Bonferroni correction to the standard p-value of 0.05 since our procedure runs many significance tests.
To produce a digestible number for comparison, we define the metric to be the number of fills significantly associated with gender, averaged over templates. By allowing the model to generate any vocabulary item, DisCo can discover correlates of gender which may be problematic for applications without making prior assumptions about what these will be. However, it makes the upper bound on the value loose: we observe three fills per word list item per template, any of which could be significantly associated with gender. It is therefore provided as a descriptive value to aid interpretation.
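The significance test at the core of DisCo can be sketched as follows; the contingency counts and the number of tests are illustrative, not values from our experiments:

    from scipy.stats import chi2_contingency

    # Hypothetical contingency table for one candidate fill: how often it
    # appeared in the top-3 predictions, split by the gender association
    # of the [PERSON] slot:  [supplied, not supplied]
    table = [[35, 65],   # male-associated fillers of [PERSON]
             [12, 88]]   # female-associated fillers of [PERSON]

    chi2, p_value, dof, expected = chi2_contingency(table)

    n_tests = 1000                  # total significance tests run
    alpha = 0.05 / n_tests          # Bonferroni-corrected threshold
    counts_toward_disco = p_value < alpha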
Interpretation of Evaluations
The metrics and tasks above provide a variety of perspectives on gendered correlations. DisCo detects correlations intrinsic to a language model. Coreference resolution and STS-B directly probe for gendered correlations in the context of professions after fine-tuning on tasks. Bias-in-Bios highlights any disparity in performance in a real-world setting.
Model Accuracy
To understand potential interactions and trade-offs, we additionally track standard metrics of model accuracy for these tasks.
Coreference Resolution

We report F1 over binary classifications on the OntoNotes test set (Hovy et al., 2006), as formulated in Tenney et al. (2019).
STS-B
We report the Pearson coefficient between model predictions and gold scores on the publicly available development set (Cer et al., 2017).
Bias-in-Bios

We report accuracy over classification on the default (non-scrubbed) data.
Measuring Gendered Correlations
We apply our evaluation framework to understand the relative presence of gendered correlations in publicly available pre-trained models. We find that models with similar accuracy can vary widely on correlations, highlighting the importance of precise evaluation when selecting a model for use. Given the widespread adoption of BERT, we study the models released with the original paper (Devlin et al., 2018). 5 We compare to the ALBERT Base and Large models (Lan et al., 2019), to understand if the architectural changes in ALBERT, notably parameter sharing and smaller embeddings, impact gendered correlations.
STS-B and Bias-in-Bios are fine-tuned using the parameters specified in the open source releases. Coreference resolution is trained by using the pretrained models as frozen feature extractors for a two-layer MLP (Tenney et al., 2019). Due to instability in fine-tuning with small datasets, noted in Devlin et al. (2018), we report the average and standard deviation of all metrics over five random restarts of training. To estimate variation in DisCo, we re-calculate the metric, but with names and terms assigned to random groups instead of gender groups. For all experiments with random groups, DisCo was either 0.0 or 0.1, which we take as evidence that any non-zero values in this paper are due to gender correlations rather than random chance. Table 2 shows remarkably consistent accuracy across the four models we study: we only see up to 2% variation on a given task. At the same time, the correlation metrics have substantial differences between the models. This is most drastically demonstrated on coreference resolution, where ALBERT Base and Large models have a 44% relative difference in correlations despite the two models having identical accuracy on this task. ALBERT models have slightly lower DisCo values (correlations intrinsic to the model) than BERT; art and music are consistently gendered study subjects, and play, cook, and read are gendered activities.
Conversely, BERT models appear substantially better than ALBERT models downstream on STS-B and Bias-in-Bios. The (relatively) better correlation metric values we see for BERT are on its Large checkpoint. We investigate if a trend by BERT model size exists, which would make small models particularly susceptible to issues (e.g. from encoding shallow heuristics rather than nuanced associations), by evaluating the new, smaller models from Turc et al. (2019) (Table 3). Although there is some variation between models, we find no evidence of a systematic trend with model size.
While these results mean we can make no simple recommendation as to what model architecture or size is safest to use, they underscore the importance of defining precise and diverse metrics when selecting a model for use, to ensure it will behave as expected in application.
Reducing Gendered Correlations
While it might not be completely surprising that BERT and ALBERT learn and use gendered correlations, we do see reason for caution: we do not want a model to make predictions primarily based on gendered correlations learned as priors rather than the evidence available in the input. We use our evaluation framework to better understand the options we have for reducing gendered correlations in pre-trained models. Dropout regularization and counterfactual data augmentation are both effective at reducing correlations, but each comes with trade offs, and we highlight these to guide future work.

Table 4: Impact of applying dropout regularization (a = .15 and h = .20) and counterfactual data augmentation to mitigate gendered correlations in BERT Large.
Dropout Regularization
Dropout regularization is used when training large models to reduce over-fitting. BERT uses a standard application of dropout for regularization, but ALBERT, having fewer parameters, does not apply any. Given that dropout interrupts the attention mechanism that reinforces associations between words in a sentence, we hypothesize it might also be useful for reducing gendered (and potentially other) correlations.
BERT BERT has two dropout parameters which may be configured, one for attention weights (a) and another for hidden activations (h), both set to .10 by default. We explore the effect of increasing these by running an additional phase of pre-training over a random sample of English Wikipedia (100k steps; 3.5h on 8x16 TPU), initialized with the public model (which was trained for 1M steps). Table 4 shows the best results (lowest correlation metrics) seen for a grid search over the values .10, .15 and .20, for a = .15 and h = .20. The effect of increasing dropout is to improve the correlation metrics. That is, a simple configuration change allows us to train BERT models which encode less gendered correlations (DisCo values are reduced) and less reliance on gender-based heuristics in downstream reasoning (Coref and STS-B metrics are reduced as well). This is exciting since we have not made any task-specific changes to the model, changed the training data distribution, or otherwise made any assumptions about the type of correlation we would like to reduce.
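As a sketch, the grid search amounts to rewriting two fields of the released configuration before the additional pre-training phase; the field names below follow the bert_config.json format of the public BERT release, while the file paths are placeholders:

    import itertools
    import json

    for a, h in itertools.product([0.10, 0.15, 0.20], repeat=2):
        with open("bert_config.json") as f:
            config = json.load(f)
        config["attention_probs_dropout_prob"] = a  # attention weights
        config["hidden_dropout_prob"] = h           # hidden activations
        with open(f"bert_config_a{a}_h{h}.json", "w") as f:
            json.dump(config, f, indent=2)
        # Each variant is then pre-trained for a further 100k steps,
        # initialized from the public checkpoint.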
The Bias-in-Bios correlation metric does not move perceptibly. First, we caution against reading too closely here, as we have found label noise in the dataset that appears to derive from its automatic creation. However, it is interesting that there is a departure since all of the DisCo, Coref, and STS-B correlation metrics would be zero (by definition) if a model had no concept of gender; on the other hand, solving Bias-in-Bios requires model performance to be similar between genders, which could be achieved by a model that knows that people of different genders may be described differently. We achieve these improvements on correlations without significantly hurting accuracy on STS-B or Bias-in-Bios, though accuracy does drop for coreference since we are using quite high values of dropout rate. All evaluation being equal, we suggest applying Occam's razor and selecting a configuration which encodes the fewest correlations for the accuracy a task requires.
ALBERT Since dropout is set to zero in the public ALBERT models, we test whether reintroducing it helps with gendered correlations like it does in BERT. To do so, we repeat the above experiment but search over dropout values .01, .05, and .10 (each < 1h with 16x16 TPU). Table 5 shows the best results, for .05, where we substantially reduce correlations in all metrics (except DisCo) without hurting accuracy beyond a 1% change. We conclude that dropout should not be removed from model configuration because it helps models be robust to unintended correlations, which may not be fully tested for in standard accuracy metrics.
Counterfactual Pre-training
We apply counterfactual data augmentation (CDA) by generating supplemental training examples from English Wikipedia using the word pairs in Zhao et al. (2018a). First, we find sentences containing one of the gendered words in Zhao et al. (e.g. man in "the man who pioneered the church named it [...]"), then generate a counterfactual sentence by substituting the word's gender-partner in its place (e.g. "the woman who pioneered the church [...]"). 6
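The substitution step can be sketched as below; the word-pair excerpt is a small illustrative subset, and a production implementation would additionally handle casing and morphology (cf. Hall Maudslay et al., 2019):

    # A few of the gendered word pairs of Zhao et al. (2018a), mapped in
    # both directions so that either member is flipped to its partner.
    pairs = {"man": "woman", "he": "she", "his": "her", "king": "queen"}
    pairs.update({v: k for k, v in pairs.items()})

    def counterfactual(sentence):
        # Swap each gendered token for its partner; casing is ignored
        # here for brevity.
        return " ".join(pairs.get(tok.lower(), tok)
                        for tok in sentence.split())

    print(counterfactual("the man who pioneered the church"))
    # -> "the woman who pioneered the church"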
During development, we experimented both with 1-sided application, in which we use just the counterfactual sentences for an additional phase of pretraining on top of public models, and 2-sided application, where both the counterfactual and the original sentence are used for pre-training from scratch. For 2-sided application, if a sentence in the training data does not contain gendered words, we copy that sentence without modification. We found 1-sided application yielded greater shifts in correlation metrics but was brittle to over-correction, sometimes resulting in negative r and slope values that indicate a correlation emerged between gender and profession in the opposite direction to the original data. Table 4 shows the result of 2-sided application of CDA in BERT pre-trained for 1M steps using the procedure described in Devlin et al. (2018) (36h with 8x16 TPU). ALBERT uses a larger batch size and requires fewer steps; we follow its default setting and pre-train for 125K steps (9h with 16x16 TPU; Table 5). As expected, we do indeed see improvements in our correlation metrics. CDA is particularly effective on DisCo (Terms), Coref, and STS-B, and maintains model accuracy better than using dropout regularization for mitigation.
One observation is that the tasks CDA helps most with are all based on the same list of gendered terms. For instance, while CDA leads to a reduction in DisCo (Terms) for BERT, DisCo (Names) does not reduce alongside it. The obvious risk of a targeted intervention like CDA is that it requires an input word list, and our observation could simply be that our application of CDA did not cover names. Given that any word list is unlikely to cover all relevant variations exhaustively, we simulate a limited coverage setting by doing CDA targeting names that start with a letter A-M, and testing its generalizability over names that start with a letter N-Z, as well as terms. Table 6 shows results according to whether the replacement name is selected to either have the same gender association as the name being replaced, the opposite association, or association randomly selected. We start with the public BERT Large (Cased) checkpoint, since names are sensitive to casing, and continue pre-training for 100k steps with 2-sided application. The evaluation of DisCo (Names) is split by starting letter.
Encouragingly, we do see improvements on DisCo (Names N-Z), and perhaps even DisCo (Terms), despite none of the vocabulary for these tests being used for mitigation. This suggests broader benefit from CDA than might be expected. Improvements are greatest when the sampled gender association is random, suggesting that CDA is removing associations between sentence context and some concept of gender rather than individual tokens. Further, given that names signal identity in many different dimensions, CDA over names could provide a general-purpose technique for removing multiple types of correlation at once.
Resilience to Fine-tuning
An encouraging result so far is that intervention at pre-training leads to a meaningful reduction in gendered correlations after fine-tuning. This is surprising because task data reflect many correlations that may be detrimental to robust model behavior (cf. Section 2). We show that mitigation actually leads to models being more robust to re-learning correlations from imperfect resources.

Figure 1 plots the training curve on STS-B 7 of the three BERT 8 models in Table 4. To separate the effect of the underlying model from that of the task data, we initialize fine-tuning (step = 0) with a frozen model, simply using it as a feature extractor, before unfreezing for steps > 0 and fine-tuning all layers with the STS-B training set. Both the accuracy and correlation metrics start low, and increase (or steadfastly remain zero) as fine-tuning progresses. 9 The correlation metric remains lower for the checkpoints to which mitigation has been applied, compared to the public BERT model. So, fine-tuning does re-introduce gendered correlations, but pre-training mitigations confer resistance.

Partially freezing an encoder is a way to preserve more of a pre-trained model, limiting the amount that can change due to a fine-tuning task. We explore the effect of partially freezing BERT in Figure 2, by incrementally freezing more and more layers. The horizontal axis plots the number of layers frozen for fine-tuning: ∅ corresponds to no frozen layers, E freezes only the embedding layer, up to 23, which freezes every model layer but the last (of 24), constraining the task to be learned only in this single model layer and the task output layer.
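A sketch of this freezing schedule is given below, assuming the module layout of a Hugging Face-style BertModel (model.embeddings and model.encoder.layer); the attribute names would need adapting to other stacks:

    def freeze_lower_layers(model, n_frozen):
        # n_frozen = None reproduces the unfrozen baseline; n_frozen = 0
        # freezes only the embedding layer ("E"); n_frozen = 23 leaves
        # just the last of BERT Large's 24 encoder layers trainable.
        if n_frozen is None:
            return
        for param in model.embeddings.parameters():
            param.requires_grad = False
        for layer in model.encoder.layer[:n_frozen]:
            for param in layer.parameters():
                param.requires_grad = False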
As we start to freeze layers but leave the majority of the model available to learn the task (left), accuracy remains robust across all models. The lower values we see here for dropout are unstable, and have a large shadow representing the error bars. However, when the number of frozen layers is large (right), accuracy drops off for the dropoutmitigated model. Since this region corresponds to using the model as a feature extractor, this indicates some potentially useful features have been shaved off by this technique. CDA accuracy remains as strong as the public model throughout: it is useful either as a feature extractor or for fine-tuning.
As further evidence that mitigated models resist re-learning correlations, CDA-mitigated models have lower correlations than dropout-mitigated models in this plot, and both are better than the public model. For the dropout-mitigated model, freezing > 16 layers results in a further reduction in correlations beyond Table 4. While accuracy also declines in this region, the decrease is gradual and it is possible to find points (esp. freezing 16 layers) where correlations are reduced but accuracy remains strong. This suggests that partially freezing a mitigated checkpoint is a strategy for reducing correlations, adding another factor into what to consider when deciding how to use a pre-trained model (either as a feature extractor or via fine-tuning), on top of similarity between pre-training and application task, identified in Peters et al. (2019).
Recommendations
We have explored gendered correlations as a case study to understand how to work with models which may have acquired correlations in pretraining that are undesirable for certain applications. Taken together, our findings suggest a series of best practices that we believe can be applied across a range of issues.
Carefully evaluate unintended associations. Standard accuracy metrics measure only one dimension of model performance, especially when test data is drawn from the same distribution as training data. We show that models with similar accuracy can differ greatly on metrics designed to detect gendered correlations. Our new analyses are naturally extensible to other correlation types, by changing only the word lists used. Further, using both accuracy and correlation metrics can help narrow in on a good model for use: we are able to reduce gendered correlations while maintaining reasonable accuracy in many cases.
Be mindful of seemingly innocuous configuration differences. Models with similar accuracy showed different levels of risk from unintended correlations. All evaluation being equal, we suggest applying Occam's razor and selecting a configuration which encodes the fewest correlations for the accuracy a task requires. Dropout regularization is an important parameter for achieving this and should be retained to achieve a robust model.
Focus on general mitigations. All our mitigation experiments were applied at pre-training and showed resilience to fine-tuning. With this recipe, it should be possible to mitigate once and have improvements carry through to any number of downstream tasks. Dropout regularization requires no input as to correlation target, making it promising for scaling improvements to correlations not known during model development. When even some target correlations are known, CDA is attractive as it causes almost no perceptible change in accuracy and yields a model which works very well either as a feature extractor or for fine-tuning.
Conclusion
Contextual representations have revolutionized natural language processing, advancing the state of the art on many longstanding challenges. However, they can encode artifacts that cause models to make unwarranted assumptions on new examples. We define an evaluation framework which considers not only overall model accuracy, but also the presence of gendered correlations in models, and use it to explore the factors and methods that shape the effect of gendered correlations. The results we present provide evidence that evaluating for unintended correlations is critical in model development, and that it is worthwhile to actively mitigate risks, especially when improvements scale in a general way.
Figure 1: Training curve of BERT models on STS-B. Gendered correlations are learned in step with the task, but models with mitigation applied resist re-learning gendered correlations compared to their baseline.

Figure 2: Partial-freezing experiments on STS-B. The horizontal axis tracks the number of frozen layers. Correlations on mitigation checkpoints remain consistently lower than the baseline, while accuracy is maintained on the CDA checkpoint.
Table 1: Overview of our correlation metrics. We sample tasks with both template-based synthetic source text, as well as real web text. Tasks span the three task capabilities of pre-trained models.
Table 2: Evaluation metrics on publicly released ALBERT and BERT (Uncased) models. Bold indicates the most favorable (lowest) values for each correlation metric.

                                  ALBERT                    BERT
                                  Base       Large          Base       Large
    Parameters (M)                12         18             108        334
    Correlations  Coref (r)       0.28±0.08  0.50±0.03      0.43±0.08  0.37±0.03
    (want ↓)      STS-B (r)       0.64±0.07  0.64±0.06      0.59±0.09  0.56±0.02
                  Bios (slope)    0.38±0.01  0.37±0.02      0.34±0.01  0.29±0.03
                  DisCo (Terms)   0.4        0.0            0.8        1.0
                  DisCo (Names)   3.7        3.1            3.7        3.4
    Accuracy      Coref           0.92±0.00  0.92±0.00      0.91±0.00  0.93±0.00
    (want ↑)      STS-B           0.90±0.00  0.91±0.01      0.89±0.01  0.89±0.01
                  Bios            0.85±0.00  0.86±0.00      0.87±0.00  0.87±0.00
Table 3: Evaluation metrics on the publicly released BERT (Uncased) models of various sizes. Bold indicates the most favorable (lowest) values for each correlation metric.
Table 5: Impact of applying dropout regularization (a and h = .05) and counterfactual data augmentation to mitigate gendered correlations in ALBERT Large.
Table 6: Generalization of counterfactual data augmentation on BERT Large (Cased). Improvements beyond the word list used for mitigation are promising. Greatest improvements are seen when gender association of the replacement name is randomly selected.

    Mitigation →                          Name (A-M)
    Evaluation ↓           Baseline   Same Gender   Flip Gender   Random Gender
    DisCo (Names A-M)      3.9        2.5           2.6           1.2
    DisCo (Names N-Z)      2.5        2.6           2.3           2.0
    DisCo (Terms)          1.1        1.3           1.3           0.8
Footnotes

2 http://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark
3 We follow the original work and estimate this empirically from the training set, since the list of professions is different from those covered in BLS.
4 https://www.ssa.gov/oact/babynames/limits.html
5 Here, we present results for the Uncased variants, to allow comparison with ALBERT. We observed similar trends in the Cased variants.
6 A word list with neutral terms would be required to generate a sentence like "the person who pioneered the church". These are therefore and unfortunately not explored in this study.
7 We choose STS-B for this case study because it is fine-tuned using the standard BERT recipe and defines both a correlation and accuracy metric.
8 The experiments in this section are not meaningful for ALBERT, in which parameters are shared between layers.
9 The low accuracy measure around step=10 has high variation from training not yet being stable.

Acknowledgements

We benefited greatly throughout the project from discussions with Fernando Pereira, Ed Chi, Dipanjan Das, Vera Axelrod, Jacob Eisenstein, and Tulsee Doshi.

A DisCo Evaluation

Our fourteen templates are:
• PERSON is BLANK.
• PERSON is happy in their BLANK.
• PERSON is unhappy in their BLANK.
• PERSON likes BLANK.
• PERSON likes to BLANK.
• PERSON often likes to BLANK.
• PERSON always likes to BLANK.
• PERSON never likes to BLANK.
• PERSON is interested in BLANK.
• PERSON took a course in BLANK.
• PERSON studied BLANK at college.
• BLANK was PERSON's major at college.
• PERSON's best subject at school was BLANK.
• BLANK was PERSON's best subject at school.
References

Maria Barrett, Yova Kementchedjhieva, Yanai Elazar, Desmond Elliott, and Anders Søgaard. 2019. Adversarial removal of demographic attributes revisited. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6329-6334, Hong Kong, China. Association for Computational Linguistics.

Christine Basta, Marta Costa-jussà, and Noe Casas. 2019. Evaluating the underlying gender bias in contextualized word embeddings. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 33-39.

Alex Beutel, Jilin Chen, Zhe Zhao, and Ed H. Chi. 2017. Data decisions and theoretical implications when adversarially learning fair representations. arXiv preprint arXiv:1707.00075.

Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of "bias" in NLP.

Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam T. Kalai. 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in Neural Information Processing Systems, pages 4349-4357.

Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183-186.

Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1-14, Vancouver, Canada. Association for Computational Linguistics.

Maria De-Arteaga, Alexey Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, and Adam Tauman Kalai. 2019. Bias in bios: A case study of semantic representation bias in a high-stakes setting. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pages 120-128. ACM.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding.

Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2018. Measuring and mitigating unintended bias in text classification. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pages 67-73. ACM.

Harrison Edwards and Amos Storkey. 2015. Censoring representations with an adversary. arXiv preprint arXiv:1511.05897.

Yanai Elazar and Yoav Goldberg. 2018. Adversarial removal of demographic attributes from text data. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 11-21, Brussels, Belgium. Association for Computational Linguistics.

Sahaj Garg, Vincent Perot, Nicole Limtiaco, Ankur Taly, Ed H. Chi, and Alex Beutel. 2019. Counterfactual fairness in text classification through robustness. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pages 219-226. ACM.

Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. arXiv preprint arXiv:1903.03862.

Rowan Hall Maudslay, Hila Gonen, Ryan Cotterell, and Simone Teufel. 2019. It's all in the name: Mitigating gender bias with name-based counterfactual data substitution. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5266-5274, Hong Kong, China. Association for Computational Linguistics.

Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. OntoNotes: The 90% solution. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, NAACL-Short '06, pages 57-60, Stroudsburg, PA, USA. Association for Computational Linguistics.

Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems.

Svetlana Kiritchenko and Saif Mohammad. 2018. Examining gender and race bias in two hundred sentiment analysis systems. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 43-53, New Orleans, Louisiana. Association for Computational Linguistics.

Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan Black, and Yulia Tsvetkov. 2019. Measuring bias in contextualized word representations. In 1st ACL Workshop on Gender Bias for Natural Language Processing, pages 166-172.

Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. ALBERT: A lite BERT for self-supervised learning of language representations.

Christos Louizos, Kevin Swersky, Yujia Li, Max Welling, and Richard Zemel. 2015. The variational fair autoencoder. arXiv preprint arXiv:1511.00830.

David Madras, Elliot Creager, Toniann Pitassi, and Richard S. Zemel. 2018. Learning adversarially fair and transferable representations. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pages 3381-3390. PMLR.

Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On measuring social biases in sentence encoders. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 622-628, Minneapolis, Minnesota. Association for Computational Linguistics.

R. Thomas McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26, pages 3111-3119. Curran Associates, Inc.

Moin Nadeem, Anna Bethke, and Siva Reddy. 2020. StereoSet: Measuring stereotypical bias in pretrained language models.

Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proc. of NAACL.

Matthew E. Peters, Sebastian Ruder, and Noah A. Smith. 2019. To tune or not to tune? Adapting pretrained representations to diverse tasks. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 7-14, Florence, Italy. Association for Computational Linguistics.

Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language inference.

Marcelo O.R. Prates, Pedro H. Avelar, and Luís C. Lamb. 2018. Assessing gender bias in machine translation: A case study with Google Translate. Neural Computing and Applications, pages 1-19.

Flavien Prost, Nithum Thain, and Tolga Bolukbasi. 2019. Debiasing embeddings for reduced gender bias in text classification. arXiv preprint arXiv:1908.02810.

Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding with unsupervised learning. OpenAI Technical Reports.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer.

Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky, and Adam Tauman Kalai. 2019. What's in a name? Reducing bias in bios without access to protected attributes. arXiv preprint arXiv:1904.05233.

Rachel Rudinger, Chandler May, and Benjamin Van Durme. 2017. Social bias in elicited natural language inferences. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 74-79, Valencia, Spain. Association for Computational Linguistics.

Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, New Orleans, Louisiana. Association for Computational Linguistics.

Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2019. The woman worked as a babysitter: On biases in language generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3405-3410, Hong Kong, China. Association for Computational Linguistics.

Gabriel Stanovsky, Noah A. Smith, and Luke Zettlemoyer. 2019. Evaluating gender bias in machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1679-1684, Florence, Italy. Association for Computational Linguistics.

Yi Chern Tan and L. Elisa Celis. 2019. Assessing social and intersectional biases in contextualized word representations. In 33rd Conference on Neural Information Processing Systems (NeurIPS 2019).

Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, and Ellie Pavlick. 2019. What do you learn from context? Probing for sentence structure in contextualized word representations. In International Conference on Learning Representations.

Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Well-read students learn better: On the importance of pre-training compact models.
Mind the gap: A balanced corpus of gendered ambiguous pronouns. Kellie Webster, Marta Recasens, Vera Axelrod, Jason Baldridge, Transactions of the Association for Computational Linguistics. 6Kellie Webster, Marta Recasens, Vera Axelrod, and Ja- son Baldridge. 2018. Mind the gap: A balanced corpus of gendered ambiguous pronouns. Transac- tions of the Association for Computational Linguis- tics, 6:605-617.
Xlnet: Generalized autoregressive pretraining for language understanding. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, V Quoc, Le, Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding.
Mitigating unwanted biases with adversarial learning. Brian Hu Zhang, Blake Lemoine, Margaret Mitchell, Proceedings of the. theBrian Hu Zhang, Blake Lemoine, and Margaret Mitchell. 2018. Mitigating unwanted biases with adversarial learning. In Proceedings of the 2018
AAAI/ACM Conference on AI, Ethics, and Society. ACMAAAI/ACM Conference on AI, Ethics, and Society, pages 335-340. ACM.
Gender bias in contextualized word embeddings. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, Kai-Wei Chang, NAACL (short). Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cot- terell, Vicente Ordonez, and Kai-Wei Chang. 2019. Gender bias in contextualized word embeddings. In NAACL (short).
Men also like shopping: Reducing gender bias amplification using corpus-level constraints. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, Kai-Wei Chang, 10.18653/v1/D17-1323Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. the 2017 Conference on Empirical Methods in Natural Language ProcessingCopenhagen, DenmarkAssociation for Computational LinguisticsJieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Or- donez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In Proceedings of the 2017 Conference on Empirical Methods in Natural Lan- guage Processing, pages 2979-2989, Copenhagen, Denmark. Association for Computational Linguis- tics.
Gender bias in coreference resolution: Evaluation and debiasing methods. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, Kai-Wei Chang, 10.18653/v1/N18-2003Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesNew Orleans, Louisiana2Short Papers. Association for Computational LinguisticsJieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Or- donez, and Kai-Wei Chang. 2018a. Gender bias in coreference resolution: Evaluation and debiasing methods. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15-20, New Orleans, Louisiana. Association for Computa- tional Linguistics.
Learning gender-neutral word embeddings. Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, Kai-Wei Chang, arXiv:1809.01496arXiv preprintJieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and Kai- Wei Chang. 2018b. Learning gender-neutral word embeddings. arXiv preprint arXiv:1809.01496.
Counterfactual data augmentation for mitigating gender stereotypes in languages with rich morphology. Ran Zmigrod, Sabrina J Mielke, Hanna Wallach, Ryan Cotterell, 10.18653/v1/P19-1161Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyAssociation for Computational LinguisticsRan Zmigrod, Sabrina J. Mielke, Hanna Wallach, and Ryan Cotterell. 2019. Counterfactual data augmen- tation for mitigating gender stereotypes in languages with rich morphology. In Proceedings of the 57th Annual Meeting of the Association for Computa- tional Linguistics, pages 1651-1661, Florence, Italy. Association for Computational Linguistics.
doi: 10.24963/ijcai.2021/611
arxivid: 2105.11644
pdfurl: https://arxiv.org/pdf/2105.11644v1.pdf
corpusid: 235187102
pdfsha: f655b61b0929d02fa74392fe4fc2aedc801b4f47
A Survey on Complex Knowledge Base Question Answering: Methods, Challenges and Solutions
Yunshi Lan
School of Computing and Information Systems
Singapore Management University
Singapore
Gaole He
School of Information
Renmin University of China
Beijing Key Laboratory of Big Data Management and Analysis Methods
Jinhao Jiang [email protected]
Gaoling School of Artificial Intelligence
Renmin University of China
Jing Jiang [email protected]
School of Computing and Information Systems
Singapore Management University
Singapore
Wayne Xin Zhao
Beijing Key Laboratory of Big Data Management and Analysis Methods
Gaoling School of Artificial Intelligence
Renmin University of China
Ji-Rong Wen [email protected]
School of Information
Renmin University of China
Beijing Key Laboratory of Big Data Management and Analysis Methods
Gaoling School of Artificial Intelligence
Renmin University of China
A Survey on Complex Knowledge Base Question Answering: Methods, Challenges and Solutions
Knowledge base question answering (KBQA) aims to answer a question over a knowledge base (KB). Recently, a large number of studies focus on semantically or syntactically complicated questions. In this paper, we elaborately summarize the typical challenges and solutions for complex KBQA. We begin with introducing the background about the KBQA task. Next, we present the two mainstream categories of methods for complex KBQA, namely semantic parsing-based (SP-based) methods and information retrieval-based (IR-based) methods. We then review the advanced methods comprehensively from the perspective of the two categories. Specifically, we explicate their solutions to the typical challenges. Finally, we conclude and discuss some promising directions for future research.
Introduction
A knowledge base (KB) is a structured database that contains a collection of facts in the form (subject, relation, object). Large-scale KBs, such as Freebase [Bollacker et al., 2008], DBPedia [Lehmann et al., 2015] and Wikidata [Tanon et al., 2016], have been constructed to serve many downstream tasks. Based on available KBs, knowledge base question answering (KBQA) is a task that aims to answer natural language questions with KBs as its knowledge source. Early work on KBQA [Bordes et al., 2015;Dong et al., 2015;Hu et al., 2018a;Lan et al., 2019b;Lan et al., 2019a] focuses on answering a simple question, where only a single fact is involved. For example, "Where was JK Rowling born?" is a simple question which can be answered using just the fact "(J.K. Rowling, birthplace, United Kingdom)".
Recently, researchers have started paying more attention to answering complex questions over KBs, i.e., the complex KBQA task [Hu et al., 2018b; Luo et al., 2018]. Complex questions usually contain multiple subjects, express compound relations, and include numerical operations. Take the question in Figure 1 as an example. This example question starts with the subject "The Jeff Probst Show". Instead of querying a single fact, the question requires the composition of two relations, namely, "nominee" and "spouse". This query is also associated with an entity type constraint "(Jeff Probst, is a, TV producer)". The final answer should be further aggregated by selecting the possible candidates with the earliest marriage date. Generally, complex questions are questions involving multi-hop reasoning, constrained relations, numerical operations, or some combination of the above.

* Equal contribution. † Corresponding author.

Figure 1: An example of complex KBQA for the question "Who is the first wife of TV producer that was nominated for The Jeff Probst Show?". We present the related KB subgraph for this question. The ground truth path to answer this question is annotated with colored borders. The topic entity and the answer entity are shown in bold font and in a shaded box, respectively. The "multi-hop" reasoning, "constrained" relations and "numerical" operation are highlighted in black dotted boxes. We use different colors to indicate different reasoning hops to reach each entity from the topic entity.

Tracing back to the solutions for simple KBQA, a number of studies from two mainstream approaches have been proposed. These two approaches first recognize the subject in a question and link it to an entity in the KB (referred to as the topic entity). Then they derive the answers within the neighborhood of the topic entity by either executing a parsed logic form or reasoning in a question-specific graph extracted from the KB. The two categories of methods are commonly known as semantic parsing-based methods (SP-based methods) and information retrieval-based methods (IR-based methods) in prior work [Bordes et al., 2015; Dong et al., 2015; Hu et al., 2018a; Gu et al., 2020]. They include different working mechanisms to solve the KBQA task. The former approach represents a question by a symbolic logic form and then executes it against the KB to obtain the final answers. The latter approach constructs a question-specific graph delivering the comprehensive information related to the question and ranks all the entities in the extracted graph based on their relevance to the question.
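To make the running example concrete, the following minimal Python sketch (our own illustration; the entity names, relation names and dates are hypothetical abbreviations of the subgraph in Figure 1, not an actual KB) stores a KB as a set of (subject, relation, object) triples and derives the answer by composing the "nominee" and "spouse" relations, applying the type constraint, and selecting the earliest marriage date.

KB = {  # a toy KB: facts as (subject, relation, object) triples
    ("The Jeff Probst Show", "nominee", "Jeff Probst"),
    ("Jeff Probst", "is_a", "TV producer"),
    ("Jeff Probst", "spouse", "Shelley Wright"),
    ("Jeff Probst", "spouse", "Lisa Ann Russell"),
    ("Shelley Wright", "marriage_date", "1996"),
    ("Lisa Ann Russell", "marriage_date", "2011"),
}

def objects(subject, relation):
    """1-hop lookup: all objects linked to `subject` via `relation`."""
    return {o for (s, r, o) in KB if s == subject and r == relation}

# Hop 1: who was nominated for the show?
candidates = objects("The Jeff Probst Show", "nominee")
# Constraint: keep only candidates of type "TV producer".
producers = {c for c in candidates if "TV producer" in objects(c, "is_a")}
# Hop 2: their spouses.
spouses = {sp for p in producers for sp in objects(p, "spouse")}
# Numerical operation: "first wife" means the earliest marriage date.
answer = min(spouses, key=lambda sp: min(objects(sp, "marriage_date")))
print(answer)  # Shelley Wright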
However, when applying the two mainstream approaches to the complex KBQA task, complex questions bring in challenges on different parts of the approaches. We identify the main challenges as follows:
• Parsers used in existing SP-based methods are difficult to cover diverse complex queries (e.g., multi-hop reasoning, constrained relations and numerical operations). Similarly, previous IR-based methods may fail to answer a complex query, as their ranking is performed over small-scope entities without traceable reasoning.
• More relations and subjects in complex questions indicate a larger search space of potential logic forms for parsing, which will dramatically increase the computational cost. Meanwhile, more relations and subjects could prevent IR-based methods from retrieving all relevant entities for ranking.
• Both approaches treat question understanding as a primary step. When questions become complicated in both semantic and syntactic aspects, models are required to have strong capabilities of natural language understanding and generalization.
• It is expensive to label the ground truth paths to the answers (see the example in Figure 1) for complex questions. Generally, only question-answer pairs are provided. This indicates SP-based methods and IR-based methods have to be trained without the annotation of correct logic forms and reasoning paths, respectively. Such weak supervision signals bring difficulties to both approaches.
Regarding the related surveys, we observe that prior surveys [Wu et al., 2019; Chakraborty et al., 2019] reviewed the existing work on simple KBQA. Furthermore, Fu et al. [2020] investigated the current advances on complex KBQA. They provided a general view of advanced methods only from the perspective of techniques and focused more on application scenarios in the e-commerce domain. Different from these surveys, our work tries to identify the challenges encountered in previous studies and extensively discusses existing solutions in a comprehensive and well-organized manner. Specifically, we categorize the methods for complex KBQA into two mainstream approaches based on their working mechanisms. We decompose the overall procedure of the two approaches into a series of modules and analyze the challenges in each module. We believe that this way is particularly helpful for readers to understand the challenges and how they are addressed in existing solutions to complex KBQA. Furthermore, we provide a thorough outlook on several promising research directions on complex KBQA.

Dataset                     KB        Size     LF   NL
GrailQA [Gu et al., 2020]   Freebase  64,331   Yes  Yes
KQA Pro [Shi et al., 2020]  Wikidata  117,970  Yes  Yes

Table 1: Several complex KBQA benchmark datasets. "LF" denotes whether the dataset provides Logic Forms, and "NL" denotes whether the dataset incorporates crowd workers to rewrite questions in Natural Language.
The remainder of this survey is organized as follows. We will first introduce the preliminary knowledge about the task formulation, multiple available datasets and evaluation protocol in Section 2. Next, we introduce the two mainstream categories of methods for complex KBQA in Section 3. Then following the categorization, we figure out typical challenges and solutions to these challenges in Section 4. Finally, we conclude and discuss some future research directions in Section 5.
Background
In this section, we first give a task definition about complex KBQA, and then introduce available datasets and evaluation protocol for this task.
Task. For the task of complex KBQA, a KB consisting of a set of facts is given as input, where the subject and object are connected by their relation. All the subjects and objects in the facts form the entity set of a KB. Given the available KB, this task aims to answer complex natural language questions in the format of a sequence of tokens. Specifically, we assume the correct answers come from the entity set of the KB. Unlike answers of simple KBQA, which are entities directly connected to the topic entity, the answers of the complex KBQA task are entities multiple hops away from the topic entities or even some aggregation of them.
Datasets. Generally, the answers of the questions should be provided to train a complex KBQA system. For this purpose, many efforts have been devoted to constructing datasets for complex KBQA. We list the available complex KBQA datasets in Table 1. Overall, these datasets are constructed with the following steps. Given a topic entity in a KB as question subject, simple questions are first created with diverse templates. Based on simple questions and the neighborhood of a topic entity in a KB, complex questions are further generated with predefined templates, and some work [Shi et al., 2020] also generates executable logic forms with templates. Meanwhile, answers are extracted with corresponding rules. In some cases, crowd workers are hired to paraphrase the template queries into natural language questions and refine the generated logic forms, making the question expressions more diverse and fluent. In order to serve realistic applications,
these datasets typically create questions which require multiple KB facts to reason. Moreover, they might include numerical operations (e.g., counting, ranking operations for comparative or superlative questions) and constraints (e.g., entity, temporal keywords), which further increase the difficulty in reasoning the answers from KBs.

Figure 2: Illustration of two mainstream approaches for complex KBQA. The figure depicts the SP-based pipeline (question understanding, logical parsing, KB grounding) and the IR-based pipeline (retrieval source construction and reasoning over a question-specific graph).

Evaluation Protocol. The KBQA system usually predicts entities with the top confidence score to form the answer set. Note that there can be more than one answer to a question. In previous studies, there are some classical evaluation metrics such as precision, recall, F1 and Hits@1. Some studies [Yih et al., 2015; Liang et al., 2017; Abujabal et al., 2017] use the precision, recall, and F1 score to evaluate the prediction.
Precision indicates the ratio of the correct answers over all the predicted answers. Recall is the ratio of the correct predicted answers over all the ground truth. The F1 score considers precision and recall simultaneously. Some other methods [Miller et al., 2016; Sun et al., 2018; Xiong et al., 2019; He et al., 2021] use Hits@1 to assess the fraction of cases where the correct answers rank higher than other entities.
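For concreteness, a minimal sketch of these metrics (generic code of our own, not tied to any of the cited systems) is given below; `ranked` stands for a system's ranked prediction list and `gold` for the ground-truth answer set.

def precision_recall_f1(predicted, gold):
    predicted, gold = set(predicted), set(gold)
    if not predicted or not gold:
        return 0.0, 0.0, 0.0
    correct = len(predicted & gold)
    p = correct / len(predicted)            # correct among predictions
    r = correct / len(gold)                 # correct among ground truth
    f1 = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f1

def hits_at_1(ranked, gold):
    """1.0 if the top-ranked entity is a correct answer, else 0.0."""
    return float(bool(ranked) and ranked[0] in set(gold))

ranked = ["Shelley Wright", "Lisa Ann Russell"]
print(precision_recall_f1(ranked[:1], {"Shelley Wright"}))  # (1.0, 1.0, 1.0)
print(hits_at_1(ranked, {"Shelley Wright"}))                # 1.0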
Two Mainstream Approaches
As introduced in Section 1, SP-based and IR-based methods are two mainstream approaches to solving the complex KBQA task. SP-based methods parse a question into a logic form and execute it against KBs for finding the answers. IR-based methods retrieve a question-specific graph and apply some ranking algorithms to select entities from top positions. To summarize, the two approaches follow either a parse-then-execute paradigm or a retrieval-and-rank paradigm, which are illustrated in Figure 2.
Semantic Parsing-based Methods. This category of methods aims at parsing a natural language utterance into a logic form [Reddy et al., 2014]. They predict answers via the following steps: (1) They fully understand a question via a question understanding module, which is to conduct the semantic and syntactic analysis and obtain an encoded question for the subsequent parsing step.
(2) A logical parsing module is utilized to transfer the encoded question into an uninstantiated logic form. The uninstantiated logic form is a syntactic representation of the question without the grounding of entities and relations. The grammar and constituents of logic forms could be different according to specific designs of a system. (3) To execute against KBs, the logic form is further instantiated and validated by conducting some semantic alignments to structured KBs via KB grounding. Note that, in some work [Yih et al., 2015;Liang et al., 2017], the logical parsing and KB grounding are simultaneously performed, where logic forms are validated in KBs while partially parsed. (4) Eventually, the parsed logic form is executed against KBs to generate predicted answers via a KB execution module.
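The sketch below walks through this parse-then-execute paradigm, reusing the toy KB and the objects() lookup defined in the sketch in the introduction; the dictionary-based logic form and the hand-written lexicon are simplifications of our own, not the formalism of any cited parser.

# Step (2): an uninstantiated logic form; surface phrases are not yet grounded.
ungrounded = {"topic": "the jeff probst show",
              "chain": ["nominated for", "wife of"],
              "type_constraint": "tv producer",
              "aggregation": ("argmin", "marriage date")}

# Step (3): KB grounding via a (hand-written, illustrative) lexicon that
# aligns surface phrases with KB entities and relations.
lexicon = {"the jeff probst show": "The Jeff Probst Show",
           "nominated for": "nominee", "wife of": "spouse",
           "tv producer": "TV producer", "marriage date": "marriage_date"}
grounded = {"topic": lexicon[ungrounded["topic"]],
            "chain": [lexicon[r] for r in ungrounded["chain"]],
            "type_constraint": lexicon[ungrounded["type_constraint"]],
            "aggregation": ("argmin", lexicon[ungrounded["aggregation"][1]])}

# Step (4): KB execution walks the relation chain, applies the constraint,
# then applies the aggregation operator.
def execute(q):
    frontier = {q["topic"]}
    for i, rel in enumerate(q["chain"]):
        frontier = {o for s in frontier for o in objects(s, rel)}
        if i == 0:  # in this example the constraint binds the middle entity
            frontier = {e for e in frontier
                        if q["type_constraint"] in objects(e, "is_a")}
    op, key_rel = q["aggregation"]
    assert op == "argmin"
    return min(frontier, key=lambda e: min(objects(e, key_rel)))

print(execute(grounded))  # Shelley Wright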
Information Retrieval-based Methods. As another mainstream approach, IR-based methods directly retrieve and rank answers from the KBs considering the information conveyed in the questions [Bordes et al., 2015; Dong et al., 2015]. They consist of the following steps: (1) Starting from the topic entity, the system first extracts a question-specific graph from KBs. Ideally, this graph includes all question-related entities and relations as nodes and edges, respectively. Without explicitly generating an executable logic form, IR-based methods perform reasoning over the graph and then rank entities in the graph.
(2) Next, the system encodes input questions via a question representation module. This module analyzes the semantics of the question and outputs reasoning instructions, which are usually represented as vectors. (3) A graph-based reasoning module conducts semantic matching via vector-based computation to propagate and then aggregate the information along the neighboring entities within the graph. The reasoning status, which has diverse definitions in different methods (e.g., distributions of predicted entities, representations of relations), is updated based on the reasoning instruction. Recently, several studies [Jain, 2016] repeat Steps (2) and (3) multiple times to perform the reasoning. (4) An answer ranking module is utilized to rank the entities in the graph according to the reasoning status at the end of reasoning. The top-ranked entities are predicted as the answers to the question.
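As a numerical illustration of this retrieval-and-rank paradigm (a deliberately simplified model of our own, not a reimplementation of any cited system), the sketch below keeps the reasoning status as a probability distribution over graph entities and propagates it along relation-specific adjacency matrices, weighted by per-step relation scores standing in for the reasoning instructions.

import numpy as np

# A question-specific graph over 4 entities; one adjacency matrix per relation,
# where M[r][i, j] = 1 iff (entity_i, r, entity_j) is an edge (toy values).
entities = ["Show", "Probst", "Wright", "Russell"]
M = {"nominee": np.array([[0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]),
     "spouse":  np.array([[0, 0, 0, 0], [0, 0, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0]])}

p = np.array([1.0, 0.0, 0.0, 0.0])  # reasoning status: start at the topic entity
instructions = [{"nominee": 0.9, "spouse": 0.1},  # per-step relation scores, as
                {"nominee": 0.1, "spouse": 0.9}]  # if from a question encoder

for w in instructions:                       # one propagation per reasoning step
    p = sum(w[r] * (M[r].T @ p) for r in M)  # aggregate over weighted relations
    p = p / p.sum()                          # renormalize the entity distribution

print(dict(zip(entities, p.round(3))))       # entities ranked by final probability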
Pros and Cons. Overall, SP-based methods can produce a more interpretable reasoning process by generating expressive logic forms. However, they heavily rely on the design of the logic form and parsing algorithm, which turns out to be the bottleneck of performance improvement. As a comparison, IR-based methods conduct complex reasoning on graph structure and perform semantic matching. Such a paradigm naturally fits into popular end-to-end training and makes the IR-based methods easier to train. However, the black-box style of the reasoning model makes the intermediate reasoning less interpretable.
Challenges and Solutions

Since the aforementioned approaches are developed based on different paradigms, we describe the challenges and corresponding solutions for complex KBQA with respect to the two mainstream approaches. A summary of these challenges and solutions is presented in Table 2.
Semantic Parsing-based Methods
In this part, we discuss the challenges and solutions for semantic parsing-based methods.
Overview. As introduced in Section 3, SP-based methods follow a parse-then-execute procedure via a series of modules, namely question understanding, logical parsing, KB grounding and KB execution. These modules will encounter different challenges for complex KBQA. Firstly, question understanding becomes more difficult when the questions are complicated in both semantic and syntactic aspects. Secondly, logical parsing has to cover diverse query types of complex questions. Moreover, a complex question involving more relations and subjects will dramatically increase the possible search space for parsing, which makes the parsing less effective. Thirdly, the manual annotation of logic forms is both expensive and labor-intensive, and it is challenging to train an SP-based method with weak supervision signals (i.e., question-answer pairs). Next, we will introduce how prior studies deal with these challenges.
Understanding Complex Semantics and Syntax. As the first step of SP-based methods, the question understanding module converts unstructured text into an encoded question (i.e., structural representation), which benefits the downstream parsing. Compared with simple questions, complex questions are featured with more complex query types and compositional semantics, which increases the difficulty in linguistic analysis. To better understand complex natural language questions, many existing methods rely on syntactic parsing, such as dependencies [Abujabal et al., 2017; Abujabal et al., 2018; Luo et al., 2018] and Abstract Meaning Representation (AMR) [Kapanipathi et al., 2020], to provide better alignment between question constituents and logic form elements (e.g., entity, relation, entity types and attributes). However, the accuracy of producing syntactic parses is still not satisfying on complex questions, especially for those with long-distance dependency. To alleviate error propagation from syntactic parsing to downstream semantic parsing, Sun et al. [2020] leveraged skeleton-based parsing to obtain the trunk of a complex question, which is a simple question with several branches (i.e., pivot words of original text spans) to be expanded. Another line of work focuses on leveraging structural properties (such as tree structure or graph structure) of logic forms for ranking candidate parses. They try to improve the matching between logic forms and questions by incorporating a structure-aware feature encoder [Zhu et al., 2020], applying fine-grained slot matching, and adding constraints about query structure to filter noisy queries out.
To satisfy the compositionality of the complex questions, researchers have developed diverse expressive logic forms as parsing targets. Bast and Haussmann [2015] designed three query templates as the parsing targets, which could cover questions querying 1-hop, 2-hop relations and single constraint involved relations. Although this piece of work can successfully parse several types of complex questions, it suffers from the limited coverage issue. Yih et al. [2015] proposed query graph as the expressive parsing target. A query graph is a logic form in graph structure which closely matches with the KB schemas. Such query graphs have shown strong expression capability in complex KBQA task. However, they are restrictedly generated with predefined manual rules, which is inapplicable to large-scale datasets and long-tail complex question types. The follow-up work tried to improve the formulation of query graphs. To generalize to unseen and long-tail question types, Ding et al. [2019] proposed to leverage frequent query substructure for formal query generation. Abujabal et al. [2017] utilized syntactic annotation to enhance the structural complexity of the query graph. Hu et al. [2018b] applied more aggregation operators (e.g., "merging") to fit complex questions, and conducted coreference resolution.
Grounding with Large Search Space. To obtain executable logic forms, KB grounding module instantiates possible logic forms with a KB. As one entity in the KB could be linked to hundreds or even thousands of relations, it would be unaffordable to explore and ground all the possible logic forms for a complex question considering both computational resource and time complexity. Recently, researchers proposed multiple approaches to solving the problem. Zheng et al. [2018b] proposed to decompose a complex question into multiple simple questions, where each question was parsed into a simple logic form. Next, intermediate answers are generated via these simple logic forms and final answers are jointly obtained. This decompose-execute-join strategy could effectively narrow down the search space. A similar approach was studied by Bhutani et al.
[2019] and they reduced human annotations by leveraging dependency structure. Meanwhile, a number of studies adopted the expand-and-rank strategies to reduce the search space by searching the logic forms with beams. Chen et al.
[2019] first adopted the hopwise greedy search strategy to expand the most likely query graphs and stop until the best query graph was obtained. Lan et al. [2019c] proposed an iterative matching module to parse the questions without revisiting the generated query graphs at each searching step. Such a sequential expansion process is only effective in answering multi-hop questions, while helpless for questions with constraints or numerical operations. Lan and Jiang [2020] defined more operations to support three typical complex queries, which can largely reduce the search space.
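In the abstract, the expand-and-rank idea can be sketched as a beam search over partially built query graphs; the code below is a generic illustration with made-up actions and a toy scoring function of our own, not the search procedure of any particular cited method.

import heapq

def beam_search(initial, expand, score, beam_size=3, max_steps=3):
    """Grow candidate query graphs step by step, keeping only the
    `beam_size` highest-scoring candidates, so the grounding search
    space stays bounded instead of growing exponentially."""
    beam, best = [initial], initial
    for _ in range(max_steps):
        candidates = [g2 for g in beam for g2 in expand(g)]
        if not candidates:
            break
        beam = heapq.nlargest(beam_size, candidates, key=score)
        best = max([best] + beam, key=score)
    return best

# Toy usage: a "query graph" is a tuple of actions; the scorer (standing in
# for a learned question-graph matching model) prefers the gold sequence.
gold = ("extend:nominee", "constrain:TV producer", "extend:spouse")
actions = list(gold)
expand = lambda g: [g + (a,) for a in actions if a not in g]
score = lambda g: sum(1 for x, y in zip(g, gold) if x == y)
print(beam_search((), expand, score))  # recovers the gold action sequence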
Training under Weak Supervision Signals. To deal with the issue of limited or insufficient training data, Reinforcement Learning (RL) based optimization has been adopted to maximize the expected reward [Liang et al., 2017; Qiu et al., 2020b]. In such a way, SP-based methods can only receive the feedback after the execution of the complete parsed logical form, which leads to severe sparse positive rewards and data inefficiency issues. To tackle these issues, some research work adopted reward shaping strategies for parsing evaluation. Saha et al. [2019] rewarded the model with additional feedback when the predicted answers are the same type as the ground truth. Hua et al. [2020b] adopted a similar idea to evaluate the generated logic form by comparing it with the high-reward logic forms stored in the memory buffer. Besides rewards for the whole procedure, intermediate rewards during the semantic parsing process may also help address this challenge. Recently, Qiu et al. [2020b] formulated query graph generation as a hierarchical decision problem, and proposed a framework based on hierarchical RL with intrinsic motivations to provide intermediate rewards. To accelerate and stabilize the training process, Qiu et al. [2020b] pretrained the model with pseudo-gold programs (i.e., high-reward logic forms generated by hand-crafted rules). As pseudo-gold programs can also be generated from the model, Liang et al. [2017] proposed to maintain pseudo-gold programs found by an iterative maximum-likelihood training process to bootstrap training.
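The following sketch shows the bare mechanics of such RL training with a shaped reward (a schematic REINFORCE loop of our own; the action space, the gold program and the partial type-match reward are all invented for illustration).

import numpy as np

rng = np.random.default_rng(0)
ACTIONS = 4                     # toy vocabulary of logic-form building actions
theta = np.zeros((3, ACTIONS))  # one categorical policy per generation step

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def reward(program):
    """Shaped reward: full credit only for the gold program (standing in for
    answer F1 after execution); partial credit when only the first action is
    right (standing in for, e.g., an answer-type match)."""
    gold = (1, 0, 2)
    if program == gold:
        return 1.0
    return 0.2 if program[0] == gold[0] else 0.0

for _ in range(2000):           # REINFORCE: sample a program, update by reward
    probs = [softmax(theta[t]) for t in range(3)]
    program = tuple(rng.choice(ACTIONS, p=probs[t]) for t in range(3))
    r = reward(program)
    for t in range(3):          # reward-scaled gradient of the log-likelihood
        grad = -probs[t]
        grad[program[t]] += 1.0
        theta[t] += 0.1 * r * grad

print([int(t.argmax()) for t in theta])  # tends toward the gold program [1, 0, 2]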
Information Retrieval-based Methods
Here, we summarize the main challenges brought by complex questions for different modules of IR-based methods.
Overview. The overall procedure typically consists of the modules of retrieval source construction, question representation, graph based reasoning and answer ranking. These modules will encounter different challenges for complex KBQA. Firstly, the retrieval source construction module extracts a question-specific graph from KBs, which covers a wide range of relevant facts for each question. Due to the non-negligible incompleteness of source KBs [Min et al., 2013], the correct reasoning paths may be absent from the extracted graph. This issue is more likely to occur in the case of complex questions. Secondly, the question representation module understands the question and generates instructions to guide the reasoning process. This step becomes challenging when the question is complicated. After that, reasoning on the graph is conducted through semantic matching. When dealing with complex questions, such methods rank answers through semantic similarity without traceable reasoning in the graph, which hinders reasoning analysis and failure diagnosis. Eventually, this system encounters the same training challenge under weak supervision signals (i.e., question-answer pairs). The following parts illustrate how prior work deals with these challenges.
Reasoning under Incomplete KB. IR-based methods first extract a question-specific graph from KBs, and conduct subsequent reasoning on it. Since simple questions only require 1-hop reasoning on the neighborhood of the topic entity in KBs, IR-based methods are less likely to suffer from the inherent incompleteness of KBs [Min et al., 2013]. In comparison, it may be a severe problem for complex questions, where the correct reasoning path may be absent from the question-specific graph. Furthermore, this incompleteness reduces the neighborhood information used for encoding entities, which poses additional challenges for effective reasoning. To tackle this challenge, researchers utilize auxiliary information to enrich the knowledge source. Intuitively, a large question-related text corpus retrieved from Wikipedia can provide a wide range of unstructured knowledge as supplementary evidence. Sun et al. [2018] and Sun et al. [2019] proposed to complement the subgraph extracted from incomplete KBs with extra question-related text sentences to form a heterogeneous graph and conduct reasoning on it. Instead of directly complementing sentences to the question-specific graph as nodes, Xiong et al. [2019] and Han et al. [2020a] proposed to fuse extra textual information into the entity representation to supplement knowledge. They first encoded sentences and entities conditioned on questions, and then supplemented the incomplete KB by aggregating representations of sentences to enhance corresponding entity representations. Besides extra text corpus, knowledge base embeddings have been adopted to alleviate the sparsity of the KB by performing missing link prediction. Inspired by the KB completion task, Saxena et al.
[2020] utilized pre-trained knowledge base embeddings to enrich the learned entity representations and address the incomplete KB issue.
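Schematically, the idea can be captured with a ComplEx-style scoring function in which a learned question embedding plays the role of a relation embedding, so candidate answers can score highly even when the corresponding KB link is missing; the sketch below uses random toy embeddings of our own and does not reproduce the cited model.

import numpy as np

rng = np.random.default_rng(1)
d, n_entities = 8, 5            # toy embedding dimension and entity count
E = rng.normal(size=(n_entities, d)) + 1j * rng.normal(size=(n_entities, d))

def complex_score(head, rel, tail):
    """ComplEx-style triple score: Re(<head, rel, conj(tail)>)."""
    return np.real(np.sum(head * rel * np.conj(tail)))

topic = E[0]                                          # linked topic entity
q_rel = rng.normal(size=d) + 1j * rng.normal(size=d)  # question embedding,
                                                      # acting as a relation
scores = [complex_score(topic, q_rel, E[i]) for i in range(n_entities)]
print(int(np.argmax(scores)))   # highest-scoring entity = predicted answer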
Understanding Complex Semantics. In general, IR-based methods generate reasoning instructions by directly encoding questions as low-dimensional vectors through a neural network (e.g., an LSTM). Static reasoning instructions obtained through the above approaches cannot effectively represent the compositional semantics of complex questions. In order to comprehensively understand questions, recent work dynamically updated the reasoning instructions during the reasoning process. To focus on the currently unanalyzed part of the question, Miller et al. [2016] and follow-up work proposed to update the reasoning instruction with information retrieved along the reasoning process. Besides updating the instruction representation with the reasoned information, He et al. [2021] proposed to focus on different parts of the question with a dynamic attention mechanism. This dynamic attention mechanism can promote the model to attend to other information conveyed by the question and provide proper guidance for subsequent reasoning steps. Instead of decomposing the semantics of questions, Sun et al. [2018] proposed to augment the representation of the question with contextual information from the graph. They updated the reasoning instruction through aggregating information from the topic entity after every reasoning step.
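A dynamic-attention instruction update of this kind can be sketched as follows (toy random encodings; the update rule is a generic simplification of our own rather than the exact formulation of any cited work): each reasoning step re-attends to the question tokens, conditioned on the previous instruction.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(2)
H = rng.normal(size=(6, 16))   # encodings of 6 question tokens (toy values)
inst = H.mean(axis=0)          # initial instruction: mean token encoding

for t in range(3):             # one instruction per reasoning step
    att = softmax(H @ inst)    # attention over tokens, conditioned on the
                               # previous instruction
    inst = att @ H             # new instruction: attention-weighted tokens
    print(t, att.round(2))     # per-step attention over question tokens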
Uninterpretable Reasoning. Traditional IR-based methods rank answers by calculating a single semantic similarity between questions and entities in the graph, which is less interpretable at the intermediate steps. As complex questions usually query multiple facts, the system is supposed to accurately predict answers over the graph based on a traceable and observable reasoning process. Even though some work repeated reasoning steps multiple times, it cannot reason along a traceable path in the graph. To derive a more interpretable reasoning process, multi-hop reasoning is introduced. Specifically, Zhou et al. [2018], among others, proposed to make the relation or entity predicted at each hop traceable and observable. They output intermediate predictions (i.e., matched relations or entities) from predefined memory as the interpretable reasoning path. Nevertheless, this cannot fully utilize the semantic relation information to reason edge by edge. Thus, Han et al. [2020b] constructed a denser hypergraph by pinpointing a group of entities connected via the same relation, which simulated humans' hopwise relational reasoning and output a sequential relation path to make the reasoning interpretable.

Training under Weak Supervision Signals. Similar to the SP-based methods, it is difficult for IR-based methods to reason the correct answers without any annotations at intermediate steps, since the model cannot receive any feedback until the end of reasoning. It is found that this case may lead to spurious reasoning [He et al., 2021]. To mitigate such issues, Qiu et al.
[2020a] formulated the reasoning process over KBs as expanding the reasoning path on the KB and adopted a reward shaping strategy to provide intermediate rewards. To evaluate reasoning paths at intermediate steps, they utilized the semantic similarity between the question and the reasoning path to provide feedback. Besides evaluating the reasoning path at intermediate steps, a more intuitive idea is to infer pseudo intermediate status and augment model training with such inferred signals. Inspired by the bidirectional search algorithm on graphs, He et al. [2021] proposed to learn the intermediate reasoning entity distributions by synchronizing a bidirectional reasoning process. While most of the existing work focused on enhancing the supervision signals at intermediate steps, little work paid attention to the entity linking step. Researchers utilized off-the-shelf tools to locate the topic entity in the question, which may cause error propagation to subsequent reasoning. In order to accurately locate the topic entity without annotations, Zhang et al. [2018] proposed to train the entity linking module through a variational learning algorithm which jointly modeled topic entity recognition and subsequent reasoning over KBs.
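In a set-based simplification of this bidirectional idea (our own sketch over the toy triple KB from the introduction, not the cited model's learned distributions), pseudo labels for hop t are the entities reachable forward from the topic entity in t steps and backward from the answers in the remaining steps:

def forward(frontier):   # entities one hop forward from `frontier`
    return {o for s in frontier for (s2, r, o) in KB if s2 == s}

def backward(frontier):  # entities one hop backward from `frontier`
    return {s for o in frontier for (s, r, o2) in KB if o2 == o}

topic, answers, hops = {"The Jeff Probst Show"}, {"Shelley Wright"}, 2
fwd, bwd = [topic], [answers]
for _ in range(hops):
    fwd.append(forward(fwd[-1]))
    bwd.append(backward(bwd[-1]))

# Pseudo label for hop t: reachable forward in t steps AND backward from
# the answers in (hops - t) steps.
pseudo = [fwd[t] & bwd[hops - t] for t in range(hops + 1)]
print(pseudo)  # [{'The Jeff Probst Show'}, {'Jeff Probst'}, {'Shelley Wright'}]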
Conclusion and Future Directions
This paper attempted to provide an overview of typical challenges and corresponding solutions on complex KBQA. We introduced commonly used datasets and summarized the widely employed SP-based methods as well as IR-based methods. Existing complex KBQA methods are generally summarized into these two categories. Besides them, some other methods [Talmor and Berant, 2018] may not fall into these two categories. For example, Talmor and Berant [2018] proposed to transform a complex question to a composition of simple questions through rule-based decomposition, which focused on question decomposition instead of KB based reasoning or logic form generation. We believe that complex KBQA will continue to be an active and promising research area with wide applications, such as natural language understanding, compositional generalization, multi-hop reasoning. Many challenges presented in this survey are still open and under-explored.
Considering the challenges summarized in this paper, we point out several promising future directions for complex KBQA task:
Evolutionary KBQA. As we can see, existing methods for the complex KBQA task are usually learned on offline training datasets and then deployed online to answer user questions. Due to such clear separation, most existing KBQA systems fail to catch up with the rapid growth of world knowledge and answer new questions. However, user feedback may provide deployed KBQA systems an opportunity to improve themselves. Based on this observation, Abujabal et al. [2018] leveraged the user feedback to rectify answers generated by the KBQA system and made further improvement. Beyond verifying the correctness of system predictions, users may also play a more active role in the question answering process. Zheng et al. [2018a] designed an interactive method to engage users in the question parsing process of the KBQA system directly. In the future, an evolutionary KBQA system is imperative to get continuous improvement after online deployment.
Robust and Interpretable Models. While existing methods have achieved promising results on benchmark datasets where the i.i.d. assumption holds, they may easily fail to deal with out-of-distribution cases. The few-shot setting is a scenario where the training data is limited. A few previous studies [Hua et al., 2020a; He et al., 2021] discussed related topics, but they are still far from comprehensive in terms of challenge analysis and problem solving. Compositional generalization is another scenario where novel combinations of component items seen in training should be inferred during testing. To support more research on such issues, Gu et al. [2020] and Keysers et al. [2020] have introduced related datasets, namely GrailQA and CFQ. The models are supposed to handle out-of-distribution questions and obtain an explainable reasoning process. Designing methods for KBQA with good interpretability and robustness may be a challenging but promising topic for future research.
More General Knowledge Base. Due to KB incompleteness, researchers incorporated extra information (such as text [Sun et al., 2018], images [Xie et al., 2017] and human interactions [He et al., 2020]) to complement the knowledge base, which would further improve the complex KBQA performance. There are also some tasks (e.g., visual question answering and commonsense knowledge reasoning), which can be formulated as question answering based on specific KBs. For example, in visual question answering, the scene graph extracted from an image can be regarded as a special KB [Hudson and Manning, 2019]. Besides explicitly representing relational knowledge as a structural KB, some researchers proposed to reason on an implicit "KB". Petroni et al.
[2019] analyzed the relational knowledge in a wide range of pretrained language models, and some follow-up work [Bouraoui et al., 2020; Jiang et al., 2020] further demonstrated its effectiveness to answer "fill-in-the-blank" cloze statements. While most of the existing work focused on traditional structured KBs, a more general definition of KBs and flexible usage of KBs may help the KBQA research topic show greater impact.
SP-based Methods
- Question understanding (challenge: understanding complex semantics and syntax): adopt parsing augmented with structural properties (e.g., dependency parsing [Abujabal et al., 2017; Abujabal et al., 2018; Luo et al., 2018], AMR [Kapanipathi et al., 2020]), skeleton-based parsing [Sun et al., 2020], or structural-property-based matching.
- Logical parsing (challenge: parsing complex queries): develop expressive targets for parsing, such as template-based queries [Bast and Haussmann, 2015], query graphs [Yih et al., 2015; Abujabal et al., 2017; Hu et al., 2018b], and so on.
- KB grounding (challenge: grounding with large search space): narrow down the search space by a decompose-execute-join strategy [Zheng et al., 2018b; Bhutani et al., 2019] or an expand-and-rank strategy [Chen et al., 2019; Lan et al., 2019c; Lan and Jiang, 2020].
- Training procedure (challenge: training under weak supervision signals): adopt a reward shaping strategy to strengthen the training signal [Saha et al., 2019; Hua et al., 2020b; Qiu et al., 2020b], conduct pre-training to initialize the model [Qiu et al., 2020b], or use iterative maximum-likelihood training [Liang et al., 2017].

IR-based Methods
- Retrieval source construction (challenge: reasoning under incomplete KB): supplement the KB with an extra corpus [Sun et al., 2018; Sun et al., 2019], fuse extra textual information into entity representations [Xiong et al., 2019; Han et al., 2020a], or leverage KB embeddings [Saxena et al., 2020].
- Question representation (challenge: understanding complex semantics): update instructions with reasoned information [Miller et al., 2016], apply dynamic attention over the question [He et al., 2021], or enrich the question representation with contextual information of the graph [Sun et al., 2018].
- Graph based reasoning (challenge: uninterpretable reasoning): provide a traceable reasoning path [Zhou et al., 2018] or hyperedge-based reasoning [Han et al., 2020b].
- Training procedure (challenge: training under weak supervision signals): provide shaped reward as intermediate feedback [Qiu et al., 2020a], augment intermediate supervision signals with a bidirectional search algorithm [He et al., 2021], or adopt a variational algorithm to train the entity linking module [Zhang et al., 2018].

Table 2: Summary of the existing studies on complex KBQA. We categorize them into two mainstream approaches w.r.t. key modules and solutions according to different challenges.
Acknowledgements
References

[Abujabal et al., 2017] Abdalghani Abujabal, Mohamed Yahya, Mirek Riedewald, and Gerhard Weikum. Automated template generation for question answering over knowledge graphs. In WWW, 2017.
[Abujabal et al., 2018] Abdalghani Abujabal, Rishiraj Saha Roy, Mohamed Yahya, and Gerhard Weikum. Never-ending learning for open-domain question answering over knowledge bases. In WWW, 2018.
[Bao et al., 2016] Junwei Bao, Nan Duan, Zhao Yan, Ming Zhou, and Tiejun Zhao. Constraint-based question answering with knowledge graph. In COLING, 2016.
[Bast and Haussmann, 2015] Hannah Bast and Elmar Haussmann. More accurate question answering on Freebase. In CIKM, 2015.
[Berant and Liang, 2014] Jonathan Berant and Percy Liang. Semantic parsing via paraphrasing. In ACL, 2014.
[Berant et al., 2013] Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. Semantic parsing on Freebase from question-answer pairs. In EMNLP, 2013.
[Bhutani et al., 2019] Nikita Bhutani, Xinyi Zheng, and H. V. Jagadish. Learning to answer complex questions over knowledge bases with query composition. In CIKM, 2019.
[Bollacker et al., 2008] Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. Freebase: A collaboratively created graph database for structuring human knowledge. In SIGMOD, 2008.
[Bordes et al., 2015] Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. Large-scale simple question answering with memory networks. arXiv, 2015.
[Bouraoui et al., 2020] Zied Bouraoui, José Camacho-Collados, and Steven Schockaert. Inducing relational knowledge from BERT. In AAAI, 2020.
[Cai and Yates, 2013] Qingqing Cai and Alexander Yates. Large-scale semantic parsing via schema matching and lexicon extension. In ACL, 2013.
[Chakraborty et al., 2019] Nilesh Chakraborty, Denis Lukovnikov, Gaurav Maheshwari, Priyansh Trivedi, Jens Lehmann, and Asja Fischer. Introduction to neural network based approaches for question answering over knowledge graphs. arXiv, 2019.
[Chen et al., 2019] Zi-Yuan Chen, Chih-Hung Chang, Yi-Pei Chen, Jijnasa Nayak, and Lun-Wei Ku. UHop: An unrestricted-hop relation extraction framework for knowledge-based question answering. In NAACL, 2019.
[Chen et al., 2020] Yongrui Chen, Huiying Li, Yuncheng Hua, and Guilin Qi. Formal query building with query structure prediction for complex question answering over knowledge base. In IJCAI, 2020.
[Ding et al., 2019] Jiwei Ding, Wei Hu, Qixin Xu, and Yuzhong Qu. Leveraging frequent query substructures to generate formal queries for complex question answering. In EMNLP, 2019.
[Dong et al., 2015] Li Dong, Furu Wei, Ming Zhou, and Ke Xu. Question answering over Freebase with multi-column convolutional neural networks. In ACL, 2015.
[Dubey et al., 2019] Mohnish Dubey, Debayan Banerjee, Abdelrahman Abdelkawi, and Jens Lehmann. LC-QuAD 2.0: A large dataset for complex question answering over Wikidata and DBpedia. In ISWC, 2019.
[Fu et al., 2020] Bin Fu, Yunqi Qiu, Chengguang Tang, Yang Li, Haiyang Yu, and Jian Sun. A survey on complex question answering over knowledge base: Recent advances and challenges. arXiv, 2020.
[Gu et al., 2020] Yu Gu, Sue Kase, Michelle Vanni, Brian M. Sadler, Percy Liang, Xifeng Yan, and Yu Su. Beyond I.I.D.: Three levels of generalization for question answering on knowledge bases. In WWW, 2020.
[Han et al., 2020a] Jiale Han, Bo Cheng, and Xu Wang. Open domain question answering based on text enhanced knowledge graph with hyperedge infusion. In EMNLP, 2020.
[Han et al., 2020b] Jiale Han, Bo Cheng, and Xu Wang. Two-phase hypergraph based reasoning with dynamic relations for multi-hop KBQA. In IJCAI, 2020.
[He et al., 2020] Gaole He, Junyi Li, Wayne Xin Zhao, Peiju Liu, and Ji-Rong Wen. Mining implicit entity preference from user-item interaction data for knowledge graph completion via adversarial learning. In WWW, 2020.
[He et al., 2021] Gaole He, Yunshi Lan, Jing Jiang, Wayne Xin Zhao, and Ji-Rong Wen. Improving multi-hop knowledge base question answering by learning intermediate supervision signals. In WSDM, 2021.
[Hu et al., 2018a] Sen Hu, Lei Zou, Jeffrey Xu Yu, Haixun Wang, and Dongyan Zhao. Answering natural language questions by subgraph matching over knowledge graphs. TKDE, 2018.
[Hu et al., 2018b] Sen Hu, Lei Zou, and Xinbo Zhang. A state-transition framework to answer complex questions over knowledge base. In EMNLP, 2018.
[Hua et al., 2020a] Yuncheng Hua, Yuan-Fang Li, Gholamreza Haffari, Guilin Qi, and Wei Wu. Retrieve, program, repeat: Complex knowledge base question answering via alternate meta-learning. In IJCAI, 2020.
[Hua et al., 2020b] Yuncheng Hua, Yuan-Fang Li, Guilin Qi, Wei Wu, Jingyao Zhang, and Daiqing Qi. Less is more: Data-efficient complex question answering over knowledge bases. J. Web Semant., 2020.
[Hudson and Manning, 2019] Drew A. Hudson and Christopher D. Manning. Learning by abstraction: The neural state machine. In NeurIPS, 2019.
[Jain, 2016] Sarthak Jain. Question answering over knowledge base using factual memory networks. In NAACL, 2016.
[Jiang et al., 2020] Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. How can we know what language models know. TACL, 2020.
[Kapanipathi et al., 2020] Pavan Kapanipathi, Ibrahim Abdelaziz, Srinivas Ravishankar, Salim Roukos, Alexander G. Gray, Ramón Fernandez Astudillo, Maria Chang, Cristina Cornelio, Saswati Dana, Achille Fokoue, Dinesh Garg, Alfio Gliozzo, Sairam Gurajada, Hima Karanam, Naweed Khan, Dinesh Khandelwal, Young-Suk Lee, Yunyao Li, Francois P. S. Luus, Ndivhuwo Makondo, Nandana Mihindukulasooriya, Tahira Naseem, Sumit Neelam, Lucian Popa, Revanth Gangi Reddy, Ryan Riegel, Gaetano Rossiello, Udit Sharma, G. P. Shrivatsa Bhargav, and Mo Yu. Question answering over knowledge bases by leveraging semantic parsing and neuro-symbolic reasoning. In AAAI, 2020.
[Keysers et al., 2020] Daniel Keysers, Nathanael Schärli, Nathan Scales, Hylke Buisman, Daniel Furrer, Sergii Kashubin, Nikola Momchev, Danila Sinopalnikov, Lukasz Stafiniak, Tibor Tihon, Dmitry Tsarkov, Xiao Wang, Marc van Zee, and Olivier Bousquet. Measuring compositional generalization: A comprehensive method on realistic data. In ICLR, 2020.
[Kwiatkowski et al., 2013] Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. Scaling semantic parsers with on-the-fly ontology matching. In EMNLP, 2013.
[Lan and Jiang, 2020] Yunshi Lan and Jing Jiang. Query graph generation for answering multi-hop complex questions from knowledge bases. In ACL, 2020.
[Lan et al., 2019a] Yunshi Lan, Shuohang Wang, and Jing Jiang. Knowledge base question answering with a matching-aggregation model and question-specific contextual relations. IEEE/ACM Transactions on Audio, Speech and Language Processing, 27:1629-1638, 2019.
[Lan et al., 2019b] Yunshi Lan, Shuohang Wang, and Jing Jiang. Knowledge base question answering with topic units. In IJCAI, 2019.
[Lan et al., 2019c] Yunshi Lan, Shuohang Wang, and Jing Jiang. Multi-hop knowledge base question answering with an iterative sequence matching model. In ICDM, 2019.
[Lehmann et al., 2015] Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick van Kleef, Sören Auer, et al. DBpedia: A large-scale, multilingual knowledge base extracted from Wikipedia. Semantic Web, 2015.
[Liang et al., 2017] Chen Liang, Jonathan Berant, Quoc Le, Kenneth D. Forbus, and Ni Lao. Neural symbolic machines: Learning semantic parsers on Freebase with weak supervision. In ACL, 2017.
[Lopez et al., 2013] Vanessa Lopez, Christina Unger, Philipp Cimiano, and Enrico Motta. Evaluating question answering over linked data. Web Semantics: Science, Services and Agents on the World Wide Web, 2013.
[Luo et al., 2018] Kangqi Luo, Fengli Lin, Xusheng Luo, and Kenny Q. Zhu. Knowledge base question answering via encoding of complex query graphs. In EMNLP, 2018.
[Maheshwari et al., 2019] Gaurav Maheshwari, Priyansh Trivedi, Denis Lukovnikov, Nilesh Chakraborty, Asja Fischer, and Jens Lehmann. Learning to rank query graphs for complex question answering over knowledge graphs. In ISWC, 2019.
[Miller et al., 2016] Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. Key-value memory networks for directly reading documents. In EMNLP, 2016.
[Min et al., 2013] Bonan Min, Ralph Grishman, Li Wan, Chang Wang, and David Gondek. Distant supervision for relation extraction with an incomplete knowledge base. In NAACL-HLT, 2013.
[Petroni et al., 2019] Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick S. H. Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander H. Miller. Language models as knowledge bases? In EMNLP, 2019.
[Qiu et al., 2020a] Yunqi Qiu, Yuanzhuo Wang, Xiaolong Jin, and Kun Zhang. Stepwise reasoning for multi-relation question answering over knowledge graph with weak supervision. In WSDM, 2020.
[Qiu et al., 2020b] Yunqi Qiu, Kun Zhang, Yuanzhuo Wang, Xiaolong Jin, Long Bai, Saiping Guan, and Xueqi Cheng. Hierarchical query graph generation for complex question answering over knowledge graph. In CIKM, 2020.
[Reddy et al., 2014] Siva Reddy, Mirella Lapata, and Mark Steedman. Large-scale semantic parsing without question-answer pairs. TACL, 2014.
[Saha et al., 2019] Amrita Saha, Ghulam Ahmed Ansari, Abhishek Laddha, Karthik Sankaranarayanan, and Soumen Chakrabarti. Complex program induction for querying knowledge bases in the absence of gold programs. TACL, 2019.
[Saxena et al., 2020] Apoorv Saxena, Aditay Tripathi, and Partha Talukdar. Improving multi-hop question answering over knowledge graphs using knowledge base embeddings. In ACL, 2020.
[Shi et al., 2020] Jiaxin Shi, Shulin Cao, Liangming Pan, Yutong Xiang, Lei Hou, Juanzi Li, Hanwang Zhang, and Bin He. KQA Pro: A large diagnostic dataset for complex question answering over knowledge base. arXiv, 2020.
[Sun et al., 2018] Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Kathryn Mazaitis, Ruslan Salakhutdinov, and William Cohen. Open domain question answering using early fusion of knowledge bases and text. In EMNLP, 2018.
[Sun et al., 2019] Haitian Sun, Tania Bedrax-Weiss, and William Cohen. PullNet: Open domain question answering with iterative retrieval on knowledge bases and text. In EMNLP, 2019.
[Sun et al., 2020] Yawei Sun, Lingling Zhang, Gong Cheng, and Yuzhong Qu. SPARQA: Skeleton-based semantic parsing for complex questions over knowledge bases. In AAAI, 2020.
[Talmor and Berant, 2018] Alon Talmor and Jonathan Berant. The web as a knowledge-base for answering complex questions. In NAACL-HLT, 2018.
[Tanon et al., 2016] Thomas Pellissier Tanon, Denny Vrandečić, Sebastian Schaffert, Thomas Steiner, and Lydia Pintscher. From Freebase to Wikidata: The great migration. In WWW, 2016.
[Trivedi et al., 2017] Priyansh Trivedi, Gaurav Maheshwari, Mohnish Dubey, and Jens Lehmann. LC-QuAD: A corpus for complex question answering over knowledge graphs. In ISWC, 2017.
[Wu et al., 2019] Peiyun Wu, Xiaowang Zhang, and Zhiyong Feng. A survey of question answering over knowledge base. In CCKS, 2019.
[Xie et al., 2017] Ruobing Xie, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. Image-embodied knowledge representation learning. In IJCAI, 2017.
[Xiong et al., 2019] Wenhan Xiong, Mo Yu, Shiyu Chang, Xiaoxiao Guo, and William Yang Wang. Improving question answering over incomplete KBs with knowledge-aware reader. In ACL, 2019.
[Xu et al., 2019] Kun Xu, Yuxuan Lai, Yansong Feng, and Zhiguo Wang. Enhancing key-value memory neural networks for knowledge based question answering. In NAACL-HLT, 2019.
[Yih et al., 2015] Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. Semantic parsing via staged query graph generation: Question answering with knowledge base. In ACL, 2015.
[Yih et al., 2016] Wen-tau Yih, Matthew Richardson, Christopher Meek, Ming-Wei Chang, and Jina Suh. The value of semantic parse labeling for knowledge base question answering. In ACL, 2016.
[Zhang et al., 2018] Yuyu Zhang, Hanjun Dai, Zornitsa Kozareva, Alexander J. Smola, and Le Song. Variational reasoning for question answering with knowledge graph. In AAAI, 2018.
Never-ending learning for open-domain question answering over knowledge bases. [ Zheng, In InfoScience. [Zheng et al., 2018a] Weiguo Zheng, Hong Cheng, Jef- frey Xu Yu, Lei Zou, and Kangfei Zhao. Never-ending learning for open-domain question answering over knowl- edge bases. In InfoScience, 2018.
Question answering over knowledge graphs: Question understanding via template decomposition. [ Zheng, In VLDB Endow. [Zheng et al., 2018b] Weiguo Zheng, Jeffrey Xu Yu, Lei Zou, and Hong Cheng. Question answering over knowl- edge graphs: Question understanding via template decom- position. In VLDB Endow., 2018.
An interpretable reasoning network for multirelation question answering. [ Zhou, Xiang Cheng, and Sen Su. Knowledge-based question answering by tree-to-sequence learning[Zhou et al., 2018] Mantong Zhou, Minlie Huang, and Xi- aoyan Zhu. An interpretable reasoning network for multi- relation question answering. In COLING, 2018. [Zhu et al., 2020] Shuguang Zhu, Xiang Cheng, and Sen Su. Knowledge-based question answering by tree-to-sequence learning. Neurocomputing, 2020.
| []
|
[]
| [
"D Y "
]
| []
| []
| afaev U niversit e de R ennes D edicated to M .S.B irm an on the occasion of his seventieth birthdayA bstractA typi cal resul t of the paper i s the fol l ow i ng. Let H = H 0 + V w here H 0 i s m ul ti pl i cati on by j xj 2l and V i s an i ntegraloperator w i th kernelcoshx;yi i n the space L 2 (R d ). If l = d=2 + 2k for som e k = 0;1;:::, then the operator H has i n ni te num ber ofnegati ve ei genval ues for any coupl i ng constant 6 = 0. For other val ues ofl, the negati ve spectrum ofH i s i n ni te for j j> l w here l i s som e expl i ci t posi ti ve constant. In the case 2 (0; l ] ,the num ber N | null | [
"https://export.arxiv.org/pdf/math-ph/9806009v1.pdf"
]
| 119,152,368 | math-ph/9806009 | ea51041851ce531025d9f1bc4ad66af6149c708b |
Jun 1998
D Y Jun 1998arXiv:math-ph/9806009v1 15 T H E D ISC R E T E SP E C T R U M IN T H E SIN G U LA R F R IE D R IC H S M O D E L
afaev U niversit e de R ennes D edicated to M .S.B irm an on the occasion of his seventieth birthdayA bstractA typi cal resul t of the paper i s the fol l ow i ng. Let H = H 0 + V w here H 0 i s m ul ti pl i cati on by j xj 2l and V i s an i ntegraloperator w i th kernelcoshx;yi i n the space L 2 (R d ). If l = d=2 + 2k for som e k = 0;1;:::, then the operator H has i n ni te num ber ofnegati ve ei genval ues for any coupl i ng constant 6 = 0. For other val ues ofl, the negati ve spectrum ofH i s i n ni te for j j> l w here l i s som e expl i ci t posi ti ve constant. In the case 2 (0; l ] ,the num ber N
IN T R O D U C T IO N
A perturbati on ofa m ul ti pl i cati on operatorH 0 by an i ntegraloperatorV i scal l ed a Fri edri chs m odel .Iti susual l y assum ed thatkernelofV i sa H ol derconti nuous(w i th an exponentl arger than 1=2) functi on v(x;y) of one-di m ensi onal vari abl es x and y w hi ch decays su ci entl y rapi dl y as j xj+ j yj! 1 . T hen (see [ 5] ) the wave operators for the pai r H 0 ,H 1 = H 0 + V exi st and are com pl ete. M oreover, the operator H 1 does not have the si ngul ar conti nuous spectrum , and i ts di screte spectrum i s ni te. It i s i m portant that the resul ts of [ 5] are appl i cabl e to the case w hen a kernel v(x;y) i s i tsel f a com pact operator i n an auxi l i ary H i l bert space.
A som ew hat di erent si tuati on was consi dered i n the paper [ 4]w here the operator H 0 of m ul ti pl i cati on by j xj 2l ; l> 0;i n the space L 2 (R d ) was perturbed by an i ntegraloperator of Fouri er type. M ore preci sel y,the perturbati on was de ned by one ofthe equal i ti es
(V (c) f)(x)= (2 ) d=2 Z R d coshx;yif(y)dy or (V (s) f)(x)= (2 ) d=2 Z R d si nhx;yif(y)dy
(1. 1) and H (c) = H 0 + V (c) orH (s) = H 0 + V (s) w here 2 R i sa coupl i ng constant.A n i nteresti ng feature ofthi sm odeli sthatV (c) and V (s) are i nvari antw i th respectto the Fouri ertransform so that one coul d have chosen H 0 = ( ) l for the \unperturbed" operator. Passi ng to the spheri calcoordi nates and consi deri ng the space L 2 (R d ) as L 2 (R + ;L 2 (S d 1 )),we can t the operators H (c) and H (s) i nto the fram ework of the Fri edri chs m odel . H owever, si nce the kernel s coshx;yi or si nhx;yi do not tend to 0 as j xj! 1 or (and) j yj! 1 ,the resul ts of the paper [ 5]are not appl i cabl e to perturbati ons (1. 1) (even i n the case d = 1). D ue to osci l l ati ons ofi ts kernel ,the operators (1. 1) are bounded i n the space L 2 (R d ) but they are not com pact,even rel ati vel y w i th respect to H 0 . N everthel ess,as show n i n [ 4] ,the essenti alspectra ofthe operatorsH (c) ,H (s) and H 0 coi nci de. T hi s i m pl i es that the negati ve spectra ofthe operatorsH (c) and H (s) consi st ofei genval ues of ni te m ul ti pl i ci ty w hi ch m ay accum ul ate at the bottom ofthe essenti alspectrum (poi nt zero) onl y. M oreover,i n the case 2l > d,the trace-cl ass techni que was used i n [ 4]to prove, for the pai rs H 0 ,H (c) and H 0 , H (s) ,the exi stence ofthe wave operators and thei r com pl eteness.
O urgoalhere i sto study the di screte spectrum ofthe operatorsH (c) and H (s) . T he space L 2 (R d ) decom poses i nto the orthogonalsum ofsubspaces H n ; n = 0;1;2;:::constructed i n term s ofthe spheri calfuncti ons oforder n and i nvari ant w i th respect to the operators H (c) and H (s) . O n the subspaces H n ,these operators reduce to operators H acti ng i n the space T he functi on v depends ofcourse on n and on the i ndex \c" or \s" but i s al ways expressed i n term s ofa Besselfuncti on.
T herefore, we consi der rst operators H = H 0 + V i n the space L 2 (R + ) w here V i s gi ven by (1. 2)w i th a ratherarbi trary realfuncti on v.W e suppose thatv(t)hasa su ci entl y regul ar behavi our as t! 0 and t! 1 . Si nce operators (1. 2) never sati sfy condi ti ons of [ 5] , thi s versi on ofthe Fri edri chs m odeli s cal l ed si ngul ar here. It turns out that the negati ve spectrum ofthe operator H i s very sensi ti ve w i th respect to the coupl i ng constant and the param eter l. T hus,i fthe asym ptoti c expansi on ofv(t) as t! 0 contai ns the term vt r , then,i n the case l= r+ 1=2,the operatorH hasi n ni te num berofnegati ve ei genval uesfor any coupl i ng constant 6 = 0. For other val ues ofl,the negati ve spectrum ofH i s i n ni te for j j> l w here l i s som e expl i ci t posi ti ve constant. In the case 2 (0; l ] ,the num ber N ( ) l of negati ve ei genval ues of H does not depend on . W e cal cul ate N ( ) l i n term s of the asym ptoti c expansi on ofthe functi on v(t) as t ! 0. W e em phasi ze that both posi ti ve and negati ve parts ofperturbati on (1. 2) are,i n general ,non-tri vi al ,and our resul ts on the di screte spectrum ofH take i nto account thei r \i nteracti on".
T he resul ts on the di screte spectrum i n the si ngul ar Fri edri chs m odelcan be com pared w i th those for the Schr odi nger operator + q(x) w hose potenti al q(x) has a cri ti cal decay at i n ni ty. Suppose, for exam pl e, that x 2 R and a negati ve functi on q(x) has the asym ptoti cs q(x) j xj 2 as j xj ! 1 . T hen for su ci entl y sm al l > 0 the negati ve spectrum of the operator + q(x) consi sts of exactl y one ei genval ue, i t i s ni te for 2 (0;1=4) and i t i s i n ni te for > 1=4. N ote al so that the m echani sm for appearance of i n ni te num ber of negati ve ei genval ues i n the si ngul ar Fri edri chs m odel resem bl es the sam e phenom ena (the E m ov' s e ect, see [ 6] ) for the three-parti cl e Schr odi nger operator w i th short-range pai r potenti al s.
O ur cal cul ati on of the num ber of negati ve ei genval ues rel i es on the Bi rm an-Schw i nger pri nci pl e. W e need a presentati on ofthi s theory som ew hat di erent from the ori gi nalpaper [ 2]and adapted to perturbati ons w i thout a de ni te si gn. T hi s i s done i n Secti on 2 i n the abstractfram ework.T henegati vespectrum oftheoperatorH i n thespaceL 2 (R + )i sstudi ed i n Secti on 3. In Secti on 4, we use these resul ts to cal cul ate the totalnum ber of negati ve ei genval ues ofthe operators H (c) and H (s) i n the space L 2 (R d ).
T H E B IR M
H 1=2 0 = (H 1=2 0 f 1 ;H 1=2 0 f 2 )+ (f 1 ;f 2 ). T hi s m eans that j v[ f;f] j C j j fj j 2h[ f;f]= j j H 1=2 0 fj j 2 + v[ f;f] i scl osed on D (H 1=2 0 )
.So thereexi sts(see [ 3] )a uni quesel f-adjoi nt,sem i -bounded from bel ow , operatorH such that D (j H j 1=2 )= D (H
(H + c) 1 (H 0 + c) 1 = (H 0 + c) 1 (H 0 + I) 1=2 T (H 0 + I) 1=2 (H + c) 1 ; (2. 2)
w here c i s a su ci enl y l arge posi ti ve constant. Si nce the operator (H 0 + I) 1=2 (H + c) 1 i s bounded,the ri ght-hand si de of(2. 2) i s a com pact operator,and hence by W eyl ' s theorem , the essenti alspectra ess ofthe operators H 0 and H coi nci de. T hi s resul t was establ i shed i n [ 2] .
O ur goalhere i s to cal cul ate the totalm ul ti pl i ci ty
N = di m E H ( 1 ;0) (2. 3)
ofthe negati ve spectrum ofthe operatorH .To thatend we i ntroduce som e auxi l i ary objects. Let us consi der the set R ofel em ents u = H In the case ofa bounded operatorB ,thi sasserti on i sLem m a 1. 2 from [ 2] . In the general case,i ts proofi s practi cal l y the sam e.
2.
In term s ofthe form (2. 5),a si m pl e versi on ofthe Bi rm an-Schw i nger pri nci pl e can be form ul ated as fol l ow s. j ' j (u)j 2 + n X j= m + 1 j ' j (u)j 2 :
(2. 11) O fcourse,one or both sum s i n (2. 11) m ay be absent,that i s we do not excl ude the cases m = 0 or (and) n = m .
Let us gi ve rst su ci ent condi ti ons for the negati ve spectrum ofthe operator H to be i n ni te. In thi s case we do not assum e that A i s bounded and repl ace equal i ty (2. 11) by an esti m ate. j ' j (u)j 2 ;
(2. 12)
where ' 1 ;:::;' p is a system ofl inear functional s de ned on R 0 . T hen the negative spectrum ofthe operator H is in nite provided di m E A ( 1 ; 1)= 1 .
Proof. { By Lem m a 2. 2,forany k,there exi stsa subspace L 0 R 0 such thatdi m L 0 = k and i nequal i ty (2. 10) hol ds. Let L L 0 consi st ofel em ents u 2 L 0 such that ' j (u) = 0 for al l j = 1;:::;p. C l earl y,di m L k p. It fol l ow s from (2. 12) that a[ u;u] (A u;u) for u 2 L. T herefore (2. 10) i m pl i es (2. 6). So,by Lem m a 2. 1,the negati ve spectrum ofH contai ns at l east k p ei genval ues. It rem ai ns to take i nto account that k i s arbi trary. 2
3.
To cal cul ate the num ber ofnegati ve ei genval ues ofthe operator H ,we need several auxi l i ary asserti ons. T he rst ofthem has a purel y al gebrai c nature and i s qui te standard.
Lem m a 2.5 LetR be som e l inearspace (perhapsofin nite dim ension)and l et' 1 ;:::;' n ;' : R ! C be l inear functional s. Suppose that' 1 ;:::;' n are l inearl y independentand denote by N the setoftheir com m on zeros,i.e. u 2 N i ' j (u)= 0 for any j = 1;:::;n. A ssum e that '(u)= 0 for allu 2 N . T hen ' = P n j= 1 j ' j for som e j 2 C . To excl ude the case w hen the sum s i n (2. 11) are degenerate,we i ntroduce the fol l ow i ng D e nition 2.6 LetR H be a l inear setdense in H and l et' 1 ;:::;' n be l inear functional s de ned on R . W e call' 1 ;:::;' n strongl y l inear independentifthe inequal ity
j n X j= 1 j ' j (u)j C j j uj j ; 8u 2 R ;
(2. 13) (here C is som e positive constant) im pl ies that j = 0 for allj = 1;:::;n.
T hus,i ti si m possi bl e to nd a l i nearcom bi nati on ofstrongl y l i neari ndependentfuncti onal sw hi ch i sa bounded functi onalon H .O fcourse,thestrong l i neari ndependence ensuresthe usuall i near i ndependence,and every functi onalfrom a strongl y l i near i ndependent system i s unbounded.
Lem m a 2.7 Ifa functional' (de ned on a dense set R ) is unbounded,then the set ofits zeros is dense.
Proof. { T here exi sts a sequence w k 2 R such that '(w k ) = 1 and j j w k j j! 0 as k ! 1 . M oreover,for any gi ven sequence k ! 0,choosi ng a subsequence ofw k ,we can sati sfy the bound j j w k j j k . Let h 2 H be an arbi trary el em ent. T hen there exi sts a sequence u k 2 R such that u k ! h i n H as k ! 1 . Put
u (0) k = u k '(u k )w k : C l earl y,'(u (0) k )= 0 and u (0) k ! h provi ded j '(u k )jj j w k j j! 0 as k ! 1 . 2 C orollary 2.8
T he set ofcom m on zeros ofany nite system ofunbounded functional s (in particul ar,ofa strongl y independentsystem ) is dense in H .
Let us di scuss som e properti es offuncti onal s sati sfyi ng D e ni ti on 2. 6.
Lem m a 2.9 Let' 1 ;:::;' n be strongl y l inear independentfunctional s and l etN m be the set ofcom m on zeros offunctional s ' j ;j = 1;:::;n;j 6 = m . T hen the restriction of' m on N m is unbounded.
Proof. { In the opposi te case,there exi sts a bounded functi onal' such that ' m (u) = '(u) foru 2 N m . So i tfol l ow s from Lem m a 2. 5 that
'(u)= ' m (u)+ P j6 = m j ' j (u)foral lu 2 R .
T hi s contradi cts the strong l i near i ndependence of' 1 ;:::;' n . 2
Lem m a 2.10 Let' 1 ;:::;' n be strongl y l inearindependentfunctional sand l etL 0 be a nitedim ensionalsubspace ofR . T hen for any > 0 and any m = 1;:::;n there exists a vector u (m ) ;j j u (m ) j j= 1,such thatu (m ) 6 2 L 0 ,' j (u (m ) )= 0 forj = 1;:::;n;j 6 = m ,and j ' m (u (m ) )j .
Proof. { By Lem m a 2. 9,there exi sts a sequence z k such that j j z k j j= 1,j ' m (z k )j! 1 and ' j (z k )= 0 for al lj = 1;:::;n;j 6 = m . Si nce di m L 0 < 1 ,we have that sup z2 L 0 ;jjzjj= 1 j ' m (z)j< 1 :
T herefore z k 6 2 L 0 and j ' m (z k )j for k l arge enough. So we can set u (m ) = z k for such k. 2 T hi s asserti on can be general i zed.
Lem m a 2.11 U nder assum ptions of Lem m a 2: 10, for any > 0 and any m n, there exists a setofnorm al ized vectors u i 6 2 L 0 ,i= 1;:::;m ,such that ' j (u i )= 0; j = 1;:::;n; j 6 = i;
(2. 14)
and
j ' i (u i )j : (2. 15)
M oreover,for the subspace L R spanned by L 0 and u 1 ;:::;u m di m L = di m L 0 + m :
(2. 16)
Proof. { In the case m = 1 Lem m as 2. 10 and 2. 11 counci de. Suppose that we have al ready constructed u 1 ;:::;u m 1 such that rel ati ons (2. 14),(2. 15) are sati s ed for i= 1;:::;m 1 and the subspaceL spanned by L 0 and u 1 ;:::;u m 1 has di m ensi on di m L 0 + m 1. T hen exi stenceofthevectoru m w i th al lnecessary properti esfol l ow sagai n from Lem m a 2. 10 appl i ed to the subspaceL. 2
4.
In D e ni ti on 2. 6,R i s an arbi trary l i near set but,i fR i s a Banach space,we suppose that the functi onal s ' 1 ;:::;' n are bounded on thi s space (but not on H , of course). In our study ofthe negati ve spectrum ofthe operator H ,the rol e ofR i s pl ayed by the space
R = H 1=2 0 D (HC l earl y,di m L 0 di m L m . Si nce M di m L 0 ,thi s i m pl i es that M N m .
It rem ai ns to check that N M + m . Let the set N R be de ned by the condi ti on: u 2 N i ' j (u)= 0 for al lj = 1;:::;n. By C orol l ary 2. 8,thi s set i s dense i n H . T herefore, by Lem m a 2. 2,M equal s the m axi m aldi m ensi on ofL 0 N w here (2. 10) hol ds. Let L be the subspace constructed i n Lem m a 2. 11 for R = R and su ci entl y l arge w hi ch w i l lbe chosen l ater. A ccordi ng to (2. 16), we need onl y to veri fy rel ati on (2. 6) on thi s subspace. Every vector u 2 L has the form
u = u 0 + m X i= 1 i u i ; w here u 0 2 L 0 ; j 2 C ; (2. 18) so that,for B = A + I, (B u;u)= (B u 0 ;u 0 )+ 2R e m X i= 1 i (B u 0 ;u i )+ m X i;j= 1 i j (B u i ;u j ): (2. 19)
R ecal lthat L 0 N and,consequentl y,' j (u 0 ) = 0 for al lj = 1;:::;n. T herefore i t fol l ow s from (2. 14) and (2. 18)that
' i (u)= i ' i (u i );a[ u;u]+ j j uj j 2 = (B u;u) m X i= 1 j ' i (u)j 2 (B u 0 ;u 0 )+ 2j j B u 0 j j m X i= 1 j i j+ m b m X i= 1 j i j 2 m X i= 1 j i j 2 j ' i (u i )j 2 ; w here b= m ax 1 i;j m j (B u i ;u j )j .
T hus,for the proofof(2. 6),i t su ces to check that for al lu 0 2 L 0 and any num bers i
(m b j ' i (u i )j 2 )j i j 2 + 2j j B u 0 j jj i j+ m 1 (B u 0 ;u 0 )< 0; i= 1;:::;m : (2. 21)
Si nce (B u 0 ;u 0 )< 0 and the di m ensi on ofL 0 i s ni te,we have that
(B u 0 ;u 0 ) b 0 j j u 0 j j 2 and j j B u 0 j j b 1 j j u 0 j j for som e b 0 ;b 1 > 0.
Taki ng al so i nto account (2. 15),we see that (2. 21) i s sati s ed i f
( 2 m b)j i j 2 2b 1 j j u 0 j jj i j+ m 1 b 0 j j u 0 j j 2 > 0:
T he l asti nequal i ty hol dsforarbi trary i i f i sl arge enough,thati s 2 > m (b+ b 2 1 b 1 0 ).T hi s concl udes the proofs of(2. 6) and hence ofT heorem 2. 12. Proof. { C hangi ng i n the l eft-hand si de the vari abl es x = e t ,y = e s and denoti ng u 1 (t) = e t=2 u(e t ),b 1 (t)= e t=2 b(e t ),we rew ri te i t as
Z 1 1 Z 1 1 b 1 (t+ s)u 1 (s)u 1 (t)dtds:
Ifu 1 (t)= 0 for j tj n,then,by vi rtue ofthe convol uti on form ul a and the Parsevali denti ty, thi s i ntegralequal s To study the di screte spectrum ofH ,we need addi ti onalcondi ti ons on v(t) as t! 0.
A ssum ption 3.5 Suppose that,as t! 0,
v(t)= N X k= 1 v k t r k + O (t r N + 1 ); (3. 6)
where 1=2 < r 1 < < r N < r N + 1 and l< r N + 1 + 1=2. W e assum e that v k 6 = 0 for k = 1;:::;N but do not excl ude the case w hen the sum i n (3. 6) i s absent,that i s v(t)= O (t r 1 ) w i th r 1 > l 1=2: Lem m a 3.6 LetA ssum ption 3: 3 be satis ed. T hen the integral(3: 9) for = 1 converges at t= 1 uniform l y in 2 R .
(3. 7) W e set b l (t)= t l (v(t) X r k < l 1=2 v k t r k ) (3. 8) (b l (t)= t l v(t) i
Lem m a 3.7 Let A ssum ption 3: 5 be satis ed. Suppose that l6 = r n + 1=2 for n = 1;:::;N . T hen the integral(3: 9) for = 0 converges att= 0 uniform l y in 2 R .
Ifcondi ti ons ofboth Lem m as 3. 6 and 3. 7 are ful l l ed,then the functi on U si ng notati on (3. 8),we can rew ri te (3. 11) for u 2 C 1 0 (R + ) as Lem m a 3.9 Letp k 2 ( 1=2 l; 1=2)for k = 1;:::;n. T hen the functional s p k (u)de ned by (3: 13) are bounded on R (l) and are strongl y l inear independent.
l ( )= Z 1 0 b l (t)t 1=2 i dta l [ u;u]= b l [ u;u]+ X r k < l 1=2 v k j r k l (u)j 2 ; (3. 12) w here b l [ u;u]=
Proof. { T he i nequal i ty j p (u)j C j j uj j R i s equi val ent to the i ncl usi on
x p (1 + x 2l ) 1=2 2 L 2 (R + )
w hi ch i s true i f p 2 ( 1=2 l; 1=2). T he functi onal s p 1 ;:::; pn are strongl y l i near i ndependent because a functi on P n k= 1 c k x p k does not bel ong to the space L 2 (R + ) unl ess al l c k = 0: 2
Put b ( ) l (t)= ( ) (t)b l (t)
. W i th the hel p ofLem m as 3. 1,3. 6 and 3. 7 i t easy to show that
Z 1 0 Z 1 0 b ( ) l (xy)u(y)u(x)dxdy = Z 1 1 ( ) l ( )(M u)( )(M u)( )d :
(3. 14)
T he preci se statem ents are form ul ated i n the two fol l ow i ng asserti ons.
Lem m a 3.10 LetA ssum ption 3: 3 hol d and u 2 C 1 0 (R + ). T hen representation (3: 14)isval id for = 1. To check the part2 0 ofT heorem 3. 8,we veri fy the assum pti onsofT heorem s2. 4.W e rel y agai n on representati on (3. 12) val i d at l east for u 2 C 1 0 (R + ). Let R 0 consi st offuncti ons u 2 C 1 0 (R + ) such that 1=2 (u)= v k t r k t 1 rn i dt+ iv n 1 :
(3. 19)
Proof. { Let us proceed from equal i ty (3. 14) for = 0 and l= r n + 1=2 "," > 0. T hen we pass to the l i m i t " ! 0 i n thi s equal i ty. Its l eft-hand si de i s ofcourse conti nuous i n lfor any u 2 C 1 0 (R + ). Let us veri fy the convergence ofthe ri ght-hand si de as " ! 0. C om pari ng de ni ti on (3. 8),(3. 9) ofthe functi ons O fcourse,there exi sts a sequenceg j 2 C 1 0 (R ) such thatg j converge tog as j ! 1 i n the m etri cs (3. 22) (that i s i n the Sobol ev space H 1 (R )). Setw j (t) = ig 0 j (t) so that w j ( ) = g j ( ). T hen w j converge to w i n the m etri cs (3. 21). M oreover,w j 2 C 1 0 (R ) and
Z 1 1w j (t)dt= i Z 1 1g 0 j (t)dt= 0:
It fol l ow s that u j = G w j 2 C 1 0 (R + ),u j sati sfy (3. 18) and u j ! u i n D (A l ) as j ! 1 : 2 T hus,we have veri ed al lcondi ti ons ofT heorem 2. 4 and hence the negati ve spectrum of the operator H i s i n ni te for any 6 = 0. T hi s concl udes the proofofT heorem 3. 8.
4. T he functi on (3. 10) can be cal cul ated on the basi s ofthe fol l ow i ng P roposition 3.14 Suppose thatthe integral
V(T )= Z T 1 v(t)t 1=2 dt
is bounded uniform l y in T 1 and thatA ssum ption 3: 5 hol ds. T hen the function
B (z)= Z 1 0 v(t)t 1=2 z dt (3. 23)
is anal ytic in the band R ez 2 (0;r 1 + 1=2) and adm its a m erom orphic continuation in the band R ez 2 (0;r N + 1 + 1=2). T he function B (z) has onl y sim pl e pol es in the points r n + 1=2 with the residues v n ,n = 1;:::;N . M oreover,itis given by the form ul a
B (z)= Z 1 0 v(t) n X k= 1 v k t r k t 1=2 z dt (3. 24)
in the band R ez 2 (r n + 1=2;r n+ 1 + 1=2).
Proof. { Integrati ng by parts,we see that the i ntegral
B 1 (z)= Z 1 1 v(t)t 1=2 z dt= z Z 1 1 V(t)t 1 z dt
de nes an anal yti c functi on for al lR ez > 0. IfR ez > a n + 1=2,then
B 1 (z)= Z 1 1 v(t) n X k= 1 v k t r k t 1=2 z dt+ n X k= 1 v k (z r k 1=2) 1 : (3. 25)
Si m i l arl y,i fR ez < a 1 + 1=2,then
B 0 (z)= Z 1 0 v(t)t 1=2 z dt= Z 1 0 v(t) n X k= 1 v k t r k t 1=2 z dt n X k= 1 v k (z r k 1=2) 1 (3. 26)
A ccordi ng to A ssum pti on 3: 5, the i ntegral i n the ri ght-hand si de of (3. 26) i s anal yti c for R ez < r n+ 1 + 1=2 so that (3. 26) gi ves the m erom orphi c conti nuati on ofthe functi on B 0 (z). In parti cul ar,i fn = N we obtai n that the functi on B (z) = B 0 (z)+ B 1 (z) i s m erom orphi c i n the band R ez 2 (0;r N + 1 + 1=2). Fi nal l y,com pari ng representati ons (3. 25)and (3. 26),we arri ve at (3. 24). 2 T hus, to cal cul ate the functi on l ( ), i t su ces to com pute i ntegral (3. 23) for R ez 2 (0;r 1 + 1=2) and then to nd i ts m erom orphi c conti nuati on i nto the band R ez 2 (0;r N + 1 + 1=2).Putti ng together rel ati ons (3. 8),(3. 10)and (3. 24),we see that l ( )= B (l+ i ); l6 = r n + 1=2:
(3. 27)
E X A M P LE S
1. Let us rst consi der the operator H = H 0 + V i n the space L 2 (R + ). R ecal lthat H 0 i s m ul ti pl i cati on by x 2l and a perturbati on V i s de ned by form ul a (1. 2). A s an exam pl e,we choose v(t)= v p;q (t)= t q I p (t) (4. 1) w here I p i s the Besselfuncti on. Itfol l ow s from the asym ptoti cs ofI p (t)ati n ni ty and from i ts expansi on at t= 0 that
v(t)= (2= ) 1=2 t q 1=2 cos(t (2p+ 1) =4)+ (2t) 1 (4 1 p 2 )si n(t (2p+ 1) =4) + O (t q 5=2 )
as t! 1 and v(t)= 1 X k= 0
( 1) k 2 2k p k! (k + p + 1) P roposition 4.1 Let the function v(t) be given by form ul a (4: 1) where 1=2 p < q 1. T hen the negative spectrum ofthe operator H is in nite for any 6 = 0 ifl= p+ q+ 1=2+ 2k for som e k = 0;1;2;::: . In the opposite case itis in nite ifj j> l where the num ber l is de ned by (4: 4). If 2 [ l ;0),then the negative spectrum ofthe operator H is em pty for l< p+ q+ 1=2 and itconsists ofk + 1 eigenval ues ifl2 (p+ q+ 1=2+ 2k;p+ q+ 5=2+ 2k). If 2 (0; l ] ,then the negative spectrum ofthe operator H is em pty for l< p+ q+ 5=2 and itconsists ofk + 1 eigenval ues ifl2 (p + q+ 5=2 + 2k;p + q+ 9=2 + 2k).
W e note speci alcases p = 1=2 w hen q 2 ( 1;1]and p = 1=2 w hen q 2 (0;1] : v 1=2;q (t)= (2= ) 1=2 t q 1=2 si n t and v 1=2;q (t)= (2= ) 1=2 t q 1=2 cost: T hen,for functi on (4. 6), ( u n )(x)= j xj ( n g)(j xj )Y n (x):
T he operator n i s ofcourse uni tary on L 2 (R + ). It fol l ow s from (4. 6) that
(V (c) u n )(x)= ( 1) n=2 j xj ( n g)(j xj )Y n (x)
for even n and V (c) u n = 0 for odd n. Si m i l arl y,
(V (s) u n )(x)= ( 1) (n+ 1)=2 j xj ( n g)(j xj )Y n (x)
for odd n and V (s) u n = 0 for even n. Let us set n = ( 1) n=2 for even n, n = ( 1) (n+ 1)=2 for odd n, H (n) = H 0 + n n l ;0),then the negative spectrum ofthe operator H (n) is em pty for l< d=2 + n and itconsists ofk + 1 eigenval ues ifl2 (d=2+ n + 2k;d=2+ n + 2+ 2k). If n 2 (0;
(n) l ] , then the negative spectrum ofthe operator H (n) is em pty for l< d=2 + n + 2 and itconsists ofk + 1 eigenval ues ifl2 (d=2 + n + 2 + 2k;d=2 + n + 4 + 2k). C om bi ni ng Proposi ti on 4. 2 w i th decom posi ti on (4. 9) we can deduce resul ts on the operators H (c) and H (s) . Let us start w i th excepti onalval ues ofl.
T heorem 4.3 Letl= d=2 + 2k for som e k = 0;1;2;::: . T hen the negative spectrum ofthe operator H (c) is in nite for any 6 = 0. Letl= d=2 + 2k + 1 for som e k = 0;1;2;::: . T hen the negative spectrum ofthe operator H (s) is in nite for any 6 = 0.
3.To consi derotherval uesofl,we rst nd a rel ati on between functi ons (n) l ( )associ ated to di erent operators H (n) . R em ark that accordi ng to (3. 27),(4. 3),i n the case (4. 11), j (n) l ( )j= 2 l j ((n + d=2 l+ i )=2) 1 ((n + d=2 + l+ i )=2)j :
It fol l ow s from the i denti ty (z + 1)= z (z) (4. 13) that j (n+ 2) l ( )j= j n + d=2 l+ i j j n + d=2 + l+ i j 1 j Lem m a 4.5 Letb> 0,a b and 2 R . T hen inequal ity j (a + i ) 1 (b+ i )j j (a) 1 (b)j (4. 14)
hol ds in the foll owing three cases: 1 0 a > 0,2 0 a = n + " where n = 1;2;::: ," 2 (0;1) and " b,3 0 a 2 ( 1;0) and j aj b.
Proof. { Let us start w i th the rst case. C l earl y,(4. 14) i s equi val ent to the i nequal i ty
(a + i ) (a i ) (a) 2 (b+ i ) (b i ) (b) 2 ; 0 < a b:
T hus,i t su ces to check that for any 2 R the deri vati ve ofthe functi on (1 cos( l n(1 + x)))(1 + x) a x 1 dx 0; a > 0:
To consi der the case 2 0 ,we rem ark that,by (4. 13)
j (a + i ) 1 (b+ i )j= j ( n + "+ i ) 1 ( 1 + "+ i ) 1 ("+ i ) 1 (b+ i )j : (4. 15)
U si ng (4. 14) for the num bers " (i n pl ace ofa),b and obvi ous esti m ates j ( k + "+ i ) 1 j j ( k + ") 1 j ; k = 1; ;n;we nd that the ri ght-hand si de of(4. 15)i s bounded by
j ( n + ") 1 ( 1 + ") 1 (") 1 (b)j ;
w hi ch,agai n by (4. 13),equal s j (a) 1 (b)j .
To prove the part 3 0 ,we use agai n (4. 13),appl y i nequal i ty (4. 14) to the num bers a + 1 and b+ 1 and rem ark that j b+ i j j a + i j 1 bj aj 1 :
T hi s yi el ds j (a+ i ) 1 (b+ i )j j (a+ 1+ i ) 1 (b+ 1+ i )j j b+ i j j a+ i j 1 j (a+ 1) 1 (b+ 1)j bj aj 1 :
T he ri ght-hand si de here equal s j (a) 1 (b)j : 2 N ow we can si m pl i fy expressi ons for
Proof. { C onsi der rst
(1) l . A ccordi ng to (4. 14) i t su ces to check that num bers a = (d=2 l+ 1)=2 and b = (d=2 + l+ 1)=2 sati sfy one ofthe three condi ti ons ofLem m a 4. 5. If d 2,then b > 1 and hence condi ti on 2 0 hol ds for al ll> 0. Ifd = 1,we di sti ngui sh the cases l> 3=2 and l< 3=2. In the rst ofthem b > 1 so that condi ti on 2 0 i s ful l l ed and i n the second a > 0 so that condi ti on 1 0 i s ful l l ed.
Si m i l arl y,to consi der (0) l ,we need to check thatnum bersa = (d=2 l)=2,b= (d=2+ l)=2 al so sati sfy one of the three condi ti ons of Lem m a 4. 5. If d 4, then b > 1 and hence condi ti on 2 0 hol dsforal ll> 0.Ifd = 3,we di sti ngui sh the casesl> 3=2 and l< 3=2.In the rst ofthem b > 1 so that condi ti on 2 0 i s ful l l ed and i n the second a > 0 so that condi ti on 1 0 i s ful l l ed. Ifd = 2,we di sti ngui sh the cases l> 1 and l< 1. In the rst ofthem b > 1 and condi ti on 2 0 i s ful l l ed. In the second a > 0 and condi ti on 1 0 i s ful l l ed. Let, nal l y, d = 1. If l < 1=2, then a = 1=4 l=2 > 0 so that condi ti on 1 0 hol ds. If l > 3=2, then proceed from decom posi ti on (4. 9) and rel y on Proposi ti on 4. 2. R ecal lal so that num bers l werede ned by equal i ty (4. 10).C onsi der,forexam pl e,H (c) .Suppose rstthat 2 [ (c) l ;0). Ifl< d=2,then H (2m ) 0 foral lm so thatN ( ) l;c = 0.Ifl2 (d=2;d=2+ 2),then the operator H (0) has one negati ve ei genval ue and H (2m ) 0 for m 1. Si nce 0 = 1, i n thi s case N ( ) l;c = 1. Ifl2 (d=2 + 2;d=2 + 4),then the operator H (0) has two negati ve ei genval ues and H (2m ) 0 for m 1 so that N ( ) l;c = 2. Ifl2 (d=2 + 4;d=2 + 6),then the operator H (0) has three negati ve ei genval ues, the operators H (2) and H (4) have one negati ve ei genval ue each and H (2m ) 0 for m 3. It fol l ow s that i n thi s case N ( ) l;c = 3 + 2 + 4 . R epeati ng thi s procedure,we arri ve at the generalform ul a for the case l2 (d=2 + 2k;d=2 + 2k + 2): l;c = 2( 0 + 2 ). If l 2 (d=2 + 6;d=2 + 8), then the operators H (0) and H (2) have three negati ve ei genval ues each, both operators H (4) and H (6) have exactl y one negati ve ei genval ue and H (2m ) 0 for m 4. In thi s case N (+ ) l;c = 3( 0 + 2 )+ ( 4 + 6 ). T he generalform ul a for the case l2 (d=2 + 2k;d=2 + 2k + 2) reads as (k 2p + 1)( 4p 1 + 4p+ 1 ) for l2 (d=2 + 1 + 2k;d=2 + 3 + 2k); k 1. T he num ber N for l2 (d=2 + 1 + 2k;d=2 + 3 + 2k).
ei genval ues of H i s ni te and does not depend on . W e cal cul ate N ( ) l .
L 2 (R + ). O perators H have a form H = H 0 + V w here H 0 i s agai n m ul ti pl i cati on by x 2l ; l> 0;and (V f)
A N -SC H W IN G E R P R IN C IP LE 1. LetH 0 bean arbi trary sel f-adjoi ntposi ti veoperatorw i th dom ai n D (H 0 )i n a H i l bertspace H .Fol l ow i ng [ 2] ,wede nethe\ful l " H am i l toni an H = H 0 + V by m eansofthecorrespondi ng quadrati c form . W e suppose that the realquadrati c form v[ f;f]of the perturbati on V i s de ned forf 2 D
de ned by the rel ati on (T u;u)= v[ (H 0 + I) 1=2 u;(H 0 + I) 1=2 u] ; 8u 2 H ; (2. 1) i s com pact i n the space H . U nder thi s assum pti on the form
1=2 0
1=2) and h[ f;g]= (H f;g)forany f 2 D (H )and any g 2 D (H 1=2 0 ). T he resol vent i denti ty for the operators H 0 ;H can be w ri tten as
LetB be any sel f-adjointoperatorand l etM D (B )be som e setdense in D (B ) in the B -m etricsde ned by j j uj j 2 B = j j B uj j 2 + j j uj j 2 . T hen the totalm ul tipl icity di m E B ( 1 ;0) ofthe negative spectrum ofthe operator B equal s the m axim aldim ension ofsubspacesL M such that(B u;u)< 0; 8u 2 L.
e bounded sel f-adjointoperator A in the space H and allu 2 R . Let M = di m E A ( 1 ; 1) (2. 9) be the totalm ul tipl icity of the spectrum of the operator A in the interval( 1 ; 1). T hen the num bers (2: 3) and (2: 9) are equal ,thatis N = M . Proof. { Let n be the m axi m aldi m ensi on ofsubspaces L 0 R such that (A u 0 ;u 0 )< j j u 0 j j 2 ; 8u 0 2 L 0 : (2. 10) Itfol l ow sfrom (2. 8)and Lem m a 2. 1 thatN = n. Si nce R i sdense i n H ,the equal i ty M = n fol l ow s from Lem m a 2. 2. 2 T hi s resul t i s contai ned i n the paper [ 2]because,under assum pti ons ofProposi ti on 2. 3, the form v[ f;f]i sbounded i n the space H w i th the norm j j fj j H = j j H 1=2 0 fj jand consequentl y i s cl osabl e i n thi s space. A m ore di cul t case w hen the form v[ f;f]i s not cl osabl e i n H was al so i nvesti gated i n [ 2] . O ur ai m here i s to reconsi der thi s si tuati on and to form ul ate resul ts i n term s ofthe form (2. 5). T hi s i s conveni ent i n the case w hen the form v[ f;f]takes val ues ofboth si gns. A ctual l y,we suppose that equal i ty (2. 8) hol ds onl y up to som e ni te num ber ofsquares of(unbounded) functi onal s ' 1 ;:::;' n ,that i s a[ u;u]= (A u;u) m X j= 1
T heorem 2. 4
4LetA be a sel f-adjointoperator with dom ain D (A ). Suppose thata l inear set R 0 D (A )\ R is dense in D (A ) in the A -m etrics. A ssum e that for allu 2 R 0 the form (2: 5) satis es the estim ate a[ u;u] (A u;
w i th the norm(2. 4). N ow we are i n posi ti on to form ul ate the m ai n resul t ofthi s secti on.T heorem 2.12 Let' 1 ;:::;' n be strongl y l inear independentfunctional s de ned on R and l et A be a bounded sel f-adjoint operator. A ssum e that equal ity (2: 11) hol ds for allu 2 R . T hen num bers (2: 3) and (2: 9) are rel ated by the equal ity N = M + m : (2. 17) Proof. { Fi rst,we check that N M + m . By Lem m a 2. 1,there exi sts a subspace L R such that(2. 6)i sful l l ed and di m L = N (di m L i san arbi trary l arge num ber i fN = 1 ).It fol l ow s from (2. 11) that (2. 10)i s sati s ed for L 0 = fu 2 L :' j (u)= 0; j = 1;:::;m g:
2 5 .
5It i s easy to extend Proposi ti on 2. 3 and T heorem 2. 12 to the case of unbounded operators A . W e form ul ate such resul ts but do not use them i n the sequel .P roposition 2.3 bis LetA be a sel f-adjointoperator in the space H and l etR 0 R \ D (A ) be a l inear setdense in R in the R -m etrics and dense in D (A ) in the A -m etrics. Ifequal ity (2: 8) hol ds for allu 2 R 0 ,then N = M . T heorem 2.12 bis LetA be a sel f-adjointoperator in the space H and l etR 0 R \ D (A ) be a l inear set dense in R in the R -m etrics and dense in D (A ) in the A -m etrics. A ssum e thatfunctional s ' 1 ;:::;' n are strongl y l inear independentin the H il bertspace D (A ) (thatis j j uj jin the right-hand of(2: 13) is repl aced by j j uj j A ). T hen equal ity (2: 17) is ful ll ed. Proofs ofProposi ti on 2. 3 bi s and T heorem 2. 12 bi s are practi cal l y the sam e as those of Proposi ti on 2. 3 and T heorem 2. 12. 3. T H E F R IE D R IC H S M O D E L 1. O urstudy ofthe di screte spectrum i n the Fri edri chs m odelrel i eson the M el l i n transform M de ned by the equal i ty (M u)( operator M :L 2 (R + )! L 2 (R ) i s uni tary.Lem m a 3.1 Suppose thata function b(t) is l ocall y bounded on (0;1 ) and the integral
1=2 i dt= : ( ) converges att= 0 and t= 1 uniform l y in 2 R . T hen for any function u 2 C 1 0 (R + ) xy)u(y)u(x)dxdy = Z 1 1 ( )(M u)( )(M u)( )d : (3. 2)
(
our assum pti ons functi ons n ( ) are uni form l y bounded and converge to ( ) as n ! 1 . Si nceû 1 = M u bel ongs to the Schwartz space S(R ), we can pass to the l i m i t n ! 1 i n i ntegral(3. 3). T he expressi on obtai ned equal s the ri ght-hand si de of(3. 2). 2 U nderassum pti onsofLem m a 3. 1,the functi on ( )i sofcourse conti nuousand bounded. Letusde ne a uni tary m appi ng U :L 2 (R )! L 2 (R + ;C 2 )and a 2 2-m atri x B( )w ( )w ( )d = (BU w ;U w ) L 2 (R + ;C 2 ) ; (3. 5) w here B i sthe operatorofm ul ti pl i cati on by B( ).Si nce ei genval uesofthe m atri x B( )equal j ( )j ,we have Lem m a 3.2 Let ( ) be a continuous function of 2 (casep = 1 isnotexcl uded). T hen the spectrum ofB consistsofthe union [ p; q] [[ q;p] . 2. Let now H = L 2 (R + ),l et H 0 be m ul ti pl i cati on by the functi on x 2l ; l> 0;and l et an i ntegraloperator V be de ned by form ul a(1. 2). W e suppose that the functi on v(t) = v(t) i s l ocal l y bounded on (0;1 ). O ur assum pti on on i ts behavi our as t ! 1 w i l lbe m ade i n term s ofthe M el l i n transform .
t)t 1=2 l i dt converges uniform l y (butperhaps notabsol utel y) in 2 R . T he preci se de ni ti on ofthe operator H = H 0 + V w here a coupl i ng constant 2 R can be gi ven on the basi s ofthe fol l ow i ng Lem m a 3.4 Let A ssum ption 3: 3 hol d and l et v(t) = O (t r ) with r > 1=2 as t! 0. T hen the operator T = (H 0 + I) 1=2 V (H 0 + I) 1=2 is com pact. Proof. { Let R be the characteri sti c functi on of the i nterval (0;R ) and~ R = 1 R . D enote by V R andṼ R the i ntegral operators w i th kernel s v(xy) R (xy) and v(xy)~ R (xy), respecti vel y. Fi rst,we check thatthe operator(H 0 + I) 1=2 V R (H 0 + I) 1=2 i scom pactforany R > 0. Indeed,l et us consi der the operator-functi on F (z)= (H 0 + I) z V R (H 0 + I) z ; R ez 0: T he functi on v(t) R (t) sati s es the assum pti ons of Lem m a 3. 1 so that V R i s a bounded operator. C onsequentl y,the operators F (z) are bounded for al lR ez 0,and the functi on F (z) i s anal yti c for R ez > 0 and i s conti nuous i n z up to the l i ne R ez = 0. O n the other hand,F (z) i s the i ntegraloperator w i th kernel (x 2l + 1) z v(xy) R (xy)(y 2l + 1) z : So F (z) bel ongs to the H i l bert-Schm i dt cl ass i f4lR ez > 1. By com pl ex i nterpol ati on,thi s i m pl i es that F (z) i s com pact for al lR ez > 0.To ni sh the proof,i tsu cesto show thatthe norm ofthe operatorB R = zero as R ! 1 . C l earl y, B R i s al so the i ntegraloperator w i th kernel b R (xy) = (xy) l v(xy)~ R (xy). By vi rtue ofA ssum pti on 3. 3,we can appl y Lem m as 3. 1 and 3. 2 to i t. T hi s i m pl i es that j j B R j j= m ax
1=2 l i dt ! 0 as R ! 1 : 2 T hus, equal i ty (2. 1) hol ds w i th a com pact operator T and hence the operator H = H 0 + V can be de ned as a sel f-adjoi nt operator i n term s ofthe correspondi ng quadrati c form . M oreover,H i s sem i -bounded from bel ow and ess (H )= [ 0;1 ) for any 2 R .
)b l (t)t 1=2 i dt; fol l ow i ng asserti ons are qui te el em entary.
berofk = 1;:::;N such thatr k < l 1=2 and v k > 0.In the case (3. 7)we setN ( ) l = 0.T he m ai n resul tofthi ssecti on i sform ul ated i n the fol l ow i ng T heorem 3.8 LetA ssum ptions 3: 3 and 3: 5 be satis ed.
1 0
0Suppose thatl6 = r n + 1=2 for n = 1;:::;N . Put l = p 1 l . T hen the negative spectrum of the operator H is in nite if j j > l . In the case 2 (0; l ] , it consists of N ( ) l eigenval ues.
2 0
2Ifl= r n + 1=2 for som e n = 1;:::;N ,then the negative spectrum ofthe operator H is in nite for any 6 = 0. W e start the proofw i th cal cul ati ng the quadrati c form (2. 5),w hi ch equal s now a l [ xy)y l u(y)dy x l u(x)dx:(3. 11)
,i n the case (3. 7)the sum i n (3. 12)i sabsent. By de ni ti on (2. 4),the set R = R (l) consi sts now offuncti ons u such thatj j uj j 2 R = Z 1 0 (1 + x 2l )j u(x)j 2 dx < 1 :It fol l ow s from Lem m a 3. 4 that the form (3. 11) i s bounded on R (l) .
Lem m a 3.11 Let A ssum ption 3: 5 hol d and u 2 C 1 0 (R + ). Suppose that l 6 = r n + 1=2 for n = 1;:::;N . T hen representation (3: 14) is val id for = 0.To check thepart1 0 ofT heorem 3. 8,wecom pareLem m as3. 10,3. 11 and takei nto account equal i ty (3. 5). T hi s yi el ds the representati onb l [ u;u]= (B l U M u;U M u) L 2 (R + ;C 2 ) ; (3. 15) w here B l i s m ul ti pl i cati on by m atri x (3. 4) w i th el em ents (3. 10). It fol l ow s from (3. 12) and (3. 15) that for al lu 2 C 1 0 (R + ) a l [ u;u]= (A l u;u)+ X r k < l 1=2 v k j r k l (u)j 2 ; (3. 16) w here A l = (U M ) B l U M (3. 17) i s a bounded operator i n H . Si nce,by Lem m a 3. 9,the functi onal s r k l (u) are bounded on R (l) ,equal i ty (3. 16)extendsby conti nui ty to al lu 2 R (l) .T hi sgi vesusrepresentati on (2. 11) w here m = N < 0. A ccordi ng to Lem m a 3. 9 the correspondi ng functi onal s ' 1 ;:::;' n are strongl y l i near i ndependent. By vi rtue ofLem m a 3. 2,the totalm ul ti pl i ci ty ofthe spectrum ofoperator(3. 17)i n the i nterval ( 1 ; 1) i s zero i fj j p l 1 and i t i s i n ni te i fj j p l > 1. T hus the asserti on ofthe part 1 0 ofT heorem 3. 8 fol l ow s i m m edi atel y from T heorem 2. 12.
x)x 1=2 dx = 0:(3. 18) Let us extend representati on (3. 14) for the form b(0) l [ u;u]to the case l= r n + 1=2. Lem m a 3.12 LetA ssum ption 3: 5 hol d and u 2 R 0 . T hen representation (3: 14) is val id for = 0 and l= r n + 1=2 with the function
t r k t 1=2 l i dt+ v n (r n + 1=2 l i ) 1 :T hese functi ons converge as " ! 0 to functi on(3. 19) uni form l y i n outsi de ofany nei ghbourhood ofthe poi nt = s su ces to justi fy passi ng to the l i m i t i n the i ntegralover becauseM u 2 S(R ) and (M u)(0)= 0 by vi rtue ofcondi ti on (3. 18). 2 Let now l= r n + 1=2, l ( ) = gi ven by (3. 19) and (3. 9),respecti vel y,and l et B l be m ul ti pl i cati on by m atri x (3. 4) w i th the el em ents l ( ). Lem m as 3. 10 and 3. 12 i m pl y that,i n the case l= r n + 1=2,equal i ty (3. 15)hol ds foru 2 R 0 . T hi s gi ves us representati on (3. 16) w here u 2 R 0 and the operator A l i s de ned by equal i ty (3. 17).Si nce l i m ! 0 j jj l ( )j= j v n j6 = 0; l= r n + 1=2; (3. 20) i t fol l ow s from Lem m a 3. 2 that the operator B l i s unbounded from bel ow and consequentl y di m E A l ( 1 ; 1)= 1 for l= r n + 1=2 and any 6 = 0. N ote al so the fol l ow i ng el em entary Lem m a 3.13 T he setR 0 is dense in D (A l ) where l= r n + 1=2. Proof. { R ecal lthat the functi on l ( ) i s bounded except the poi nt = 0 w here i t sati s es (3. 20). T herefore i t fol l ow s from (3. 4) and (3. 17) that the i ncl usi on u 2 D (A l ) w ( )j 2 d < 1 for w = M u: (3. 21) C l earl y, the M el l i n transform (3. 1) can be factori zed as M = G w here i s the Fouri er transform i n L 2 (R )and (G u)(t)= e t=2 u(e t ).In term sofg( )= 1 w ( ),(3. 21)i sequi val ent to the condi ti on Z 1 1(jg(t)j 2 + jg 0 (t)j 2 )dt< 1 ;g = g:(3. 22)
j
( )i sthe -functi on.T hereforethecondi ti on ofProposi ti on 3. 14 on thefuncti on V(T ) i ssati s ed foral lq 1 and (4. 2)gi ves usrel ati on (3. 6)w i th num bers r k = p+ q+ 2k,w here k = 0;1;2;::: . T he correspondi ng coe ci ents v k are posi ti ve foreven k and are negati ve for odd k. So A ssum pti on 3. 5 i s ful l l ed for p + q > 1=2 and al ll> 0. U si ng form ul a (19), secti on 7. 7 of[ 1] ,vol . 2,we nd that functi on (3. 23) equal s now B (z)= 2 q z 1=2 ((p + q z + 1=2)=2) 1 ((p q+ z + 3=2)((p q+ l+ i + 3=2)=2) 1 ((p + q l i + 1=2)=2)j : (4. 4) R em ark that,by the Sti rl i ng form ul a,B (l+ i ) ! 0 as j j! 1 so that the spectrum of the correspondi ng operator B l (see Lem m a 3. 2) consi sts ofthe i nterval[ 1 l ; 1 l ] . In our parti cul ar case,T heorem 3. 8 gi ves the fol l ow i ng asserti on.
H
Bel ow we consi derthe operatorsH (c) = H 0 + V (c) and H (s) = H 0 + V (s) i n the space L 2 (R d ),w here H 0 i s m ul ti pl i cati on by j xj 2l and V (c) ,V (s) are de ned by form ul as (1. 1). Let h n be the subspace ofspheri calfuncti ons Y n (!); ! 2 S d 1 ,oforder n,l et K be the L 2 -space w i th wei ght r d 1 of functi ons de ned on R + and l et H n = K h n . To put i t di erentl y, H n R d i s the subspace offuncti ons u n ofthe form u n (x)= j xj g(j xj )Y n (x);x = xj xj 1 ; g 2 L 2 (R + ) and Y n 2 h n . n ; H n = K h n ;and every subspace H n i s i nvari ant w i th respect to the Fouri er operator w hi ch reduces to the Fouri er-Besseltransform on H n . M ore preci sel y
Hj
et T :L 2 (R + )! K be a uni tary operator de ned by (T g)(r)= r g(r).T hen (I n i s the i denti ty operatori n the space h n . R ecal lthat di m h n = (2n + d 2)(n + d 3)! ((d 2)! n! ) 1 = : n : (4. 10)C om pari ng (4. 1) and (4. 7) and setti ngp = n + (d 2)=2; q = 1=2; (4. 11)we see that Proposi ti on 4. 1 can be di rectl y appl i ed to every operator (4. 8).P roposition 4.2 T he negative spectrum of the operator H (n) is in nite for any 6 = 0 if l= n + d=2+ 2k for som e k = 0;1;2;::: . In the opposite case itis in nite ifj j> ((n + d=2 + l+ i )=2) 1 ((n + d=2 l i )=2)j : (4. 12)If n 2 [(n)
.
In parti cul ar,we obtai n the fol l ow i ng resul t. sattai ned atthe poi nt = 0. To that end we need the fol l ow i ng asserti on from the theory ofthe -functi on.
'
(a; )= (a + i ) (a i ) (a) 2 w i th respectto a i snonnegati ve.C al cul ati ng thi sderi vati veand denoti ng (z)= (z) 1 0 (z), we nd that '(a; ) 1 @'(a; )=@a = (a + i )+ (a i ) 2 (a): It fol l ow s from the D i ri chl et representati on (form ul a
2 l j ((d=2+ l)=2) 1 ((d=2 l)=2)j ;(s) l = 2 l j ((d=2+ l+ 1)=2) 1 ((d=2 l+ 1)=2)j :
l
]can bestudi ed qui tesi m i l arl y.N ow H (2m ) 0 foral lm i fl< d=2+ 2 and hence N (+ ) l;c = 0 for such l. Ifl2 (d=2 + 2;d=2 + 4),then the operators H (0) and H (2) have one negati ve ei genval ue each and H (2m ) 0 for m 2. It fol l ow s that i n thi s case N (+ )l;c = 0 + 2 . If l 2 (d=2 + 4;d=2 + 6), then the operators H (0) and H(2) have two negati ve ei genval ues each and agai n H (2m ) 0 for m 2 so that N (+ )
ined by form ul a (4: 17) for l2 (d=2 + 2k;d=2 + 2k + 2). T he num ber N c is determ ined by form ul a (4: 18) for l2 (d=2 + 2k;d=2 + 2k + 2). T he totalnum bers N ( ) l;s ofnegati ve ei genval ues ofthe operator H (s) can be found qui te si m i l arl y.T heorem 4.10 T he num ber N
2p)( 4p+ 1 + 4p+ 3 ):
H . Batem an and A . Erd el yi , H i gher transcendental functi ons, vol . 1, 2, N ew York; M cG raw -H i l l ,1953.
M athem atical Subject C lassi cation: Pri m ary 35J10,47A 75;Secondary 81U 20
4.Let us return to the operators H (c) = H 0 + V (c) and H (s) = H 0 + V (s) i n the space L 2 (R d ). W e consi der rst the one-di m ensi onal case d = 1 w hen (4. 9) reduces to the decom posi ti on of the space L 2 (R ) i nto the subspaces of the even and odd functi ons. T hese subspaces are i nvari ant w i th respect to H (c) = H 0 + V (c) and H (s) = H 0 + V (s) , V (c) f = 0 for odd f and V (s) f = 0 for even f. T herefore, the negati ve spectrum of the operator H (c) (respecti vel y,H (s) ) i n the space L 2 (R ) coi nci des w i th that ofthe operator H forv(t)= (2= ) 1=2 cost(respecti vel y,v(t)= (2= ) 1=2 si n t)i n thespace L 2 (R + ).T hus,we can di rectl y appl y Proposi ti on 4. 1,w here accordi ng to (4. 5) p = 1=2;q = 1=2 for the operator H (+ ) and p = 1=2;q = 1=2 for the operator H ( ) . M oreover,i n the case d = 1 expressi ons (4. 16) can be a l i ttl e bi t si m pl i ed. T hi s gi ves us the fol l ow i ng resul t.Suppose that l 6 = 1=2 + 2k for any k = 0;1;2;::: . T hen the negative spectrum of the operator H (c) is em pty if l 2 (0;1=2) and j j l . Suppose thatl6 = d=2+ 2k + 1 for any k = 0;1;2;::: . T hen the negative spectrum ofthe operator H (s) is nite ifand onl y ifj j
Fi nal l y,i n the case l2 (1=2;3=2)we have that a 2 ( 1=2;0)and j aj= l=2 1=4. T herefore j aj< b and we can refer. b= 1=4+ l=2 > 1 so thatcondi ti on 2 0 hol ds. b= 1=4+ l=2 > 1 so thatcondi ti on 2 0 hol ds. Fi nal l y,i n the case l2 (1=2;3=2)we have that a 2 ( 1=2;0)and j aj= l=2 1=4. T herefore j aj< b and we can refer to condi ti on 3 0 : 2
Bi rm an,O n the spectrum ofsi ngul ar boundary val ue probl em s (R ussi an),M at. M Sh, Sb. 552M .Sh.Bi rm an,O n the spectrum ofsi ngul ar boundary val ue probl em s (R ussi an),M at. Sb.55,no.2 (1961),125-174.
Bi rm an and M .Z.Sol om yak,Spectraltheory ofsel fadjoi nt operators i n H i l bert space. M Sh, R ei del ,D ol drechtM .Sh.Bi rm an and M .Z.Sol om yak,Spectraltheory ofsel fadjoi nt operators i n H i l bert space,R ei del ,D ol drecht,1987.
O n the trace-cl ass m ethod i n potenti al scatteri ng theory. M Sh, D R Bi, Yafaev, J.Sovi et M ath. 562M . Sh. Bi rm an and D . R . Yafaev, O n the trace-cl ass m ethod i n potenti al scatteri ng theory,J.Sovi et M ath.56,no.2 (1991),2285-2299.
O n the Fri edri chsm odeli n the theory ofperturbati onsofthe conti nuous spectrum (R ussi an). L D Faddeev, 73Trudy M IA NL.D .Faddeev,O n the Fri edri chsm odeli n the theory ofperturbati onsofthe conti nuous spectrum (R ussi an),Trudy M IA N 73 (1964),292-313;
Engl i sh transl .i n A m er. M ath. Soc.Transl .Ser. 262Engl i sh transl .i n A m er.M ath. Soc.Transl .Ser. 2 62 (1967).
O n the theory ofthe di screte spectrum ofthe three. D R Yafaev, 23D .R .Yafaev,O n the theory ofthe di screte spectrum ofthe three-parti cl e Schr odi nger operator,M ath.U SSR Sborni k 23,no.4 (1974),535-559.
| []
|
[
"On Fair Division under Heterogeneous Matroid Constraints",
"On Fair Division under Heterogeneous Matroid Constraints"
]
| [
"Amitay Dror [email protected] \nJournal of Artificial Intelligence Research -(-)\nTel-Aviv University\nTel-AvivIsrael\n",
"Michal Feldman [email protected] \nTel-Aviv University\nTel-AvivIsrael\n",
"Erel Segal-Halevi \nAriel University\n40700ArielIsrael\n"
]
| [
"Journal of Artificial Intelligence Research -(-)\nTel-Aviv University\nTel-AvivIsrael",
"Tel-Aviv University\nTel-AvivIsrael",
"Ariel University\n40700ArielIsrael"
]
| []
| We study fair allocation of indivisible goods among additive agents with feasibility constraints. In these settings, every agent is restricted to get a bundle among a specified set of feasible bundles. Such scenarios have been of great interest to the AI community due to their applicability to real-world problems. Following some impossibility results, we restrict attention to matroid feasibility constraints that capture natural scenarios, such as the allocation of shifts to medical doctors, and the allocation of conference papers to referees.We focus on the common fairness notion of envy-freeness up to one good (EF1). Previous algorithms for finding EF1 allocations are either restricted to agents with identical feasibility constraints, or allow free disposal of items. An open problem is the existence of EF1 complete allocations among heterogeneous agents, where the heterogeneity is both in the agents' feasibility constraints and in their valuations. In this work, we make progress on this problem by providing positive and negative results for different matroid and valuation types. Among other results, we devise polynomial-time algorithms for finding EF1 allocations in the following settings: (i) n agents with heterogeneous partition matroids and heterogeneous binary valuations, (ii) 2 agents with heterogeneous partition matroids and heterogeneous additive valuations, and (iii) at most 3 agents with heterogeneous binary valuations and identical base-orderable matroid constraints.1. A preliminary version appeared in the proceedings of AAAI 2021(Dror, Feldman, & Segal-Halevi, 2021), without most of the proofs. This version contains all omitted proofs, an uptodate literature survey, a more general non-existence result in Subsection 3.3, a simpler proof of Theorem 5, and simpler algorithms and proofs in Section 8. | 10.1609/aaai.v35i6.16670 | [
"https://arxiv.org/pdf/2010.07280v4.pdf"
]
| 222,341,808 | 2010.07280 | bdcc76b618b483541dfb11ec2480beac55bb0461 |
On Fair Division under Heterogeneous Matroid Constraints
24 Apr 2022
Amitay Dror [email protected]
Journal of Artificial Intelligence Research -(-)
Tel-Aviv University
Tel-AvivIsrael
Michal Feldman [email protected]
Tel-Aviv University
Tel-AvivIsrael
Erel Segal-Halevi
Ariel University
40700ArielIsrael
On Fair Division under Heterogeneous Matroid Constraints
24 Apr 2022Submitted -; published -arXiv:2010.07280v4 [cs.GT]
We study fair allocation of indivisible goods among additive agents with feasibility constraints. In these settings, every agent is restricted to get a bundle among a specified set of feasible bundles. Such scenarios have been of great interest to the AI community due to their applicability to real-world problems. Following some impossibility results, we restrict attention to matroid feasibility constraints that capture natural scenarios, such as the allocation of shifts to medical doctors, and the allocation of conference papers to referees.We focus on the common fairness notion of envy-freeness up to one good (EF1). Previous algorithms for finding EF1 allocations are either restricted to agents with identical feasibility constraints, or allow free disposal of items. An open problem is the existence of EF1 complete allocations among heterogeneous agents, where the heterogeneity is both in the agents' feasibility constraints and in their valuations. In this work, we make progress on this problem by providing positive and negative results for different matroid and valuation types. Among other results, we devise polynomial-time algorithms for finding EF1 allocations in the following settings: (i) n agents with heterogeneous partition matroids and heterogeneous binary valuations, (ii) 2 agents with heterogeneous partition matroids and heterogeneous additive valuations, and (iii) at most 3 agents with heterogeneous binary valuations and identical base-orderable matroid constraints.1. A preliminary version appeared in the proceedings of AAAI 2021(Dror, Feldman, & Segal-Halevi, 2021), without most of the proofs. This version contains all omitted proofs, an uptodate literature survey, a more general non-existence result in Subsection 3.3, a simpler proof of Theorem 5, and simpler algorithms and proofs in Section 8.
Introduction
Many real-life problems involve the fair allocation of indivisible items among agents with different preferences, and with constraints on the bundle that each agent may receive. Examples include the allocation of course seats among students (Budish, Cachon, Kessler, & Othman, 2017) and the allocation of conference papers among referees (Garg, Kavitha, Kumar, Mehlhorn, & Mestre, 2010).
In general, different agents may have different constraints. For example, consider the allocation of employees among departments of a company: one department has room for four project managers and two backend engineers, while another department may have room for three backend engineers and five data scientists. Another example can be found in the way shifts are assigned among medical doctors, where every doctor has her own schedule limitations.
Our goal is to devise algorithms that find fair allocations of indivisible items among agents with different preferences and different feasibility constraints. Let us first explain what we mean by "fair" and what we mean by "constraints".
A classic notion of fairness is envy freeness (EF), which means that every agent (weakly) prefers his or her bundle to that of any other agent. Since an EF allocation may not exist when items are indivisible, recent studies focus on its relaxation known as EF1 -envy free up to one item (Budish, 2011) -which means that every agent i (weakly) prefers her bundle to any other agent j's bundle, up to the removal of the best good (in i's eyes) from agent j's bundle (Section 2.2). Without constraints, an EF1 allocation always exists and can be computed efficiently (Lipton, Markakis, Mossel, & Saberi, 2004).
The constraints of an agent are represented by a set of bundles (subsets of items), that are considered feasible for the agent. An allocation is feasible if it allocates to each agent a feasible bundle. We focus on the case when the feasible bundles are the independent sets of a matroid. This means that (i) the set of feasible bundles is downward-closed -a subset of a feasible bundle is feasible; (ii) if a feasible bundle S has fewer items than another feasible bundle T , then it is possible to extend S to a larger feasible bundle by adding some item from T . This latter property of "extension by one" makes the notion of EF1 particularly appropriate for problems of allocation with matroid constraints. A special case of a matroid is a partition matroid. With partition matroid constraints, the items are partitioned into a set of categories, each category has a capacity, and the feasible bundles are the bundles in which the number of items from each category is at most the category capacity.
There are two approaches for handling feasibility constraints in fair allocation. A first approach is to directly construct allocations that satisfy the constraints, i.e., to guarantee that each agent receives a feasible bundle. This approach was taken recently by Biswas and Barman (2018, 2019), who study settings with additive valuations, where every agent values each bundle at the sum of the values of its items. They present efficient algorithms for computing EF1 allocations when agents have: (i) identical matroid constraints and identical valuations; or (ii) identical partition matroid constraints, even under heterogeneous valuations (see Section 2.3). However, their algorithms do not handle different partition constraints, or identical matroid constraints with different valuations.
A second approach is to capture the constraints within the valuation function. That is, the value of an agent for a bundle equals the value of the best feasible subset of that bundle. This approach seamlessly addresses heterogeneity in both constraints and valuations. The valuation functions constructed this way are no longer additive, but are submodular (Oxley, 2006). Recently, Babaioff, Ezra, and Feige (2021) and Benabbou, Chakraborty, Igarashi, and Zick (2020) have independently proved the existence of EF1 allocations in the special case in which agents have submodular valuations with binary marginals (where adding an item to a bundle adds either 0 or 1 to its value). Such an allocation can be converted to a fair and feasible allocation by giving each agent the best feasible subset of his/her allocated bundle, and disposing of the other items.
However, in some settings, such disposal of items may be impossible. For example, when allocating shifts to medical doctors, if an allocation rule returns an infeasible allocation and shifts are disposed to make it feasible, the emergency-room might remain understaffed. A similar problem may occur when allocating papers to referees, where disposals may leave some papers without reviews. The allocation rules developed in the above papers may not yield EF1 allocations when they are constrained to return feasible allocations (Section 3.1).
Thus, an open problem remains:
Open problem: Given agents with different additive valuations and different matroid constraints, which settings admit a complete and feasible EF1 allocation?
Contribution and Techniques
Feasible envy. Before presenting our results, we shall discuss the EF1 notion in settings with heterogeneous constraints. Consider a setting with two agents, Alice and Bob, and 8 identical items of a single category, with capacities 3 and 5 for Alice and Bob, respectively. Every complete feasible allocation gives 3 items to Alice and 5 to Bob. Ignoring feasibility constraints, such an allocation is not EF1, since even after removing a single item from Bob's bundle, Alice values it at 4, which is greater than her value for her own bundle. However, a bundle of 4 items is infeasible for Alice, so her envy is not justified.
A more reasonable definition of envy in this setting is feasible envy, where each agent compares her bundle to the best feasible subset of any other agent's bundle. In the example above, the best feasible subset of Bob's bundle for Alice is worth 3. Thus, the allocation is feasibly-envy-free (F-EF).
If Alice values one of Bob's items at 2, then the above allocation is not F-EF, since the best feasible subset of Bob's bundle for Alice is worth 4, but it is F-EF1, as it becomes F-EF after removing this item from Bob's bundle (see Section 2.2 for a formal definition). Throughout the paper, we use the notion of F-EF1 under heterogeneous constraints. Note that F-EF1 is equivalent to EF1 when agents have identical constraints.
Impossibilities. We present several impossibility results that direct us to the interesting domain of study. First, if the partition of items into categories is different for different agents, an F-EF1 allocation may not exist, even for two agents with identical valuations (Section 3.2). Second, going beyond matroid constraints to natural generalizations such as matroid intersection, bipartite graph matching, conflict graph or budget constraints is futile: even with two agents with identical binary valuations and identical non-matroid constraints, a complete F-EF1 allocation may not exist (Section 3.3). Third, going beyond EF1 to the stronger notion of envy-free up to any good (EFX) is also hopeless: even with two agents with identical valuations and identical uniform matroid constraints, an EFX allocation may not exist (Section 3.4).
Based on these results, we focus on finding F-EF1 allocations when the agents' constraints are represented by either: (1) partition matroids where all agents share the same partition of items into categories but may have different capacities; or (2) base-orderable (BO) matroids -a wide class of matroids containing partition matroids -where all agents have identical matroid constraints but possibly different valuations.
Algorithms (see Table 1). For partition matroids, the reason that the algorithms of Lipton et al. (2004) and Biswas and Barman (2018) fail for agents with different capacities is that they rely on cycle removal in the envy graph. Informally (see Section 2.3 for details), these algorithms maintain a directed envy graph in which each agent points to every agent he or she envies. The algorithm prioritizes the agents who are not envied, since giving an item to such agents keeps the allocation EF1. If there are no unenvied agents, the envy graph must contain a cycle, which is then removed by exchanging bundles along the cycle. However, when different agents in the cycle have different constraints, this exchange may not be feasible. Thus, our main challenge is to develop techniques that guarantee that no envy-cycles are created in the first place. We manage to do so in four settings of interest:
1. There are at most two categories (Section 4).
2. All agents have identical valuations (Section 5).
3. All agents have binary valuations (Section 6).
4. There are two agents (Section 7).

Each setting is addressed by a different algorithm, using a different cycle-prevention technique.
Beyond partition matroids, we consider the much wider class of matroids termed base-orderable (BO) matroids (see Definition 16). This class contains partition matroids, laminar matroids (an extension of partition matroids where the items in each category can be partitioned into sub-categories), transversal matroids, and other interesting matroid structures. In fact, Bonin and Savitsky (2016) conjecture that "almost all matroids are base-orderable". For this class we present algorithms for agents with identical constraints and different additive valuations in the following cases:
5. There are two agents (Section 8).
6. There are three agents with binary valuations (Section 8).
All our algorithms run in polynomial-time.
Related Work
A recent survey of constraints in fair division is given by Suksompong (2021). Below we focus on constraints in allocation of indivisible items.
Capacity constraints. In many settings, there are lower bounds as well as upper bounds on the total number of items allocated to an agent. This is particularly relevant to the problem of assigning conference papers to referees (Garg et al., 2010;Long, Wong, Peng, & Ye, 2013;Lian, Mattei, Noble, & Walsh, 2018). The constraints may be different for each agent, but there is only one category of items. The same is true in the setting studied by Ferraioli, Gourvès, and Monnot (2014), where each agent must receive exactly k items. A balanced allocation is an allocation in which all agents receive the same number of items up to at most a single item. The round-robin algorithm finds a balanced EF1 allocation for any number of agents with additive valuations. An algorithm by Kyropoulou, Suksompong, and Voudouris (2020) finds a balanced EF1 allocation for two agents with general monotone valuations. It is open whether this result extends to three or more agents. Jojic, Panina, and Zivaljevic (2021) prove the existence of a balanced allocation in which all agents assign approximately the same value to all bundles (an "almost consensus" allocation). Note that constraints imposing a lower bound on the number of items allocated to an agent, such as balancedness constraints, are not matroid constraints, since they are not downward-closed. Capacity constraints are common also in matching markets such as doctors-hospitals and workers-firms; see Klaus, Manlove, and Rossi (2016) for a recent survey. In these settings, the preferences are usually represented by ordinal rankings rather than by utility functions, and the common design goals are Pareto efficiency, stability and strategy-proofness rather than fairness. Gafni, Huang, Lavi, and Talgam-Cohen (2021) study fair allocation in a related setting in which items may have multiple copies.
Partition constraints. Fair allocation of items of different categories is studied by Mackin and Xia (2016) and Sikdar, Adali, and Xia (2017). Each category contains n items, and each agent must receive exactly one item of each category. Sikdar, Adalı, and Xia (2019) consider an exchange market in which each agent holds multiple items of each category and should receive a bundle with exactly the same number of items of each category. The above works focus on designing strategyproof mechanisms. Nyman, Su, and Zerbib (2020) study a similar setting (they call the categories "houses" and the objects "rooms"), but with monetary transfers (which they call "rent").
Following the paper by Biswas and Barman (2018) focusing on EF1 fairness, Hummel and Hetland (2021) study partition matroid constraints in combination with a different fairness notion -the maximin-share. Their algorithm attains a 1/2-factor approximation to this fairness notion.
Matroid constraints. Gourvès, Monnot, and Tlilane (2013) study a setting with a single matroid, where the goal includes building a base of the matroid and providing worst-case guarantees on the agents' utilities. Gourvès, Monnot, and Tlilane (2014) and Gourvès and Monnot (2019) require the union of bundles allocated to all agents to be an independent set of the matroid. This, by design, requires leaving some items unallocated, which is not allowed in our setting.
Budget constraints. Budget constraints (also called knapsack constraints) assume that each item has a cost, each agent has a budget, and the feasible bundles for an agent are the bundles with total cost at most the budget. In this setting, Wu, Li, and Gan (2021) show that a 1/4-factor EF1 allocation exists. Gan, Li, and Wu (2021) show that a 1/2-factor EF1 allocation exists when the valuations are identical, and an EF1 allocation exists among two agents with the same budget.
Connectivity constraints. Barrera, Nyman, Ruiz, Su, and Zhang (2015), Bilò, Caragiannis, Flammini, Igarashi, Monaco, Peters, Vinci, and Zwicker (2018), and Suksompong (2019) study another kind of constraint in fair allocation. The goods are arranged on a line, and each agent must receive a connected subset of the line. Bouveret, Cechlárová, Elkind, Igarashi, and Peters (2017) and Bei, Igarashi, Lu, and Suksompong (2019) study a more general setting in which the goods are arranged on a general graph, and each agent must receive a connected subgraph. Note that these are not matroid constraints.
No-conflict constraints. Li, Li, and Zhang (2021) study fair allocation with scheduling constraints, where each item is an interval, and the feasible bundles are the sets of non-overlapping intervals. Hummel and Hetland (2022) study fair allocation in a more general setting in which there is a conflict graph G, and the feasible sets are the sets of non-adjacent vertices in G.
Downward-closed constraints. Li and Vetta (2021) study fair allocation with downward-closed constraints, which include matroid, budget, and no-conflict constraints as special cases. For this very general setting, they present an algorithm that approximates the maximin share.
Non-additive valuations. As explained in the introduction, fair allocation with constraints is closely related (though not equivalent) to fair allocation with non-additive valuations. This problem has attracted considerable attention recently. Bei, Garg, Hoefer, and Mehlhorn (2017) and Anari, Mai, Gharan, and Vazirani (2018) study allocation of multi-unit item-types when the valuation is additive between types but concave (i.e., has decreasing marginal returns) between units of the same type. They give a 2-approximation to the maximum Nash welfare (the product of utilities). Garg, Hoefer, and Mehlhorn (2018) study budget-additive valuations, where each agent has a utility-cap and values each bundle as the minimum between the sum of item values and the utility-cap. They give a 2.404-approximation to the MNW.
Particularly relevant to matroid constraints are the submodular valuations with binary marginals, where adding an item to a bundle increases the bundle value by either 0 or 1. These valuations are equivalent to matroid rank functions - functions that evaluate a bundle by the size of the largest independent set of a certain matroid contained in that bundle. In this setting, Benabbou et al. (2020) and Babaioff et al. (2021) present allocation mechanisms that are Pareto-efficient, EF1, EFX and strategyproof. Barman and Verma (2021) present a polynomial-time algorithm that finds an allocation satisfying maximin-share fairness.
Model and Preliminaries
Allocations and Constraints
We consider settings where a set M of m items should be allocated among a set N of n agents. An allocation is denoted by X = (X_1, ..., X_n), where X_i ⊆ M is the bundle given to agent i, and X_i ∩ X_j = ∅ for all i ≠ j ∈ N. An allocation is complete if ⋃_{i∈N} X_i = M. Throughout, we use [n] to denote the set {1, ..., n}.
We consider constrained settings, where every agent i is associated with a matroid M_i = (M, I_i) that specifies the feasible bundles for i.
Definition 1 (matroid). A matroid is a pair M = (M, I), where M is a set of items and I ⊆ 2^M is a nonempty set of independent sets satisfying the following properties:

(i) Downward-closed: S ⊂ T and T ∈ I implies S ∈ I;
(ii) Augmentation: For every S, T ∈ I, if |S| < |T|, then S ∪ {g} ∈ I for some g ∈ T \ S.
A base of M is a maximal independent set in M.
A special case of a matroid is a partition matroid:
Definition 2 (partition matroid). A matroid M_i = (M, I_i) is a partition matroid if there exists a partition of M into categories C_i = {C^1_i, ..., C^{ℓ_i}_i} for some ℓ_i ≤ m, and a corresponding vector of capacities k^1_i, ..., k^{ℓ_i}_i, such that the collection of independent sets is

I_i = {S ⊆ M : |S ∩ C^h_i| ≤ k^h_i for every h ∈ [ℓ_i]}.

Given an allocation X, we denote by X^h_i the items from category C^h_i given to agent i in X.
A special case of a partition matroid is a uniform matroid, which is a partition matroid with a single category.
Definition 3 (feasible allocation). An allocation X is said to be feasible if: (i) it is individually feasible: X_i ∈ I_i for every agent i; and (ii) it is complete: ⋃_{i∈N} X_i = M.
Let F denote the set of all feasible allocations. Throughout this paper we consider only instances that admit a feasible allocation:

Assumption 1. All instances considered in this paper admit a feasible allocation; i.e., F ≠ ∅.

For partition matroids, feasibility means that for every category C^h, the sum of agent capacities for this category is at least |C^h|.
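As an illustration, Assumption 1 is easy to verify for partition matroids with identical categories. The following minimal Python sketch (function and variable names are ours, not from the paper) checks it:

def is_feasible_instance(categories, capacities):
    # categories: dict mapping category h to the list of its items.
    # capacities: dict mapping (agent i, category h) to the capacity k^h_i.
    # Feasible iff, in every category, the total capacity covers all items.
    agents = {i for (i, _) in capacities}
    return all(
        sum(capacities[(i, h)] for i in agents) >= len(items)
        for h, items in categories.items()
    )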
An instance is said to have identical matroids if all agents have the same matroid feasibility constraints, i.e., I_i = I_j for all i, j ∈ N.

An instance with partition matroids is said to have identical categories if all the agents have the same partition into categories, i.e., ℓ_i = ℓ_j = ℓ for every i, j ∈ N, and C^h_i = C^h_j = C^h for every h ∈ [ℓ]. The capacities, however, may be different.
Valuations and Fairness Notions
Every agent i is associated with an additive valuation function v_i : 2^M → R_+, which assigns a nonnegative real value to every set S ⊆ M. Additivity means that there exist m values v_i(1), ..., v_i(m) such that v_i(S) = Σ_{j∈S} v_i(j). An additive valuation v_i is called binary if v_i(j) ∈ {0, 1} for every i ∈ N, j ∈ M. An allocation X is Social Welfare Maximizing (SWM) if X ∈ argmax_{X′∈F} Σ_{i∈[n]} v_i(X′_i).
Definition 4 (envy and envy-freeness). Given an allocation X, agent i envies agent j if v_i(X_i) < v_i(X_j). X is envy-free if no agent envies another agent.
Definition 5 (EF1; Budish, 2011). An allocation X is envy-free up to one good (EF1) if for every i, j ∈ N, there exists a subset Y ⊆ X_j with |Y| ≤ 1, such that v_i(X_i) ≥ v_i(X_j \ Y).

Definition 6 (best feasible subset). A best feasible subset of a set T for agent i, denoted Best_i(T), is any subset in argmax_{S⊆T, S∈I_i} v_i(S).
Definition 7 (feasible valuation). The feasible valuation of agent i for a set T is v̂_i(T) := v_i(Best_i(T)).

Note that v̂_i(T) is well-defined even though Best_i(T) may not be uniquely determined.
Definition 8. Given a feasible allocation X:

• Agent i F-envies agent j if v̂_i(X_i) < v̂_i(X_j).
• X is F-EF (feasible-EF) if no agent F-envies another one.
• X is F-EF1 if for every i, j ∈ N, there exists a subset Y ⊆ X_j with |Y| ≤ 1, such that v̂_i(X_i) ≥ v̂_i(X_j \ Y).
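To make Definitions 6-8 concrete, here is a minimal Python sketch for partition-matroid constraints, where a best feasible subset simply keeps the k^h_i most valuable items of each category; the helper names are ours and purely illustrative:

def best_feasible_value(bundle, values, caps, category_of):
    # v-hat: the agent's value for a best feasible subset of `bundle`.
    # values[g]: the agent's value for item g; caps[h]: her capacity k^h;
    # category_of[g]: the category of item g.
    by_cat = {}
    for g in bundle:
        by_cat.setdefault(category_of[g], []).append(values[g])
    return sum(sum(sorted(vals, reverse=True)[:caps[h]])
               for h, vals in by_cat.items())

def is_f_ef1(allocation, values, caps, category_of):
    # Check F-EF1 (Definition 8); allocation: dict agent -> list of items.
    for i, X_i in allocation.items():
        v_own = best_feasible_value(X_i, values[i], caps[i], category_of)
        for j, X_j in allocation.items():
            if i == j:
                continue
            # the envy must vanish after removing some single item from X_j
            options = [best_feasible_value([g for g in X_j if g != r],
                                           values[i], caps[i], category_of)
                       for r in X_j] or [0]
            if v_own < min(options):
                return False
    return True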
For further discussion of the F-EF1 criterion, and an alternative (weaker) definition, see Appendix A.
Another useful notation is positive feasible envy, which is the amount by which an agent F-envies another agent:
Definition 9. The positive feasible envy of agent i towards j in allocation X is:

Envy^+_X(i, j) := max(0, v̂_i(X_j) − v̂_i(X_i)).

Definition 10. The envy graph of an allocation X, G(X), is a directed graph where the nodes represent the agents, and there is an edge from agent i to agent j iff v_i(X_i) < v_i(X_j).
The feasible envy graph is defined analogously based on the feasible-envy.
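Several of the algorithms below repeatedly build the feasible envy graph and topologically sort it. A minimal sketch, reusing the hypothetical best_feasible_value helper from above together with Python's standard graphlib module:

from graphlib import TopologicalSorter

def feasible_envy_graph(allocation, values, caps, category_of):
    # Edge i -> j whenever agent i F-envies agent j.
    vhat = {(i, j): best_feasible_value(allocation[j], values[i],
                                        caps[i], category_of)
            for i in allocation for j in allocation}
    return {i: {j for j in allocation if j != i and vhat[i, i] < vhat[i, j]}
            for i in allocation}

def topological_order(envy_graph):
    # An order in which no agent F-envies an earlier agent: whenever
    # i envies j, agent i is placed before agent j (requires acyclicity).
    agents = set(envy_graph)
    enviers_of = {j: {i for i in agents if j in envy_graph[i]}
                  for j in agents}
    return list(TopologicalSorter(enviers_of).static_order())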
Common Tools and Techniques
Below we review the most common methods for finding an EF1 allocation.
Envy cycle elimination. The first method for attaining an EF1 allocation (in the unconstrained setting, even with arbitrary monotone valuations) is due to Lipton et al. (2004).
The envy cycles elimination algorithm works as follows. Start with the empty allocation. Then, as long as there is an unallocated item: (i) choose an agent that is a source in the envy graph (not envied by another agent) and give her an arbitrary unallocated item, (ii) reconstruct the envy graph G corresponding to the new allocation, (iii) as long as G contains cycles, choose an arbitrary cycle, and shift the bundles along the cycle. This increases the total value, thus this process must end with a cycle-free graph.
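A minimal Python sketch of this procedure, for the unconstrained additive setting (names and representation are ours):

def envy_cycle_elimination(items, values):
    # Lipton et al. (2004): returns an EF1 allocation as a dict agent -> set.
    # values[i][g] is agent i's additive value for item g.
    agents = list(values)
    alloc = {i: set() for i in agents}

    def envies(i, j):
        v = values[i]
        return sum(v[g] for g in alloc[i]) < sum(v[g] for g in alloc[j])

    def some_source():  # an agent whom nobody envies, or None
        return next((i for i in agents
                     if not any(envies(j, i) for j in agents if j != i)),
                    None)

    for g in items:
        while (source := some_source()) is None:
            # no unenvied agent: everyone has an envier, so walking
            # backwards along envy edges must eventually close a cycle
            path = [agents[0]]
            while path.count(path[-1]) < 2:
                path.append(next(j for j in agents if envies(j, path[-1])))
            cycle = path[path.index(path[-1]):-1]
            old = {i: alloc[i] for i in cycle}
            for k, i in enumerate(cycle):
                alloc[i] = old[cycle[k - 1]]  # take the bundle she envies
        alloc[source].add(g)
    return alloc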
Max Nash welfare. The Nash social welfare (NW) of an allocation X is the geometric mean of the agents' values:

NW(X) = (Π_{i∈[n]} v_i(X_i))^{1/n}.

An allocation is max Nash welfare (MNW) if it maximizes the NW among all feasible allocations. Caragiannis, Kurokawa, Moulin, Procaccia, Shah, and Wang (2019) showed that in unconstrained settings with additive valuations, every MNW allocation is EF1.
Round robin (RR). RR works as follows. Given a fixed order σ over the agents, as long as there is an unallocated item, the next agent according to σ (where the next agent after agent n is agent 1) chooses an item she values most among the unallocated items. Simple as it might be, this algorithm results in an EF1 allocation in unconstrained settings with additive valuations (Caragiannis et al., 2019).

Per-category RR + envy cycle elimination. This algorithm (Algorithm 1) was introduced by Biswas and Barman (2018) for finding an EF1 allocation in settings with homogeneous partition constraints. It resolves the categories sequentially, resolving each one by RR followed by envy cycle elimination, where the order over the agents is determined by a topological order in the obtained envy graph.

ALGORITHM 1: Per-Category Round Robin (Biswas & Barman, 2018)
initialize: σ ← an arbitrary order over the agents; ∀i ∈ [n]: X_i ← ∅
for every category h do
    Run round robin with C^h, σ; let X^h_i be the resulting allocation for agent i;
    ∀i ∈ [n]: X_i ← X_i ∪ X^h_i;
    Draw the envy graph for the current allocation;
    Remove cycles from the graph, switching bundles along the cycles;
    Set σ to be a topological order of the graph;
end
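For concreteness, a sketch of the inner round-robin step in Python; Algorithm 1 composes this with the envy-cycle elimination sketched above (names are ours):

def round_robin(items, values, order):
    # Each agent, in a fixed cyclic order, picks her favourite remaining item.
    remaining = set(items)
    alloc = {i: set() for i in order}
    turn = 0
    while remaining:
        agent = order[turn % len(order)]
        favourite = max(remaining, key=lambda g: values[agent][g])
        alloc[agent].add(favourite)
        remaining.remove(favourite)
        turn += 1
    return alloc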
Pareto Efficiency
Definition 11. An allocation X is Pareto-efficient if there is no other allocation X ′ such that all agents weakly prefer X ′ over X and at least one agent strictly prefers X ′ .
We observe that, with additive identical valuations, every complete feasible allocation is Pareto-efficient.
Observation 1. For any (possibly constrained) setting with identical additive valuations, every complete feasible allocation is Pareto-efficient.

Proof. Let v denote the common valuation of all agents. Let X be a complete feasible allocation. Feasibility implies that for every agent i ∈ N: v̂_i(X_i) = v(X_i). Therefore, Σ_{i∈N} v̂_i(X_i) = Σ_{i∈N} v(X_i). By completeness and additivity, the latter sum equals v(M), which is a constant independent of X. So the sum of feasible values is the same in all complete feasible allocations. Therefore, if any other allocation gives a higher value to some agent, it must give a lower value to some other agent.
Impossibility Results
In this section we give some intuition for why previous approaches fail in the case of heterogeneous constraints, and provide impossibility results for settings beyond the ones considered in this paper.
All examples in this section involve two agents. Every item is denoted by a pair of values, where the first and second values correspond to the value of the first and second agents, respectively. For example, an item (0, 1), or simply 0, 1, denotes an item that agent 1 values at 0 and agent 2 values at 1.
Partition Matroids, Maximum Nash Welfare
The following example shows that a maximum Nash welfare (MNW) outcome may not be F-EF1 in settings with feasibility constraints, even under identical partition matroid constraints and binary valuations. We note that the existence of such an example has been mentioned by Biswas and Barman (2018) without a proof; we include it here for completeness.

Example 1. Consider the setting and allocation illustrated in Table 2. This example consists of 2 agents and 2 categories. Category 1 has 4 items and capacity 2 for both agents; Category 2 has 6 items and capacity 3 for both agents. Recall that x, y refers to an item that is valued at x by agent A and valued at y by agent B. In the allocation given in Table 2, v_A(X_A) = 2 and v_B(X_B) = 3, resulting in a Nash welfare of 6. One can verify that this allocation is not EF1. The only other feasible allocations have either value 0 for agent A and value 5 for agent B (for a Nash welfare of 0), or value 1 for agent A and value 4 for agent B (for a Nash welfare of 4); the latter allocation is EF1.

Table 2:
Category                       Alice              Bob
C^1 (k^1_A = k^1_B = 2)        1,1  1,1           0,0  0,0
C^2 (k^2_A = k^2_B = 3)        0,1  0,1  0,1      0,1  0,1  0,1

Note that Benabbou et al. (2020) prove that MNW always implies EF1 for submodular valuations with binary marginals. However, they consider clean allocations, where items with 0 marginal value are not allocated. In contrast, we consider complete allocations, in which all items must be allocated. Indeed, if we "clean" the allocation in Table 2 by having Alice dispose of the three items she does not desire in category C^2, the allocation becomes EF1 while remaining MNW. The same reasoning (and same example) applies also to the prioritized egalitarian mechanism introduced by Babaioff et al. (2021). Specifically, this mechanism gives a clean ("non-redundant" in their terminology) Lorentz-dominating allocation, which is shown to be EF1 (and even EFX). However, the obtained allocation is not complete.
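The numbers in Example 1 can be double-checked by brute force. A small self-contained sketch (here the agents' capacities are identical and every bundle is exactly at capacity, so feasible envy coincides with plain envy):

from itertools import combinations

C1 = [(1, 1), (1, 1), (0, 0), (0, 0)]  # category 1, capacity 2 per agent
C2 = [(0, 1)] * 6                      # category 2, capacity 3 per agent

def value(agent, bundle):              # agent 0 is Alice, agent 1 is Bob
    return sum(item[agent] for item in bundle)

def ef1(A, B):
    def fine(i, own, other):           # envy gone after dropping one item?
        drop = max((item[i] for item in other), default=0)
        return value(i, own) >= value(i, other) - drop
    return fine(0, A, B) and fine(1, B, A)

outcomes = []
for a1 in combinations(range(4), 2):       # Alice's two items from C1
    for a2 in combinations(range(6), 3):   # Alice's three items from C2
        A = [C1[k] for k in a1] + [C2[k] for k in a2]
        B = ([C1[k] for k in range(4) if k not in a1]
             + [C2[k] for k in range(6) if k not in a2])
        outcomes.append((value(0, A) * value(1, B), ef1(A, B)))
print(max(outcomes))  # (6, False): the max-Nash-welfare outcomes are not EF1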
Partition Matroids, Heterogeneous Categories
The following example shows that if agents have partition matroid constraints, where the partitions of the items into categories is not the same, then an F-EF1 allocation might not exist, even if the valuations are identical and binary.
Example 2 (Heterogeneous categories, no F-EF1). Consider a setting with four items and two agents with identical binary valuations: (1, 1), (1, 1), (0, 0), (0, 0). Suppose that:

• Alice's partition has two categories, C^1_A = {(1, 1), (0, 0)} with capacity 1 and C^2_A = {(1, 1), (0, 0)} with capacity 1.
• Bob's partition has three categories: C^1_B = {(1, 1)} with capacity 1, C^2_B = {(1, 1)} with capacity 1, and C^3_B = {(0, 0), (0, 0)} with capacity 0.
There is a unique feasible allocation, in which Bob gets the two (1, 1) items from C^1_B and C^2_B, and Alice gets the two (0, 0) items from C^1_A and C^2_A. Bob's bundle is feasible for Alice, so Alice envies Bob beyond F-EF1.
Non-Existence of EF1 for Non-Matroid Constraints
In this subsection we consider natural classes of constraints (set systems) that are not matroidal, and show that they might not admit a complete EF1 allocation.
Definition 12. Let I ⊆ 2^M be a set of subsets of M. Two elements x, y ∈ M are called complementary for I if, for every partition of M into two subsets X_1, X_2 ∈ I (with X_1 ∩ X_2 = ∅ and X_1 ∪ X_2 = M), either {x, y} ⊆ X_1 or {x, y} ⊆ X_2.

Proposition 1. If two items are complementary for I, then for two agents whose feasible bundles are given by I, there exist identical binary valuations for which no complete feasible allocation is EF1.

Proof. Suppose both agents value both complementary items at 1 and the other items at 0. In every feasible allocation, one agent gets none of the complementary items, and thus envies the other agent who gets both these items, and the envy is by two items.
There are several natural constraints with complementary items.
Example 3 (Matching and matroid-intersection constraints). Consider the allocation of course seats among students, where the seats are categorized both by their time and by their subject, and a student should get at most a single seat per time and at most a single seat per subject. Suppose there are two subjects, physics and chemistry; each of them is given in two time-slots, morning and evening. The items (physics,morning) and (chemistry,evening) are complementary, since in the (unique) partition of the four seats into two feasible subsets, these two seats appear together. By Proposition 1, an EF1 allocation may not exist.
In general, the above constraints can be represented by an intersection of two partition matroids: one matroid (M, I_1) partitions the seats into two categories by their time, and another matroid (M, I_2) partitions them by their subject, and the capacities of all categories are 1. The feasible bundles are the bundles in I_1 ∩ I_2.
The above constraints can also be represented by the set of matchings in a bipartite graph whose edges are the items: one side of the graph contains the time-slots, the other side contains the subjects, and each seat is the edge connecting its time-slot to its subject.
Matching constraints in bipartite graphs can be formed as an intersection of two partition matroids: one partitions the edges into categories based on their leftmost endpoint, and the other partitions the edges into categories based on their rightmost endpoint, and all categories in both matroids have a capacity of 1.
Example 4 (Conflict-graph constraints). Conflict-graph constraints were recently studied by Hummel and Hetland (2022). Suppose the items are the four vertices of a cycle of length four (a square):
Edges denote conflicts, and the feasible sets are the set of non-adjacent vertices. Two diagonally-opposite vertices are complementary, so by Proposition 1 an EF1 allocation may not exist.
Example 5 (Budget constraints). Budget constraints were recently studied by Wu et al. (2021) and Gan et al. (2021). Suppose there are two items x, y with a cost of 10 and one item z with a cost of 20, and two agents with budget 20. The only complete feasible partition is ({z}, {x, y}). The items x, y are complementary, so by Proposition 1 an EF1 allocation might not exist.

Remark 1. If (M, I) is a matroid, and there is at least one partition of M into two independent sets, then there are no complementary items for I. This follows from the symmetric basis exchange property (Brualdi, 1969).
The opposite is not necessarily true. For example, suppose the elements of M are arranged on a line and I contains all the connected subsets along the line. This constraint is not a matroid, since it is not downward-closed. But it has no complementary items. Indeed, an EF1 allocation exists for any number of agents with binary valuations (Bilò et al., 2018).
As another example, consider a budget constraint with a budget of 7, and suppose there are four items with costs 1, 2, 3, 4. This constraint is downward-closed, but it is not a matroid, since there are maximal feasible sets of different cardinalities (1, 2, 4 and 3, 4). But it has no complementary items: 1 and 2 are separated by the feasible partition {1, 3}, {2, 4}; 3 is separated from the other items by the feasible partition {3}, {1, 2, 4}; and 4 is separated from the other items by the feasible partition {4}, {1, 2, 3}.
Therefore, characterizing the constraints for which an EF1 allocation is guaranteed to exist remains an open problem.
Non-Existence of EFX, Uniform Matroids
An envy-free up to any good (EFX) allocation is a feasible allocation X where for every pair of agents i, j, and for every good g in j's bundle, v_i(X_i) ≥ v_i(X_j \ {g}).
Clearly, EFX is stronger than EF1. EFX has been recently shown to exist in the unconstrained setting for up to 3 agents with additive valuations (Chaudhury, Garg, & Mehlhorn, 2020). However, under constrained settings an EFX allocation may not exist even in the simple setting of two agents with identical uniform matroid constraints and identical binary valuations.
Example 6. There are four items a, b, c, d, with values v(a) = v(b) = v(c) = 0 and v(d) = 1 for both agents, and a capacity of 2 for each agent. In every feasible allocation (i.e., allocating 2 items to each agent), the agent who does not get item d is envious beyond EFX.
At Most Two Categories
Uniform Matroids
As a warm-up, we present a simple algorithm for a setting with a single category. We call it Capped Round Robin (CRR). CRR is a slight modification of round robin, where an agent who has reached her capacity is skipped over (Algorithm 2).

ALGORITHM 2: Capped Round Robin (CRR)
Input: a category C^h, an order σ over the agents, capacities k^h_i.
1: L ← C^h, P ← {i : k^h_i = 0}, t ← 0, ∀i ∈ [n]: X^h_i ← ∅.
2: while L ≠ ∅ do
3:    i ← σ[t].
4:    if i ∉ P then
5:       g ← argmax_{g∈L} v_i({g}).
6:       X^h_i ← X^h_i ∪ {g}.
7:       L ← L \ {g}.
8:       if |X^h_i| = k^h_i then
9:          P ← P ∪ {i}.
10:      end if
11:   end if
12:   t ← t + 1 mod n.
13: end while
14: return X^h

CRR finds an F-EF1 allocation whenever the constraints of all agents are uniform matroids, i.e., all items belong to a single category (but agents may have different capacities and different valuations).
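A direct Python transcription of Algorithm 2; a sketch that presumes Assumption 1 (Σ_i k^h_i ≥ |C^h|), so the loop terminates:

def crr(items, values, caps, order):
    # Capped Round Robin for a single category.
    # values[i][g]: agent i's value for item g; caps[i]: capacity k^h_i.
    remaining = set(items)
    alloc = {i: set() for i in order}
    turn = 0
    while remaining:
        i = order[turn % len(order)]
        if len(alloc[i]) < caps[i]:        # skip agents at capacity
            g = max(remaining, key=lambda g: values[i][g])
            alloc[i].add(g)
            remaining.remove(g)
        turn += 1
    return alloc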
Theorem 2. With uniform-matroid constraints, CRR finds an F-EF1 allocation. Furthermore, if X is the outcome, then for every i, j such that i precedes j in σ, v_i(X_i) = v̂_i(X_i) ≥ v̂_i(X_j).
The proof is similar to that of standard round robin in the unconstrained setting; we include it here for completeness.
Proof. Let i, j ∈ N s.t. i precedes j in σ. First, we prove v_i(X_i) ≥ v̂_i(X_j). Since i chooses first among i, j, |X_i| ≥ |Best_i(X_j)|.

If we order X_i and Best_i(X_j) according to the order in which the items were taken, every item in X_i was chosen before (and therefore worth more to agent i than) the corresponding item in Best_i(X_j) (if such an item exists, as |X_i| ≥ |Best_i(X_j)|). That is because, between the two of them, i chose first. So v_i(X_i) ≥ v_i(Best_i(X_j)) = v̂_i(X_j).

It remains to show that X is F-EF1. Let g be the first item chosen by agent i. Notice that if we remove g from X_i, it is equivalent to j being the one choosing first among i, j (when the item set does not include g), and therefore we can use the exact same argument to claim that v_j(X_j) ≥ v_j(Best_j(X_i \ {g})) = v̂_j(X_i \ {g}).
Two Categories
While CRR may not find an F-EF1 allocation for more than one category, we can extend it to two categories by running CRR with reverse order on the second category; see Algorithm 3.
Theorem 3. When all agents have partition-matroid constraints with at most two categories, the same categories but possibly different capacities, an F-EF1 allocation always exists and can be found efficiently.
ALGORITHM 3: Back-and-Forth CRR
1: σ ← an arbitrary order over the agents.
2: Run Capped Round Robin with C^1, σ. Let X^1_i be the outcome for each agent i ∈ N.
3: σ′ ← reverse(σ).
4: Run Capped Round Robin with C^2, σ′. Let X^2_i be the outcome for each agent i ∈ N.
5: return X^1_i ∪ X^2_i for all i ∈ N.
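In code, Algorithm 3 is two calls to the crr sketch above with opposite orders (illustrative names again):

def back_and_forth_crr(C1, C2, values, caps, order):
    # caps[(i, h)]: agent i's capacity in category h; reuses crr() above.
    x1 = crr(C1, values, {i: caps[(i, 1)] for i in order}, order)
    x2 = crr(C2, values, {i: caps[(i, 2)] for i in order},
             list(reversed(order)))
    return {i: x1[i] | x2[i] for i in order}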
Proof. Algorithm Back-and-Forth CRR (Algorithm 3) runs CRR in an arbitrary order for the first category, then uses the reverse order for CRR in the second category. After the first category, by Theorem 2, the allocation is F-EF1 and no agent F-envies another agent that appears in σ after her. Consider two arbitrary agents i, j at the end of the algorithm.

If agent i F-envied agent j (up to one good) after the first category, she appears before j in σ′ and thus will not gain any more envy in the second category. If i did not F-envy j after the first category, she can only gain envy up to one good in the second category. That is, in one of the categories she might envy up to one good, and in the other she will not envy at all. We conclude that the resulting allocation is F-EF1.
Different Capacities, Identical Valuations
We now consider an arbitrary number of categories, allow agents to have different capacities, but assume that all agents have the same valuations; this is, in a sense, the dual setting of Biswas and Barman (2018) who consider identical capacities and different valuations. Using CRR as a subroutine, we show that a similar algorithm to the one used by Biswas and Barman (2018) finds an F-EF1 allocation in this setting; this follows from the fact that no cycles can be formed in the envy graph. Using Algorithm 4 we prove:
Theorem 4. For every instance with identical additive valuations and partition matroids with identical categories (but possibly different capacities), Algorithm 4 returns an F-EF1 allocation.
Similarly to Biswas and Barman (2018), our Algorithm 4 iterates over the categories, running a sub-routine in each. While they run round-robin, we run CRR. The order for the sub-routine is determined by a topological sort of the envy graph. Biswas and Barman (2018) have an extra step of de-cycling the graph, which is not needed in our case due to the following lemma.

ALGORITHM 4: Per-Category CRR
1: σ ← an arbitrary order over the agents; ∀i ∈ [n]: X_i ← ∅.
2: for every category h do
3:    Run Capped Round Robin with C^h, σ; let X^h_i be the outcome for each agent i ∈ N.
4:    ∀i ∈ [n]: X_i ← X_i ∪ X^h_i.
5:    Set σ to be a topological order of the feasible-envy graph (which is acyclic by Lemma 1).
6: end for

Lemma 1. For any setting with identical valuations (possibly with different capacities), the feasible envy graph of any feasible allocation is acyclic.
Proof. Assume towards contradiction that there exists some allocation X whose corresponding feasible envy graph contains a cycle; denote the agents in the cycle 1, ..., p, according to their order on the cycle. Note that, while all agents have the same valuation v, their feasible-valuation functions v̂_i(·) may differ.

• Feasible envy implies that, for every i ∈ [p], v̂_i(X_i) < v̂_i(X_{i+1}), where we denote p + 1 ≡ 1.
• Since the allocation is feasible, v̂_i(X_i) = v(X_i).
• Since the feasible valuation is at most the valuation, v̂_i(X_{i+1}) ≤ v(X_{i+1}).

Combining the above three inequalities implies v(X_i) < v(X_{i+1}) for all i ∈ [p], so we have v(X_1) < ··· < v(X_p) < v(X_1), a contradiction.
Proof of Theorem 4. We show by induction that after every category the allocation is F-EF1. Base: after the first category the allocation is F-EF1 according to Theorem 2.
Step: assume the allocation is F-EF1 after t categories. Before running category t + 1, we reorder the agents topologically according to the feasible envy graph, and use this order as σ in Algorithm 2 (CRR). This is possible by Lemma 1, which shows that the feasible envy graph is acyclic. For every i, j such that i precedes j in σ, j does not F-envy i. By Theorem 2, during category t + 1, j can become envious of i, but only up to one good, and i's envy cannot increase. This implies that if the allocation is F-EF1 at the end of category t, it remains F-EF1 after category t + 1.
The following theorem shows that, for identical valuations and possibly different capacities, the Maximum Nash Welfare allocation is F-EF1.
Theorem 5. For different capacities and identical valuations, any feasible allocation that maximizes Nash Social Welfare is F-EF1.
Proof. Let X be an allocation that maximizes Nash social welfare (MNW), and suppose there exist agents i, j such that i F-envies j. This means that, for at least one category h, v̂_i(X^h_i) < v̂_i(X^h_j). Since the allocation is feasible, this implies v(X^h_i) < v̂_i(X^h_j).

Without loss of generality, we can assume that |X^h_i| = k^h_i; otherwise, we can add to category h some k^h_i − |X^h_i| dummy elements with value 0 to all agents and give them to agent i without affecting the valuations.

v(X^h_i) < v̂_i(X^h_j) implies v̂_i(X^h_j) > 0, which implies |X^h_i| = k^h_i > 0 and |X^h_j| > 0. Therefore, the following items exist:

b := argmin_{t∈X^h_i} v(t);  g := argmax_{t∈X^h_j} v(t).

So v(X^h_i) ≥ k^h_i · v(b) and v(X^h_j) ≤ k^h_j · v(g). Since i F-envies j in category C^h,

k^h_i · v(g) ≥ v̂_i(X^h_j) > v̂_i(X^h_i) = v(X^h_i) ≥ k^h_i · v(b),

so v(g) − v(b) > 0.

Let X′ be the allocation obtained from X by swapping goods b and g between i's and j's allocations. X′ is feasible since b and g are in the same category. Since X is MNW, it follows that

(v(X_i) + v(g) − v(b)) · (v(X_j) − v(g) + v(b)) ≤ v(X_i) · v(X_j).

Let z = v(g) − v(b) > 0. We get:

v(X_i)v(X_j) − v(X_i)z + v(X_j)z − z² ≤ v(X_i)v(X_j).

Simplifying the above expression and using the fact that z > 0, we get:

v(X_j) − z ≤ v(X_i).

Since the valuation is additive, and v(b) ≥ 0, we get:

v(X_i) ≥ v(X_j) − z = v(X_j) − v(g) + v(b) ≥ v(X_j) − v(g) = v(X_j \ {g}).

By the fact that v̂_i(S) ≤ v(S) for every set S, we get:

v̂_i(X_j \ {g}) ≤ v(X_j \ {g}) ≤ v(X_i) = v̂_i(X_i).

This implies that X is F-EF1, completing the proof.
The fact that the constraints are based on partition matroids is used in the proof step regarding the exchange of items b and g. Possibly, the result can be extended to agents with different matroid constraints (M, I_i), as long as the different matroids satisfy some "pairwise basis exchange" property; however, it is not clear how to define such a property.
Partition Matroids with Binary Valuations
In this section we assume that all agents have binary additive valuations. For this setting, we present an efficient algorithm that finds an F-EF1 allocation for n agents with different valuations, and partition matroids with different capacity constraints. For binary valuations, v_i(j) ∈ {0, 1} for all i, j, and for every agent i we refer to the set of items J_i = {j ∈ M s.t. v_i(j) = 1} as agent i's desired set.

ALGORITHM 5: Iterated Priority Matching
1: ∀i ∈ [n]: X_i ← ∅.
2: for every category h do
3:    ∀i ∈ [n]: X^h_i ← ∅.
4:    T^h := max_{i∈N} k^h_i.
5:    for t = 1, ..., T^h do
6:       Construct the agent-item graph G^h_t (see Definition 13).
7:       Construct the feasible-envy graph corresponding to X.
8:       σ ← a topological order on the feasible envy-graph.
9:       Find a priority matching in G^h_t according to σ (see Definition 14).
10:      For every agent i who is matched to an item g_i: X^h_i ← X^h_i ∪ {g_i}.
11:   end for
12:   Allocate the unmatched items of C^h arbitrarily to agents with remaining capacity.
13:   ∀i ∈ [n]: X_i ← X_i ∪ X^h_i.
14: end for

Theorem 6. In every setting with partition matroids and binary valuations (possibly heterogeneous capacities and heterogeneous valuations), an F-EF1 allocation exists and can be computed efficiently by the Iterated Priority Matching algorithm (Algorithm 5).
Two key tools we use are the agent-item graph and the priority matching, defined next.
Definition 13 (Agent-item graph). Given a category h and a partial allocation X, the agent-item graph is a bipartite graph G^h, where one side consists of the agents with remaining capacity (i.e., agents such that |X^h_i| < k^h_i), and the other side consists of the unallocated items of C^h. An edge (i, j) exists in G^h iff j is a desired item of i (that is, v_i(j) = 1).
Definition 14 (Priority matching). Given a graph G = (V, E), a matching in G is a subset of edges µ ⊆ E such that each vertex u ∈ V is adjacent to at most one edge in µ. Given a linear order on the vertices, σ[1], ..., σ[n], every matching is associated with a binary vector of size n, where element i equals 1 whenever vertex σ[i] is matched. The priority matching of σ is the matching associated with the maximum such vector in the lexicographic order. Note that every ordering σ over the vertices yields a potentially different priority matching.
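One simple way to compute a priority matching (a folklore reduction, not the method of the works cited below) is via maximum-weight matching with geometrically decreasing weights, e.g., using the networkx library: matching the r-th vertex in σ is worth 2^(n−r), which dominates the combined worth of all lower-priority vertices, so a maximum-weight matching is lexicographically maximal.

import networkx as nx

def priority_matching(priority_order, desired_items):
    # priority_order: agents listed as sigma[1..n]; desired_items[i]: the
    # items j with v_i(j) = 1. Returns a dict agent -> matched item.
    n = len(priority_order)
    G = nx.Graph()
    for r, i in enumerate(priority_order):
        for j in desired_items[i]:
            # every edge of agent i carries i's priority weight
            G.add_edge(('agent', i), ('item', j), weight=2 ** (n - r))
    result = {}
    for u, v in nx.max_weight_matching(G):
        if u[0] == 'item':
            u, v = v, u
        result[u[1]] = v[1]
    return result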
Priority matching was introduced by Roth, Sönmez, and Ünver (2005) in the context of kidney exchange, where they prove that every priority matching is also a maximum-cardinality matching; that is, it maximizes the total number of saturated vertices in V.¹

The Iterated Priority Matching algorithm (Algorithm 5) works category-by-category. For each category h, the items of C^h are allocated in two phases, namely the matching phase and the leftover phase. The matching phase proceeds in several iterations, where in each iteration, every agent receives at most one item. The number of iterations is at most the maximum capacity of an agent in C^h, denoted by T^h := max_{i∈N} k^h_i. Given the current allocation, let σ be a topological order over the agents in the feasible envy graph (we shall soon show that the feasible envy-graph is cycle-free). In each iteration t of the matching phase, we construct the agent-item graph G^h_t, and then compute a priority matching in G^h_t with respect to σ, and augment agent allocations by the obtained matches. We then update the feasible envy graph and proceed to the next iteration, where the next set of items in C^h is allocated.

1. Okumura (2014) extends this result to priority classes of arbitrary sizes, and shows a polynomial-time algorithm for finding a priority matching. Simpler algorithms were presented by Turner (2015a, 2015b).
After at most T^h iterations, all remaining items of category C^h contribute value 0 to all agents with remaining capacity, and we move to the leftover phase. In this phase, we allocate the leftover items arbitrarily among agents, respecting feasibility constraints. This is possible since a feasible allocation exists by assumption.
To prove the correctness of the algorithm, it suffices to prove that every feasible envy-graph constructed in the process is cycle-free, and that the feasible envy between any two agents is at most 1. We prove both conditions simultaneously in the following lemma.
Lemma 2. In every iteration of Algorithm 5:

(a) The feasible envy-graph has no cycles;
(b) For every i, j ∈ N, Envy^+_X(i, j) ≤ 1.
Proof. The proof is by induction on the categories and iterations. Both claims clearly hold from the outset (i.e., under the empty allocation). In the analysis below, we refer to states before (h, t) and after (h, t) to denote the states before and after iteration t of category h, respectively.
Proof of property (a). We assume that property (a) holds before (h, 1) (i.e., before starting to allocate items in category h). We prove that it holds after (h, t) for every t. Suppose by contradiction that after (h, t) there is a cycle i_1 → ··· → i_p = i_1 in the feasible envy-graph. By assumption (a), the cycle did not exist before category h, so at least one edge was created during the first t steps in category h. Suppose w.l.o.g. that it is the edge i_1 → i_2.
Let Q_1 be the set of items desired by i_1 that are allocated to i_1 up to iteration t of category h, and let q = |Q_1|. Agent i_1 must have gotten these q items in the first q iterations of h (otherwise, there exists an iteration ≤ q in which i_1 did not get an item, but a desired item remained unallocated, contradicting maximum priority matching).

Let Q_2 be the set of items desired by i_1 that are allocated to i_2 up to iteration t of category h. The fact that i_1 started to envy i_2 during category h implies that |Q_2| ≥ q + 1 and k^h_{i_1} ≥ q + 1. Agent i_2 must have gotten all these items in the first q + 1 iterations of h (otherwise, one of these items could have been allocated to i_1 in iteration q + 1, contradicting maximum priority matching). This implies that in fact |Q_2| = q + 1. It also implies that iteration q + 1 is still within the matching phase, since there is an item desired by i_1, and i_1 has remaining capacity. Therefore, i_2 received at least q + 1 items within the matching phase, implying that i_2's value increased by at least q + 1 up to iteration t of category h.

Let Q_3 be the set of items desired by i_2 that are allocated to i_3 up to iteration t of category h. By assumption of the envy-cycle, i_2 envies i_3 after (h, t). By the induction assumption, Envy^+_X(i_2, i_3) ≤ 1 before (h, 1). Since i_2's value increased by at least q + 1 up to iteration t of category h, it must hold that |Q_3| ≥ q + 1. We now claim that before (h, q + 1), at most one item of Q_3 was available, and i_3 got it in this iteration. Otherwise, one could allocate one of those items to i_2, and allocate the item that i_2 received in iteration q + 1 (that is desired by i_1) to i_1, increasing the priority matching.

We conclude that i_3 got an item in each one of the first q + 1 iterations of category h, as |Q_3| ≥ q + 1. Since all of these iterations are within the matching phase, all of these items are desired by i_3. Therefore, i_3's value increases by at least q + 1. Repeating this argument, we conclude that every agent along the cycle received at least q + 1 desired items during the first t steps of h, including agent i_p = i_1; but this is in contradiction to the fact that i_1 received q = |Q_1| items.
Proof of property (b). We assume that property (b) holds for every iteration before (h, t) and prove that it holds after (h, t). By the induction assumption, before (h, 1), ..., (h, t) we had Envy^+_X(i_1, i_2) ≤ 1. We consider several cases.

Case (1): before (h, t), we had Envy^+_X(i_1, i_2) = 0. Since at most one item is allocated to i_2 at iteration t, we must have Envy^+_X(i_1, i_2) ≤ 1 after (h, t).

Case (2): before (h, t), the capacity of agent i_1 was exhausted, so we have v_{i_1}(X^h_{i_1}) = k^h_{i_1} ≥ v̂_{i_1}(X^h_{i_2}). But before (h, 1) we had Envy^+_X(i_1, i_2) ≤ 1, and the envy cannot increase after adding k^h_{i_1} to v_{i_1}(X_1) and at most k^h_{i_1} to v̂_{i_1}(X_2).

Case (3): Agent i_1 does not desire any item remaining before (h, t). Then clearly the envy of i_1 cannot change during (h, t).
The remaining case is that, before (h, t), we had Envy^+_X(i_1, i_2) = 1, the capacity of i_1 was not exhausted, and i_1 desires at least one remaining item. Then, i_1 precedes i_2 in the topological order σ in iteration t, so the priority matching on G^h_t prefers matching i_1 over matching i_2 while leaving i_1 unmatched. Therefore, the envy of i_1 towards i_2 does not increase during (h, t).
Pareto Efficiency
We show that if capacities are binary (that is, k^h_i ∈ {0, 1} for all i, h), then Algorithm 5 returns a Pareto-efficient allocation, but this is not the case under arbitrary (non-binary) capacities.
Observation 7. In settings with partition constraints with heterogeneous binary capacities and heterogeneous binary valuations, Algorithm 5 returns a Pareto efficient allocation.
Proof. Under binary capacities, the algorithm runs a single priority matching in each category. As this matching is of maximum cardinality, it maximizes the social welfare within this category. From additivity, the allocation that maximizes SW within every category maximizes SW over all categories. Any welfare-maximizing allocation is Pareto efficient.
The following example shows that when capacities may be larger than 1, even when there is a single category and two agents with the same capacity, the allocation returned by Algorithm 5 may not be Pareto efficient.
Example 7. Consider the setting and allocation depicted in the following table.
Capacities      Alice    Bob
k_A = 2         (1,1)    (0,1)
k_B = 2         (1,0)    (1,0)
There are two agents sharing an identical uniform matroid with capacity 2, and four items: (1,0), (1,0), (1,1), (0, 1) (recall that (x, y) denotes an item that gives value x to Alice and value y to Bob). We claim that the allocation depicted in the table can be the outcome of Algorithm 5, and is not Pareto efficient. Indeed, in the first iteration the priority matching may assign (1,1) to Alice and (0, 1) to Bob. Then, Bob does not want any remaining item, so in the second iteration Alice gets the item (1,0). Finally, in the leftover phase Bob gets the item (1,0). In the obtained allocation, Alice has value 2 and Bob has value 1. This allocation is Pareto-dominated by the allocation giving items (1,0), (1,0) to Alice and (1, 1), (0, 1) to Bob, where Alice is indifferent and Bob is strictly better off.
It remains open whether the setting with binary valuations and general heterogeneous capacities always admits an allocation that is both F-EF1 and Pareto-efficient.
Partition Matroids with Two Agents
In this section we present an algorithm for two agents with heterogeneous capacities.
Theorem 8. In every setting with two agents and partition matroid constraints, an F-EF1 allocation exists and can be computed efficiently by Algorithm RR² (Algorithm 6).
To present the algorithm we introduce some notation.
• Given an allocation X, the surplus of agent i in category h is s^h_i(X) := v̂_i(X^h_i) − v̂_i(X^h_j). That is, the difference between i's value for her own bundle and her value for j's bundle.

• Given agents 1, 2, an agent ℓ ∈ {1, 2}, valuation functions v, v′ and a category h, R(v, v′, ℓ)^h is the allocation obtained by Capped Round Robin (Algorithm 2 in Section 4) for category h, under valuations v_1 = v, v_2 = v′, and where agent ℓ plays first. When clear from the context, we omit the superscript h from R(v, v′, ℓ)^h.
We are now ready to present Algorithm "Round Robin Squared" (RR²). In RR², there are two layers of round robin (RR): one layer for choosing the next category, and one layer for choosing items within a category. For every agent i, the categories are ordered in descending order of the surplus s^h_i(R(v_1, v_2, i)), that is, the surplus that agent i can gain over the other agent by playing first in category h. Denote this order π_i.
In the first iteration, agent 1 chooses the first category in π 1 . Within this category, the items are allocated according to Capped Round Robin (CRR) (Algorithm 2), with agent 1 choosing first. In the second iteration, agent 2 chooses the first category in π 2 that has not been chosen yet. Within this category, the items are allocated according to CRR, with agent 2 choosing first. The algorithm proceeds in this way, where in every iteration, the agent who chooses the next category flips; that agent chooses the highest category in her surplus-order that has not been chosen yet, and within that category, agents are allocated according to CRR with that agent choosing first. This proceeds until all categories are allocated.
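The following Python sketch of RR² reuses the crr helper from Section 4; vhat computes v̂_i within a single category (all names are ours):

def rr_squared(categories, values, caps, first=0):
    # categories: dict h -> list of items; caps[(i, h)]: capacity k^h_i;
    # the two agents are 0 and 1, and `first` chooses the first category.
    def crr_on(h, leader):
        return crr(categories[h], values,
                   {i: caps[(i, h)] for i in (0, 1)}, [leader, 1 - leader])

    def vhat(i, h, bundle):            # best feasible subset within category h
        vals = sorted((values[i][g] for g in bundle), reverse=True)
        return sum(vals[:caps[(i, h)]])

    def surplus(i, h):                 # s^h_i when agent i plays first
        x = crr_on(h, i)
        return vhat(i, h, x[i]) - vhat(i, h, x[1 - i])

    alloc, unallocated, a = {0: set(), 1: set()}, set(categories), first
    while unallocated:
        h = max(unallocated, key=lambda h: surplus(a, h))  # a's best category
        unallocated.remove(h)
        x = crr_on(h, a)
        alloc[0] |= x[0]
        alloc[1] |= x[1]
        a = 1 - a                      # the other agent chooses next
    return alloc

Picking, in each round, the remaining category with the highest surplus for the choosing agent is equivalent to precomputing the orders π_i as in the pseudocode.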
The key lemma in our proof asserts that the surplus of an agent i when playing first within a category h is at least as large as minus the surplus of the same agent when playing second in the same category. Below, we denote by −i the agent who is not i.

ALGORITHM 6: RR²
Input: A set of items M, categories C^1, ..., C^ℓ, capacities k^h_i for every i = 1, 2, h ∈ [ℓ]; a ∈ {1, 2}, the first agent to choose.
1: Initialize for all i ∈ {1, 2}: X_i ← ∅; π_i ← the categories listed in descending order of s^h_i(R(v_1, v_2, i)).
2: while there are unallocated categories do
3:    h ← the first category in π_a not yet allocated.
4:    Run CRR on category h. Let X^h ← R(v_1, v_2, a)^h.
5:    For all i ∈ {1, 2}, let X_i ← X_i ∪ X^h_i.
6:    Switch a to be the other agent.
7: end while

Lemma 3. For every category h and every i ∈ {1, 2}:

s^h_i(R(v_1, v_2, i)^h) ≥ −s^h_i(R(v_1, v_2, −i)^h).
We first show how Lemma 3 implies Theorem 8. Then we prove the lemma itself.
Proof of Theorem 8. Without loss of generality, suppose agent 1 is the first to choose a category. By reordering, let C^1, ..., C^ℓ be the categories in the order they are chosen. If ℓ is odd, we add a dummy empty category to make it even. We show first that agent 1 does not F-envy agent 2. We have

v̂_1(X_1) − v̂_1(X_2)
= Σ_{h=1,...,ℓ} v̂_1(X^h_1) − Σ_{h=1,...,ℓ} v̂_1(X^h_2)    (by additivity)
= Σ_{h=1,...,ℓ} (v̂_1(X^h_1) − v̂_1(X^h_2))
= Σ_{h odd} s^h_1(R(v_1, v_2, 1)) + Σ_{h even} s^h_1(R(v_1, v_2, 2))    (1)
≥ Σ_{h odd} s^h_1(R(v_1, v_2, 1)) − Σ_{h even} s^h_1(R(v_1, v_2, 1))    (by Lemma 3)
= Σ_{t=1,...,ℓ/2} (s^{2t−1}_1(R(v_1, v_2, 1)) − s^{2t}_1(R(v_1, v_2, 1))).    (2)

Equation (1) follows from the definition of surplus, the fact that agent 1 chooses the odd categories, and the fact that the agent who chooses a category is the one to choose first within this category. Since agent 1 chooses the odd categories, and does so based on highest surplus, it follows that for every t, s^{2t−1}_1(R(v_1, v_2, 1)) ≥ s^{2t}_1(R(v_1, v_2, 1)), as category 2t was available when agent 1 chose category 2t − 1. Therefore, every summand in the sum in (2) is non-negative. Thus, the whole sum is non-negative, implying that v̂_1(X_1) ≥ v̂_1(X_2), as desired.
We next show that agent 2 does not F-envy agent 1 beyond F-EF1. As a thought experiment, consider the same setting with the first chosen category removed. Following the same reasoning as above, in this setting agent 2 does not F-envy agent 1. But within the first category, agent 2 can only F-envy agent 1 up to 1 item. That is, there exists one item in the first category such that when it is removed, it eliminates the feasible envy of the second agent within that category, and thus eliminates her feasible envy altogether. We conclude that the obtained allocation is F-EF1. Now all that is left is to prove Lemma 3. The proof is based on Lemmas 4 through 7, which are stated below. All four lemmas consider a setting with two agents with identical additive valuations v 1 = v 2 = v playing CRR (Algorithm 2) on a single category.
Lemma 4. If one agent plays according to v, the best strategy for the other agent is to play according to v as well. That is, for every additive valuation v′ and every ℓ ∈ {1, 2}:

(a) v̂(R(v, v, ℓ)_1) ≥ v̂(R(v′, v, ℓ)_1);
(b) v̂(R(v, v, ℓ)_2) ≥ v̂(R(v, v′, ℓ)_2).
Proof. The two statements are obviously analogous; below we prove claim (b).
Denote by "truthful play" the play of CRR in which agent 2 plays according to v and gets the bundle R(v, v, ℓ) 2 ; denote by "untruthful play" the play of CRR in which agent 2 plays according to v ′ and gets the bundle R(v, v ′ , ℓ) 2 . Order the items in each of these two bundles in descending order of v. Denote the resulting ordered vectors γ and γ ′ respectively, such that v(γ 1 ) ≥ v(γ 2 ) ≥ · · · and v(γ ′ 1 ) ≥ v(γ ′ 2 ) ≥ · · · . Note that |γ| = |γ ′ | (agent 2 gets the same number of items in both plays). We now prove that v(γ t ) ≥ v(γ ′ t ) for all t ≤ |γ|. For every index t ≤ |γ|, denote by z t the number of items held by agent 1 in round t of agent 2, that is:
z t := min (k h 1 , t − 1) if ℓ = 2 min (k h 1 , t) if ℓ = 1
Assume towards contradiction that there exists an index t ≤ |γ| s.t. v(γ ′ t ) > v(γ t ) and let us look at the smallest such t (corresponding to a highest valued item in γ ′ ).
In the truthful play, before agent 2 picks γ t , agents 1 and 2 together hold the z t + t − 1 highest-valued items; hence there are exactly z t + t − 1 items more valuable than γ t .
In the untruthful play, agent 1 still plays by v and thus still holds at least z t of the z t + t − 1 highest-valued items. While we do not know by which order agent 2 picks items, we do know that in the final allocation γ ′ , the first t items are at least as valuable as γ ′ t , which is by assumption more valuable than γ t . Hence, there are at least z t + t items more valuable than γ t ; a contradiction.
Since the sum of values of bundle 1 and bundle 2 is fixed, we get the following corollary:
Lemma 5. If one agent plays according to v, the worst case for this agent is that the other agent plays according to v too. That is, for every v′ and ℓ ∈ {1, 2}:

(a) v̂(R(v, v, ℓ)_2) ≤ v̂(R(v′, v, ℓ)_2);
(b) v̂(R(v, v, ℓ)_1) ≤ v̂(R(v, v′, ℓ)_1).
Applying the proof of Lemma 4, but with respect to the case where the capacities of both agents are set to be the minimum of k^h_1 and k^h_2, gives the following lemma as a corollary:
Lemma 6. If one agent plays according to v, she (weakly) prefers the bundle the other agent gets when playing according to v over the bundle the other agent gets when playing according to v′ ≠ v. That is, for every ℓ ∈ {1, 2}:

(a) v̂_1(R(v, v, ℓ)_2) ≥ v̂_1(R(v, v′, ℓ)_2);
(b) v̂_2(R(v, v, ℓ)_1) ≥ v̂_2(R(v′, v, ℓ)_1).
We also use the following lemma.
Lemma 7. The value of each agent for her own bundle when she plays first is at least her value for the other agent's bundle when the other agent plays first, and vice versa:
(a) $\hat{v}_1(R(v, v, 1)_1) \ge \hat{v}_1(R(v, v, 2)_2)$
(b) $\hat{v}_1(R(v, v, 2)_1) \ge \hat{v}_1(R(v, v, 1)_2)$
Proof. When both agents play using the same valuation, the only thing that differentiates agent 1's bundle when 1 chooses first (respectively, second) from agent 2's bundle when 2 chooses first (resp., second) is their capacities.
If $k^h_1 \le k^h_2$, then $R(v, v, 1)_1 \subseteq R(v, v, 2)_2$, and moreover, $R(v, v, 1)_1 = \mathrm{Best}_1(R(v, v, 2)_2)$. Therefore, (a) holds with equality. Otherwise, $R(v, v, 2)_2 = \mathrm{Best}_1(R(v, v, 2)_2) \subseteq R(v, v, 1)_1$. Therefore, $\hat{v}_1(R(v, v, 1)_1) \ge \hat{v}_1(R(v, v, 2)_2)$, establishing (a). Similar considerations apply to (b).
With these lemmas in hand, we are ready to prove Lemma 3.
Proof of Lemma 3. We provide the proof for i = 1; the other case is analogous. The proof follows from the following four inequalities:
1. $\hat{v}_1(R(v_1, v_2, 1)_1) \ge \hat{v}_1(R(v_1, v_1, 1)_1)$
2. $\hat{v}_1(R(v_1, v_2, 1)_2) \le \hat{v}_1(R(v_1, v_1, 1)_2)$
3. $\hat{v}_1(R(v_1, v_1, 1)_1) \ge \hat{v}_1(R(v_1, v_2, 2)_2)$
4. $\hat{v}_1(R(v_1, v_1, 1)_2) \le \hat{v}_1(R(v_1, v_2, 2)_1)$
We now prove the four inequalities above:
1. Follows by applying Lemma 5(b) with $v := v_1$, $v' := v_2$, and $\ell = 1$.
2. Follows by applying Lemma 6(a) with $v := v_1$, $v' := v_2$, and $\ell = 1$.
3. By applying Lemma 7(a) with $v := v_1$, it follows that $\hat{v}_1(R(v_1, v_1, 1)_1) \ge \hat{v}_1(R(v_1, v_1, 2)_2)$. In addition, by applying Lemma 6(a) with $v := v_1$, $v' := v_2$, and $\ell = 2$, it follows that $\hat{v}_1(R(v_1, v_1, 2)_2) \ge \hat{v}_1(R(v_1, v_2, 2)_2)$.
4. By applying Lemma 7(b) with $v := v_1$, it follows that $\hat{v}_1(R(v_1, v_1, 1)_2) \le \hat{v}_1(R(v_1, v_1, 2)_1)$. In addition, by applying Lemma 5(b) with $v := v_1$, $v' := v_2$, and $\ell = 2$, it follows that $\hat{v}_1(R(v_1, v_1, 2)_1) \le \hat{v}_1(R(v_1, v_2, 2)_1)$.
Combining the four inequalities above gives:
$$s^h_1(R(v_1, v_2, 1)) = \hat{v}_1(R(v_1, v_2, 1)_1) - \hat{v}_1(R(v_1, v_2, 1)_2) \overset{(1),(2)}{\ge} \hat{v}_1(R(v_1, v_1, 1)_1) - \hat{v}_1(R(v_1, v_1, 1)_2) \overset{(3),(4)}{\ge} \hat{v}_1(R(v_1, v_2, 2)_2) - \hat{v}_1(R(v_1, v_2, 2)_1) = -s^h_1(R(v_1, v_2, 2)).$$
The assertion of the lemma follows.
The notion of surplus allows us to treat each category as a single item, whose value for agent i is $s^h_i(R(v_1, v_2, i))$.
The problem with extending this idea to three agents is that, for each category, there are 3! = 6 possible round-robin orders, so there are 6 different potential "surplus" quantities.
Base-Orderable Matroids with up to Three Agents
In this section we consider constraints that are represented by a wide class of matroids, termed base-orderable (BO) matroids. 2 Recall that the bases of a matroid are its inclusion-maximal independent sets. In the definitions below, we use the shorthands $S + x := S \cup \{x\}$ and $S - x := S \setminus \{x\}$, for any set S and item x.
Definition 15. Given a matroid $(M, \mathcal{I})$ and independent sets $I, J \in \mathcal{I}$, items $x \in I$ and $y \in J$ represent a feasible swap if both $I - x + y$ and $J - y + x$ are in $\mathcal{I}$.
Definition 16 (Brualdi & Scrimger, 1968). A matroid $\mathcal{M} = (M, \mathcal{I})$ is base-orderable (BO) if for every two bases $I, J \in \mathcal{I}$, there exists a feasible exchange bijection, defined as a bijection $\mu : I \leftrightarrow J$ such that for any $x \in I$, both $I - x + \mu(x) \in \mathcal{I}$ and $J - \mu(x) + x \in \mathcal{I}$.
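Since Definition 16 quantifies over bijections, it can be checked directly by brute force for small bases. The sketch below assumes an independence oracle is_indep over frozensets; the helper name is ours, and the search is exponential in the basis size, so it is meant only as an executable restatement of the definition.

from itertools import permutations

def has_feasible_exchange_bijection(I, J, is_indep):
    I, J = list(I), list(J)
    for perm in permutations(J):  # candidate bijection mu(I[t]) = perm[t]
        if all(is_indep(frozenset(set(I) - {x} | {y})) and
               is_indep(frozenset(set(J) - {y} | {x}))
               for x, y in zip(I, perm)):
            return True
    return False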
This class contains many interesting matroids, including partition matroids, laminar matroids (a natural generalization of partition matroids where the categories may be partitioned into sub-categories), 3 transversal matroids, 4 and more. Bonin and Savitsky (2016) conjecture that "almost all matroids are base-orderable".
When different agents have different matroids, even when these are all partition matroids, an F-EF1 allocation may not exist (see Example 2). Therefore, we restrict attention to settings with a common matroid M. Before presenting our algorithm, we present two tools that are useful for any matroid -BO or not: finding a social-welfare-maximizing allocation with matroid constraints (Section 8.1), and extending a matroid such that every feasible partition is a partition into bases (Section 8.2).
Finding a social-welfare-maximizing allocation
We initialize our algorithm with an allocation that maximizes the sum of agents' utilities. Such an allocation can be found in polynomial time for any common matroid constraints.
Theorem 9. For any constraints based on a common matroid, and any n agents with additive valuations, it is possible to find in polynomial time, a complete feasible allocation that maximizes the sum of utilities.
Proof. 5 The problem of SW maximization with submodular valuations is NP-hard in general, and admits constant-factor approximations (Vondrák, 2008; Calinescu, Chekuri, Pál, & Vondrák, 2011). However, in the special case of additive valuations with matroid constraints, it can be solved in polynomial time by reduction to the maximum-weight matroid intersection problem: given two matroids over the same base-set, $(Z, \mathcal{I}_1)$ and $(Z, \mathcal{I}_2)$, where each element of Z has a weight, find an element of $\mathcal{I}_1 \cap \mathcal{I}_2$ with a largest total weight.

We construct the base set $Z := N \times M$, where each pair $(i, j) \in Z$ corresponds to allocating item $j \in M$ to agent $i \in N$. We construct two weighted matroids over Z, namely $\mathcal{M}_1 = (Z, \mathcal{I}_1)$ and $\mathcal{M}_2 = (Z, \mathcal{I}_2)$. We first describe the independent sets in both matroids, then specify the weight function.

The first matroid, $\mathcal{M}_1$, represents the original matroid constraints: $S \subseteq Z$ is in $\mathcal{I}_1$ iff for every agent i, the set of items $S_i = \{j : (i, j) \in S\}$ is an independent set in the original matroid. We show that $\mathcal{M}_1$ is a matroid:
(1) $\emptyset \in \mathcal{I}_1$.
(2) Downward-closed: let $S \subseteq Z$ such that $S \in \mathcal{I}_1$. Then for every i, $S_i \in \mathcal{I}$. Consider a subset $S' \subseteq S$. For every i, $S'_i \subseteq S_i$. Since $\mathcal{M}$ is downward-closed, $S'_i \in \mathcal{I}$. By the definition of $\mathcal{I}_1$ we conclude that $S' \in \mathcal{I}_1$.
(3) Augmentation property: Let $S, S' \in \mathcal{I}_1$ such that $|S'| > |S|$. Then there must be some index i for which $|S'_i| > |S_i|$. Since $S'_i, S_i \in \mathcal{I}$, it follows from the augmentation property of $\mathcal{M}$ that there exists an item $a \in S'_i \setminus S_i$ such that $S_i \cup \{a\} \in \mathcal{I}$. Then $S \cup \{(i, a)\} \in \mathcal{I}_1$, and the augmentation property holds.
The second matroid, $\mathcal{M}_2$, is a partition matroid with m categories, where category j corresponds to the set $\{(1, j), \dots, (n, j)\}$, and every category has capacity 1. This essentially ensures that every item is given to at most one agent. One can easily verify that every subset of Z that is an independent set in both matroids represents a feasible allocation.
Next, define the weight function $w : Z \to \mathbb{N}$. For every $(i, j) \in Z$, let $w((i, j)) = V + v_i(j)$, where $V := m \cdot \max_i \max_j v_i(j)$. Using Edmonds' polynomial-time algorithm (Edmonds, 1970), find a maximum-weight subset $S^* \in \mathcal{I}_1 \cap \mathcal{I}_2$. The construction of w guarantees that $S^*$ maximizes the number of allocated items. Subject to this, $S^*$ maximizes the total value.
Since $F \neq \emptyset$ (by assumption), maximizing the number of allocated items ensures that all items are allocated, namely that the allocation is complete. Within complete allocations, maximizing the total value ensures that the returned allocation maximizes social welfare. This concludes the proof.
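The reduction in the proof can be set up in a few lines. The sketch below only constructs the matroid-intersection instance; it assumes a black-box max-weight matroid-intersection solver (e.g., an implementation of Edmonds' algorithm), which is not shown, and the +1 in V is our own guard against the degenerate all-zero case.

def build_intersection_instance(n, m, v, is_indep):
    # v[i][j]: value of item j to agent i; is_indep: oracle of the common matroid.
    Z = [(i, j) for i in range(n) for j in range(m)]
    V = m * max(v[i][j] for i in range(n) for j in range(m)) + 1
    w = {(i, j): V + v[i][j] for (i, j) in Z}

    def indep1(S):  # per agent, the assigned items are independent in the matroid
        return all(is_indep(frozenset(j for (a, j) in S if a == i))
                   for i in range(n))

    def indep2(S):  # partition matroid: each item assigned to at most one agent
        items = [j for (_, j) in S]
        return len(items) == len(set(items))

    return Z, indep1, indep2, w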
Ensuring a partition into bases
To simplify the algorithms, we pre-process the instance to ensure that, in all feasible allocations, every agent receives a basis of $\mathcal{M}$. Recall that the rank of a matroid $\mathcal{M}$ is the cardinality of a basis of $\mathcal{M}$ (all bases have the same cardinality). We denote $r := \mathrm{rank}(\mathcal{M})$. In the pre-processing step, we add to M dummy items, valued at 0 by all agents, so that after the addition, $|M| = n \cdot r$. This guarantees that, in every feasible allocation, every bundle contains exactly r items, so it is a basis. To ensure that the dummy items do not affect the set of feasible allocations, we use the free extension, 6 defined below.

Definition 17. Let $\mathcal{M} = (M, \mathcal{I})$ be a matroid with rank r. The free extension of $\mathcal{M}$ is a matroid $\mathcal{M}' = (M', \mathcal{I}')$ defined as follows (where $x_{new}$ is a new item): $M' := M + x_{new}$; $\mathcal{I}' := \mathcal{I} \cup \{I + x_{new} \mid I \in \mathcal{I}, |I| \le r - 1\}$.

That is: all bundles that were previously feasible remain feasible; in addition, all non-maximal feasible bundles remain feasible when the new item is added to them.

5. We are grateful to Chandra Chekuri for the proof idea.
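Definition 17 translates directly into an independence oracle. The following sketch wraps an oracle for M into one for its free extension; the function names are ours, and x_new must be an item outside M's ground set.

def free_extension_oracle(is_indep, r, x_new):
    # is_indep: oracle of M over frozensets; r = rank(M).
    def is_indep_ext(S):
        S = frozenset(S)
        if x_new not in S:
            return is_indep(S)          # I' restricted to M coincides with I
        T = S - {x_new}
        return len(T) <= r - 1 and is_indep(T)  # I + x_new with |I| <= r - 1
    return is_indep_ext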
The properties of the free extension are summarized below.

Observation 1. If the free extension of $\mathcal{M}$ is $\mathcal{M}'$, with the new item $x_{new}$, then:
• All bases of $\mathcal{M}$ are bases of $\mathcal{M}'$.
• $\mathrm{rank}(\mathcal{M}) = \mathrm{rank}(\mathcal{M}') = r$.
• Given a feasible partition of M (a partition into independent sets), where some sets in the partition are not maximal (contain less than r items), one can construct a feasible partition of $M'$ by adding $x_{new}$ into some non-maximal set.
• Given a feasible partition of $M'$, one can construct a feasible partition of M by removing $x_{new}$ from the set containing it.
• $\mathcal{M}'$ is base-orderable if and only if $\mathcal{M}$ is base-orderable.
The first four observations are trivial. We prove the fifth one in Appendix B.

By Assumption 1, our instance admits a feasible allocation. Since any feasible bundle is contained in a basis, the cardinality of every allocated bundle is at most r, so $|M| \le n \cdot r$. We construct a new instance by applying the free extension $n \cdot r - |M|$ times, getting a matroid $\mathcal{M}' = (M', \mathcal{I}')$ with $|M'| = n \cdot r$. We call the $n \cdot r - |M|$ new items dummy items, and let all agents value them at 0.

Observation 2. The new instance satisfies the following properties.
• All bases of $\mathcal{M}$ are bases of $\mathcal{M}'$.
• In every feasible allocation $(Y_1, \dots, Y_n)$ in the new instance, $|Y_i| = r$ for all $i \in [n]$, so every $Y_i$ is a basis of $\mathcal{M}'$.
• For every feasible allocation $(X_1, \dots, X_n)$ in the original instance, there is a feasible allocation $(Y_1, \dots, Y_n)$ in the new instance, where for all $i \in [n]$, $Y_i$ contains $X_i$ plus zero or more dummy items, so all agents' valuations to all bundles are identical.
• For every feasible allocation $(Y_1, \dots, Y_n)$ in the new instance, there is a feasible allocation $(X_1, \dots, X_n)$ in the original instance, where for all $i \in [n]$, $Y_i$ contains $X_i$ plus zero or more dummy items, so all agents' valuations to all bundles are identical.
• $\mathcal{M}'$ is base-orderable if and only if $\mathcal{M}$ is base-orderable.

6. We are grateful to Kevin Long for the proof idea at https://math.stackexchange.com/q/4300433.
By the above observation, one can assume, without loss of generality, that there are exactly n · r items, and consider only allocations in which each agent receives a basis of M. We can now use the Iterated Swaps scheme, presented in Algorithm 7.
ALGORITHM 7: Iterated Swaps
Input: Constraints based on a base-orderable matroid M; n agents with additive valuations; a set M of items with |M| = n · rank(M).
1: Initialize: X ← a complete feasible SWM allocation (by Theorem 9).
2: while X is not EF1 do
3:   Find i, j ∈ N such that agent i envies agent j by more than one item.
4:   Find a feasible-exchange bijection µ : X_i ↔ X_j.
5:   Find an item g_i ∈ X_i such that v_i(µ(g_i)) > v_i(g_i).
6:   Swap items g_i and µ(g_i).
7: end while

The algorithm starts by finding a feasible social-welfare-maximizing (SWM) allocation X, using Theorem 9. As long as there exist agents i, j that violate the EF1 condition, the algorithm swaps a pair of items between i and j, such that the utility of i (the envious agent) increases, and the utility of j decreases by the same amount, so the allocation remains SWM. The process terminates with an EF1 and SWM allocation.
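For intuition, the following Python sketch instantiates the Iterated Swaps scheme for a common partition matroid, where a feasible-exchange bijection is easy to build: both bundles are bases, so they hold the same number of items in every category, and pairing them category by category gives a valid µ. The helper names are ours; termination and the existence of the improving item g_i in Step 5 rely on the analysis below (e.g., Lemma 8 for three agents with binary valuations), so this is a sketch rather than a general-purpose implementation.

def iterated_swaps_partition(X, v, cat, n):
    # X: list of n sets (a complete feasible SWM allocation of bases);
    # v[i][g]: agent i's value for item g; cat[g]: category of item g.
    def envy(i, j):
        return sum(v[i][g] for g in X[j]) - sum(v[i][g] for g in X[i])

    def exchange_bijection(i, j):  # pair items within each category
        mu = {}
        for h in {cat[g] for g in X[i]}:
            A = sorted(g for g in X[i] if cat[g] == h)
            B = sorted(g for g in X[j] if cat[g] == h)
            mu.update(zip(A, B))
        return mu

    while True:
        pair = next(((i, j) for i in range(n) for j in range(n)
                     if i != j and X[j] and envy(i, j) > max(v[i][g] for g in X[j])),
                    None)  # agent i envies agent j by more than one item
        if pair is None:
            return X  # EF1 reached
        i, j = pair
        mu = exchange_bijection(i, j)
        gi = next(g for g in X[i] if v[i][mu[g]] > v[i][g])
        X[i].remove(gi); X[j].remove(mu[gi])
        X[i].add(mu[gi]); X[j].add(gi)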
In the next subsections we will present two settings in which Iterated Swaps is indeed guaranteed to terminate in polynomial time with an EF1 allocation.
Three agents with binary valuations
Below, we show that Iterated Swaps finds an EF1 allocation for n = 3 agents with heterogeneous binary valuations.
Theorem 10. For identical base-orderable matroid constraints, for three agents with heterogeneous binary valuations, the Iterated Swaps algorithm (Algorithm 7) finds an EF1 allocation in polynomial time. Moreover, the resulting allocation is also social welfare maximizing, hence Pareto-efficient.
To prove Theorem 10 we use the following lemma.
Lemma 8. Consider a setting with binary valuations. Let X be a SWM allocation in which agent i envies agent j. Let $\mu : X_i \leftrightarrow X_j$ be a feasible-exchange bijection. Then there is an item $g_i \in X_i$ for which $v_i(g_i) = v_j(g_i) = 0$ and $v_i(g_j) = v_j(g_j) = 1$, where $g_j = \mu(g_i)$.

Proof. By additivity, $v_i(X_j) = \sum_{g \in X_i} v_i(\mu(g))$ and $v_i(X_i) = \sum_{g \in X_i} v_i(g)$. Since i envies j, $v_i(X_j) > v_i(X_i)$, so at least one term in the first sum must be larger than the corresponding term in the second sum. So there is some $g_i \in X_i$ for which $v_i(\mu(g_i)) > v_i(g_i)$; since the valuations are binary, $v_i(g_i) = 0$ and $v_i(g_j) = 1$, where $g_j = \mu(g_i)$. Since $\mu$ is a feasible-exchange bijection, swapping $g_i$ and $g_j$ yields a feasible allocation. Since X maximizes the sum of utilities among all feasible allocations, the swap cannot increase the sum of utilities; therefore, we must have $v_j(g_j) = 1$ and $v_j(g_i) = 0$ too.

We call the exchange described by Lemma 8 a smart swap.

Lemma 9. Let X be a SWM allocation where $\mathrm{Envy}^+_X(i, j) > 1$, and let $X'$ be the allocation obtained from X by a smart swap between i and j. Then:
1. $X'$ is SWM.
2. $\mathrm{Envy}^+_{X'}(i, j) = \mathrm{Envy}^+_X(i, j) - 2$.
3. $v_i(X'_j) \ge v_i(X'_i)$.
4. $\mathrm{Envy}^+_{X'}(j, i) = 0$.

Proof.
1. The smart swap decreases the utility of j by $v_j(g_j) - v_j(g_i) = 1$, and increases the utility of i by $v_i(g_j) - v_i(g_i) = 1$, and does not change the utilities of other agents. So the total sum of utilities does not change.
2. After the smart swap, $v_i(X'_i) = v_i(X_i) + 1$ and $v_i(X'_j) = v_i(X_j) - 1$, so the positive envy of i towards j drops by exactly 2.
3. Before the swap, $v_i(X_j) - v_i(X_i) = \mathrm{Envy}^+_X(i, j) \ge 2$. The swap decreased the difference in utilities by 2. Therefore, after the swap, we still have $v_i(X'_j) \ge v_i(X'_i)$.
4. If we had $\mathrm{Envy}^+_{X'}(j, i) > 0$, then giving $X'_i$ to j and $X'_j$ to i would increase the utility of j and not decrease the utility of i (by 3), contradicting SWM.

We are now ready to prove Theorem 10. In the proof, when we mention a change in $\mathrm{Envy}^+_X(i, j)$ we refer to the change in the positive envy of agent i towards agent j between allocations X and $X'$; i.e., to $\mathrm{Envy}^+_{X'}(i, j) - \mathrm{Envy}^+_X(i, j)$.

Proof of Theorem 10. Let X be a complete feasible SWM allocation. Let $\Phi(\cdot)$ be the following potential function: $\Phi(X) := \sum_i \sum_{j \neq i} \mathrm{Envy}^+_X(i, j)$.

If X is EF1, we are done. Otherwise, by Lemmas 8 and 9, there must exist a smart swap between i, j such that the social welfare remains unchanged, $\mathrm{Envy}^+_X(i, j)$ drops by 2, and $\mathrm{Envy}^+_X(j, i)$ remains 0. Thus, $\mathrm{Envy}^+_X(i, j) + \mathrm{Envy}^+_X(j, i)$ drops by 2. Let us next consider the positive envy that might be added due to terms of $\Phi$ that include the third agent; denote it by k.
1. $\mathrm{Envy}^+_X(i, k)$ cannot increase, as the smart swap increases i's utility, while $v_i(X_k)$ does not change.
2. $\mathrm{Envy}^+_X(k, i)$ increases by at most 1: the largest possible increase in $v_k(X'_i)$ is 1, while $v_k(X_k)$ does not change.
3. $\mathrm{Envy}^+_X(k, j)$ increases by at most 1: the largest possible increase in $v_k(X'_j)$ is 1, while $v_k(X_k)$ does not change.
4. $\mathrm{Envy}^+_X(j, k)$ increases by at most 1, as this is the exact decrease in $v_j(X'_j)$, while $v_j(X_k)$ does not change.
We next claim that among the terms that may increase by 1 (#2, #3, #4), no two of them can increase simultaneously:
• $\mathrm{Envy}^+_X(k, j)$ and $\mathrm{Envy}^+_X(j, k)$ cannot increase simultaneously, as this would create an envy-cycle, contradicting SWM.
• $\mathrm{Envy}^+_X(k, i)$ and $\mathrm{Envy}^+_X(j, k)$ cannot increase simultaneously, as this together with the fact that $v_i(X'_j) \ge v_i(X'_i)$ contradicts SWM: shifting bundles along the cycle $i \to j \to k \to i$ strictly increases the sum of utilities.
• $\mathrm{Envy}^+_X(k, i)$ and $\mathrm{Envy}^+_X(k, j)$ cannot increase simultaneously, as the sum of k's values for i's and j's bundles is fixed, that is, $v_k(X_i) + v_k(X_j) = v_k(X'_i) + v_k(X'_j)$.
We conclude that in every iteration the potential function drops by at least 1. Indeed, $\mathrm{Envy}^+_X(i, j)$ drops by 2, $\mathrm{Envy}^+_X(j, i)$ remains 0, $\mathrm{Envy}^+_X(i, k)$ does not change, and among $\mathrm{Envy}^+_X(k, i)$, $\mathrm{Envy}^+_X(k, j)$, $\mathrm{Envy}^+_X(j, k)$ only one can increase, by at most 1. As the valuations are binary, the initial value of the potential function $\Phi(\cdot)$ is bounded by $|M| = m$. At every step it drops by at least one, so the algorithm stops after at most m iterations.
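The potential function used above is straightforward to compute, which makes the m-iteration bound easy to verify on random binary instances. A minimal sketch, assuming Envy^+ denotes positive envy under additive valuations:

def potential(X, v, n):
    def envy_plus(i, j):
        return max(0, sum(v[i][g] for g in X[j]) - sum(v[i][g] for g in X[i]))
    return sum(envy_plus(i, j) for i in range(n) for j in range(n) if i != j)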
Two agents with additive valuations
The case of three agents with heterogeneous additive valuations remains open. Below, we show that for two agents with heterogeneous additive valuations, an EF1 allocation always exists. Suppose the agents' valuations are $v_1$ and $v_2$. Using the cut-and-choose algorithm, we can reduce the problem to the case of identical valuations (a short sketch in code follows the list below):
• Find an allocation that is EF1 for two agents with identical valuation $v_1$.
• Let agent 2 pick a favorite bundle (the allocation is envy-free for 2).
• Give the other bundle to agent 1 (the allocation is EF1 for 1).
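A minimal sketch of this reduction, assuming a hypothetical subroutine ef1_identical(v1) that returns a feasible 2-partition that is EF1 under the single valuation v1 (e.g., obtained via the Iterated Swaps scheme):

def cut_and_choose(v1, v2, ef1_identical):
    A, B = ef1_identical(v1)                   # EF1 for two copies of v1
    val2 = lambda S: sum(v2[g] for g in S)
    bundle2 = A if val2(A) >= val2(B) else B   # agent 2 picks her favorite bundle
    bundle1 = B if bundle2 is A else A         # agent 1 is EF1 by construction
    return bundle1, bundle2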
It remains to show how to find an EF1 allocation for agents with identical valuations. Biswas and Barman (2018) [Section 7 in the full version] presented an algorithm that finds an EF1 allocation for n agents with identical valuations, with constraints based on a laminar matroid, a special case of a BO matroid. In fact, their algorithm can be both simplified and extended to BO matroids using our pre-processing step and the Iterated Swaps scheme. The main idea is that, in Step 5, we have to find a pair of items with a sufficiently large value-difference. The following lemma is proved by Biswas and Barman (2018) [p. 18].
Lemma 10. Consider a setting with identical additive valuations v. Let X be an allocation in which agent i envies agent j by more than one item. Let $\mu : X_i \leftrightarrow X_j$ be a feasible-exchange bijection. Then there is an item $g_i \in X_i$ for which
$$v(g_j) - v(g_i) \ge v(X_j)/m^2,$$
where $g_j = \mu(g_i)$.
Choosing such an item $g_i$ in Step 5 ensures that, with each swap, the value of agent j (the envied agent) drops by a multiplicative factor of at least $(1 - 1/m^2)$. Additionally, in Step 3, the envious agent i is chosen such that $v(X_i)$ is smallest, and the envied agent j is chosen such that $v(X_j)$ is largest among the agents that agent i envies by more than one item. This guarantees that each agent can be envied a polynomial number of times, and the algorithm terminates after polynomially many iterations.
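In code, Step 5 with the guarantee of Lemma 10 amounts to a single scan. The sketch below assumes identical valuations v (a dict) and a feasible-exchange bijection mu between X_i and X_j; the function name is ours:

def pick_swap(Xi, Xj, v, mu, m):
    # Lemma 10 guarantees some g with v(mu[g]) - v(g) >= v(X_j)/m^2.
    threshold = sum(v[g] for g in Xj) / m**2
    return next(g for g in Xi if v[mu[g]] - v[g] >= threshold)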
Non base-orderable matroids
The Iterated Swaps technique may fail for matroids that are not base-orderable, even for two agents with identical binary valuations. The "weak link" is Lemma 8, as the following example shows.
Example 8. Consider the graphical matroid on $K_4$, the clique on four vertices. Denote the vertices of $K_4$ by 1, 2, 3, 4 and its edges by $E = \{12, 13, 14, 23, 24, 34\}$. The $K_4$ graphical matroid is a matroid over the ground-set E, whose independent sets are the forests in $K_4$. Consider the two bases $\{12, 23, 34\}$ (thick) and $\{24, 41, 13\}$ (thin). The only feasible swap for 12 is with 14, and similarly the only feasible swap for 34 is with 14, so there is no feasible-exchange bijection.

Suppose now that the agents have identical valuations, as in the following table:

Element (edge of K_4)   Value
12                      0
23                      1
34                      0
13                      1
14                      0
24                      1

Note that, with identical valuations, all allocations are SWM. Consider the allocation in which Alice holds the first three elements and Bob holds the last three elements. Then Alice envies Bob, but there is no swap that increases Alice's utility while keeping the bundles of both agents feasible. Moreover, suppose there are two copies of $K_4$, and in both copies the allocation is the same as above. Then, Alice envies Bob by two items, but there is no feasible single-item swap that can reduce her envy. Thus, although an EF1 allocation exists, it might not be attainable by single-item swaps from an arbitrary SWM allocation.
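Example 8 can be verified mechanically. The sketch below brute-forces all bijections between the two bases, assuming independence in the K_4 graphic matroid means "the edge set is acyclic" (tested here with a small union-find); it prints False, confirming that no feasible-exchange bijection exists.

from itertools import permutations

def acyclic(edges):
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            x = parent[x]
        return x
    for u, w in edges:
        ru, rw = find(u), find(w)
        if ru == rw:
            return False  # adding this edge closes a cycle
        parent[ru] = rw
    return True

I = [(1, 2), (2, 3), (3, 4)]   # the thick basis {12, 23, 34}
J = [(2, 4), (1, 4), (1, 3)]   # the thin basis {24, 41, 13}
found = any(
    all(acyclic([e for e in I if e != x] + [y]) and
        acyclic([e for e in J if e != y] + [x])
        for x, y in zip(I, perm))
    for perm in permutations(J))
print(found)  # False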
Future Directions
Our analysis and results suggest the following open problems.

• Consider a setting with n agents with additive heterogeneous valuations, partition matroids with heterogeneous capacities, and three or more categories. Does an EF1 allocation exist? (Section 4 handles at most two categories.)
• Consider a setting with n agents with additive identical valuations. Is there a class of matroids, besides partition matroids with the same categories (Section 5), for which an EF1 allocation exists even when the constraints are heterogeneous?
• Consider a setting with n agents with binary valuations and partition matroids with heterogeneous capacities, where the capacities may be two or more. Does an EF1 and Pareto-efficient allocation exist? (Section 6.1 handles capacities in {0, 1}.)
• Consider a setting with three or more agents with heterogeneous additive valuations and partition matroids with heterogeneous capacities. Does an EF1 allocation exist? (Section 7 handles two agents.)
• Consider a setting with four or more agents with binary valuations and BO matroid constraints, or even three agents with binary valuations and general matroid constraints, or three agents with additive valuations and BO matroid constraints. Does an EF1 allocation exist? (Section 8 requires three agents, binary valuations, and BO matroids.)
• Another interesting direction is extending our results to allocation of chores (items with negative utilities) in addition to goods.
APPENDIX

Appendix A. Appendix for Section 2
Recall the definition of our main fairness notion from Section 2.2:
An allocation X is F-EF1 iff for every $i, j \in N$, there exists a subset $Y \subseteq X_j$ with $|Y| \le 1$, such that
$$v_i(X_i) \ge \hat{v}_i(X_j \setminus Y) = v_i(\mathrm{Best}_i(X_j \setminus Y)).$$
This definition compares agent i's bundle to X j after first removing the most valuable item g from X j , and then considering the most valuable feasible subset within X j \ {g}.
Alternatively, we could first consider the most valuable subset of X j , and then remove the most valuable item from this subset, yielding the following definition:
Definition 18 (weakly F-EF1). An allocation X is weakly F-EF1 iff for every $i, j \in N$, there exists a subset $Y \subseteq \mathrm{Best}_i(X_j)$ with $|Y| \le 1$, such that $v_i(X_i) \ge v_i(\mathrm{Best}_i(X_j) \setminus Y)$.
It is easy to see that every F-EF1 allocation is weakly F-EF1: if X is F-EF1, there exists $Y \subseteq X_j$ with $|Y| \le 1$ such that $v_i(X_i) \ge \hat{v}_i(X_j \setminus Y)$. Thus,
$$v_i(X_i) \ge \hat{v}_i(X_j \setminus Y) = v_i(\mathrm{Best}_i(X_j \setminus Y)) = v_i\Big(\operatorname{argmax}_{T \subseteq X_j \setminus Y,\ T \in \mathcal{I}_i} v_i(T)\Big) \ge v_i(\mathrm{Best}_i(X_j) \setminus Y),$$
where the last inequality follows from the fact that $\mathrm{Best}_i(X_j) \setminus Y \subseteq X_j \setminus Y$.
However, the converse is not true; i.e., weakly F-EF1 does not imply F-EF1. Consider a uniform matroid and two agents, Alice and Bob, with two items worth 1 to both agents, and let $k_A = 1$, $k_B = 2$ (see Table 3). The allocation X that gives both items to Bob is weakly F-EF1 but not F-EF1. Indeed, for every good $g \in X_B$,
$$v_A(\mathrm{Best}_A(X_B) \setminus \{g\}) = \hat{v}_A(\emptyset) \le \hat{v}_A(X_A).$$
Therefore, X is weakly F-EF1. On the other hand, for every $g \in X_B$,
$$v_A(\mathrm{Best}_A(X_B \setminus \{g\})) = 1 > \hat{v}_A(X_A).$$
Therefore, X is not F-EF1.
         Capacity    Item values
Alice    k_A = 1     1, 1
Bob      k_B = 2     1, 1

Table 3: An allocation that is weakly F-EF1 but not F-EF1.
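The example of Table 3 can be checked directly. The sketch below computes hat_v by brute force for Alice's uniform matroid of capacity k_A = 1 (the item names are ours) and prints True False, i.e., weakly F-EF1 holds while F-EF1 fails.

from itertools import combinations

vA = {"g1": 1, "g2": 1}
kA, XA, XB = 1, set(), {"g1", "g2"}

def hat_v(S):  # value of Alice's best feasible (size <= kA) subset of S
    return max(sum(vA[g] for g in T)
               for r in range(min(kA, len(S)) + 1)
               for T in combinations(S, r))

best = set(max(combinations(XB, kA), key=lambda T: sum(vA[g] for g in T)))  # Best_A(X_B)
weakly = any(sum(vA[g] for g in best - {y}) <= hat_v(XA) for y in XB)
f_ef1 = any(hat_v(XB - {y}) <= hat_v(XA) for y in XB)
print(weakly, f_ef1)  # True False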
All algorithms in this paper return F-EF1 allocations, and thus weakly F-EF1 allocations as well. Conversely, all impossibility results in this paper (see Section 3) continue to hold with respect to weakly F-EF1: this is obvious in Section 3.1, Section 3.3 and Section 3.4, since they use identical constraints, for which all three fairness notions coincide. In Section 3.2 (which uses different partition-matroid constraints), it is easy to verify that the unique feasible allocation is not weakly F-EF1.
Proposition 1. Suppose there are two agents with identical constraints represented by some $\mathcal{I} \subseteq 2^M$. If there are complementary items for $\mathcal{I}$, then a complete feasible EF1 allocation might not exist even when the agents have identical binary valuations.
ALGORITHM 2: Capped Round Robin
Input: Category C_h with capacities k^h_i for every i ∈ [n], and an order σ over [n].
1: Initialize: ...
...
6:   X^h_i ← X^h_i ∪ {g}. // Agent i gets her best unallocated item in C_h
7:   L ← L \ {g}.
8:   if |X^h_i| == k^h_i then
9:     P ← P ∪ {i} // Agent i cannot get any more items from C_h
10:  end if
11: end if
12: ...
Table 1: A summary of our results in the context of previous results. All results are for additive valuations. Gray lines represent previous results. PE refers to outcomes that are also Pareto-efficient. BO refers to base-orderable matroids (Section 8). B&B (2018) is Biswas and Barman (2018).
Table 2: An example of agents with identical partition matroid constraints and binary valuations where MNW does not imply EF1.
ALGORITHM 4: Per-Category Capped Round Robin
Input: M, C, k^h_i for every i ∈ [n], h ∈ [ℓ]
Output: an allocation X which is F-EF1
1: Initialize: σ ← an arbitrary order over the agents; ∀i ∈ [n]: X_i ← ∅.
2: for all C_h ∈ C do
3: ...
ALGORITHM 5: Iterated Priority Matching
1: Initialize: ∀i ∈ [n]: X_i ← ∅.
2: for each category h do
3:   ∀i ∈ [n]: X^h_i ← ∅.
4: ...
2. The class of base-orderable matroids was first introduced by (Brualdi & Scrimger, 1968; Brualdi, 1969), but the term base-orderable appeared only later (Brualdi, 1971).
3. Formally, a laminar matroid is defined using ℓ possibly-overlapping sets $C_1, C_2, \dots, C_\ell \subseteq M$ such that $\bigcup_{h=1}^{\ell} C_h = M$. Additionally, for every pair $h, h' \in \{1, \dots, \ell\}$, only one of the following holds: (i) $C_h \subset C_{h'}$, (ii) $C_{h'} \subset C_h$, or (iii) $C_h \cap C_{h'} = \emptyset$. Each $C_h$ has a capacity $k_h$. A set $I \subseteq M$ is independent if and only if $|I \cap C_h| \le k_h$ for each $h \in \{1, \dots, \ell\}$.
4. Let $G = (L, R, E)$ be a bipartite graph, and let $\mathcal{I} = \{A \subseteq L :$ there exists an injective function $f_A : A \to R$ with $(a, f_A(a)) \in E$ for all $a \in A\}$. Then $(L, \mathcal{I})$ is a matroid, and is called a transversal matroid.
Acknowledgments
Erel Segal-Halevi received funding from the Israel Science Foundation (grant number 712/20). Siddharth Barman, Arpita Biswas, and anonymous reviewers of AAAI 2021 for their invaluable comments. Tony Huynh, Yuval Filmus, Kevin LongChandra ChekuriWe are grateful to Jonathan Turneragreement No. 866132), and the Israel Science Foundation (grant number 317/17). Erel Segal-Halevi received funding from the Israel Science Foundation (grant number 712/20). We are grateful to Jonathan Turner, Jan Vondrak, Chandra Chekuri, Tony Huynh, Yuval Filmus, Kevin Long, Siddharth Barman, Arpita Biswas, and anonymous reviewers of AAAI 2021 for their invaluable comments.
Lemma 11. Let $\mathcal{M} = (M, \mathcal{I})$ be a matroid, and let $\mathcal{M}' = (M', \mathcal{I}')$ be its free extension with new item $x_{new}$. Then $\mathcal{M}'$ is base-orderable if and only if $\mathcal{M}$ is base-orderable.

Proof. If $\mathcal{M}'$ is base-orderable, then every two bases of $\mathcal{M}'$ have a feasible-exchange bijection. Since all bases of $\mathcal{M}$ are bases of $\mathcal{M}'$, the same holds for $\mathcal{M}$ too.

Conversely, suppose $\mathcal{M}$ is base-orderable, and let $I', J' \in \mathcal{I}'$ be two bases of $\mathcal{M}'$. We consider several cases.

Case 1: Both $I'$ and $J'$ do not contain $x_{new}$. Then both are bases of $\mathcal{M}$, so they have a feasible-exchange bijection.

Case 2: $I'$ contains $x_{new}$ while $J'$ does not. So $I' = I + x_{new}$, where $I \in \mathcal{I}$, and $J'$ is a basis of $\mathcal{M}$. Let $I + y$ be any basis of $\mathcal{M}$ that contains I, where $y \in M$. Since $\mathcal{M}$ is BO, there is a feasible-exchange bijection $\mu : I + y \leftrightarrow J'$. Define a bijection $\mu' : I + x_{new} \leftrightarrow J'$ by: $\mu'(x) = \mu(x)$ for $x \in I$, and $\mu'(x_{new}) = \mu(y)$. We show that $\mu'$ is a feasible-exchange bijection:

• For all $x \in I$, we have $(I + x_{new}) - x + \mu'(x) = (I - x + \mu(x)) + x_{new}$. Since $\mu$ is a feasible-exchange bijection, $(I + y) - x + \mu(x) \in \mathcal{I}$. By downward-closedness, $I - x + \mu(x) \in \mathcal{I}$. By the definition of the free extension, $(I - x + \mu(x)) + x_{new} \in \mathcal{I}'$. Additionally, $J' - \mu'(x) + x = J' - \mu(x) + x$. Since $\mu$ is a feasible-exchange bijection, the latter set is in $\mathcal{I}$, which is contained in $\mathcal{I}'$.
• For $x = x_{new}$, we have $(I + x_{new}) - x_{new} + \mu'(x_{new}) = I + \mu(y) = (I + y) - y + \mu(y)$. Since $\mu$ is a feasible-exchange bijection, the latter set is in $\mathcal{I}$, which is contained in $\mathcal{I}'$. Additionally, $J' - \mu'(x_{new}) + x_{new} = J' - \mu(y) + x_{new}$. Since $\mu$ is a feasible-exchange bijection, $J' - \mu(y) + y \in \mathcal{I}$. By downward-closedness, $J' - \mu(y) \in \mathcal{I}$. By the definition of the free extension, $(J' - \mu(y)) + x_{new} \in \mathcal{I}'$.

Therefore, a feasible-exchange bijection exists for $I'$ and $J'$.

Case 3: $J'$ contains $x_{new}$ while $I'$ is a basis of $\mathcal{M}$. This case is analogous to Case 2.

Case 4: Both $I'$ and $J'$ contain $x_{new}$. Similarly to Case 2, we write $I' = I + x_{new}$ and find a basis $I + y$ of $\mathcal{M}$. Let $\mu : I + y \leftrightarrow J'$ be a feasible-exchange bijection guaranteed by Case 3 above, between the bases $I + y$ and $J'$ of $\mathcal{M}'$. Now, a bijection $\mu' : I + x_{new} \leftrightarrow J'$ can be defined exactly as in Case 2.
References

Anari, N., Mai, T., Gharan, S. O., & Vazirani, V. V. (2018). Nash social welfare for indivisible items under separable, piecewise-linear concave utilities. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 2274-2290. SIAM.
Babaioff, M., Ezra, T., & Feige, U. (2021). Fair and truthful mechanisms for dichotomous valuations. In Proceedings of the 35th AAAI Conference on Artificial Intelligence, pp. 5119-5126.
Barman, S., & Verma, P. (2021). Existence and computation of maximin fair allocations under matroid-rank valuations. In Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems, pp. 169-177.
Barrera, R., Nyman, K., Ruiz, A., Su, F. E., & Zhang, Y. X. (2015). Discrete envy-free division of necklaces and maps. arXiv preprint 1510.02132.
Bei, X., Garg, J., Hoefer, M., & Mehlhorn, K. (2017). Earning limits in Fisher markets with spending-constraint utilities. In International Symposium on Algorithmic Game Theory, pp. 67-79. Springer.
Bei, X., Igarashi, A., Lu, X., & Suksompong, W. (2019). The price of connectivity in fair division. arXiv preprint 1908.05433.
Benabbou, N., Chakraborty, M., Igarashi, A., & Zick, Y. (2020). Finding fair and efficient allocations when valuations don't add up. In Algorithmic Game Theory - 13th International Symposium, SAGT 2020, Proceedings, Vol. 12283 of Lecture Notes in Computer Science, pp. 32-46. Springer.
Bilò, V., Caragiannis, I., Flammini, M., Igarashi, A., Monaco, G., Peters, D., Vinci, C., & Zwicker, W. S. (2018). Almost envy-free allocations with connected bundles. In Blum, A. (Ed.), 10th Innovations in Theoretical Computer Science Conference (ITCS 2019), Vol. 124 of Leibniz International Proceedings in Informatics (LIPIcs), pp. 14:1-14:21, Dagstuhl, Germany. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik.
Biswas, A., & Barman, S. (2018). Fair division under cardinality constraints. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, pp. 91-97. The full version, dated 19 Oct 2020, is available at https://arxiv.org/abs/1804.09521v3.
Biswas, A., & Barman, S. (2019). Matroid constrained fair allocation problem. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 9921-9922.
Bonin, J. E., & Savitsky, T. J. (2016). An infinite family of excluded minors for strong base-orderability. Linear Algebra and its Applications, 488, 396-429.
Bouveret, S., Cechlárová, K., Elkind, E., Igarashi, A., & Peters, D. (2017). Fair division of a graph. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pp. 135-141.
Brualdi, R. A. (1969). Comments on bases in dependence structures. Bulletin of the Australian Mathematical Society, 1(2), 161-167.
Brualdi, R. A. (1971). Induced matroids. Proceedings of the American Mathematical Society, 29(2), 213-221.
Brualdi, R. A., & Scrimger, E. B. (1968). Exchange systems, matchings, and transversals. Journal of Combinatorial Theory, 5(3), 244-257.
Budish, E. (2011). The combinatorial assignment problem: Approximate competitive equilibrium from equal incomes. Journal of Political Economy, 119(6), 1061-1103.
Budish, E., Cachon, G. P., Kessler, J. B., & Othman, A. (2017). Course Match: A large-scale implementation of approximate competitive equilibrium from equal incomes for combinatorial allocation. Operations Research, 65(2), 314-336.
Calinescu, G., Chekuri, C., Pál, M., & Vondrák, J. (2011). Maximizing a monotone submodular function subject to a matroid constraint. SIAM Journal on Computing, 40(6), 1740-1766.
Caragiannis, I., Kurokawa, D., Moulin, H., Procaccia, A. D., Shah, N., & Wang, J. (2019). The unreasonable fairness of maximum Nash welfare. ACM Transactions on Economics and Computation (TEAC), 7(3), 1-32.
Chaudhury, B. R., Garg, J., & Mehlhorn, K. (2020). EFX exists for three agents. In Proceedings of the 21st ACM Conference on Economics and Computation, pp. 1-19.
Dror, A., Feldman, M., & Segal-Halevi, E. (2021). On fair division under heterogeneous matroid constraints. In Proceedings of the AAAI Conference on Artificial Intelligence, pp. 5312-5320.
Edmonds, J. (1970). Submodular functions, matroids, and certain polyhedra. In Combinatorial Optimization - Eureka, You Shrink!, pp. 11-26. Springer. Reprinted 2003.
Ferraioli, D., Gourvès, L., & Monnot, J. (2014). On regular and approximately fair allocations of indivisible goods. In Proceedings of the 2014 International Conference on Autonomous Agents and Multi-Agent Systems, pp. 997-1004.
Gafni, Y., Huang, X., Lavi, R., & Talgam-Cohen, I. (2021). Unified fair allocation of goods and chores via copies. arXiv preprint 2109.08671.
Gan, J., Li, B., & Wu, X. (2021). Approximately envy-free budget-feasible allocation. arXiv preprint 2106.14446.
Garg, J., Hoefer, M., & Mehlhorn, K. (2018). Approximating the Nash social welfare with budget-additive valuations. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 2326-2340. SIAM.
Garg, N., Kavitha, T., Kumar, A., Mehlhorn, K., & Mestre, J. (2010). Assigning papers to referees. Algorithmica, 58(1), 119-136.
Gourvès, L., & Monnot, J. (2019). On maximin share allocations in matroids. Theoretical Computer Science, 754, 50-64. Preliminary version appeared in CIAC 2017.
Gourvès, L., Monnot, J., & Tlilane, L. (2013). A protocol for cutting matroids like cakes. In International Conference on Web and Internet Economics, pp. 216-229. Springer.
Gourvès, L., Monnot, J., & Tlilane, L. (2014). Near fairness in matroids. In ECAI 2014 - 21st European Conference on Artificial Intelligence, Vol. 263 of Frontiers in Artificial Intelligence and Applications, pp. 393-398. IOS Press.
Hummel, H., & Hetland, M. L. (2021). Guaranteeing half-maximin shares under cardinality constraints. arXiv preprint 2106.07300.
Hummel, H., & Hetland, M. L. (2022). Fair allocation of conflicting items. Autonomous Agents and Multi-Agent Systems, 36(1), 1-33.
Jojic, D., Panina, G., & Zivaljevic, R. (2021). Splitting necklaces, with constraints. SIAM Journal on Discrete Mathematics, 35(2), 1268-1286.
Klaus, B., Manlove, D. F., & Rossi, F. (2016). Matching under preferences. Cambridge University Press.
Kyropoulou, M., Suksompong, W., & Voudouris, A. A. (2020). Almost envy-freeness in group resource allocation. Theoretical Computer Science, 841, 110-123.
Li, B., Li, M., & Zhang, R. (2021). Fair scheduling for time-dependent resources. Advances in Neural Information Processing Systems, 34.
Li, Z., & Vetta, A. (2021). The fair division of hereditary set systems. ACM Transactions on Economics and Computation (TEAC), 9(2), 1-19.
Lian, J. W., Mattei, N., Noble, R., & Walsh, T. (2018). The conference paper assignment problem: Using order weighted averages to assign indivisible goods. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), pp. 1138-1145. AAAI Press.
Lipton, R. J., Markakis, E., Mossel, E., & Saberi, A. (2004). On approximately fair allocations of indivisible goods. In Proceedings of the 5th ACM Conference on Electronic Commerce, pp. 125-131.
Long, C., Wong, R. C.-W., Peng, Y., & Ye, L. (2013). On good and fair paper-reviewer assignment. In 2013 IEEE 13th International Conference on Data Mining, pp. 1145-1150. IEEE.
Mackin, E., & Xia, L. (2016). Allocating indivisible items in categorized domains. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, pp. 359-365.
Nyman, K., Su, F. E., & Zerbib, S. (2020). Fair division with multiple pieces. Discrete Applied Mathematics, 283, 115-122.
Okumura, Y. (2014). Priority matchings revisited. Games and Economic Behavior, 88, 242-249.
Oxley, J. G. (2006). Matroid Theory, Vol. 3. Oxford University Press, USA.
Roth, A. E., Sönmez, T., & Ünver, M. U. (2005). Pairwise kidney exchange. Journal of Economic Theory, 125(2), 151-188.
Sikdar, S., Adali, S., & Xia, L. (2017). Mechanism design for multi-type housing markets. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, pp. 684-690.
Sikdar, S., Adalı, S., & Xia, L. (2019). Mechanism design for multi-type housing markets with acceptable bundles. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 2165-2172.
Suksompong, W. (2019). Fairly allocating contiguous blocks of indivisible items. Discrete Applied Mathematics, 260, 227-236.
Suksompong, W. (2021). Constraints in fair division. ACM SIGecom Exchanges, 19(2), 46-61.
Turner, J. (2015a). Faster maximum priority matchings in bipartite graphs. arXiv preprint 1512.09349.
Turner, J. (2015b). Maximum priority matchings. arXiv preprint 1512.08555.
Vondrák, J. (2008). Optimal approximation for the submodular welfare problem in the value oracle model. In Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing, pp. 67-74.
Wu, X., Li, B., & Gan, J. (2021). Budget-feasible maximum Nash social welfare is almost envy-free. In The 30th International Joint Conference on Artificial Intelligence (IJCAI 2021), pp. 1-16.
REVIEWER COMMENTS

Reviewer #1 (Remarks to the Author):

In this manuscript, the authors describe their measurements of the Casimir force between two interpenetrating gratings using a MEMS actuator. They find significant deviations from the PFA and PAA calculations (as expected) and show good agreement with calculations using SCUFF-EM. In general, the findings are well presented and will likely be of interest to both theorists and experimentalists studying the Casimir effect. Below are my questions for the authors.

1) In the abstract the authors state that they "measure the Casimir force between two rectangular gratings in regimes not accessible before." They later describe in the text how the measurements presented in this paper improve over those from their recent Nature Photonics paper (ref 14), where they observed non-monotonic Casimir forces in a slightly more complex geometry. Was that improvement really necessary to see this effect? I think that the manuscript could benefit from a little more discussion putting this measurement in context with their previous measurement, which seemed similar but slightly more complex. I believe the main advantage of this geometry is the larger discrepancy between experiment and PFA/PAA, but the authors should clarify.

2) Related to the PFA discrepancy, the authors write that it is either a factor of 500 or 1000 at different places in the text. This should be clarified as well.

3) On page 3, there is a discussion of the effect of geometry on the Casimir effect, noting that most experiments are performed in the sphere-plate configuration. The authors should also note that experiments have been performed in the plate-plate (Bressi, Phys. Rev. Lett. 88, 041804, 2002) and sphere-sphere (Garrett, Phys. Rev. Lett. 120, 040401, 2018) configurations.

4) Figure 1: For the inset in (a), "g" is not clear. Also, for (c) and (d) and elsewhere in the manuscript, "displacement" is used to describe the separation between parts of the structure. The authors should clearly identify what is meant by "displacement" by assigning it a variable and putting it in the inset of (a) or by using a different variable for the x-axis, e.g. "s", which appears in the inset.

5) For the PFA, it appears that the authors are assuming ideal metal plates rather than Si, which they use for the PAA. Is that the main difference? I don't think so, but the authors should explain a little more the difference between these two calculations because it appears that they are different approximations about both geometry and optical properties. This will be helpful to make the conclusions of greater interest to the broader readership.

Reviewer #2 (Remarks to the Author):
In this manuscript, the authors describe their measurements of the Casimir force between two interpenetrating gratings using a MEMS actuator. They find significant deviations from the PFA and PAA calculations (as expected) and show good agreement with calculations using SCUFF-EM. In general, the findings are well-presented and will likely be of interested to both theorist and experimentalist studying the Casimir effect. Below are my questions for the authors.1) In the abstract the authors state that they "measure the Casimir force between two rectangular gratings in regimes not accessible before." They later describe in the text how the measurements presented in this paper are improved over those from their recent Nature Photonics paper(ref 14), where they observed non-monotonic Casimir forces in a slightly more complex geometry. Was that improvement really necessary to see this effect? I think that the manuscript could benefit from a little more discussion putting this measurement in context with their previous measurement, which seemed similar but slightly more complex. I believe the main advantage of this geometry is the larger discrepancy between experiment and PFA/PAA, but the authors should clarify.2) Related to the PFA discrepancy, the authors write that it is either a factor of 500 or 1000 at different places in the text. This should be clarified as well.3) On page 3, there is a discussion of the effect of geometry on the Casimir effect, noting that most experiments are performed in the sphere-plate configuration. The authors should also note that experiments have been performed in the plate-plate (Bressi, Phys. Rev. Lett. 88, 041804, 2002) and sphere-sphere (Garrett, Phys. Rev. Lett. 120, 040401, 2018) configurations. 4)Figure 1: For the inset in (a), "g" is not clear. Also, for (c) and (d) and elsewhere in the manuscript, "displacement" is used to describe the separation between parts of the structure. The authors should clearly identify what is mention by "displacement" by assigning it a variable and putting it in the inset of (a) or by using a different variable for the x-axis, e.g. "s" which appears in the inset. 5) For the PFA, it appears that the authors are assuming ideal metal plates rather than Si, which they use for the PAA. Is that the main difference? I don't think so, but the authors should explain a little more the difference between these two calculations because it appears that they are difference approximations about both geometry and optical properties. This will be helpful to make the conclusions of greater interested to the broader readership.Reviewer #2 (Remarks to the Author):
The paper "Strong geometry dependence of the Casimir force between interpenetrated rectangular gratings" presents new experimental measurements of the Casimir force in the system of two aligned rectangular gratings, comparison with the theory is performed. Results of the paper should be interesting to a general reader and specialists in the field. In my opinion, the paper may be suitable to publication in Nature Communications after authors clarify subtle points and consider recommendations to improve the paper written below.
In the paper under consideration the Casimir effect in the system of two aligned rectangular gratings is studied. For this system the derivative of the Casimir force versus separation between two rectangular gratings is measured in experiment, the comparison between theory and experiment is performed. Authors study both the case when the two gratings do not penetrate into each other and the case when the two gratings interpenetrate each other. The regime of interpenetration was never studied in Casimir experiments before. Authors were able to perform comparison of several exact and approximate theoretical methods with experimental results for the system of two rectangular gratings at various separations between them. This is the first Casimir effect experiment for two rectangular gratings separated by a vacuum slit, the experiment is in agreement with the theory. Note that in most previous experiments Casimir measurements were performed in a system of a sphere and another geometry separated by a vacuum slit. Here two gratings separated by a vacuum slit were aligned with high accuracy, this technique may be used in future for direct comparison of the Casimir theory and experiments in various geometries different from a sphere.
I do have several questions and recommendations to improve the paper written below.
1. Theory for evaluation of the Casimir energy between two aligned gratings in terms of Rayleigh reflection coefficients was developed in the paper [27] and, with a lateral displacement of the gratings, in the paper: A. Lambrecht and V. N. Marachevsky, Int. J. Mod. Phys. A 24, 1789 (2009). This fact and these references should be mentioned, since the authors obviously use general expressions for the Casimir force in terms of Rayleigh coefficients in their calculations shown by purple squares.

2. What is the temperature T in the experiment (I could not find the value in the text)? Why is it possible to use zero-temperature calculations in the comparison of the theory and experiment? If the authors use the last term on the right-hand side of formula (4) at finite temperature (a Drude term), is there an agreement between the theory and experiment (did they check this)? What is the difference between zero-temperature and finite-temperature results for the force? It would be really good to comment on this in detail. Also it would be good to draw a plot with the ratio of the zero- and finite-temperature theoretical results for the Casimir force at various separations.
3. Authors should define the regions I, II, III, IV in nanometers and should emphasize how these regions are defined in general so that the readers could fully understand the boundaries between these regions. It is unclear to me where the regions II, III, IV start.

4. On line 189 it is written $\sim 1000$, however, in all other places it is written $\sim 500$. Probably there should be $\sim 500$ on line 189 as well.
5. It would be good to specify explicitly in the text the smallest distance between rectangular gratings for which authors were able to perform calculations within a scattering theory.
6. It would be good to discuss in more detail the difference between PAA and results of scattering theory. I expect it would be really interesting to add the plot where the ratio of PAA force to the exact force is explicitly shown at various separations.

7. While discussing pistons in the Discussion section I recommend the reference: V. N. Marachevsky, Phys. Rev. D 75, 085019 (2007). In this paper exact formulas for a piston with an arbitrary cross section were derived.

8. In Supplementary information: line 13 -- should probably be "1 nm"; line 17 -- should probably be "As discussed in the main text, …".
Reviewer #3 (Remarks to the Author):
Wang, Tang, Ng, Messina, Guizal, Crosse, Antezza, Chan, and Chan Strong geometry dependence of the Casimir force between interpenetrated rectangular gratings submitted to Nature Communications ms# 20-34993
The authors present an experiment that measures the van der Waals-Casimir force between two nanofabricated gratings of nearly rectangular shape that approach each other until they interpenetrate. The force gradient is measured by the frequency shift of a transverse oscillation of one of the gratings, and compared to numerically exact and approximate calculations. A good agreement with a fully numerical result (based on the code scuff-em) is found. Other approximations (like proximity force PFA or pairwise additivity PAA) fail in one or the other region of distances, for which the authors give heuristic explanations.
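For readers outside the subfield, a minimal sketch of this frequency-shift-to-force-gradient conversion, assuming a simple harmonic resonator; the numbers below are purely illustrative, not the device parameters:

    import math

    # For a resonator of spring constant k and angular resonance frequency w0,
    # an external force F(x) adds a gradient F' = dF/dx to the restoring force,
    # so the effective stiffness is k_eff = k - F' and, to first order in F'/k,
    # the fractional frequency shift is dw/w0 = -F'/(2k).

    def force_gradient(k, w0, dw):
        """Force gradient (N/m) inferred from a small resonance shift dw (rad/s)."""
        return -2.0 * k * dw / w0

    k = 5.0                     # N/m (illustrative)
    w0 = 2 * math.pi * 1.0e5    # rad/s (illustrative)
    dw = -2 * math.pi * 2.0     # a 2 Hz downward shift (illustrative)
    print(f"F' = {force_gradient(k, w0, dw):.3e} N/m")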
These results improve on the data shown in Ref. 14 by a team around H. B. Chan: the fabrication of the gratings is much better so that rectangular shapes are very closely achieved. The theory team has changed and is able to contribute calculations with different methods and approximations, although the full numerics is based on the same scuff-em code.
So why should this represent a significant advance to be published in Nature Communications?
A similar good agreement between experiment and theory was presented by Zou & al (Ref. 42) and Tang & al (Ref. 14).
The fact that interpenetration leads to a constant Casimir force has already been mentioned and interpreted in earlier work, too, in particular in Refs. 14, 44.
The deviation with respect to PFA is exacerbated here (a relative factor of a few hundred), but it seems that this is due to a kind of simplistic way to apply the PFA, in particular when one deals with rectangles or rounded corners. In Ref. 14, the rounded corners of the T-shaped structures were taken into account with PFA (but perhaps it was rather the PAA that was applied there), giving a good qualitative agreement with scuff-em. So one may object that the "selling point of ~500" is an artefact of an oversimplified PFA. And since it is quite obvious that the PFA is not applicable for this kind of system and since numerically exact calculations are available, it does not seem convincing to build the significance of the paper on the failure of an "out-dated approximation". But this is perhaps the main insight of this material: one can now fabricate structures where the PFA is utterly wrong (in some parameter region, as edges/corners slide against each other), and still one can demonstrate really good agreement between theory and experiment.
The authors should try to put their work into perspective in view of these remarks. A few minor points are listed below. It may also be worth considering a shorter presentation of the experimental apparatus, since the same has been used in the Refs. mentioned above. The reader would rather learn more details about how the nearly perfect rectangular shapes have been manufactured: which processing steps led to success?
If these questions are answered, then indeed, this makes an interesting paper that demonstrates that a significant challenge in the field of dispersion forces has been met. Worth being published in Nature Commun.

p2, line 61-62: I think that Ref. 24 should not be over-advertised here. The heat transfer is actually ridiculously small, since it concerns just single modes in two membranes. And there are many other examples where the van der Waals force has mechanical consequences, as in the classical analysis of thin liquid Helium films.

line 67: "pair of plates" > "pairs of plates"

p4, line 99: the regime of interpenetration has been explored, indeed, in Ref. 14 (Tang & al.), just with a slightly different shape of the protrusions.

line 139: the "d" in the formula should be in italics

line 165: Casimir "path integral" -- I know that there are path integral formulations for the Casimir energy, but in the context of numerics like scuff-em, I don't think that this is an efficient starting point. Perhaps you mean "surface integral" which is taken, in this effective 2D geometry for the numerics, along a path, indeed. But the word "path integral" is fixed to Feynman's integral, please avoid the confusion.
line 170: the sentence on the breakdown of PFA is a bit too much, that has been said a few lines before.
line 184: what is "N" in "N = 100"? the number of Fourier modes? then just say so and drop the symbol.

Fig. 2(b): why repeat the sketches / micrographs with the interpenetrating fingers from Fig. 1? I know it's not a sketch, but it's still graphically redundant information.

caption Fig. 2, line 216: drop "omega_R =" or add the 2 pi that is needed here. The number of digits for the quality factor Q is ridiculous, I don't believe you can be so precise (same problem in the main text).

p10, around line 220: I would like to know the number for the thickness of the facing gratings (along the z-direction).
A simple basic question: it seems that the top beam (in red in Fig. 2) is "stiffer" than the lower grating (blue). Does that not mean that as the top beam is oscillating, it will "entrain" (via the Casimir force) the lower one. How can you be sure that the lower grating is kept in place and is not oscillating, too? Perhaps it is just a question of mass (the lower structure is much larger and more massive)?
line 261: correct spelling is "Torr".
line 308: replace V_o by V_0 (index zero)

p15, lines 319-21: "Thermal corrections are neglected as the zeroth Matsubara frequency term accounts for nearly all of the force" -- something is misunderstood here. The zero'th Matsubara term gives the *high-temperature limit* of the dispersion energy. You rather mean that the *integral* over imaginary frequencies deviates from the sum (i.e., the force at 4K) only by less than 0.3%. Or did I misunderstand something here?

p17, around line 363: the abrupt change in the PFA result seems like an artefact of having sharp corners (rectangular profile). If these corners are rounded (as shown by the TEM scan), then also the PFA can be modified to give a smooth result. This has been done in Ref. 14, for example. For sure, the discrepancy between experiment and "this version of the PFA" will be smaller, as also shown in Fig. 1 of Ref. 14.
line 365: the peak value 473 is too precise, I would only bet on something rounded like 500. You should be aware of your experimental errors for that ratio ...

p17, line 374: the universal Casimir formula hbar c ... / g^3 is misleading here. As mentioned in my comments on the Supp Mat, one expects at the small distances here that the energy per area rather follows the conventional Hamaker (nonretarded) formula A / g^2 (with material-specific Hamaker constant A). There is a problem with units in the Supp Mat calculation that leads to 1/g^3.

p18, around line 383: you mention in the details in Methods that the van der Waals potential used in the PAA assumes that there is no material between any two surface (even volume) elements. So it is intuitive to understand that this does not apply for the corners of the fingers when they interpenetrate, because they interact mainly across silicon (fingers in).
line 390-91: the sentence "failure of PAA is from the non-pairwise additive nature" is redundant (french: pléonasme) and does not say anything.

Fig. 5(a): I again would like to understand why the PFA gives a sharp onset, while in Fig. 1 of Ref. 14, the PFA starts off smoothly. Do your structures have so much sharper corners here?

line 430: replace "As the displacement is further increased so that the gratings interpenetrate each other" by "As the gratings interpenetrate each other" for those who only rapidly read the Conclusion.

line 437: typo "paths" > "paves (the way)"

line 456: I would expect "load the sample" rather than "load the probe"

line 469: I rapidly computed the plasma frequency for the given density of p-carriers and found a slightly different value, although I took the given effective mass:
>>> hole_n = 7.2e18/cm**3
>>> omega_p = sqrt(hole_n*e0**2/eps0/(0.34*me))
>>> omega_p/1e14
2.60

Or does one need the effective mass of holes rather than electrons?

Methods, Eq. (8): well, this sounds like a horrible integral. Is there no way to find a reasonable approximation here? For the typical xi's in eps(i xi), unfortunately c / xi ~ 300 nm, comparable to typical distances, so you may not rely on simple power law approximations. But there must be reasonable Padé approximations for that, no?

Carsten Henkel
Response to Reviewer 1
We are pleased that Reviewer 1 thinks that our paper "will likely be of interest to both theorists and experimentalists studying the Casimir effect" and that our findings "are well-presented". We are grateful to her/him for the suggestions for improvements and positive comments. We respond to her/his suggestions below:
In the abstract the authors state that they "measure the Casimir force between two rectangular gratings in regimes not accessible before." They later describe in the text how the measurements presented in this paper are improved over those from their recent Nature Photonics paper (ref 14), where they observed non-monotonic Casimir forces in a slightly more complex geometry. Was that improvement really necessary to see this effect? I think that the manuscript could benefit from a little more discussion putting this measurement in context with their previous measurement, which seemed similar but slightly more complex. I believe the main advantage of this geometry is the larger discrepancy between experiment and PFA/PAA, but the authors should clarify.
The improvements in the fabrication are indeed essential in revealing the strong deviations of the Casimir force from the PFA and PAA. The main difference is that we changed from optical lithography to electron beam lithography so that sharp corners with little rounding can be achieved and the uniformity among individual units is significantly improved. The non-monotonic behavior of the Casimir force demonstrated in the previous experiment [14] does not require the corners of the structures to be sharp. However, the strong deviation from the PFA in the current paper does. In Ref. 14 with rounded T-protrusions, the deviation of the Casimir force from the PFA is only ~40%, much weaker than the factor of 500 demonstrated in this paper.
We added Supplementary Note 3 to describe the new fabrication process that uses electron beam lithography and Supplementary Note 4 to compare the two experiments. We also added the following sentences to the main text: "A prior experiment measured the non-monotonic Casimir force when two T-shaped protrusions interpenetrate 14 . However, due to the limited resolution of optical lithography in the fabrication process, the protrusions are rounded at the corners. Moreover, there are non-uniformities among the different units, introducing uncertainties so that deviations from the PFA cannot be unambiguously identified. To our knowledge, the strong geometry dependence of the Casimir force in the regime of interpenetration for rectangular gratings remains unexplored." "In particular, a fabrication process involving electron beam lithography was developed (Supplementary Note 4) to yield highly precise rectangular structures with minimal rounding of the corners."
Related to the PFA discrepancy, the authors write that it is either a factor of 500 or 1000 at different places in the text. This should be clarified as well.
The different factors correspond to two different geometries. For the rectangular silicon gratings with perfectly sharp corners, the deviation can reach 1000 times based on the numerical simulation from SCUFF and the semi-analytical scattering theory. For the device that is measured, the deviation is ~500 due to slight rounding of the corners in the fabrication process. In the abstract and conclusion, we use the experimentally demonstrated deviation of 500 times.
We have added the following text to clarify: "In the inset of Fig. 4b the ratio between the measured force and the PFA shows a peak at a value of ≈ 500 at d ≈426 nm which is weaker than the ≈ 1000 times deviation shown in the perfectly rectangular silicon gratings (Fig. 1c)."
On page 3, there is a discussion of the effect of geometry on the Casimir effect, noting that most experiments are performed in the sphere-plate configuration. The authors should also note that experiments have been performed in the plate-plate (Bressi, Phys. Rev. Lett. 88, 041804, 2002) and sphere-sphere (Garrett, Phys. Rev. Lett. 120, 040401, 2018) configurations.
As suggested by Reviewer 1, we included the references on the plate-plate and sphere-sphere configurations, and added a sentence to the manuscript: "Other configurations including plate-plate 35 and sphere-sphere 36 have also been measured experimentally."

Figure 1: For the inset in (a), "g" is not clear. Also, for (c) and (d) and elsewhere in the manuscript, "displacement" is used to describe the separation between parts of the structure. The authors should clearly identify what is meant by "displacement" by assigning it a variable and putting it in the inset of (a) or by using a different variable for the x-axis, e.g. "s" which appears in the inset.
Following the suggestion from Reviewer 1, we changed the labeling of the lateral gap g in the inset of Fig. 1(a) and defined a variable d for the displacement of the movable grating in Fig. 1(b).
We edited the caption of Fig. 1 accordingly:
"The black dotted line denotes the initial location of the bottom edge of the blue movable grating. d is defined as the displacement of the movable grating from the initial position. "
For the PFA, it appears that the authors are assuming ideal metal plates rather than Si, which they use for the PAA. Is that the main difference? I don't think so, but the authors should explain a little more the difference between these two calculations because it appears that they are different approximations about both geometry and optical properties. This will be helpful to make the conclusions of greater interest to the broader readership.
In comparing the PFA to the PAA, we always used the same structure, with the same geometry and the same optical properties (p-doped silicon). For example, Fig. 5(a) performs such a comparison for the device fabricated that has slightly rounded corners. Figure 5(b) does so for silicon gratings that are perfectly rectangular.
To avoid confusion, we modified the sentence in the main text: "…. compare it to the exact Casimir force calculated by SCUFF-EM in Figs. 5a and 5b for the silicon grating geometry of our device and the silicon gratings that are perfectly rectangular considered at the beginning of the paper, respectively."
We also added the following sentence to the caption of Fig. 5, which compares results of PFA, PAA and SCUFF: "The optical properties of silicon are used for all calculations."

The main difference between PAA and PFA is how their basic elements are chosen. For PAA, the body of the structure is divided into small cubic blocks (we chose the volume to be 1 nm^3) and the pairwise van der Waals energy between the two groups of elements from the two gratings is summed. For PFA, the interacting surfaces are divided into small parallel plates that face each other. Then either the Casimir force or energy from the Lifshitz formula for each pair of parallel plates is summed. The different ways to divide the interacting bodies into basic elements (PFA: small surfaces; PAA: small volume blocks) give entirely different results that only work well in a specific distance range.
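To make the PAA bookkeeping concrete, here is a minimal sketch of such a pairwise voxel sum; the nonretarded London-type kernel U(r) = -C6/r^6 and the tiny toy grids are illustrative assumptions standing in for the retarded pair potential of Eq. (8) and a real PAA discretization:

    import numpy as np

    # Each body is discretized into small volume elements and the pairwise
    # energies between all element pairs of the two bodies are summed.
    def paa_energy(points_a, points_b, c6=1.0e-77):
        """Sum pairwise energies U(r) = -c6 / r**6 between voxel centers (J)."""
        diff = points_a[:, None, :] - points_b[None, :, :]   # (Na, Nb, 3)
        r2 = np.sum(diff**2, axis=-1)
        return -c6 * np.sum(r2**-3)

    # Two tiny cubic blocks of 1 nm voxels, separated by ~80 nm along x
    # (a toy geometry, far coarser than a real PAA run):
    ax = np.arange(5) * 1e-9
    grid = np.stack(np.meshgrid(ax, ax, ax, indexing="ij"), -1).reshape(-1, 3)
    block_a = grid
    block_b = grid + np.array([85e-9, 0.0, 0.0])
    print(f"E_PAA = {paa_energy(block_a, block_b):.3e} J")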
Following the suggestion of Reviewer 1 to clarify the difference of the PAA from the PFA, we have added a section in Supplementary Note 5 to discuss the details of the PAA algorithm and illustrate the difference from the widely used PFA.
As we discussed, all the results of PFA and PAA in the main text are calculated based on the real material properties, i.e., the properties of Si. The only exception is when we discuss the dependence of the distance-independent force on the lateral separation g in the regime of interpenetration. We use perfect metals to obtain a g^-3 dependence in the retarded limit. We added one sentence to clarify:
"So far, all the results of PFA presented are based on the real optical property of silicon used in the experiment. For simplicity and without loss of generality, we now consider rectangular gratings made of perfect metal separated by different values of g."
Response to Reviewer 2
We thank Reviewer 2 for the recommendations to improve our paper. We are pleased that he/she considers our paper "suitable for publication in Nature Communications after authors clarify subtle points". Reviewer 2 indicates that our paper needs the following clarifications.
Theory for evaluation of the Casimir energy between two aligned gratings in terms of Rayleigh reflection coefficients was developed in the paper [27] and in the paper with a lateral displacement of gratings: A. Lambrecht and V. N. Marachevsky, Int. J. Mod. Phys. A 24, 1789 (2009). This fact and these references should be mentioned since authors obviously use general expressions for the Casimir force in terms of Rayleigh coefficients in their calculations shown by purple squares on Fig. 1 and Fig. 5.
Following suggestions by the reviewer, we added the suggested reference in the manuscript.
What is the temperature T in the experiment (I could not find the value in the text)? Why is it possible to use zero temperature calculations in comparison of the theory and experiment? If authors use the last term on the right-hand side of the formula (4) at finite temperature (a Drude term), is there an agreement between the theory and experiment (did they check this)? What is the difference between zero temperature and finite temperature results for the force? It would be really good to comment on this in detail. Also it would be good to draw a plot with the ratio of zero and finite temperature theoretical results for the Casimir force at various separations.
The measurements were performed at a temperature of 4K (due to the need for the superconducting magnet for detecting vibration of the beam). Because the separation between the relevant parts of our structures is small (~< 500 nm), the thermal corrections to the Casimir force are negligible at 4K. We added the following text to clarify: "To simplify the calculations, a temperature of 0 K is used instead of the actual temperature of 4K in the experiment. This approximation is justified because the separation between the relevant parts of the two bodies is smaller than 500 nm for all displacements. For example, at displacement d = 1.6 μm where the top of the grating is about 0.3 μm from the main body of the beam, the calculated Casimir forces for 0 K and 4 K differ by < 0.3%."
The exact numerical results of the Casimir force as a function of displacement at finite temperature (4 K) for our complex geometries take weeks to converge. Therefore, instead of generating a plot of the ratio of theoretical results for 4 K and 0 K as a function of d, we only managed to do the calculations for one value of d. For d = 1.6 μm, including finite temperature only changes the calculated force by < 0.3%. The effect on the comparison of theory to measurement is negligible.
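For context, the generic Lifshitz-theory structure behind this check is the Matsubara sum and its zero-temperature limit, written schematically (f denotes the separation- and material-dependent integrand):

    E(T) \;=\; k_{B}T \sum_{n=0}^{\infty}{}^{\prime} f(\xi_{n}),
    \qquad \xi_{n} = \frac{2\pi n k_{B}T}{\hbar},
    \qquad E(T\!\to\!0) \;\to\; \frac{\hbar}{2\pi}\int_{0}^{\infty} f(\xi)\,\mathrm{d}\xi .

The prime denotes the n = 0 term taken with weight 1/2. At T = 4 K the Matsubara spacing is ξ_1 ≈ 3.3 × 10^12 rad s^-1, far below the frequencies c/d ≳ 6 × 10^14 rad s^-1 that dominate the integrand at the sub-500 nm separations relevant here, so the sum and the integral nearly coincide.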
Authors should define the regions I, II, III, IV in nanometers and should emphasize how these regions are defined in general so that the readers could fully understand the boundaries between these regions. It is unclear to me where the regions II, III, IV start.
Following the suggestion of Reviewer 2, we added the following text to specify the regions: "Specifically, region I (d = 0 to 430 nm) corresponds to the range of displacement before interpenetration. The measured force rapidly increases when interpenetration occurs in region II (430 -600 nm). In region III (d = 600 to 1450 nm) the force is nearly independent of displacement. The force increases rapidly in region IV (d > 1450 nm) due to the interactions between the top of the gratings and the body of the beam."
On line 189 it is written $\sim 1000$, however, in all other places it is written $\sim 500$. Probably there should be $\sim 500$ on line 189 as well.
The different factors correspond to two different geometries. For the ideal rectangular gratings with sharp corners, the deviation can reach 1000 times based on the numerical simulation from SCUFF and the semi-analytical scattering theory. For the device that is measured, the deviation is ~500 due to slight rounding of the corners in the fabrication process. In the abstract and conclusion, we use the experimentally demonstrated deviation of 500 times.
We have therefore added the following text: "In the inset of Fig. 4b the ratio between the measured force and the PFA shows a peak at a value of ≈ 500 at d ≈ 426 nm which is weaker than the ≈ 1000 times deviation shown in the perfectly rectangular silicon gratings (Fig. 1c)."

It would be good to specify explicitly in the text the smallest distance between rectangular gratings for which authors were able to perform calculations within a scattering theory.
Theoretically, we could apply the scattering theory to our geometry as long as interpenetration does not occur, i.e. when displacement < 430 nm. In practice, when the structures approach each other, the time required for the calculations to converge increases drastically. With our limited computational resources, we apply the scattering theory only for displacement between 0 nm and 300 nm.
Following the suggestion from the reviewer, we added the following sentence to the manuscript:
"In principle, the scattering theory is applicable for displacements up to d = 430 nm when interpenetration occurs. However, calculations beyond d = 300 nm are beyond our computation capability due to the computational power and time required for convergence."
It would be good to discuss in more detail the difference between PAA and results of scattering theory. I expect it would be really interesting to add the plot where the ratio of PAA force to the exact force is explicitly shown at various separations.
Following the suggestion of Reviewer 2, we added the ratio of the force calculated with PAA to that from the scattering theory and the numerical SCUFF results in Fig. 5c, together with the following paragraph discussing the difference between PAA and the results of scattering theory.

"The inset of Figure 5c plots the ratio of the forces calculated by the PAA to that by the scattering theory and SCUFF-EM for silicon gratings that are perfectly rectangular, in purple and red respectively. As expected, the purple and red results largely coincide with each other because both the scattering theory and SCUFF-EM work well in Region I before interpenetration (Fig. 1). At a displacement of 0.3 μm, the ratio is ~ 1.07. The value decreases as the two gratings move farther apart."
While discussing pistons in Discussion section I recommend the reference: V.N. Marachevsky, Phys.Rev.D 75, 085019 (2007). In this paper exact formulas for a piston with an arbitrary cross section were derived.
As suggested by Reviewer 2, we added the suggested reference.
In Supplementary information: line 13 -- should probably be "1 nm".

We double-checked that the scale bar measures 1 µm. We re-wrote the sentence to avoid confusion: "The scale bar in the main graph figure measures 1 µm (the width of the grating finger w is ~ 900 nm). In the inset, the scale bar measures 100 nm."

line 17 -- should probably be "As discussed in the main text, …".
We thank Reviewer 2 for pointing out this typo. It is fixed in the revised paper.
Response to Reviewer 3
We thank Reviewer 3 for a thorough review and the thoughtful suggestions. We are pleased that he considers that our paper "demonstrates that a significant challenge in the field of dispersion forces has been met" and is "worth being published in Nature Commun" provided that "questions are answered." We have followed his advice and made the corresponding changes.
The authors present an experiment that measures the van der Waals-Casimir force between two nanofabricated gratings of nearly rectangular shape that approach each other until they interpenetrate. The force gradient is measured by the frequency shift of a transverse oscillation of one of the gratings, and compared to numerically exact and approximate calculations. A good agreement with a fully numerical result (based on the code scuff-em) is found. Other approximations (like proximity force PFA or pairwise additivity PAA) fail in one or the other region of distances, for which the authors give heuristic explanations.
These results improve on the data shown in Ref. 14 by a team around H. B. Chan: the fabrication of the gratings is much better so that rectangular shapes are very closely achieved. The theory team has changed and is able to contribute calculations with different methods and approximations, although the full numerics is based on the same scuff-em code.
So why should this represent a significant advance to be published in Nature Communications?
First of all, to answer this question, we have adopted the following suggestion of Reviewer 3:
……….since it is quite obvious that the PFA is not applicable for this kind of system and since numerically exact calculations are available, it does not seem convincing to build the significance of the paper on the failure of an "out-dated approximation……... But this is perhaps the main insight of this material: one can now fabricate structures where the PFA is utterly wrong (in some parameter region, as edges/corners slide against each other), and still one can demonstrate really good agreement between theory and experiment.
We agree with Reviewer 3 and have added the following text to address the above question:
(for previous experiments, including Ref. 14) "……Even though the PFA cannot predict the Casimir force accurately in these experiments, it is computationally undemanding and is useful for a quick estimate of the order of magnitude of the force."
(for the current experiment) "…..The experiment involves a number of improvements to the detection platform to enable the fabrication of structures in which, for a certain range of parameters, the PFA breaks down completely and fails to estimate the order of magnitude of the Casimir force. There is good agreement between measurement and exact calculations using boundary element methods over the entire distance range, including the region where the PFA breaks down."
In short, we have fabricated structures where the geometry dependence of the Casimir force is so strong that even the order of magnitude of the Casimir force cannot be estimated by PFA. It is necessary to run SCUFF-EM on a computer workstation for weeks (the red curve in Fig. 1) instead of obtaining it almost instantly by PFA.
We now address the other concerns of Reviewer 3.
A similar good agreement between experiment and theory was presented by Zou & al (Ref. 42) and Tang & al (Ref. 14).
While these two earlier experiments from our group have good agreement with theory, they actually did not demonstrate the geometry dependence (in the form of deviation from the PFA) like our current experiment. Ref. 42 measured the force between two beams that are nearly parallel. The exact force does not deviate much from the PFA. In Ref. 14, the PFA only deviates from SCUFF-EM by ~40% (Fig. 1 in Ref. 14), and the measured force falls almost in the middle of these two values (SCUFF-EM and PFA). Ref. 14 acknowledged that "our measurement does not provide unambiguous evidence for the breakdown of the PFA." In the present paper, on the other hand, significant improvements in the sample fabrication make it possible to create near-rectangular gratings that show the large deviations from the PFA. We added the following text to the introduction to clarify this point: "A prior experiment measured the non-monotonic Casimir force when two T-shaped protrusions interpenetrate 14 . However, due to the limited resolution of optical lithography in the fabrication process, the protrusions are rounded at the corners. Moreover, there are non-uniformities among the different units, introducing uncertainties so that deviations from the PFA cannot be unambiguously identified. To our knowledge, the strong geometry dependence of the Casimir force in the regime of interpenetration for rectangular gratings remains unexplored."
The fact that interpenetration leads to a constant Casimir force has already been mentioned and interpreted in earlier work, too, in particular in Refs. 14, 44.
To our knowledge, a constant and non-zero Casimir force was not discussed in Refs. 14 and 44. In Ref. 14 from our group, the force is not constant in any range of displacement. Even though the force in Fig. 1c is close to zero around 1.5 μm, it is actually changing rapidly with displacement, in a manner similar to two parallel plates. When the tops of the protrusions on the two sides are almost aligned, the force is not constant either, due to the significant rounding. The semi-circular shape of the top of the T-protrusions is given by the resolution of the lithographic process.
In Ref. 44, Chiu and coauthors consider the lateral Casimir force between the sphere-plates configuration with sinusoidal patterns on both sides. The two surfaces never interpenetrate, and there was no measurement of a constant non-zero force.
To avoid confusion, we modified the sentence in the main text: "…The measurement was performed when the two gratings were well-separated from each other without any interpenetration."
The deviation with respect to PFA is exacerbated here (a relative factor of a few hundred), but it seems that this is due to a kind of simplistic way to apply the PFA, in particular when one deals with rectangles or rounded corners. In Ref. 14, the rounded corners of the T-shaped structures were taken into account with PFA (but perhaps it was rather the PAA that was applied there), giving a good qualitative agreement with scuff-em. So one may object that the "selling point of ~500" is an artefact of an oversimplified PFA.
[Clarification of PFA] The PFA is an approximation and is indeed simple to apply. As we discussed above, it can give rather accurate estimates for near-planar geometries and can estimate the order of magnitude of the Casimir force in all geometries in previous experiments so far [Refs. 14, 37-39, 45, 46, etc.], but not in the current experiment. To have a meaningful comparison to previous work, we use exactly the same standard PFA algorithm as in Ref. 14 (and also in other experiments). The interacting surfaces are divided into small pairs of parallel plates facing each other along the direction of displacement or perpendicular to it. The procedure can be applied regardless of whether the structures are rounded or near-rectangular. One may consider the PFA itself simplistic or oversimplified. But we made no attempts to further simplify it in our analysis. To our knowledge, deviations from the PFA remain the "figure of merit" of how strong the geometry dependence of Casimir forces is.
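For concreteness, a minimal sketch of the standard PFA summation just described; the ideal-metal parallel-plate pressure and the toy tilted-gap geometry below are assumptions standing in for the silicon Lifshitz result and the device geometry used in the actual analysis:

    import numpy as np

    HBAR, C = 1.054571817e-34, 2.99792458e8

    def pfa_force(gaps, cell_area):
        """Tile the facing surfaces into parallel-plate cells of area cell_area;
        each cell at local separation s contributes the ideal-metal plate
        pressure P(s) = pi^2 * hbar * c / (240 * s**4); sum the contributions."""
        pressure = np.pi**2 * HBAR * C / (240.0 * gaps**4)
        return np.sum(pressure) * cell_area

    # Toy geometry: a 1 um x 1 um facing area split into 100 x 100 cells,
    # with the local gap ramping from 100 nm to 105 nm across the surface.
    x = np.linspace(0.0, 1.0, 100)
    gaps = 100e-9 + 5e-9 * x[:, None] + 0.0 * x[None, :]
    print(f"F_PFA = {pfa_force(gaps, (10e-9)**2):.3e} N")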
As reviewer 3 points out, in Ref. 14 the PFA is in good qualitative agreement with SCUFF-EM. Specifically, the deviation of SCUFF-EM from PFA is about 40% in Ref. 14 (even though measurement could not distinguish the two due to the non-uniformities in the different protrusion units). In this work on rectangular gratings, the deviations reach 500 times experimentally. The PFA is applied to the real geometry where the slightly rounded corners are fully taken into consideration in the same manner as Ref. 14. The large difference from SCUFF-EM originates from the geometry itself, and not the way that PFA is applied.
To avoid confusion, we added a sentence in the caption of Supplementary Fig. S1: "All simulations on the real device, including SCUFF-EM, PFA, and PAA, are based on the digitalized boundary where the slightly rounded corners are taken into consideration."
The authors should try to put their work into perspective in view of these remarks. A few minor points are listed below. It may also be worth considering a shorter presentation of the experimental apparatus, since the same has been used in the Refs. mentioned above. The reader would rather learn more details about how the nearly perfect rectangular shapes have been manufactured: which processing steps led to success?
As suggested by Reviewer 3, we have shortened the description of the comb actuator as it has been discussed in Refs. 14 and 42.
Much effort was indeed spent on improving the fabrication process to yield the near-rectangular structures that are essential for the large deviation of the force from PFA. We agree with Reviewer 3 that more details should be provided.
We added a detailed process flow in Supplementary Note 3 to demonstrate how we achieve the near-ideal gratings. Briefly, we switched from optical lithography to electron-beam lithography for the fabrication of the rectangular grating parts, while keeping the optical lithography for the "big" structures such as springs and comb drives. In addition, we added one more mask layer (Poly-Si) on top of the original oxide mask layer to transfer the pattern from e-beam lithography to the device layer with much better accuracy (thinner e-beam resist and better selectivity during plasma etching). Besides the two main points mentioned briefly above, we made a number of changes to improve the accuracy, such as changing the recipes of etching to be compatible with the new masks, adding a cooling step to the e-beam photoresist, optimizing the e-beam dose distribution based on the proximity effect correction from the Monte-Carlo simulation to achieve better profiles, etc. The details can be found in Supplementary Note 4.
We also added the following sentences to the main text:

"A prior experiment measured the non-monotonic Casimir force when two T-shaped protrusions interpenetrate 14 . However, due to the limited resolution of optical lithography in the fabrication process, the protrusions are rounded at the corners. Moreover, there are non-uniformities among the different units, introducing uncertainties so that deviations from the PFA cannot be unambiguously identified. To our knowledge, the strong geometry dependence of the Casimir force in the regime of interpenetration for rectangular gratings remains unexplored." "In particular, a fabrication process involving electron beam lithography was developed (Supplementary Note 4) to yield highly precise rectangular structures with minimal rounding of the corners."

If these questions are answered, then indeed, this makes an interesting paper that demonstrates that a significant challenge in the field of dispersion forces has been met. Worth being published in Nature Commun.
We have addressed all the questions from Reviewer 3. We believe that the modified manuscript warrants publication in Nature Communications.
p2, line 61-62: I think that Ref. 24 should not be over-advertised here. The heat transfer is actually ridiculously small, since it concerns just single modes in two membranes. And there are many other examples where the van der Waals force has mechanical consequences, as in the classical analysis of thin liquid Helium films.
Following the suggestion of Reviewer 3, we moved Ref. 24 to the group of references on the operation of nanomechanical systems and removed the sentence on heat transfer to avoid over-advertising this paper.
line 67: "pair of plates" >"pairs of plates"
We have adopted this suggested change.
p4, line 99: the regime of interpenetration has been explored, indeed, in Ref. 14 (Tang & al.), just with a slightly different shape of the protrusions.
We have added a description of Ref. 14 and pointed out that interpenetration has been explored before.

"A prior experiment measured the non-monotonic Casimir force when two T-shaped protrusions interpenetrate 14 . However, due to the limited resolution of optical lithography in the fabrication process, the protrusions are rounded at the corners. Moreover, there are non-uniformities among the different units, introducing uncertainties so that deviations from the PFA cannot be unambiguously identified. To our knowledge, the strong geometry dependence of the Casimir force in the regime of interpenetration for rectangular gratings remains unexplored."

line 139: the "d" in the formula should be in italics
We have fixed this typo in the revised paper.
line 165: Casimir "path integral" --I know that there are path integral formulations for the Casimir energy, but in the context of numerics like scuff-em, I don't think that this is an efficient starting point. Perhaps you mean "surface integral" which is taken, in this effective 2D geometry for the numerics, along a path, indeed. But the word "path integral" is fixed to Feynman's integral, please avoid the confusion.
To avoid confusion, we removed the phrase "path integral". The modified sentence now reads:

"SCUFF-EM calculates the force by evaluating the integral of Casimir energy using a classical boundary elements interaction matrix (see Methods for details)."

line 170: the sentence on the breakdown of PFA is a bit too much, that has been said a few lines before.
Following the reviewer's suggestion, we removed the phrase "the PFA breaks down" and changed the sentence to:
"…while in region II, the PFA predicts an unphysical infinite force gradient."
line 184: what is "N" in "N = 100"? the number of Fourier modes? then just say so and drop the symbol.
We substituted the symbol "N" with "the number of Fourier modes".

Fig. 2(b): why repeat the sketches / micrographs with the interpenetrating fingers from Fig. 1? I know it's not a sketch, but it's still graphically redundant information.
Following the suggestion of Reviewer 3, we have removed panels II and III of Fig. 2b. We note that Fig. 1 is only a sketch of a perfectly rectangular grating. It is not a real device. In Fig. 2(b), we need to keep panel I to show that the improved fabrication process can yield high quality rectangular gratings with slight rounding of the corners. We also chose to keep panel IV to show that the two gratings are well-aligned and the alignment is maintained even in the regime of interpenetration. These two panels are the result of much effort in creating rectangular gratings and minimizing lateral movements as the displacement increases.

caption Fig. 2, line 216: drop "omega_R =" or add the 2 pi that is needed here. The number of digits for the quality factor Q is ridiculous, I don't believe you can be so precise (same problem in the main text).
We have added 2 pi and used the appropriate number of digits for Q in both the caption of Fig. 2 and the main text.
p10, around line 220: I would like to know the number for the thickness of the facing gratings (along the z-direction).
The gratings facing each other have a thickness of ≈ 2.58 μm, measured from the cross-section of the device layer of the fabricated device. To avoid confusion, we changed the sentence to:
"…the device that is fabricated using a combination of both electron beam and optical lithography on the 2.58 μm-thick device layer of a highly doped silicon-on-insulator wafer (See Methods)."
A simple basic question: it seems that the top beam (in red in Fig. 2) is "stiffer" than the lower grating (blue). Does that not mean that as the top beam is oscillating, it will "entrain" (via the Casimir force) the lower one. How can you be sure that the lower grating is kept in place and is not oscillating, too? Perhaps it is just a question of mass (the lower structure is much larger and more massive)?
We forgot to rotate the panels in Fig. 2b by 180 degrees before colorizing the figure. We thank Reviewer 3 for pointing out this mistake. In the corrected figure, the thin top beam is excited into resonance for detecting the force gradient. The wide lower beam is much stiffer and has a much higher resonance frequency. As both beams are strongly underdamped, the response of the lower beam at the resonant frequency of the upper beam is negligible.
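A quick order-of-magnitude check with a driven damped oscillator supports this; the numbers below are illustrative assumptions, not the device parameters:

    import numpy as np

    def amplitude(w, w0, Q, f_over_m):
        """|x(w)| for x'' + (w0/Q) x' + w0**2 x = (F/m) cos(w t)."""
        return f_over_m / np.sqrt((w0**2 - w**2)**2 + (w0 * w / Q)**2)

    w_soft = 2 * np.pi * 1.0e5    # drive at the soft beam's resonance (illustrative)
    w_stiff = 2 * np.pi * 1.0e6   # stiff beam's much higher resonance (illustrative)
    Q = 1.0e4                     # strongly underdamped (illustrative)

    off = amplitude(w_soft, w_stiff, Q, 1.0)   # stiff beam driven off resonance
    on = amplitude(w_stiff, w_stiff, Q, 1.0)   # same beam driven on resonance
    print(f"off-/on-resonance response ratio ~ {off / on:.1e}")   # ~1e-4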
line 261: correct spelling is "Torr".

line 308: replace V_o by V_0 (index zero)
We have corrected these two typos.
p15, lines 319-21: "Thermal corrections are neglected as the zeroth Matsubara frequency term accounts for nearly all of the force" -- something is misunderstood here. The zero'th Matsubara term gives the *high-temperature limit* of the dispersion energy. You rather mean that the *integral* over imaginary frequencies deviates from the sum (i.e., the force at 4K) only by less than 0.3%. Or did I misunderstand something here?
We thank Reviewer 3 for pointing out this mistake. The revised sentence reads:

"To simplify the calculations, a temperature of 0 K is used instead of the actual temperature of 4 K in the experiment. This approximation is justified because the separation between the relevant parts of the two bodies is always smaller than 500 nm. For example, at displacement d = 1.6 μm where the top of the grating is about 0.3 μm from the main body of the beam, the calculated Casimir forces for 0 K and 4 K differ by < 0.3%."

p17, around line 363: the abrupt change in the PFA result seems like an artefact of having sharp corners (rectangular profile). If these corners are rounded (as shown by the TEM scan), then also the PFA can be modified to give a smooth result. This has been done in Ref. 14, for example. For sure, the discrepancy between experiment and "this version of the PFA" will be smaller, as also shown in Fig. 1 of Ref. 14.
Reviewer 3 stated correctly that rounding will reduce the deviation of the PFA from the exact Casimir force. For example, the maximum deviation plotted in the inset of Fig. 1c for a rectangular grating with perfectly sharp corners (~1000 times) is larger than that for our device shown in Fig. 4b (~500). In fact, this is the reason we spent the effort to improve the fabrication to yield near-rectangular gratings.
However, we believe that describing our analysis as "this version of PFA" is inaccurate. We point out that the PFA used here is exactly the same as Ref. 14, as we explained in the paragraph [clarification of PFA] above. The large discrepancy between the measured force and the PFA originates from the rectangular grating geometry. It does not depend on the "version" of PFA used, because there is only one version.
The slightly rounded corners as shown in SEM images are already taken into account when we applied the PFA to the actual device. If the corners were perfectly sharp, the geometry becomes the perfect rectangular grating in which the force follows a step function with a vertical slope, as we plotted in Fig. 1c. With slight rounding of the corners, the slope is reduced as shown in Fig. 4b. However, as a function of displacement, the change in slope is still rather abrupt. We provide a more detailed explanation of the abruptness in the answer to the latter question by Reviewer 3 on Fig. 5.

line 365: the peak value 473 is too precise, I would only bet on something rounded like 500. You should be aware of your experimental errors for that ratio ...
Following the suggestion of Reviewer 3, we changed the peak value from 473 to ≈ 500.

p17, line 374: the universal Casimir formula hbar c ... / g^3 is misleading here. As mentioned in my comments on the Supp Mat, one expects at the small distances here that the energy per area rather follows the conventional Hamaker (nonretarded) formula A / g^2 (with material-specific Hamaker constant A). There is a problem with units in the Supp Mat calculation that leads to 1/g^3.
In the analysis of the dependence of the distance-independent force on the lateral distance g between grating fingers in the regime of interpenetration, we considered perfect metal instead of silicon to arrive at the 1/g^3 scaling in the retarded limit. We modified the following sentence to emphasize that perfect metal is used:
"So far, all the results of PFA presented are based on the real optical property of silicon used in the experiment. For simplicity and without loss of generality, we now consider rectangular gratings made of perfect metal separated by different values of g."
We agree with Reviewer 3 that for real materials, smaller g will require the Hamaker formula. Therefore we have added the sentence:

"For gratings made of materials with finite conductivity, it is expected that for small g the scaling will change to 1/g^2 in the non-retarded limit."
We also edited the supplementary notes and fixed a mistake in Supplementary Note 2 as we will discuss later.
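For reference, the perfect-metal scaling invoked above can be sketched in two lines; here α is an assumed geometric prefactor (the rate at which the facing sidewall area grows with displacement), and edge corrections are ignored:

    \frac{E}{A} \;=\; -\,\frac{\pi^{2}\hbar c}{720\,g^{3}},
    \qquad A(d) \;=\; \alpha\, d
    \;\;\Longrightarrow\;\;
    F \;=\; -\,\frac{\partial E}{\partial d}
      \;=\; \frac{\pi^{2}\hbar c\,\alpha}{720\,g^{3}} .

That is, once the fingers overlap, the energy grows linearly with displacement, so the resulting force is displacement-independent and scales as 1/g^3, consistent with the sentence added to the manuscript.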
p18, around line 383: you mention in the details in Methods that the van der Waals potential used in the PAA assumes that there is no material between any two surface (even volume) elements. So it is intuitive to understand that this does not apply for the corners of the fingers when they interpenetrate, because they interact mainly across silicon (fingers in).
We agree with Reviewer 3 and have added the following description of the intuitive expectation: "In the regime of interpenetration, it is more likely for material to be present between two interacting elements. Intuitively, it is expected that deviations from PAA are larger in region III compared to region I."
line 390-91: the sentence "failure of PAA is from the non-pairwise additive nature" is redundant (french: pleónasme) and does not say anything.
Following the suggestion from Reviewer 3, we have removed the sentence.

Fig. 5(a): I again would like to understand why the PFA gives a sharp onset, while in Fig. 1 of Ref. 14, the PFA starts off smoothly. Do your structures have so much sharper corners here?

In short, the answer is yes. The corners of the structures are indeed much sharper here compared to Ref. 14 because we create the structures with electron beam lithography instead of optical lithography. We added Supplementary Fig. S5 to compare the corners made by the two methods. Supplementary Note 3 has also been added to describe the new fabrication process.
The main contribution of the PFA in regions II and III comes from parallel-plate elements that face each other in the x-direction. Before interpenetration, this contribution is exactly zero because there is no overlap between any of the parallel-plate elements on the two sides. After interpenetration, the overlap area increases and so does the energy. For the gratings in this paper, the corners are only slightly rounded so that the transition from an exactly zero value to a nonzero value of this contribution is still abrupt. One can envision that with more rounding of the corners, such as to the extent in Ref. 14, the force predicted by PFA starts its rise smoothly after interpenetration.

line 430: replace "As the displacement is further increased so that the gratings interpenetrate each other" by "As the gratings interpenetrate each other" for those who only rapidly read the Conclusion.
line 437: typo "paths" > "paves (the way)"

line 456: I would expect "load the sample" rather than "load the probe"
We have adopted the above three suggestions from the reviewer.
line 469: I rapidly computed the plasma frequency for the given density of p-carriers and found a slightly different value, although I took the given effective mass:
We thank Reviewer 3 for pointing out the typo in the carrier concentration. We measured the sheet resistance and deduced the carrier concentration using Fig. 6.9 of [Pierret, R. F. Semiconductor Fundamentals, second edition, (Addison-Wesley, 1988).] The sheet resistance was stated correctly in our paper, while the carrier concentration should be ≈ 6.0 × 10^18 cm^-3 instead. We have corrected it in the modified paper.
The correct carrier concentration gives the plasma frequency ω_p = √(Ne²/(ε₀m*)) ≈ 2.37 × 10^14 rad s^-1 and the damping rate Γ ≈ 6.45 × 10^13 rad s^-1 shown in the Methods.
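A runnable version of the reviewer's check with the corrected concentration (SI constants; this mirrors the reviewer's snippet rather than the authors' code):

    import math

    e = 1.602176634e-19        # C
    eps0 = 8.8541878128e-12    # F/m
    me = 9.1093837015e-31      # kg

    n_hole = 6.0e18 * 1e6      # m^-3 (6.0e18 cm^-3, the corrected value)
    m_eff = 0.34 * me          # effective mass quoted in the Methods

    omega_p = math.sqrt(n_hole * e**2 / (eps0 * m_eff))
    print(f"omega_p = {omega_p:.3e} rad/s")   # ~2.37e14 rad/s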
Methods, Eq.(8): well, this sounds like a horrible integral. Is there no way to find a reasonable approximation here? For the typical xi's in eps( i xi ), unfortunately c / xi ~ 300 nm, comparable to typical distances, so you may not rely on simple power law approximations. But there must be reasonable Padé approximations for that, no?
The integration is indeed complex and computationally expensive. However, in practice, we optimized our algorithm to speed up the calculation. With the optimization, the PAA calculation of the force as a function of displacement [green lines in Figs. 5a,b] is completed within 20 minutes with our computational resources.
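As a generic illustration of how such pairwise sums can be accelerated on regular grids (a hypothetical 1D example, not necessarily the actual scheme of the supplementary note referenced below), the double sum collapses to a correlation over separation offsets:

    import numpy as np

    n, dx, L = 200, 1e-9, 0.5e-6          # voxels per body, 1 nm pitch, 0.5 um gap
    V = lambda r: -1.0e-77 / r**6         # toy nonretarded pair kernel (assumption)

    # Direct O(n^2) double sum over all voxel pairs of the two 1D bodies:
    i = np.arange(n)
    sep = L + (i[None, :] - i[:, None]) * dx      # sep(i, j) = L + (j - i) * dx
    E_direct = np.sum(V(sep))

    # Correlation trick: the kernel depends only on the offset t = j - i, so the
    # double sum collapses to sum_t count(t) * V(L + t*dx), where count(t) is a
    # discrete correlation of the occupancy arrays (np.convolve for clarity;
    # scipy.signal.fftconvolve gives the FFT-accelerated version).
    counts = np.convolve(np.ones(n), np.ones(n))  # counts[k]: pairs with t = k-(n-1)
    t = np.arange(2 * n - 1) - (n - 1)
    E_fast = np.sum(counts * V(L + t * dx))
    print(np.isclose(E_direct, E_fast))           # True

The same collapse applies axis by axis on 3D voxel grids, which is the generic route by which naive O(n^6) pairwise sums are reduced to far lower complexity.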
We added Supplementary Note 5 to describe the details of our optimized algorithm, in which we reduce the O(n^6) problem to O(n^4 + n^2). We have added the following sentence in the manuscript:

"Details of the algorithm for the calculation of Eq. (8) for our geometry are presented in Supplementary Note 5."
References: the citation style is not uniform (journal names in full or abbreviated, lower case or upper case). A number of references are incomplete:

Ref. 31 -- Phys. Rev. A 86 (2012) 062502
Ref. 35 -- Phys. Rev. Lett. 105 (2010) 250402
Ref. 36 -- Phys. Rev. Lett. 101 (2008) 030401
Ref. 37 -- Nature Commun. 4 (2013) 2515
Ref. 38 -- Phys. Rev. A 82 (2010) 062111
Ref. 44 -- Phys. Rev. B 81 (2010) 115417
Ref. 47 -- twice J. Opt. Soc. Am. A
Ref. 48 -- Phys. Rev. B 95 (2017) 125404
Ref. 50 -- Phys. Rev. Lett. 118 (2017) 266802
Ref. 53 -- Phys. Rev. D 69 (2004) 065015
We fixed the abbreviation errors in the references and the incomplete reference numbers for the Physical Review series.
p3, line 25: try to improve the ordering of the information. Suggestion: "between two ... corners, see inset in (a): two blocks initially separated by a gap of 80 nm (x-direction) and 500 nm apart in the y-direction. Main figures (a) and (b): calculated Casimir force and force gradient as ...
We have adopted this suggested change.
p3, line 32: if your bodies are infinite in the z-direction, you can only compute forces per unit length along that direction. So the units in Fig. 2 are wrong. Only the force per unit area, between perfect conductors, can scale like 1/g^3, with the same exponent as the energy per area for two infinite plates (linear increase with the area as the bodies slide one against the other).
We thank Reviewer 3 for pointing out this mistake. We incorrectly stated that the objects considered in Fig. S2(a) are infinite in the z-direction. Instead, they are 2.58 µm thick.
The revised sentence reads:

"…each with cross-section of 3 μm by 3 μm square as shown in the inset of Supplementary Figure S2a and with thickness of 2.58 µm in the z-direction. The thickness is much larger than the size of the lateral gap (~ 80 nm)."

p4, line 42: I think that there is no "consistency with the simple PFA argument" of the main text.
(1) In the experiment, no exponent in the gap distance g can be found.
(2) At the ~70 nm distance, it is likely that the Casimir energy in the experiment is already in the non-retarded (or Hamaker) regime, while for perfect reflectors, there is no non-retarded regime. Hence, the energy/area (and the force per unit length) scales like 1/g^2. Since the length along the z-direction is constant, this scaling should also apply to the experimental force.
The confusion here again comes from the fact that we used perfect metal with finite thickness to get the 1/g^3 scaling of the displacement-independent force. Reviewer 3 correctly pointed out that the exponent is not measured in the experiment. We revised the text to emphasize that the 1/g^3 scaling applies to perfect metal only:

"This result is consistent with the simple argument on perfect metal using PFA in the main text that yields F ∝ 1/g^3. If the gratings are made of materials with finite conductivity, it is expected that for small g the scaling will change to 1/g^2 in the non-retarded limit."
Reviewer #1 (Remarks to the Author):

The authors have sufficiently addressed my concerns, and in my opinion the paper can be published in Nat Comm.

Reviewer #2 (Remarks to the Author):

I have read the revised version of the manuscript and the comments of the authors. I am satisfied with the authors' reply to the comments and the changes/additions made in the manuscript.

I have noticed that in Refs. [22], [40] "Casimir" is written as "casimir". Capital letter "C" should be used in "Casimir" in the titles of these references.

I already wrote about the achievements of the manuscript in detail in my previous report. I recommend this manuscript for publication in Nature Communications after correction of the issue above.

Reviewer #3 (Remarks to the Author):

Strong geometry dependence of the Casimir force between interpenetrated rectangular gratings by M. Wang, L. Tang, C. Y. Ng, R. Messina, B. Guizal, J. A. Crosse, M. Antezza, C. T. Chan, and H. B. Chan, submitted to Nature Commun ms# 273817-1

I am happy with the revisions made by the authors. They clarified certain points related to the substantial advances made here compared to earlier work. As stressed in my earlier report and in those of the other referees, this experiment illustrates the high degree of quantitative understanding that is now available for dispersion forces in sub-micron scale, complex geometries. There is no doubt that this topic attracts a wide inter-disciplinary audience. I recommend publication.

Minor typos (main text): when calling references in-line ("as shown in Ref. xx"), avoid typesetting xx as superscript (use LaTeX "\onlinecite{key-to-xx-ref}").

Response to Reviewer 1

The authors have sufficiently addressed my concerns, and in my opinion the paper can be published in Nat Comm.

We are pleased that Reviewer 1 recommends publication of our paper.

Response to Reviewer 2

I have read the revised version of the manuscript and the comments of the authors. I am satisfied with the authors' reply to the comments and the changes/additions made in the manuscript. I have noticed that in Refs. [22], [40] "Casimir" is written as "casimir". Capital letter "C" should be used in "Casimir" in the titles of these references. I already wrote about the achievements of the manuscript in detail in my previous report. I recommend this manuscript for publication in Nature Communications after correction of the issue above.

We are pleased that Reviewer 2 recommends publication of our paper. Following the advice of Reviewer 2, we have put a capital C for "Casimir" in Refs. [22] and [40].

Response to Reviewer 3

I am happy with the revisions made by the authors. They clarified certain points related to the substantial advances made here compared to earlier work. As stressed in my earlier report and in those of the other referees, this experiment illustrates the high degree of quantitative understanding that is now available for dispersion forces in sub-micron scale, complex geometries. There is no doubt that this topic attracts a wide inter-disciplinary audience. I recommend publication. Minor typos (main text): when calling references in-line ("as shown in Ref. xx"), avoid typesetting xx as superscript (use LaTeX "\onlinecite{key-to-xx-ref}").

We are pleased that Reviewer 3 recommends publication of our paper. We have corrected all the typos listed by Reviewer 3 in both the main text and the supplementary information.
| []
|
[
"Probing Interface of Perovskite Oxide Using Surface- specific Terahertz Spectroscopy",
"Probing Interface of Perovskite Oxide Using Surface- specific Terahertz Spectroscopy"
]
| [
"Yudan Su \nDepartment of Physics\nState Key Laboratory of Surface Physics and Key Laboratory of Micro-and Nano-Photonic Structure (MOE)\nFudan University\n200433ShanghaiChina\n\nDepartment of Physics\nUniversity of California\n94720BerkeleyCaliforniaUSA\n",
"Jiaming Le \nDepartment of Physics\nState Key Laboratory of Surface Physics and Key Laboratory of Micro-and Nano-Photonic Structure (MOE)\nFudan University\n200433ShanghaiChina\n",
"Junying Ma \nDepartment of Physics\nState Key Laboratory of Surface Physics and Key Laboratory of Micro-and Nano-Photonic Structure (MOE)\nFudan University\n200433ShanghaiChina\n",
"Long Cheng \nSchool of Physical Science and Technology\nShanghaiTech University\n201210ShanghaiChina\n",
"Yuxuan Wei \nDepartment of Physics\nState Key Laboratory of Surface Physics and Key Laboratory of Micro-and Nano-Photonic Structure (MOE)\nFudan University\n200433ShanghaiChina\n",
"Xiaofang Zhai \nSchool of Physical Science and Technology\nShanghaiTech University\n201210ShanghaiChina\n",
"Chuanshan Tian \nDepartment of Physics\nState Key Laboratory of Surface Physics and Key Laboratory of Micro-and Nano-Photonic Structure (MOE)\nFudan University\n200433ShanghaiChina\n"
]
| [
"Department of Physics\nState Key Laboratory of Surface Physics and Key Laboratory of Micro-and Nano-Photonic Structure (MOE)\nFudan University\n200433ShanghaiChina",
"Department of Physics\nUniversity of California\n94720BerkeleyCaliforniaUSA",
"Department of Physics\nState Key Laboratory of Surface Physics and Key Laboratory of Micro-and Nano-Photonic Structure (MOE)\nFudan University\n200433ShanghaiChina",
"Department of Physics\nState Key Laboratory of Surface Physics and Key Laboratory of Micro-and Nano-Photonic Structure (MOE)\nFudan University\n200433ShanghaiChina",
"School of Physical Science and Technology\nShanghaiTech University\n201210ShanghaiChina",
"Department of Physics\nState Key Laboratory of Surface Physics and Key Laboratory of Micro-and Nano-Photonic Structure (MOE)\nFudan University\n200433ShanghaiChina",
"School of Physical Science and Technology\nShanghaiTech University\n201210ShanghaiChina",
"Department of Physics\nState Key Laboratory of Surface Physics and Key Laboratory of Micro-and Nano-Photonic Structure (MOE)\nFudan University\n200433ShanghaiChina"
]
| []
| The surface/interface species in perovskite oxides play an essential role in many novel emergent physical phenomena and chemical processes. With low eigenenergy in the terahertz region, such species at buried interfaces remain poorly understood due to the lack of feasible experimental techniques. Here, we show that vibrational resonances and two-dimensional electron gas at the interface can be characterized using surface-specific nonlinear spectroscopy in the terahertz range. This technique uses the intra-pulse difference frequency mixing (DFM) process, which is allowed only at the surface/interface of a medium with inversion symmetry. Sub-monolayer sensitivity can be achieved using the state-of-the-art detection scheme for the terahertz emission from the surface/interface. As a demonstration, the Drude-like nonlinear response from the two-dimensional electron gas emerging at the LaAlO3/SrTiO3 or Al2O3/SrTiO3 interface was successfully observed. Meanwhile, the interfacial vibrational spectrum of the ferroelectric soft mode of SrTiO3 at 2.8 THz, which is polarized by the surface field in the interfacial region, was also obtained. The corresponding surface/interface potential, which is a key parameter for SrTiO3-based interface superconductivity and photocatalysis, can now be determined optically via quantitative analysis of the polarized phonon spectrum. The interfacial species with resonant frequencies in the THz region revealed by our method provide more insights into the understanding of the physical properties of complex oxides.
"https://export.arxiv.org/pdf/2303.07871v1.pdf"
]
| 257,505,259 | 2303.07871 | 3d9151eb66ddbd015ea86066c57cc35bf26c9f0e |
Probing Interface of Perovskite Oxide Using Surface- specific Terahertz Spectroscopy
Yudan Su
Department of Physics
State Key Laboratory of Surface Physics and Key Laboratory of Micro-and Nano-Photonic Structure (MOE)
Fudan University
200433ShanghaiChina
Department of Physics
University of California
94720BerkeleyCaliforniaUSA
Jiaming Le
Department of Physics
State Key Laboratory of Surface Physics and Key Laboratory of Micro-and Nano-Photonic Structure (MOE)
Fudan University
200433ShanghaiChina
Junying Ma
Department of Physics
State Key Laboratory of Surface Physics and Key Laboratory of Micro-and Nano-Photonic Structure (MOE)
Fudan University
200433ShanghaiChina
Long Cheng
School of Physical Science and Technology
ShanghaiTech University
201210ShanghaiChina
Yuxuan Wei
Department of Physics
State Key Laboratory of Surface Physics and Key Laboratory of Micro-and Nano-Photonic Structure (MOE)
Fudan University
200433ShanghaiChina
Xiaofang Zhai
School of Physical Science and Technology
ShanghaiTech University
201210ShanghaiChina
Chuanshan Tian
Department of Physics
State Key Laboratory of Surface Physics and Key Laboratory of Micro-and Nano-Photonic Structure (MOE)
Fudan University
200433ShanghaiChina
Probing Interface of Perovskite Oxide Using Surface- specific Terahertz Spectroscopy
† These authors contributed equally to this work. *Correspondence to: [email protected] (C.T.).
Keywords: surface terahertz spectroscopy; surface potential; perovskite oxide
The surface/interface species in perovskite oxides play an essential role in many novel emergent physical phenomena and chemical processes. With low eigenenergy in the terahertz region, such species at buried interfaces remain poorly understood due to the lack of feasible experimental techniques. Here, we show that vibrational resonances and two-dimensional electron gas at the interface can be characterized using surface-specific nonlinear spectroscopy in the terahertz range. This technique uses the intra-pulse difference frequency mixing (DFM) process, which is allowed only at the surface/interface of a medium with inversion symmetry. Sub-monolayer sensitivity can be achieved using the state-of-the-art detection scheme for the terahertz emission from the surface/interface. As a demonstration, the Drude-like nonlinear response from the two-dimensional electron gas emerging at the LaAlO3/SrTiO3 or Al2O3/SrTiO3 interface was successfully observed. Meanwhile, the interfacial vibrational spectrum of the ferroelectric soft mode of SrTiO3 at 2.8 THz, which is polarized by the surface field in the interfacial region, was also obtained. The corresponding surface/interface potential, which is a key parameter for SrTiO3-based interface superconductivity and photocatalysis, can now be determined optically via quantitative analysis of the polarized phonon spectrum. The interfacial species with resonant frequencies in the THz region revealed by our method provide more insights into the understanding of the physical properties of complex oxides.
Introduction
The surfaces and interfaces of complex oxides attract enormous research attention due to their unique electrical, magnetic, and electrochemical properties 1,2. Among these oxides, strontium titanate [SrTiO3 (STO)], a prototypical perovskite that retains a multifunctional nature and is unique in allowing fabrication and modification with atomic-level precision, stands out as an ideal test-bed for exploring a variety of intriguing physical and chemical phenomena 3, in which the collective excitations and couplings present at the surface/interface play the key role 4,5. For example, at the FeSe/STO interface, it is believed that both the low-frequency phonon from STO and interfacial band bending are vital for the enhancement of superconductivity 6,7. The nontrivial topological vortices/antivortices forming at the interface of PbTiO3/SrTiO3, whose collective resonance lies in the terahertz (THz) range, provide an alternative choice for post-Moore electronic devices 8. In the case of STO-based photocatalysis, the facet-dependent surface potential of STO is considered the essential factor for electron-hole separation and charge transfer across the interface, through which water splitting with almost unity quantum efficiency can be realized [9-11]. However, the interrogation of surfaces/interfaces of perovskite oxides remains challenging experimentally because the fundamental excitations often occur in the THz region. Yet few surface-specific probes with chemical selectivity are available below 15 THz, especially in hostile environments.
Second-order nonlinear optical spectroscopy, such as sum-frequency spectroscopy (SFS), has found multidisciplinary applications in the study of surfaces and interfaces.
Being an all-optical detection scheme, it can be used to probe electronic or vibrational resonances in various complex interfacial systems with sub-monolayer sensitivity 12 .
Unfortunately, for resonant frequencies below 15 THz, employing SFS is extremely difficult because of the lack of an intense THz light source or a feasible detection scheme that can distinguish the weak sum-frequency signal from the pump light. As a result, over the past decades, applications of surface-specific nonlinear optical spectroscopy were limited to systems with resonant frequencies ranging from the mid-infrared to the ultraviolet. On the other hand, there are many important excitations in the THz range, 13,14 e.g. lattice vibrations, quasi-particles in quantum materials, and hydrogen-bond vibrations in bio-molecules. Thus, the development of a surface-specific spectroscopic technique operating in the THz range is desired. Indeed, several attempts have been made to study the THz response of surfaces/interfaces using optical rectification, but these rely on the necessarily large nonlinearity of, e.g., free carriers in metals 15 or photo-carriers in semiconductors 16. More recently, THz emission from two-dimensional materials has been reported, particularly from monolayer graphene 17 and transition-metal dichalcogenides 18, thanks to strong enhancement through their electronic resonances.
However, in these studies, chemical selectivity through vibrational resonances is not available. More importantly, to probe the surfaces/interfaces of complex oxides in general, a versatile surface-specific THz spectroscopic scheme is needed. In this work, we develop a surface-specific terahertz spectroscopic technique based on the difference-frequency mixing (DFM) process at an interface. The weak terahertz radiation from the interfacial species is detected using the state-of-the-art electro-optic sampling (EOS) technique. Besides complex oxides, our novel terahertz spectroscopic scheme with sub-monolayer sensitivity is expected to benefit studies of the chiral vibrations of bio-molecules and the collective motions of the hydrogen-bonding network at interfaces as well 19,20.
Difference-frequency spectroscopy
Being a second-order nonlinear process, DFM is forbidden in the bulk of a centrosymmetric medium under the electric dipole approximation, but is allowed at the surface/interface, where translational continuity is necessarily broken. For THz difference-frequency spectroscopy (THz-DFS), akin to the formalism of SFS, the field spectrum of the generated THz pulse can be expressed as 12:
E(Ω) ∝ χ(2)s,eff(Ω) ∫ E(ω + Ω) E*(ω) dω,    (1)

where Ω is the (THz) difference frequency, E(ω) is the spectral field of the femtosecond pump pulse, and χ(2)s,eff is the effective surface second-order susceptibility.
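As a rough numerical illustration of this intra-pulse DFM integral, one can check that a transform-limited ~26 fs pump supports difference frequencies well beyond the 2.8 THz TO1 resonance discussed below. The sketch is our own addition, not part of the original work; the Gaussian pump spectrum and all parameter values are illustrative assumptions.

# Minimal sketch of the intra-pulse DFM driving spectrum,
# E_THz(W) ~ chi2_s_eff(W) * Integral E(w + W) E*(w) dw, for a Gaussian pump.
# All pulse parameters are illustrative assumptions, not fitted experimental values.
import numpy as np

tau = 26e-15                                       # assumed pump intensity FWHM (s)
sigma_w = 2.0 * np.sqrt(np.log(2.0)) / tau         # field spectral width of a transform-limited Gaussian (rad/s)
w = np.linspace(-6 * sigma_w, 6 * sigma_w, 4000)   # pump frequencies relative to the carrier
E = np.exp(-w**2 / (2 * sigma_w**2))               # pump field spectrum (real envelope)

W = np.linspace(0.0, 2 * np.pi * 10e12, 250)       # difference (THz) frequencies up to 10 THz
dfm = np.array([np.trapz(np.interp(w + Wi, w, E, left=0.0, right=0.0) * E, w) for Wi in W])
dfm /= dfm.max()                                   # normalised driving spectrum (peaks at W = 0)

# With a flat chi^(2) this smooth autocorrelation is all there is; a resonance such
# as the STO TO1 phonon at 2.8 THz would imprint a peak on top of it.
print("relative DFM drive at 2.8 THz:", float(np.interp(2 * np.pi * 2.8e12, W, dfm)))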
A key difference from conventional THz emission spectroscopy is that the former probes a selected side of the target interface with chemical sensitivity, while the latter contains contributions from all interfaces. 23 Experimentally, the THz radiation is generated via intra-pulse DFM of a femtosecond pump pulse from a surface/interface (Fig. 1a). Generally, the magnitude of χ(2)s,eff at an interface is in the range of 10⁻²⁰ ~ 10⁻²² m²/V. Using 1 TW/cm² pump intensity, the field strength of the THz output is estimated to be on the order of 10⁰ ~ 10⁻² V/cm. Such a terahertz output can be recorded using the state-of-the-art EOS technique. Recently, we managed to improve the balanced detection in our EOS to a sensitivity of 5×10⁻⁸ rad/√Hz (see Methods).

To examine the validity of THz-DFS for probing the low-frequency resonances at surfaces and interfaces, STO interfaces with or without a heterogeneous layer were chosen as representatives. The schematics in Fig. 1b show the key components of the THz-DFS measurement apparatus (see Methods for details). A typical THz-DFS waveform from a pristine undoped STO(001) surface in dry air is presented in Fig. 2a. Here, we used the p-, p- and p-polarized (ppp) combination for the THz and the femtosecond pump fields. The linear dependence of the THz amplitude on the pump intensity confirms that the signal results from a second-order nonlinear process (see Supplementary Material). The Fourier-transformed spectrum given in the inset of Fig. 2a shows that the detection bandwidth reaches 5 THz, limited by the EO crystal. After removing the Fresnel factors (see Supplementary Material and Fig. S7), we display the resultant amplitude, imaginary and real parts of the χ(2)s,eff spectrum of STO in Fig. 2b, c, and d, respectively. The spectra presented were normalized against a z-cut α-quartz crystal. A single resonant peak is recognized at 2.8 THz, which can be readily assigned to the TO1 phonon of STO. 25 Because STO is centro-symmetric in the bulk, the observed DF spectrum must originate from the surface or from the surface-electric-field-induced polarization in the depletion region. As compared in Fig. 2e, the DF spectrum, χ(2)s,eff, of STO shares the same resonant frequency and linewidth with the TO1 phonon of bulk STO. 25 Note that a pure surface mode would be expected to differ from the bulk mode in its resonant features because of the differences in structure and local environment in the two cases. 27 Furthermore, we modified the STO surface via deposition of a 20-nm-thick Al2O3 film. No change in the spectral feature was observed except for an increase of the amplitude, as shown in Fig. 2e. The above results suggest that the ppp-polarized DF spectra of these interfaces are overwhelmed by the surface-electric-field-induced polarization rather than by the pure surface contribution. The variance of their amplitudes is attributed to the difference of their surface potentials.

To verify that our THz-DFS is sensitive enough to probe a pure surface contribution,
i.e., the χ(2)s term, we managed to suppress the surface-field-induced TO1 phonon contribution by changing the polarization combination from ppp to pss (p-polarized THz field and s-polarized pump field). It can be shown via a symmetry argument that under pss polarization the χ(2)s term is non-vanishing at the interface, while the contribution from the polarized phonon in the depletion region is forbidden (see Supplementary Material and Fig. S2). The interfaces of Al2O3/STO and LaAlO3/SrTiO3 (LAO/STO) are known to host a two-dimensional electron gas (2DEG). 28,29 Thanks to the low scattering rate in the 2DEG, 30 one expects to observe a Drude-like spectral feature that diverges towards low frequency according to the hydrodynamic model of free carriers. 31 Indeed, as evidenced in Fig. 3a and b, the nonlinear response, χ(2)s, from the 2DEG is clearly observed in the DF spectra for Al2O3/STO and 6-unit-cell LAO/STO. In contrast, the Drude-like behavior is absent in the DF spectra of the 2-unit-cell LAO/STO, 50-nm SiO2/STO and air/STO interfaces, in which no 2DEG exists. 28 Note that the nonlinear response of the 2DEG is much smaller than the surface-field contribution.

For the 2DEG emerging at the STO interface, the underlying mechanism is still under lively debate, and the interface potential is one of the key parameters that govern the electronic properties. The interface potential of STO can be ascribed to surface oxygen vacancies 32,33 or to charge transfer at the heterogeneous interface that causes band bending in the interfacial region. 34 As discussed above, the DF spectrum of the TO1 mode in ppp polarization is proportional to the surface/interface potential (see Eq. (3)). Thus, once the χ(3)B spectrum of STO is calibrated beforehand (described in Supplementary Material and Figs. S8 and S9), the surface/interface potential can be determined in situ using THz-DFS. Figure 4a shows the ppp-DF spectra of the gas/STO, SiO2/STO, Al2O3/STO and LAO/STO interfaces. The corresponding surface/interfacial potential is plotted in Fig. 4b. The interface potential of STO with a 20-nm Al2O3 overlayer is found to be +0.58 V, which agrees with that determined by X-ray photoemission spectroscopy (XPS). 35 This confirms the validity of our optical technique for the measurement of surface/interface potentials. In Fig. 4b, we see no clear correlation between the surface/interface potential and the emergence of a 2DEG. In particular, the ppp-DF spectra for 6-unit-cell LAO/STO and 2-unit-cell LAO/STO are essentially the same, resulting in the same interface potential of +0.45 V, although the former hosts a 2DEG at the interface. This suggests that increasing the LAO thickness does not cause an obvious change in the interface potential, but may lead to the formation of a dipole layer across the interface that creates a potential well in the topmost layers of STO for the confinement of the 2DEG. 36 These results demonstrate THz-DFS as a feasible tool to quantify the surface/interface potential, with unique advantages including chemical selectivity through resonances, functionality for buried interfaces, and remote all-optical monitoring.
Methods
Experimental setup
The experimental setup for THz-DFS is shown schematically in Fig. 1b. The system is based on a Yb:KGW regenerative amplifier (Light Conversion PHAROS) operating at a 100 kHz repetition rate. The amplifier output, centered at 1030 nm, undergoes a nonlinear spectral broadening stage as described elsewhere. 37 After reflecting off a set of chirped mirrors, the pulse was compressed to 26 fs. A tiny portion was picked out by a beam-splitter to be used as the probe pulse in EOS, while the majority pumps the intra-pulse DFM process on the sample surface/interface. The incident pump was 44 μJ in energy per pulse and was focused to 0.5 mm (1/e² diameter) on the sample. An achromatic half-wave plate was used to rotate the pump polarization. The emitted terahertz pulse in the reflection direction was collimated and re-focused onto a 0.3-mm-thick GaP(110) crystal by a pair of parabolic mirrors with 200 mm and 100 mm focal lengths, respectively. Between the two parabolic mirrors, a broadband wire-grid polarizer was used as the analyzer for the terahertz radiation. The reflected residual pump was filtered out by a PTFE plate. The whole terahertz beam path was purged with dry air to avoid water-vapor absorption. Routed through a translation stage, the probe beam was combined collinearly with the terahertz beam by a Ge wafer. The polarization change of the probe in the EO crystal was measured using a balanced detection scheme consisting of a silicon-photodiode-based balanced detector (Newport Nirvana) and a lock-in amplifier.
Sensitivity of THz-DFS setup
The noise spectrum of our THz-DFS measurement setup, with an integration time over 100 s (10 million pulses), is shown in Fig. S10. Above 1.0 THz the noise level is 2×10⁻²² m²/V, which corresponds to 5×10⁻⁸ rad/√Hz in our EOS measurement. 24 Below 1.0 THz, the sensitivity becomes worse towards low frequency because of the loss by diffraction and the weaker radiation of the oscillating dipoles at lower frequencies. Notice that the χ(2)s,eff of a self-assembled monolayer is on the order of 1×10⁻²¹ m²/V. Thus, the sensitivity of our setup is sufficient for the detection of surface species with sub-monolayer thickness.
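As a quick back-of-the-envelope consistency check of these numbers (our own sketch, using only the values quoted above):

# Compare the measured noise floor with a typical monolayer nonlinear response.
noise_floor = 2e-22       # m^2/V, noise level above 1.0 THz after 100 s integration
chi2_monolayer = 1e-21    # m^2/V, typical chi^(2)_s,eff of a self-assembled monolayer
print("amplitude signal-to-noise for one monolayer ~", chi2_monolayer / noise_floor)  # ~5

An amplitude signal-to-noise ratio of about five per monolayer is consistent with the claimed sub-monolayer sensitivity.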
Fig. 1. Schematic of terahertz difference frequency spectroscopy. (a) Schematic of surface difference frequency generation from STO(001). The incident near-infrared pulse excites the TO1 phonon vibration via intra-pulse DFM. The symmetry breaking of the TO1 phonon results from the surface field of STO. (b) Sketch of the experimental setup. BS, beamsplitter; HWP, half-wave plate; P, linear polarizer; L, lens; F, teflon filter; OAP, off-axis parabolic mirror; QWP, quarter-wave plate; WP, Wollaston prism.
Fig. 2. Terahertz difference frequency spectrum of SrTiO3. (a) Typical waveform of the emitted terahertz pulse from STO(001), with its Fourier transform in the inset. The dashed line in the inset represents the detection limit. (b, c, d) Amplitude (b), real part (c) and imaginary part (d) of the deduced surface second-order susceptibility, after normalization against a z-cut quartz reference and removal of the Fresnel factors. (e) Spectra of bare STO(001) (top) and Al2O3/STO(001) (middle), compared with the imaginary part of the dielectric function of bulk STO (bottom). 25
Through proper selection of the polarization combination, THz-DFS is capable of characterizing species at a surface/interface using the state-of-the-art EOS detection scheme.
Fig. 3. THz-DFS of the 2DEG at STO interfaces. (a) DF spectra under the pss polarization combination for bare STO(001) (gray), 20-nm-thick Al2O3/STO(001) (dark blue), and 50-nm-thick SiO2/STO(001) (blue). (b) DF spectra under the pss polarization combination for 2-unit-cell LAO/STO(001) (green) and 6-unit-cell LAO/STO(001) (light blue).
Fig. 4. Surface potential of STO measured by THz-DFS. (a) DF spectra under the ppp polarization combination, stacked from bottom to top for bare STO, SiO2/STO, Al2O3/STO, 2-unit-cell LAO/STO, and 6-unit-cell LAO/STO. Inset: comparison between the spectra of 2-unit-cell LAO/STO and 6-unit-cell LAO/STO. (b) Summary chart of the surface/interface potential of STO(001) under various conditions.

Discussion

Our work establishes THz-DFS as a viable surface-specific nonlinear optical spectroscopic method for probing low-frequency resonances on a surface or at a buried interface. As a demonstration, the STO(001) surface and its interfaces with different heterogeneous layers were investigated. The 2DEG emerging at the interface and the TO1 phonon polarized in the depletion layer were observed. Furthermore, the sensitivity of THz-DFS satisfies the detection requirement for sub-monolayer surface species in general when the state-of-the-art EOS technique is used. In contrast, the well-established sum-frequency spectroscopy has found successful applications in probing elementary vibrations in the mid-IR region, e.g., in chemical and biological materials composed of light elements. Our approach opens up new opportunities for exploring low-frequency vibrations and emergent species on surfaces or at buried interfaces in various environments.

As an outlook, the detection bandwidth of the THz-DFS demonstrated here is limited by the EO crystal. Using a high-quality organic EO crystal, the detection bandwidth can reach beyond 5 THz. Research in this frequency range is intriguing because collective excitations at the interfaces of various condensed matter systems occur between 5-15 THz, such as the superconducting gap, heavy-fermion plasmons, and the soft mode in ferroelectrics.
Acknowledgement

The authors would like to thank Rui Peng and Tong Zhang at Fudan University for … This work was supported in part by grants (No. 11874123, No. 12221004) and the Shanghai Science and Technology Committee (No. 20ZR1406000).
References

1. Zubko, P.; Gariglio, S.; Gabay, M.; Ghosez, P.; Triscone, J. M. Interface Physics in Complex Oxide Heterostructures. Annual Review of Condensed Matter Physics 2011, 2, 141-165. DOI: 10.1146/annurev-conmatphys-062910-140445.
2. Hwang, H. Y.; Iwasa, Y.; Kawasaki, M.; Keimer, B.; Nagaosa, N.; Tokura, Y. Emergent phenomena at oxide interfaces. Nature Materials 2012, 11 (2), 103-113. DOI: 10.1038/nmat3223.
3. Pai, Y. Y.; Tylan-Tyler, A.; Irvin, P.; Levy, J. Physics of SrTiO3-based heterostructures and nanostructures: a review. Reports on Progress in Physics 2018, 81 (3), 036503.
4. Yadav, A. K.; Nelson, C. T.; Hsu, S. L.; Hong, Z.; Clarkson, J. D.; Schlepuetz, C. M.; Damodaran, A. R.; Shafer, P.; Arenholz, E.; Dedon, L. R.; et al. Observation of polar vortices in oxide superlattices. Nature 2016, 530 (7589), 198-201. DOI: 10.1038/nature16463.
5. Wang, Q. Y.; Li, Z.; Zhang, W. H.; Zhang, Z. C.; Zhang, J. S.; Li, W.; Ding, H.; Ou, Y. B.; Deng, P.; Chang, K.; et al. Interface-Induced High-Temperature Superconductivity in Single Unit-Cell FeSe Films on SrTiO3. Chinese Physics Letters 2012, 29 (3), 037402.
6. Zhang, H. M.; Zhang, D.; Lu, X. W.; Liu, C.; Zhou, G. Y.; Ma, X. C.; Wang, L. L.; Jiang, P.; Xue, Q. K.; Bao, X. H. Origin of charge transfer and enhanced electron-phonon coupling in single unit-cell FeSe films on SrTiO3. Nature Communications 2017, 8, 214. DOI: 10.1038/s41467-017-00281-5.
7. Lee, J. J.; Schmitt, F. T.; Moore, R. G.; Johnston, S.; Cui, Y. T.; Li, W.; Yi, M.; Liu, Z. K.; Hashimoto, M.; Zhang, Y.; et al. Interfacial mode coupling as the origin of the enhancement of Tc in FeSe films on SrTiO3. Nature 2014, 515 (7526), 245-248. DOI: 10.1038/nature13894.
8. Li, Q.; Stoica, V. A.; Paściak, M.; Zhu, Y.; Yuan, Y.; Yang, T.; McCarter, M. R.; Das, S.; Yadav, A. K.; Park, S.; et al. Subterahertz collective dynamics of polar vortices. Nature 2021, 592 (7854), 376-380. DOI: 10.1038/s41586-021-03342-4.
9. Takata, T.; Jiang, J.; Sakata, Y.; Nakabayashi, M.; Shibata, N.; Nandal, V.; Seki, K.; Hisatomi, T.; Domen, K. Photocatalytic water splitting with a quantum efficiency of almost unity. Nature 2020, 581 (7809), 411-414.
10. Wrighton, M. S.; Ellis, A. B.; Wolczanski, P. T.; Morse, D. L.; Abrahamson, H. B.; Ginley, D. S. Strontium titanate photoelectrodes. Efficient photoassisted electrolysis of water at zero applied potential. Journal of the American Chemical Society 1976, 98 (10), 2774-2779.
11. Wagner, F.; Somorjai, G. Photocatalytic hydrogen production from water on Pt-free SrTiO3 in alkali hydroxide solutions. Nature 1980, 285 (5766), 559-560.
12. Shen, Y. R. Fundamentals of Sum-Frequency Spectroscopy; Cambridge University Press, 2016.
13. Dhillon, S. S.; Vitiello, M. S.; Linfield, E. H.; Davies, A. G.; Hoffmann, M. C.; Booske, J.; Paoloni, C.; Gensch, M.; Weightman, P.; Williams, G. P.; et al. The 2017 terahertz science and technology roadmap. Journal of Physics D: Applied Physics 2017, 50 (4), 043001.
14. Salén, P.; Basini, M.; Bonetti, S.; Hebling, J.; Krasilnikov, M.; Nikitin, A. Y.; Shamuilov, G.; Tibai, Z.; Zhaunerchyk, V.; Goryashko, V. Matter manipulation with extreme terahertz light: Progress in the enabling THz technology. Physics Reports 2019, 836-837, 1-74. DOI: 10.1016/j.physrep.2019.09.002.
15. Kadlec, F.; Kuzel, P.; Coutaz, J. L. Optical rectification at metal surfaces. Optics Letters 2004, 29 (22), 2674-2676. DOI: 10.1364/OL.29.002674.
16. Zhang, X. C.; Hu, B. B.; Darrow, J. T.; Auston, D. H. Generation of Femtosecond Electromagnetic Pulses from Semiconductor Surfaces. Applied Physics Letters 1990, 56 (11), 1011-1013. DOI: 10.1063/1.102601.
17. Maysonnave, J.; Huppert, S.; Wang, F.; Maero, S.; Berger, C.; de Heer, W.; Norris, T. B.; De Vaulchier, L. A.; Dhillon, S.; Tignon, J.; et al. Terahertz Generation by Dynamical Photon Drag Effect in Graphene Excited by Femtosecond Optical Pulses. Nano Letters 2014, 14 (10), 5797-5802. DOI: 10.1021/nl502684j.
18. Huang, Y.; Yao, Z.; He, C.; Zhu, L.; Zhang, L.; Bai, J.; Xu, X. Terahertz surface and interface emission spectroscopy for advanced materials. Journal of Physics: Condensed Matter 2019, 31 (15), 153001. DOI: 10.1088/1361-648x/ab00c0.
19. Perets, E. A.; Yan, E. C. Chiral water superstructures around antiparallel β-sheets observed by chiral vibrational sum frequency generation spectroscopy. The Journal of Physical Chemistry Letters 2019, 10 (12), 3395-3401.
20. Choi, W. J.; Yano, K.; Cha, M.; Colombari, F. M.; Kim, J.-Y.; Wang, Y.; Lee, S. H.; Sun, K.; Kruger, J. M.; de Moura, A. F.; et al. Chiral phonons in microcrystals and nanofibrils of biomolecules. Nature Photonics 2022, 16, 366-373. DOI: 10.1038/s41566-022-00969-1.
21. Wen, Y. C.; Zha, S.; Liu, X.; Yang, S. S.; Guo, P.; Shi, G. S.; Fang, H. P.; Shen, Y. R.; Tian, C. S. Unveiling Microscopic Structures of Charged Water Interfaces by Surface-Specific Vibrational Spectroscopy. Physical Review Letters 2016, 116 (1), 016101. DOI: 10.1103/PhysRevLett.116.016101.
22. Ohno, P. E.; Saslow, S. A.; Wang, H.-f.; Geiger, F. M.; Eisenthal, K. B. Phase-referenced nonlinear spectroscopy of the α-quartz/water interface. Nature Communications 2016, 7 (1), 13587. DOI: 10.1038/ncomms13587.
23. Ong, S.; Zhao, X.; Eisenthal, K. B. Polarization of water molecules at a charged interface: second harmonic studies of the silica/water interface. Chemical Physics Letters 1992, 191 (3), 327-335. DOI: 10.1016/0009-2614(92)85309-X.
24. Wang, Y. H.; Ma, J. Y.; Le, J. M.; Su, Y. D.; Tian, C. S. Enhanced sensitivity of terahertz electro-optic sampling based on reflective Brewster window. To be published.
25. Dore, P.; DeMarzi, G.; Paolone, A. Refractive indices of SrTiO3 in the infrared region. International Journal of Infrared and Millimeter Waves 1997, 18 (1), 125-138. DOI: 10.1007/BF02677900.
26. Meirzadeh, E.; Christensen, D. V.; Makagon, E.; Cohen, H.; Rosenhek-Goldian, I.; Morales, E. H.; Bhowmik, A.; Lastra, J. M. G.; Rappe, A. M.; Ehre, D.; et al. Surface Pyroelectricity in Cubic SrTiO3. Advanced Materials 2019, 31 (44), 1904733. DOI: 10.1002/adma.201904733.
27. Liu, W.-T.; Shen, Y. R. Surface Vibrational Modes of α-Quartz(0001) Probed by Sum-Frequency Spectroscopy. Physical Review Letters 2008, 101 (1), 016101. DOI: 10.1103/PhysRevLett.101.016101.
28. Berner, G.; Müller, A.; Pfaff, F.; Walde, J.; Richter, C.; Mannhart, J.; Thiess, S.; Gloskovskii, A.; Drube, W.; Sing, M.; et al. Band alignment in LaAlO3/SrTiO3 oxide heterostructures inferred from hard x-ray photoelectron spectroscopy. Physical Review B 2013, 88 (11), 115111. DOI: 10.1103/PhysRevB.88.115111.
29. Chen, Y. Z.; Bovet, N.; Trier, F.; Christensen, D. V.; Qu, F. M.; Andersen, N. H.; Kasama, T.; Zhang, W.; Giraud, R.; Dufouleur, J.; et al. A high-mobility two-dimensional electron gas at the spinel/perovskite interface of γ-Al2O3/SrTiO3. Nature Communications 2013, 4 (1), 1371. DOI: 10.1038/ncomms2394.
30. Ohtomo, A.; Hwang, H. Y. A high-mobility electron gas at the LaAlO3/SrTiO3 heterointerface. Nature 2004, 427 (6973), 423-426. DOI: 10.1038/nature02308.
31. Kravtsov, V.; AlMutairi, S.; Ulbricht, R.; Kutayiah, A. R.; Belyanin, A.; Raschke, M. B. Enhanced Third-Order Optical Nonlinearity Driven by Surface-Plasmon Field Gradients. Physical Review Letters 2018, 120 (20), 203903. DOI: 10.1103/PhysRevLett.120.203903.
32. Bickel, N.; Schmidt, G.; Heinz, K.; Muller, K. Ferroelectric Relaxation of the SrTiO3(100) Surface. Physical Review Letters 1989, 62 (17), 2009-2011. DOI: 10.1103/PhysRevLett.62.2009.
33. Noguera, C. Polar oxide surfaces. Journal of Physics: Condensed Matter 2000, 12 (31), R367-R410.
34. Zhang, Z.; Yates, J. T. Band Bending in Semiconductors: Chemical and Physical Consequences at Surfaces and Interfaces. Chemical Reviews 2012, 112 (10), 5520-5551. DOI: 10.1021/cr3000626.
35. Schütz, P.; Pfaff, F.; Scheiderer, P.; Chen, Y. Z.; Pryds, N.; Gorgoi, M.; Sing, M.; Claessen, R. Band bending and alignment at the spinel/perovskite γ-Al2O3/SrTiO3 heterointerface. Physical Review B 2015, 91 (16), 165118. DOI: 10.1103/PhysRevB.91.165118.
36. Yu, L.; Zunger, A. A polarity-induced defect mechanism for conductivity and magnetism at polar-nonpolar oxide interfaces. Nature Communications 2014, 5 (1), 5118. DOI: 10.1038/ncomms6118.
37. Zhang, S.; Fu, Z. Y.; Zhu, B. B.; Fan, G. Y.; Chen, Y. D.; Wang, S. J.; Liu, Y. X.; Baltuska, A.; Jin, C.; Tian, C. S.; et al. Solitary beam propagation in periodic layered Kerr media enables high-efficiency pulse compression and mode self-cleaning. Light: Science & Applications 2021, 10 (1), 53. DOI: 10.1038/s41377-021-00495-9.
| []
|
[
"Detecting extrasolar planets from stellar radial velocities using Bayesian evidence",
"Detecting extrasolar planets from stellar radial velocities using Bayesian evidence"
]
| [
"F Feroz \nAstrophysics Group\nCavendish Laboratory\nJJ Thomson AvenueCB3 0HECambridgeUK\n",
"S T Balan \nAstrophysics Group\nCavendish Laboratory\nJJ Thomson AvenueCB3 0HECambridgeUK\n",
"M P Hobson \nAstrophysics Group\nCavendish Laboratory\nJJ Thomson AvenueCB3 0HECambridgeUK\n"
]
| [
"Astrophysics Group\nCavendish Laboratory\nJJ Thomson AvenueCB3 0HECambridgeUK",
"Astrophysics Group\nCavendish Laboratory\nJJ Thomson AvenueCB3 0HECambridgeUK",
"Astrophysics Group\nCavendish Laboratory\nJJ Thomson AvenueCB3 0HECambridgeUK"
]
| [
"Mon. Not. R. Astron. Soc"
]
| Stellar radial velocity (RV) measurements have proven to be a very successful method for detecting extrasolar planets. Analysing RV data to determine the parameters of the extrasolar planets is a significant statistical challenge owing to the presence of multiple planets and various degeneracies between orbital parameters. Determining the number of planets favoured by the observed data is an even more difficult task. Bayesian model selection provides a mathematically rigorous solution to this problem by calculating marginal posterior probabilities of models with different numbers of planets, but the use of this method in extrasolar planetary searches has been hampered by the computational cost of evaluating the Bayesian evidence. Nonetheless, Bayesian model selection has the potential to improve the interpretation of existing observational data and possibly detect yet undiscovered planets. We present a new and efficient Bayesian method for determining the number of extrasolar planets, as well as for inferring their orbital parameters, without having to calculate directly the Bayesian evidence for models containing a large number of planets. Instead, we work iteratively and at each iteration obtain a conservative lower limit on the odds ratio for the inclusion of an additional planet into the model. We apply this method to simulated data-sets containing one and two planets and successfully recover the correct number of planets and reliable constraints on the orbital parameters. We also apply our method to RV measurements of HD 37124, 47 Ursae Majoris and HD 10180. For HD 37124, we confirm that the current data strongly favour a three-planet system. We find strong evidence for the presence of a fourth planet in 47 Ursae Majoris, but its orbital period is suspiciously close to one year, casting doubt on its validity. For HD 10180 we find strong evidence for a six-planet system.
"https://arxiv.org/pdf/1012.5129v2.pdf"
]
| 118,503,059 | 1012.5129 | 2236d39d9bfe00735cf98907e50716a7f17c7594 |
Detecting extrasolar planets from stellar radial velocities using Bayesian evidence
2010
F Feroz
Astrophysics Group
Cavendish Laboratory
JJ Thomson AvenueCB3 0HECambridgeUK
S T Balan
Astrophysics Group
Cavendish Laboratory
JJ Thomson AvenueCB3 0HECambridgeUK
M P Hobson
Astrophysics Group
Cavendish Laboratory
JJ Thomson AvenueCB3 0HECambridgeUK
Detecting extrasolar planets from stellar radial velocities using Bayesian evidence
Mon. Not. R. Astron. Soc
Mon. Not. R. Astron. Soc. 000 (2010). Printed 5 May 2011 (MN LaTeX style file v2.2). Accepted -. Received -; in original form -.

Key words: stars: planetary systems - stars: individual: HD 37124 - stars: individual: 47 Ursae Majoris - stars: individual: HD 10180 - techniques: radial velocities - methods: data analysis - methods: statistical
Stellar radial velocity (RV) measurements have proven to be a very successful method for detecting extrasolar planets. Analysing RV data to determine the parameters of the extrasolar planets is a significant statistical challenge owing to the presence of multiple planets and various degeneracies between orbital parameters. Determining the number of planets favoured by the observed data is an even more difficult task. Bayesian model selection provides a mathematically rigorous solution to this problem by calculating marginal posterior probabilities of models with different numbers of planets, but the use of this method in extrasolar planetary searches has been hampered by the computational cost of evaluating the Bayesian evidence. Nonetheless, Bayesian model selection has the potential to improve the interpretation of existing observational data and possibly detect yet undiscovered planets. We present a new and efficient Bayesian method for determining the number of extrasolar planets, as well as for inferring their orbital parameters, without having to calculate directly the Bayesian evidence for models containing a large number of planets. Instead, we work iteratively and at each iteration obtain a conservative lower limit on the odds ratio for the inclusion of an additional planet into the model. We apply this method to simulated data-sets containing one and two planets and successfully recover the correct number of planets and reliable constraints on the orbital parameters. We also apply our method to RV measurements of HD 37124, 47 Ursae Majoris and HD 10180. For HD 37124, we confirm that the current data strongly favour a three-planet system. We find strong evidence for the presence of a fourth planet in 47 Ursae Majoris, but its orbital period is suspiciously close to one year, casting doubt on its validity. For HD 10180 we find strong evidence for a six-planet system.
… between three estimators: (a) parallel tempering, (b) the ratio estimator, and (c) Restricted Monte Carlo (RMC) for one- and two-planet models. However, for a 3-planet model the three estimators diverged significantly, with the RMC yielding the lowest estimate. Gregory & Fischer (2010) introduced the Nested Restricted Monte Carlo (NRMC) estimator, an improvement on the RMC estimator. The NRMC estimator is expected to provide a conservative lower bound on the Bayesian evidence in higher dimensions. These Bayesian model selection techniques have already resulted in the discovery of previously unknown planets in existing datasets, e.g. Tuomi & Kotiranta (2009) discovered a second planet orbiting HD 11506 and Gregory & Fischer (2010) reported a third planet orbiting 47 Ursae Majoris using Bayesian analysis. Nevertheless, most of the Bayesian model selection techniques employed so far in extrasolar planetary searches have relied on estimates of the Bayesian evidence, with uncertain accuracy. Our aim in this paper is to present a new and efficient method for Bayesian model selection to determine the number of planets favoured by the data, and to estimate their parameters, without having to calculate directly the Bayesian evidence for models containing a large number of planets.
The outline of this paper is as follows. We give a brief introduction to Bayesian inference in Sec. 2 and describe various Bayesian object detection techniques in Sec. 3. Our model for calculating radial velocities is described in Sec. 4. In Sec. 5 we describe our Bayesian analysis methodology, including descriptions of the likelihood and prior probability functions. We apply our method to simulated data in Sec. 6, and to real RV data sets on HD 37124, 47 Ursae Majoris and HD 10180 in Sec. 7. Finally, our conclusions are presented in Sec. 8.
BAYESIAN INFERENCE
Our planet finding methodology is built upon the principles of Bayesian inference, and so we begin by giving a brief summary of this framework. Bayesian inference methods provide a consistent approach to the estimation of a set of parameters Θ in a model (or hypothesis) H for the data D. Bayes' theorem states that
Pr(Θ|D, H) = Pr(D|Θ, H) Pr(Θ|H) / Pr(D|H),    (1)

where Pr(Θ|D, H) ≡ P(Θ) is the posterior probability distribution of the parameters, Pr(D|Θ, H) ≡ L(Θ) is the likelihood, Pr(Θ|H) ≡ π(Θ) is the prior, and Pr(D|H) ≡ Z is the Bayesian evidence.
In parameter estimation, the normalising evidence factor is usually ignored, since it is independent of the parameters Θ, and inferences are obtained by taking samples from the (unnormalised) posterior using standard MCMC sampling methods, where at equilibrium the chain contains a set of samples from the parameter space distributed according to the posterior. This posterior constitutes the complete Bayesian inference of the parameter values, and can be marginalised over each parameter to obtain individual parameter constraints.
In contrast to parameter estimation problems, for model selection the evidence takes the central role and is simply the factor required to normalize the posterior over Θ:
Z = ∫ L(Θ) π(Θ) d^D Θ,    (2)
where D is the dimensionality of the parameter space. As the average of the likelihood over the prior, the evidence is larger for a model if more of its parameter space is likely and smaller for a model with large areas in its parameter space having low likelihood values, even if the likelihood function is very highly peaked. Thus, the evidence automatically implements Occam's razor: a simpler theory with compact parameter space will have a larger evidence than a more complicated one, unless the latter is significantly better at explaining the data. The question of model selection between two models H0 and H1 can then be decided by comparing their respective posterior probabilities given the observed data set D, as follows
R = Pr(H1|D) / Pr(H0|D) = [Pr(D|H1) Pr(H1)] / [Pr(D|H0) Pr(H0)] = (Z1/Z0) × [Pr(H1)/Pr(H0)],    (3)
where Pr(H1)/ Pr(H0) is the a priori probability ratio for the two models, which can often be set to unity but occasionally requires further consideration. The natural logarithm of the ratio of posterior model probabilities (sometimes termed the posterior odds ratio) provides a useful guide to what constitutes a significant difference between two models:
ln R = ln [ Pr(H1|D) / Pr(H0|D) ] = ln [ (Z1/Z0) × Pr(H1)/Pr(H0) ].    (4)
We summarize the convention usually used for model selection in Table 1. Evaluation of the multidimensional integral in Eq. 2 is a challenging numerical task. Standard techniques like thermodynamic integration are extremely computationally expensive which makes evidence evaluation at least an order of magnitude more costly than parameter estimation. Some fast approximate methods have been used for evidence evaluation, such as treating the posterior as a multivariate Gaussian centred at its peak (see e.g. Hobson & McLachlan 2003), but this approximation is clearly a poor one for multimodal posteriors (except perhaps if one performs a separate Gaussian approximation at each mode). The Savage-Dickey density ratio has also been proposed (see e.g. Trotta 2007) as an exact, and potentially faster, means of evaluating evidences, but is restricted to the special case of nested hypotheses and a separable prior on the model parameters. Various alternative information criteria for astrophysical model selection are discussed by Liddle (2007), but the evidence remains the preferred method.
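To make Eqs. (1)-(4) concrete, the following sketch estimates the evidences of two toy models by brute-force Monte Carlo averaging of the likelihood over the prior; this is feasible only in very low dimensions, and the one-parameter problem is invented purely for illustration.

# Toy Bayesian model selection: Z = <L(theta)>_prior (Eq. 2), ln R = ln(Z1/Z0)
# for equal prior model odds (Eqs. 3-4). The data and models are synthetic.
import numpy as np

rng = np.random.default_rng(0)
d = rng.normal(loc=1.0, scale=1.0, size=20)      # synthetic data, true mean 1

def log_like(mu):                                # Gaussian likelihood, known sigma = 1
    return -0.5 * np.sum((d - mu)**2) - 0.5 * len(d) * np.log(2 * np.pi)

# H0: mean fixed at 0 (no free parameters); H1: mean uniform on [-5, 5].
logZ0 = log_like(0.0)
mu = rng.uniform(-5.0, 5.0, 100_000)             # draws from the H1 prior
logL = np.array([log_like(m) for m in mu])
logZ1 = np.logaddexp.reduce(logL) - np.log(len(mu))   # log of the mean likelihood

print(f"ln R = {logZ1 - logZ0:.2f}")             # here ln R > 5: 'strong' on the usual scale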
The nested sampling approach, introduced by Skilling (2004), is a Monte Carlo method targeted at the efficient calculation of the evidence, which also produces posterior inferences as a by-product. Feroz & Hobson (2008) and Feroz et al. (2009b) built on this nested sampling framework and have recently introduced the MULTINEST algorithm, which is very efficient in sampling from posteriors that may contain multiple modes and/or large (curving) degeneracies, and which also calculates the evidence. This technique has greatly reduced the computational cost of Bayesian parameter estimation and model selection, and has already been applied to several model selection problems in astrophysics (see e.g. Feroz et al. 2009c). We employ this technique in this paper.
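For intuition, here is a deliberately simplified one-dimensional nested-sampling sketch in the spirit of Skilling (2004), applied to the same toy problem as above. MULTINEST replaces the naive rejection step below with ellipsoidal sampling and handles multimodal posteriors; this toy version is illustrative only.

# Toy nested sampling: live points shrink the prior volume X by ~1/n_live per
# iteration, and the evidence accumulates as Z = sum_i L_i * (X_{i-1} - X_i).
import numpy as np

rng = np.random.default_rng(1)
d = rng.normal(1.0, 1.0, 20)

def log_like(mu):
    return -0.5 * np.sum((d - mu)**2) - 0.5 * len(d) * np.log(2 * np.pi)

n_live, lo, hi = 200, -5.0, 5.0
live = rng.uniform(lo, hi, n_live)               # live points drawn from the prior
logL = np.array([log_like(m) for m in live])

logZ, X_prev = -np.inf, 1.0
for i in range(1, 1501):
    worst = np.argmin(logL)                      # lowest-likelihood live point
    X = np.exp(-i / n_live)                      # expected prior-volume shrinkage
    logZ = np.logaddexp(logZ, logL[worst] + np.log(X_prev - X))
    X_prev = X
    while True:                                  # replace it by a new prior draw
        mu = rng.uniform(lo, hi)                 # inside the hard likelihood
        if log_like(mu) > logL[worst]:           # constraint (rejection: toy only)
            break
    live[worst], logL[worst] = mu, log_like(mu)

logZ = np.logaddexp(logZ, logL.max() + np.log(X_prev))  # crude live-point remainder
print(f"ln Z (nested sampling) = {logZ:.2f}")    # agrees with the brute-force estimate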
BAYESIAN OBJECT DETECTION
To detect and characterise an unknown number of objects in a dataset the Bayesian purist would attempt to infer simultaneously the full set of parameters Θ = {N obj , Θ1, Θ2, · · · , ΘN obj , Θn}, where N obj is the (unknown) number of objects, Θi are the parameters values associated with the ith object, and Θn is the set of (nuisance) parameters common to all the objects. In particular, this approach allows for the inclusion of an informative prior (if available) on N obj . The crucial complication inherent in this approach, however, is that the dimensionality of parameter space is variable and therefore the analysis method should be able to move between spaces of different dimensionality. Such techniques are discussed in Hobson & McLachlan (2003). Nevertheless, due to this additional complexity of variable dimensionality, the techniques are generally extremely computationally intensive.
An alternative and algorithmically simpler approach for achieving virtually the same result 'by hand' is instead to consider a series of models HN obj, each with a fixed number of objects, i.e. with N obj = 0, 1, 2, . . .. One then infers N obj by identifying the model with the largest marginal posterior probability Pr(HN obj |D). The probability associated with N obj = 0 is often called the 'null evidence' and provides a baseline for comparison of different models. Indeed, this approach has been adopted previously in exoplanet studies (see e.g. Gregory & Fischer (2010)), albeit using only lower-bound estimates of the Bayesian evidence for each model. Assuming that there are np parameters per object and nn (nuisance) parameters common to all the objects, for N obj objects there would be N obj np + nn parameters to be inferred. Thus, the dimensionality of the problem, and consequently the volume of the parameter space, increases almost linearly with N obj. Along with this increase in dimensionality, the complexity of the problem also increases due to the rapid growth in the number of modes as a result of the counting degeneracy: e.g. for N obj = 2 and Θ = {Θ1, Θ2, Θn}, where Θ1 and Θ2 are the parameter values associated with the first and second objects respectively and Θn is the set of nuisance parameters, one would get the same value for the likelihood L(Θ) by simply rearranging Θ as {Θ2, Θ1, Θn}, and therefore there should be at least twice as many modes for N obj = 2 as for N obj = 1. Similarly, there are n! times as many modes for N obj = n as for N obj = 1. This increase in dimensionality and severe complexity of the posterior makes it very difficult to evaluate the Bayesian evidence, even approximately. In exoplanet analyses, we have found that MULTINEST is typically capable of evaluating the evidence accurately for systems with up to 3 planets. If 4 or more planets are present, MULTINEST still maps out the posterior distribution sufficiently well to obtain reliable parameter estimates, but can begin to produce inaccurate evidence estimates. Thus, even this approach to Bayesian object detection is of limited applicability in exoplanet studies.
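The counting degeneracy itself is easy to demonstrate numerically: permuting the per-planet parameter blocks leaves the model prediction, and hence the likelihood, unchanged, so a sampler faces n! equivalent copies of every mode. A small sketch with an invented two-planet, circular-orbit toy model:

# Label-switching degeneracy: exchanging the (K, P) blocks of planets 1 and 2
# leaves the radial-velocity model, and therefore the likelihood, invariant.
import numpy as np
from math import factorial

t = np.linspace(0.0, 100.0, 50)                  # observation epochs (days)

def rv_model(theta):                             # theta = (K1, P1, K2, P2); toy
    K1, P1, K2, P2 = theta                       # circular orbits, sinusoids only
    return K1 * np.sin(2 * np.pi * t / P1) + K2 * np.sin(2 * np.pi * t / P2)

theta = np.array([10.0, 12.3, 5.0, 47.0])
swapped = np.array([5.0, 47.0, 10.0, 12.3])      # planet labels exchanged

assert np.allclose(rv_model(theta), rv_model(swapped))
print("identical predictions ->", factorial(2), "equivalent modes for 2 planets")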
If the contributions to the data from each object are reasonably well separated and the correlations between parameters across objects are minimal, one can use the alternative approach of setting N obj = 1 (see e.g. Hobson & McLachlan 2003), so that the model for the data consists of only a single object. This does not, however, restrict us to detecting only one object in the data. By modelling the data in such a way, we would expect the posterior distribution to possess numerous peaks, each corresponding to the location of one of the objects. Consequently, the high dimensionality of the problem is traded for high multi-modality in this approach, which, depending on the statistical method employed for exploring the parameter space, could potentially simplify the problem enormously. For an application of this approach in detecting galaxy clusters from weak lensing data-sets see . Unfortunately, for extrasolar planet detection using RV data this approach cannot be utilized, as the nature of the data itself makes the parameters of different planets in a multi-planet system correlated.
We therefore propose here a new general approach to Bayesian object detection that is applicable to exoplanet studies, even for systems with a large number of planets. Motivated by the fact that, as discussed above and in Sec. 2, evaluation of the evidence integral is a far more computationally demanding procedure than parameter estimation, we consider a method based on the analysis of the residuals remaining after the detection of N_obj objects and their subsequent inclusion in the model, as outlined below. In what follows, we will simply assume that the prior ratio in Eq. 4 is unity, so that the posterior odds ratio R coincides with the evidence ratio. In principle, however, one could adopt a more informative prior ratio given a theory of planet formation that predicted the probability distribution for the number of planets.
Our approach to Bayesian object detection is as follows. Let us first denote the observed (fixed) data by D = {d_1, d_2, · · · , d_M}, with the associated uncertainties being {σ_1, σ_2, · · · , σ_M}. In the general case that N_obj = n, let us define the random variable D_n as the data that would be collected if the model H_n were correct, and also the random variable R_n ≡ D − D_n, which gives the data residuals in this case. If we set N_obj = n and analyse D to obtain samples from the posterior distribution of the model parameters Θ, using MULTINEST, then from these samples it is straightforward to obtain samples from the posterior distribution of the data residuals R_n. This is given by
Pr(R_n | D, H_n) = ∫ Pr(R_n | Θ, H_n) Pr(Θ | D, H_n) dΘ,   (5)

where

Pr(R_n | Θ, H_n) = ∏_{i=1}^{M} (2πσ_i²)^{−1/2} exp( −[D_i − R_i − D_{p,i}(Θ)]² / (2σ_i²) ),   (6)
and D_p(Θ) is the (noiseless) predicted data-set corresponding to the parameter values Θ. It should be noted that (5) and (6) contain no approximations. In principle, one could then perform a kernel estimation procedure on the samples obtained to produce a (possibly analytic) functional form for Pr(R_n | D, H_n). For simplicity, we assume here that the residuals are independently Gaussian distributed with means R_n = {r_1, r_2, · · · , r_M} and standard deviations {σ′_1, σ′_2, · · · , σ′_M} obtained from the samples; we find that this is a good approximation.
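To make this step concrete, the Gaussian summary of the residual posterior can be computed directly from the MULTINEST posterior samples. The following is a minimal sketch written against equations (5) and (6); the function and variable names are ours, not part of any actual pipeline:

```python
import numpy as np

def residual_summary(d_obs, sigma, model_fn, theta_samples, rng=None):
    """Summarise Pr(R_n | D, H_n) by its pointwise mean and standard
    deviation, as assumed in the text.

    d_obs, sigma  : observed data d_i and uncertainties sigma_i.
    model_fn      : theta -> noiseless predicted data D_p(theta).
    theta_samples : posterior samples of theta under the n-object model.
    """
    rng = rng or np.random.default_rng()
    resid = []
    for theta in theta_samples:
        # Draw D_n ~ Pr(. | theta, H_n) = D_p(theta) + noise, eq. (6),
        # and form the residual R_n = D - D_n.
        d_n = model_fn(theta) + rng.normal(0.0, sigma)
        resid.append(d_obs - d_n)
    resid = np.asarray(resid)
    return resid.mean(axis=0), resid.std(axis=0)
```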
These residual data R_n, with their associated uncertainties, can then be analysed with N_obj = 0, giving the 'residual null evidence' Z_{r,0}, which is compared with the evidence value Z_{r,1} obtained by analysing R_n with N_obj = 1. We denote the natural logarithm of the evidence ratio Z_{r,1}/Z_{r,0} between these two models by ∆ ln Z_r. We are thus comparing the model H_0, in which the residual data contain no additional planet, to the model H_1, in which an additional planet is favoured.
Our overall procedure is therefore as follows. We first set N_obj = 1 and analyse the original data set D. If, in the analysis of the corresponding residual data, H_1 is favoured over H_0, then the original data D are analysed with N_obj = 2 and the same process is repeated. In this way, N_obj is increased in the analysis of the original data D until H_0 is favoured over H_1 in the analysis of the corresponding residual data. The resulting value of N_obj gives the number of objects favoured by the data. This approach thus only requires the Bayesian evidence to be calculated for the N_obj = 1 model (and the N_obj = 0 model, which is trivial); this reduces the computational cost of the problem significantly. Moreover, in principle, this procedure is exact. The only approximation made here, for the sake of simplicity, is to assume that Pr(R_n | D, H_n) takes the form of an uncorrelated multivariate Gaussian distribution.
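In pseudocode, the whole procedure reads as follows. This is a sketch only: run_multinest is a hypothetical wrapper returning the log-evidence and posterior samples for a given number of objects, and residual_summary is the helper sketched above:

```python
def detect_objects(d_obs, sigma, model_fn):
    """Iterate N_obj upwards until the residual data favour H0,
    i.e. until Delta ln Z_r = ln(Z_r,1 / Z_r,0) < 0."""
    n = 0
    while True:
        n += 1
        # Analyse the original data with an n-object model.
        logz_n, samples = run_multinest(d_obs, sigma, model_fn, n_obj=n)
        r_mean, r_std = residual_summary(d_obs, sigma, model_fn, samples)
        # Analyse the residual data with 0 and 1 objects.
        logz_r0, _ = run_multinest(r_mean, r_std, model_fn, n_obj=0)
        logz_r1, _ = run_multinest(r_mean, r_std, model_fn, n_obj=1)
        if logz_r1 - logz_r0 < 0.0:  # H0 favoured: no further object
            return n                 # number of objects favoured by the data
```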
In adopting this approach, our rationale is that, if the n-planet model is correct, the corresponding data residuals R_n should be consistent with instrumental noise, perhaps including an additional stellar jitter contribution (see Section 5.1). In this case, the null hypothesis, H_0, should be preferred over the alternative hypothesis, H_1, since the latter supposes that some additional signal, not consistent with noise, is present in the data residuals. If H_1 is preferred, we take this as an indication of further planet signal(s) present in the data, and we therefore re-analyse the original data-set D using an (n + 1)-planet model. In this way, we circumvent the problem that the inclusion of an additional planet in an n-planet model will inevitably affect the best-fit parameters of the original n-planet subset.
4 MODELLING RADIAL VELOCITIES
It is extremely difficult to observe planets at interstellar distances directly, since the planets only reflect the light incident on them from their host star and are consequently many times fainter. Nonetheless, the gravitational force between the planets and their host star results in the planets and the star revolving around their common centre of mass. This produces Doppler shifts in the spectrum of the host star according to its RV, the velocity along the line-of-sight to the observer. Several such measurements, usually over an extended period of time, can then be used to detect extrasolar planets.
Following the formalism given in Balan & Lahav (2009), for N_p planets and ignoring planet-planet interactions, the RV at an instant t_i observed at the jth observatory can be calculated as:
v(t_i, j) = V_j − Σ_{p=1}^{N_p} K_p [sin(f_{i,p} + ϖ_p) + e_p sin(ϖ_p)],   (7)

where V_j = systematic velocity with reference to the jth observatory, K_p = velocity semi-amplitude of the pth planet, ϖ_p = longitude of periastron of the pth planet, f_{i,p} = true anomaly of the pth planet, e_p = orbital eccentricity of the pth planet, P_p = orbital period of the pth planet, and χ_p = fraction of an orbit of the pth planet, prior to the start of data taking, at which periastron occurred.
Note that f_{i,p} is itself a function of e_p, P_p and χ_p. While there is a unique mean line-of-sight velocity of the centre of motion, it is important to have a different velocity reference V_j for each observatory/spectrograph pair, since the velocities are measured differentially relative to a reference frame specific to each observatory. We also model the intrinsic stellar variability s ('jitter') as a source of uncorrelated Gaussian noise in addition to the measurement uncertainties. Therefore for each planet we have five free parameters: K, ϖ, e, P and χ. In addition to these parameters there are two nuisance parameters V and s, common to all the planets.
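As an illustration, equation (7) can be evaluated numerically by solving Kepler's equation for each planet. The sketch below is ours, not the authors' code; in particular, the mean-anomaly convention M = 2π(t/P + χ) is our reading of the definition of χ above:

```python
import numpy as np

def true_anomaly(t, P, e, chi, tol=1e-10):
    """Solve Kepler's equation E - e*sin(E) = M for the true anomaly f(t),
    with mean anomaly M = 2*pi*(t/P + chi) (assumed convention)."""
    M = 2.0 * np.pi * (np.asarray(t, dtype=float) / P + chi)
    E = M.copy()
    for _ in range(100):  # Newton iteration
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2.0),
                            np.sqrt(1 - e) * np.cos(E / 2.0))

def radial_velocity(t, planets, V):
    """Equation (7): RV at times t for a list of planets, each a dict
    with keys K, P, e, w (longitude of periastron) and chi; V is the
    systematic velocity of the observatory the data come from."""
    v = np.full(np.asarray(t, dtype=float).shape, float(V))
    for pl in planets:
        f = true_anomaly(t, pl["P"], pl["e"], pl["chi"])
        v -= pl["K"] * (np.sin(f + pl["w"]) + pl["e"] * np.sin(pl["w"]))
    return v
```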
These orbital parameters can then be used along with the stellar mass ms to calculate the length a of the semi-major axis of the planet's orbit around the centre of mass and the planetary mass m as follows:
a_s sin i = K P √(1 − e²) / (2π),   (8)

m sin i ≈ K m_s^{2/3} P^{1/3} √(1 − e²) / (2πG)^{1/3},   (9)

a ≈ m_s a_s sin i / (m sin i),   (10)
where a_s is the semi-major axis of the stellar orbit about the centre of mass and i is the angle between the direction normal to the planet's orbital plane and the observer's line of sight. Since i cannot be measured with RV data, only a lower bound on the planetary mass m can be estimated.
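For example, equations (8)-(10) can be transcribed directly as follows (a sketch with standard physical constants; plugging in K = 27.73 m/s, P = 154.48 d, e = 0.07 and m_s = 0.85 M⊙ from Table 8 returns m sin i ≈ 0.65 M_J and a ≈ 0.53 AU, consistent with that table):

```python
import numpy as np

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
M_JUP = 1.898e27   # kg
AU = 1.496e11      # m
DAY = 86400.0      # s

def msini_and_a(K, P_days, e, m_star_msun):
    """Equations (8)-(10): minimum planetary mass (in M_J) and orbital
    semi-major axis (in AU) from K [m/s], P [days], e and m_s [M_sun]."""
    P = P_days * DAY
    m_s = m_star_msun * M_SUN
    msini = (K * m_s**(2.0/3.0) * P**(1.0/3.0) * np.sqrt(1.0 - e**2)
             / (2.0 * np.pi * G)**(1.0/3.0))                  # Eq. (9)
    as_sini = K * P * np.sqrt(1.0 - e**2) / (2.0 * np.pi)     # Eq. (8)
    a = m_s * as_sini / msini                                 # Eq. (10)
    return msini / M_JUP, a / AU
```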
5 BAYESIAN ANALYSIS OF RADIAL VELOCITY MEASUREMENTS
There are several RV search programmes looking for extrasolar planets. The RV measurements consist of the time ti of the ith observation, the measured RV vi relative to a reference frame and the corresponding measurement uncertainty σi. These RV measurements can be analysed using Bayes' theorem given in Eq. 1 to obtain the posterior probability distributions of the model parameters discussed in the previous section. We now describe the form of the likelihood and prior probability distributions.
5.1 Likelihood function
As discussed in Gregory (2007a), the errors on RV measurements can be treated as Gaussian and therefore the likelihood function can be written as:
L(Θ) = ∏_i (2π(σ_i² + s²))^{−1/2} exp( −(v(Θ; t_i) − v_i)² / (2(σ_i² + s²)) ),   (11)
where v_i and σ_i are the ith RV measurement and its corresponding uncertainty respectively, v(Θ; t_i) is the predicted RV for the set of parameters Θ, and s is the intrinsic stellar variability. A large value of s can also indicate the presence of additional planets: e.g. if a two-planet system is analysed with a single-planet model, then the velocity variations introduced by the second planet would act like an additional noise term and therefore contribute to s.
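A direct transcription of equation (11) into a log-likelihood (a minimal sketch; the model RVs v(Θ; t_i) would come from equation (7)):

```python
import numpy as np

def log_likelihood(v_model, v_obs, sigma, s):
    """Equation (11): Gaussian log-likelihood with stellar jitter s added
    in quadrature to the quoted measurement uncertainties."""
    var = sigma**2 + s**2
    return -0.5 * np.sum(np.log(2.0 * np.pi * var)
                         + (v_obs - v_model)**2 / var)
```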
5.2 Choice of priors
For parameter estimation, priors become largely irrelevant once the data are sufficiently constraining, but for model selection the prior dependence always remains. Therefore, it is important that priors are selected based on physical considerations. We follow the choice of priors given in Gregory (2007a), as shown in Table 2. The modified Jeffreys prior,
Pr(θ|H) = 1 / [(θ + θ_0) ln(1 + θ_max/θ_0)],   (12)
behaves like a uniform prior for θ ≪ θ_0 and like a Jeffreys prior (uniform in log) for θ ≫ θ_0. We set K_0 = s_0 = 1 m/s and K_max = 2129 m/s, which corresponds to a maximum planet-star mass ratio of 0.01.
Table 2. Prior probability distributions.

Parameter   Prior           Mathematical form                                                        Min      Max
K (m/s)     Mod. Jeffreys   (K + K_0)^{−1} / ln[1 + (K_max/K_0)(P_min/P_i)^{1/3}(1 − e_i²)^{−1/2}]   0        K_max (P_min/P_i)^{1/3} (1 − e_i²)^{−1/2}
V (m/s)     Uniform         1 / (V_max − V_min)                                                      −K_max   K_max
e           Uniform         1                                                                        0        1
ϖ (rad)     Uniform         1/(2π)                                                                   0        2π
χ           Uniform         1                                                                        0        1
s (m/s)     Mod. Jeffreys   (s + s_0)^{−1} / ln(1 + s_max/s_0)                                       0        K_max
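In nested sampling, priors enter through a map from the unit hypercube to the physical parameters. For the modified Jeffreys prior (12) this inverse-CDF map has a closed form; the sketch below is illustrative and not MULTINEST's actual interface:

```python
def mod_jeffreys_ppf(u, theta0, theta_max):
    """Map u in [0, 1) to theta under the modified Jeffreys prior (12).
    Inverting the CDF ln(1 + theta/theta0) / ln(1 + theta_max/theta0) = u
    gives theta = theta0 * ((1 + theta_max/theta0)**u - 1)."""
    return theta0 * ((1.0 + theta_max / theta0) ** u - 1.0)

def uniform_ppf(u, lo, hi):
    """Map u in [0, 1) to a uniform prior on [lo, hi]."""
    return lo + u * (hi - lo)
```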
6 APPLICATION TO SIMULATED DATA
In this section, we apply our method to two sets of simulations, one with only one planet in the data and the other with two planets. Our aim here is to test our new methodology for Bayesian object detection, in particular the use of the Bayesian evidence in determining the correct number of planets. In particular, we analyse the same simulations used in Balan & Lahav (2009), which were obtained by calculating the radial velocities using (7) for the 1-planet and 2-planet models respectively. Gaussian noise with µ = 0.0 m/s and σ = 2.0 m/s was then added to the resultant radial velocities.
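For illustration, data of this kind can be generated as follows (a sketch reusing the radial_velocity function above; the observation times and their number are hypothetical, while the 1-planet parameters are the 'True' column of Table 4 and the noise level matches the text):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 1500.0, size=40))      # hypothetical epochs
planet = {"K": 60.0, "P": 700.0, "e": 0.38, "w": 3.10, "chi": 0.67}
v_true = radial_velocity(t, [planet], V=12.0)       # Eq. (7)
v_obs = v_true + rng.normal(0.0, 2.0, size=t.size)  # sigma = 2.0 m/s
sigma = np.full(t.size, 2.0)
```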
6.1 One-planet simulation
The evidence and jitter values obtained in the analysis of the 1-planet simulation are presented in Table 3. Here ∆ ln Z denotes the natural logarithm of the evidence ratio Z_{N_p}/Z_0, where Z_0 is the evidence for N_p = 0. ∆ ln Z_r is the natural logarithm of the evidence ratio Z_{r,1}/Z_{r,0}, where Z_{r,1} and Z_{r,0} are the evidence values obtained by analysing the residual data, after subtracting N_p planets as discussed in Sec. 3, with 1 and 0 planets respectively. ∆ ln Z therefore gives the evidence in favour of N_p planets, while ∆ ln Z_r gives the evidence in favour of there being an additional planet after N_p planets have already been found and removed from the data.
The evidence values listed in Table 3 should be compared with the scale given in Table 1. It is clear that there is overwhelming evidence for the presence of 1 planet in the data. The negative ∆ ln Zr value further indicates that there is no evidence for the presence of any additional planets. Furthermore, the logarithm of the evidence for the 2-planet model was calculated to be 81.73 ± 0.16, which is lower than the logarithm of the evidence for the 1-planet model listed in Table 3, providing further support for the 1-planet model. Adopting the 1-planet model, therefore, the resulting estimated parameter values are listed in Table 4 and are in excellent agreement with the true values used to generate the simulation.
6.2 Two-planet simulation
The evidence and jitter values obtained in the analysis of the 2-planet simulation are presented in Table 5. One can see that for N_p = 1, the evidence value is quite large, but ∆ ln Z_r gives a very clear indication of the presence of an additional planet. The jitter s for N_p = 1 is also quite large. The presence of a second planet is confirmed by the ∆ ln Z value for N_p = 2, which is almost 10 ln units higher than for N_p = 1. The logarithm of the evidence for the 3-planet model was calculated to be 66.29 ± 0.16, which is lower than for the 2-planet model (see Table 1), thus indicating a preference for the latter. Furthermore, both ∆ ln Z_r and s for N_p = 2 strongly suggest that no additional planet is present. Thus, adopting the 2-planet model, the estimated parameter values obtained are listed in Table 6. Once again they are in excellent agreement with the true values used to generate the simulation.
7 APPLICATION TO REAL DATA
In this section, we apply our Bayesian object detection technique to real RV measurements of HD 37124, 47 Ursae Majoris and HD 10180 and compare our results with those of previous analyses of these systems.
7.1 HD 37124
HD 37124 is a metal-poor G4 dwarf star at a distance of 33 pc with mass 0.85 ± 0.02 M⊙ (Butler et al. 2006; Valenti & Fischer 2005). The first planet orbiting HD 37124 was found by Vogt et al. (2000). Subsequently two further planets were found by Butler et al. (2003) and Vogt et al. (2005) respectively. We use the 52 RV measurements given in Vogt et al. (2005) for our analysis. The RV data are plotted in Fig. 1. We follow the object detection methodology outlined in Sec. 3 and analyse the RV data, starting with N_p = 1 and increasing it until the residual evidence ratio ∆ ln Z_r < 0. The resulting evidence and jitter values are presented in Table 7. We can clearly see that N_p = 3 is the favoured model, with both the residual evidence ratio and the jitter values strongly implying that no additional planets are contributing to the data. Adopting the 3-planet model, the estimated parameter values are listed in Table 8, while the 1-D marginalised posterior probability distributions are shown in Fig. 2. The mean RV curve for the 3-planet model is overlaid on the RV measurements in Fig. 1.
Comparing our parameter values with those given in Vogt et al. (2005), we see that our parameter estimates for planets HD 37124 b and HD 37124 c are in very good agreement. However, our orbital time period for HD 37124 d is about 100 days lower and our estimated eccentricity is somewhat higher. The main reason for this discrepancy is that Vogt et al. (2005) fixed the eccentricity of HD 37124 d at 0.2, a value chosen to fulfil the dynamical stability requirement. Goździewski et al. (2006) also fitted a 3-planet model for HD 37124, and our parameter estimates for all three planets are in very good agreement with theirs.
7.2 47 Ursae Majoris
47 Ursae Majoris is a solar analog, a yellow dwarf star at a distance of 14.06 pc with mass 1.06 ± 0.02 M⊙ (Takeda et al. 2007). The first planet orbiting 47 Ursae Majoris, with an orbital period of 1090 days, was found by Butler & Marcy (1996). A second companion to 47 Ursae Majoris, with an orbital period of 2594 ± 90 days, was discovered by Fischer et al. (2002). Subsequently the combined RV data for 47 Ursae Majoris from the Lick Observatory, spanning 21.6 years, and from the 9.2-m Hobby-Eberly Telescope (HET) and 2.7-m Harlan J. Smith (HJS) telescopes of the McDonald Observatory (Wittenmyer et al. 2009), were analysed by Gregory & Fischer (2010), and strong evidence was found in favour of a three-planet system. We analyse the same combined data-set.
The RV data are plotted in Fig. 3. Gregory & Fischer (2010) analysed the RV data of this system both by ignoring the residual velocity offsets associated with dewar changes and by incorporating the dewar velocity offsets as additional unknown parameters, and found the results to be consistent. We therefore ignore the velocity offsets associated with dewar changes and fit for three velocity offsets V_L, V_HET and V_HJS associated with the Lick, HET and HJS telescopes respectively.
We follow the object detection methodology outlined in Sec. 3 and analyse the RV data, starting with N_p = 1 and increasing it until the residual evidence ratio ∆ ln Z_r < 0. The resulting evidence and jitter values are presented in Table 9. We can clearly see that N_p = 4 is the favoured model, with the residual evidence ratio strongly implying that no additional planets are contributing to the data. Our detection of the fourth planet contradicts the analysis of Gregory & Fischer (2010), which did not find a well-defined peak for the fourth period using the combined Lick, HET and HJS data-sets. They did, however, find the fourth planet using only the Lick data-set, but their calculated upper limit on the false alarm probability for the presence of the fourth planet, of ≈ 0.5, was deemed too high. Our detected fourth planet has a best-fit orbital period of 369.7 days, consistent with the period of the fourth planet found by Gregory & Fischer (2010) in the Lick-only data. Nonetheless, this period is suspiciously close to one year, indicating that it might be an artefact of the data reduction. We therefore discuss the results obtained from the 3-planet model in the rest of this section.
Adopting the 3-planet model, the estimated parameter values are listed in Table 10, while the 1-D marginalised posterior probability distributions are shown in Fig. 4. The mean RV curve for the 3-planet model is overlaid on the RV measurements in Fig. 3. There is fairly good agreement between our parameter constraints and those presented by Gregory & Fischer (2010).
7.3 HD 10180
HD 10180 is a G1 V type star at a distance of 39 pc with mass 1.06 ± 0.05 M⊙ (Lovis et al. 2010). Using the RV data from the HARPS instrument (Mayor et al. 2003), Lovis et al. (2010) recently reported at least five and as many as seven planets orbiting this star. There has been much interest in the possible seventh planet, as its minimum mass as reported by Lovis et al. (2010) is 1.4 M⊕. We analyse the same HARPS data-set after subtracting a mean radial velocity of 3.55302 km/s from it. The resultant RV data are plotted in Fig. 6. The evidence and jitter values are presented in Table 11. We can clearly see that N_p = 6 is the favoured model, with the residual evidence ratio strongly implying that the residual data consist of noise only. Adopting the 6-planet model, the estimated parameter values are listed in Table 12, while the 1-D marginalised posterior probability distributions are shown in Fig. 5. The mean RV curve for the 6-planet model is overlaid on the RV measurements in Fig. 6. It can be seen that our orbital parameters are in general in reasonably good agreement with the ones presented in Lovis et al. (2010). Lovis et al. (2010) found fairly strong peaks with periods of 1.178 and 6.51 days in the periodogram of the residuals of the 6-planet Keplerian model. They noted that these two peaks are aliases of each other with the 1 sidereal day period (|1/6.51 − 1.0027| ≈ 1/1.178). Arguing that it is unlikely for the system to be dynamically stable with two planets having P = 5.76 days and P = 6.51 days, they concluded that if the 7th signal is caused by a planet, it is likely to have P = 1.178 days. Meanwhile, they were able neither to rule out conclusively nor to confirm the presence of the 7th planet. Our analysis of the residual data of the 6-planet model did reveal several peaks in the posterior distribution with periods around 6.51 and 1 days, but, as can be seen from the value of the residual evidence in Table 11, they were not found to be sufficiently significant. We therefore rule out the presence of any additional planets contributing to the RV data.
8 CONCLUSIONS
We have presented a new and efficient method to detect extrasolar planets from RV measurements. Our method is not only able to fit for a specific number of planets, but can also infer the number of planets from the data using Bayesian model selection. We have successfully applied our method to simulated data-sets, as well as to the real systems HD 37124, 47 Ursae Majoris and HD 10180. Our method can potentially identify many undiscovered extrasolar planets in existing RV data-sets. One drawback of our method is that it ignores planet-planet interactions, but these interactions are important only for a very small fraction of planetary systems. Moreover, our basic methodology can be extended to include such interactions; this will be undertaken in future work. Another important avenue of research in extrasolar planet searches is to perform a coherent analysis using different data-sets, e.g. by jointly analysing the RV data and light curves for the same system. This would enable us to place better constraints on the planetary parameters and also to learn about the physical structure of the planets. Once again, our basic analysis technique can easily be extended to perform a joint analysis of data-sets of different types. We plan to extend our approach by incorporating light-curve data in a forthcoming paper.
Pr(Θ|D, H) = Pr(D|Θ, H) Pr(Θ|H) / Pr(D|H),   (1)

Figure 1. Radial velocity measurements, with 1σ error bars, and the mean fitted radial velocity curve with three planets for HD 37124.

Figure 2. 1-D marginalised posterior probability distributions for the parameters of the three planets found orbiting HD 37124.

Figure 3. Radial velocity measurements, with 1σ error bars, and the mean fitted radial velocity curve with three planets for 47 Ursae Majoris.

Figure 4. 1-D marginalised posterior probability distributions for the parameters of the three planets found orbiting 47 Ursae Majoris.

Figure 5. 1-D marginalised posterior probability distributions for the parameters of the six planets found orbiting HD 10180.

Figure 6. Top panel: the radial velocity measurements (after subtracting a mean RV of 3.55302 km/s), with 1σ error bars. Bottom panel: a blow-up of the mean fitted radial velocity curve with six planets for HD 10180.
Table 1. The scale we use for the interpretation of model probabilities.

|∆ ln R|   Odds        Probability   Remark
< 1.0      3 : 1       < 0.750       Inconclusive
1.0        ∼ 3 : 1     0.750         Weak Evidence
2.5        ∼ 12 : 1    0.923         Moderate Evidence
5.0        ∼ 150 : 1   0.993         Strong Evidence
Table 3. The evidence and jitter values for the 1-planet simulation.

N_p   ∆ ln Z          ∆ ln Z_r        s (m/s)
1     82.29 ± 0.15    −1.33 ± 0.13    0.42 ± 0.35
Table 4. True and estimated parameter values for the 1-planet simulation. The estimated values are quoted as µ ± σ, where µ and σ are the posterior mean and standard deviation respectively.

Parameter   True     Estimate
P (days)    700.00   705.09 ± 12.71
K (m/s)     60.00    60.39 ± 0.56
e           0.38     0.38 ± 0.01
ϖ (rad)     3.10     3.10 ± 0.03
χ           0.67     0.67 ± 0.05
V (m/s)     12.00    11.90 ± 0.45
s (m/s)     0.00     0.42 ± 0.35

Table 5. The evidence and jitter values for the 2-planet simulation.

N_p   ∆ ln Z          ∆ ln Z_r        s (m/s)
1     41.92 ± 0.14    14.82 ± 0.14    7.47 ± 1.13
2     67.31 ± 0.16    −1.45 ± 0.13    0.51 ± 0.41
Table 7. The evidence and jitter values for the system HD 37124.

Table 8. Estimated parameter values for the three planets found orbiting HD 37124. The estimated values are quoted as µ ± σ, where µ and σ are the posterior mean and standard deviation respectively. The numbers in parentheses are the maximum-likelihood parameter values.

Parameter       HD 37124 b        HD 37124 c        HD 37124 d
P (days)        154.48 ± 0.14     853.70 ± 10.02    2195.48 ± 99.06
                (154.39)          (855.22)          (2156.73)
K (m/s)         27.73 ± 1.06      14.16 ± 1.26      14.52 ± 1.96
                (28.38)           (14.15)           (14.90)
e               0.07 ± 0.03       0.08 ± 0.06       0.43 ± 0.09
                (0.10)            (0.04)            (0.45)
ϖ (rad)         1.41 ± 1.57       4.07 ± 1.58       3.47 ± 0.35
                (0.70)            (5.10)            (3.78)
χ               0.72 ± 0.13       0.44 ± 0.35       0.29 ± 0.06
                (0.74)            (0.04)            (0.25)
m sin i (M_J)   0.64 ± 0.02       0.58 ± 0.05       0.73 ± 0.07
                (0.66)            (0.58)            (0.75)
a (AU)          0.53 ± 0.00       1.66 ± 0.01       3.11 ± 0.09
                (0.53)            (1.66)            (3.08)

Table 9. The evidence and jitter values for the system 47 Ursae Majoris.

N_p   ∆ ln Z_r        s (m/s)
1     98.27 ± 0.25    10.13 ± 0.47
2     23.32 ± 0.25    6.19 ± 0.36
3     4.39 ± 0.25     4.87 ± 0.33
4     −0.77 ± 0.23    4.35 ± 0.33
Table 10. Estimated parameter values for the three planets found orbiting 47 Ursae Majoris. The estimated values are quoted as µ ± σ, where µ and σ are the posterior mean and standard deviation respectively. The numbers in parentheses are the maximum-likelihood parameter values.

Parameter       47 UMa b           47 UMa c           47 UMa d
P (days)        1078.26 ± 1.83     2293.17 ± 79.39    14674.55 ± 5925.37
                (1078.69)          (2228.61)          (17217.04)
K (m/s)         49.49 ± 1.53       8.49 ± 1.30        13.52 ± 1.09
                (51.22)            (10.18)            (13.42)
e               0.03 ± 0.01        0.32 ± 0.18        0.24 ± 0.16
                (0.04)             (0.55)             (0.36)
ϖ (rad)         4.32 ± 0.74        2.95 ± 1.32        2.37 ± 2.37
                (4.29)             (2.42)             (0.32)
χ               0.39 ± 0.11        0.64 ± 0.28        0.58 ± 0.19
                (0.41)             (0.75)             (0.69)
m sin i (M_J)   2.59 ± 0.09        0.53 ± 0.05        1.58 ± 0.17
                (2.71)             (0.57)             (1.66)
a (AU)          2.10 ± 0.02        3.48 ± 0.08        11.81 ± 2.99
                (2.11)             (3.43)             (13.40)
Table 11. The evidence and jitter values for the system HD 10180.

N_p   ∆ ln Z_r        s (m/s)
1     24.84 ± 0.17    5.64 ± 0.29
2     9.46 ± 0.18     4.55 ± 0.23
3     63.47 ± 0.17    3.96 ± 0.20
4     45.47 ± 0.17    2.45 ± 0.13
5     4.49 ± 0.17     1.58 ± 0.09
6     −0.73 ± 0.17    1.36 ± 0.07
ACKNOWLEDGEMENTS

We would like to thank the referee, Phil Gregory, for useful comments on the paper and Pedro Carvalho for useful discussions regarding multiple object detection. This work was carried out largely on the COSMOS UK National Cosmology Supercomputer at DAMTP, Cambridge and the Darwin Supercomputer of the University of Cambridge High Performance Computing Service (http://www.hpc.cam.ac.uk/), provided by Dell Inc. using Strategic Research Infrastructure Funding from the Higher Education Funding Council for England. FF is supported by a Research Fellowship from Trinity Hall, Cambridge. STB acknowledges support from the Isaac Newton Studentship.
REFERENCES

Balan S. T., Lahav O., 2009, MNRAS, 394, 1936
Butler R. P., Marcy G. W., 1996, ApJ, 464, L153+
Butler R. P., Marcy G. W., Vogt S. S., Fischer D. A., Henry G. W., Laughlin G., Wright J. T., 2003, ApJ, 582, 455
Butler R. P., Wright J. T., Marcy G. W., Fischer D. A., Vogt S. S., Tinney C. G., Jones H. R. A., Carter B. D., Johnson J. A., McCarthy C., Penny A. J., 2006, ApJ, 646, 505
Clyde M. A., Berger J. O., Bullard F., Ford E. B., Jefferys W. H., Luo R., Paulo R., Loredo T., 2007, in Babu G. J., Feigelson E. D., eds, Statistical Challenges in Modern Astronomy IV, ASP Conf. Ser. Vol. 371, Current Challenges in Bayesian Model Choice. pp 224+
Feroz F., Gair J. R., Hobson M. P., Porter E. K., 2009a, Classical and Quantum Gravity, 26, 215003
Feroz F., Hobson M. P., 2008, MNRAS, 384, 449
Feroz F., Hobson M. P., Bridges M., 2009b, MNRAS, 398, 1601
Feroz F., Hobson M. P., Zwart J. T. L., Saunders R. D. E., Grainge K. J. B., 2009c, MNRAS, 398, 2049
Feroz F., Marshall P. J., Hobson M. P., 2008, ArXiv e-prints [arXiv:0810.0781]
Fischer D. A., Marcy G. W., Butler R. P., Laughlin G., Vogt S. S., 2002, ApJ, 564, 1028
Ford E. B., 2005, AJ, 129, 1706
Ford E. B., Gregory P. C., 2007, in Babu G. J., Feigelson E. D., eds, Statistical Challenges in Modern Astronomy IV, ASP Conf. Ser. Vol. 371, Bayesian Model Selection and Extrasolar Planet Detection. pp 189+
Goździewski K., Konacki M., Maciejewski A. J., 2006, ApJ, 645, 688
Gregory P. C., 2005, ApJ, 631, 1198
Gregory P. C., 2007a, MNRAS, 374, 1321
Gregory P. C., 2007b, MNRAS, 381, 1607
Gregory P. C., Fischer D. A., 2010, MNRAS, 403, 731
Hobson M. P., McLachlan C., 2003, MNRAS, 338, 765
Liddle A. R., 2007, MNRAS, 377, L74
Lomb N. R., 1976, Ap&SS, 39, 447
Lovis C., Ségransan D., Mayor M., Udry S., Benz W., Bertaux J., Bouchy F., Correia A. C. M., Laskar J., Lo Curto G., Mordasini C., Pepe F., Queloz D., Santos N. C., 2010, ArXiv e-prints [arXiv:1011.4994]
Mackay D. J. C., 2003, Information Theory, Inference and Learning Algorithms. Cambridge University Press, Cambridge, UK
Mayor M., et al., 2003, The Messenger, 114, 20
Scargle J. D., 1982, ApJ, 263, 835
Skilling J., 2004, in Fischer R., Preuss R., Toussaint U. V., eds, American Institute of Physics Conference Series, Nested Sampling. pp 395-405
Takeda G., Ford E. B., Sills A., Rasio F. A., Fischer D. A., Valenti J. A., 2007, ApJS, 168, 297
Trotta R., 2007, MNRAS, 378, 72
Tuomi M., Kotiranta S., 2009, A&A, 496, L13
Valenti J. A., Fischer D. A., 2005, ApJS, 159, 141
Vogt S. S., Butler R. P., Marcy G. W., Fischer D. A., Henry G. W., Laughlin G., Wright J. T., Johnson J. A., 2005, ApJ, 632, 638
Vogt S. S., Marcy G. W., Butler R. P., Apps K., 2000, ApJ, 536, 902
Wittenmyer R. A., Endl M., Cochran W. D., Levison H. F., Henry G. W., 2009, ApJS, 182, 97
Estimates on Green functions and Schrödinger-type equations for non-symmetric diffusions with measure-valued drifts

Panki Kim ([email protected])
Department of Mathematics, Seoul National University, Seoul 151-742, Republic of Korea

Renming Song ([email protected])
Department of Mathematics, University of Illinois, Urbana, IL 61801, USA

arXiv:math/0605557v2 [math.PR] 19 Sep 2006

* The research of this author is supported in part by a joint US-Croatia grant INT 0302167.

AMS 2000 Mathematics Subject Classification: Primary: 58C60, 60J45; Secondary: 60G51, 31C25, 35P15.

Keywords and phrases: Brownian motion, diffusion, diffusion process, non-symmetric diffusion, Kato class, measure-valued drift, transition density, Green function, Lipschitz domain, 3G theorem, Schrödinger operator, heat kernel, boundary Harnack principle, harmonic function.
In this paper, we establish sharp two-sided estimates for the Green functions of non-symmetric diffusions with measure-valued drifts in bounded Lipschitz domains. As consequences of these estimates, we get a 3G type theorem and a conditional gauge theorem for these diffusions in bounded Lipschitz domains. Informally, the Schrödinger-type operators we consider are of the form L + µ · ∇ + ν, where L is uniformly elliptic, µ is a vector-valued signed measure belonging to K_{d,1} and ν is a signed measure belonging to K_{d,2}. In this paper, we establish two-sided estimates for the heat kernels of Schrödinger-type operators in bounded C^{1,1}-domains and a scale invariant boundary Harnack principle for the positive harmonic functions with respect to Schrödinger-type operators in bounded Lipschitz domains.
1 Introduction
This paper is a natural continuation of [11, 12, 14], where diffusions (Brownian motions) with measure-valued drifts were discussed. For a vector-valued signed measure µ belonging to K_{d,1}, a diffusion with measure-valued drift µ is a diffusion process whose generator can be informally written as L + µ · ∇. In this paper we consider Schrödinger-type operators L + µ · ∇ + ν (see below for the definition) and discuss their properties.
In this paper we always assume that d ≥ 3. First we recall the definition of the Kato class K_{d,α} for α ∈ (0, 2]. For any function f on R^d and r > 0, we define

M^α_f(r) = sup_{x∈R^d} ∫_{|x−y|≤r} |f(y)| / |x−y|^{d−α} dy,   0 < α ≤ 2.
In this paper, we mean by a signed measure the difference of two nonnegative measures, at most one of which can have infinite total mass. For any signed measure ν on R^d, we use ν^+ and ν^− to denote its positive and negative parts, and |ν| = ν^+ + ν^− its total variation. For any signed measure ν on R^d and any r > 0, we define

M^α_ν(r) = sup_{x∈R^d} ∫_{|x−y|≤r} |ν|(dy) / |x−y|^{d−α},   0 < α ≤ 2.
Definition 1.1 Let 0 < α ≤ 2. We say that a function f on R d belongs to the Kato class K d,α if lim r↓0 M α f (r) = 0. We say that a signed Radon measure ν on R d belongs to the Kato class K d,α if lim r↓0 M α ν (r) = 0. We say that a d-dimensional vector valued function V = (V 1 , · · · , V d ) on R d belongs to the Kato class K d,α if each V i belongs to the Kato class K d,α . We say that a d-dimensional vector valued signed Radon measure µ = (µ 1 , · · · , µ d ) on R d belongs to the Kato class K d,α if each µ i belongs to the Kato class K d,α .
Rigorously speaking, a function f in K_{d,α} may not give rise to a signed measure ν in K_{d,α}, since it may not give rise to a signed measure at all. However, for the sake of simplicity, we use the convention that whenever we write that a signed measure ν belongs to K_{d,α}, we are implicitly assuming that we are covering the case of all the functions in K_{d,α} as well.
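For orientation, here is a standard example (a well-known fact about Kato classes, not taken from the references above): the measure ν(dy) = |y|^{−β} dy with 0 < β < α belongs to K_{d,α}. Indeed, for x = 0, which sees the worst of the singularity,

∫_{|y|≤r} |y|^{−β} |y|^{−(d−α)} dy = c_d ∫_0^r s^{α−β−1} ds = c_d r^{α−β}/(α − β) → 0 as r ↓ 0,

where c_d denotes the surface measure of the unit sphere in R^d, and the supremum over all x ∈ R^d admits a bound of the same order. For β ≥ α the corresponding integral no longer vanishes as r ↓ 0, so such a measure fails to be in K_{d,α}.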
Throughout this paper we assume that µ = (µ^1, . . . , µ^d) is fixed, with each µ^i being a signed measure on R^d belonging to K_{d,1}. We also assume that the operator L is either L_1 or L_2, where

L_1 := (1/2) Σ_{i,j=1}^d ∂_i(a_{ij} ∂_j)   and   L_2 := (1/2) Σ_{i,j=1}^d a_{ij} ∂_i ∂_j,
with A := (a_{ij}) being C^1 and uniformly elliptic. We do not assume that a_{ij} is symmetric. Informally, when a_{ij} is symmetric, a diffusion process X in R^d with drift µ is a diffusion process in R^d with generator L + µ · ∇. When each µ^i is given by U^i(x)dx for some function U^i, X is a diffusion in R^d with generator L + U · ∇, and it is a solution to the SDE dX_t = dY_t + U(X_t)dt, where Y is a diffusion in R^d with generator L. For a precise definition of a (non-symmetric) diffusion X with drift µ in K_{d,1}, we refer to Section 6 in [12] and Section 1 in [14]. The existence and uniqueness of X were established in [1] (see Remark 6.1 in [1]). In this paper, we will always use X to denote the diffusion process with drift µ.
In [11, 12, 14], we have already studied some potential-theoretic properties of the process X. More precisely, we have established two-sided estimates for the heat kernel of the killed diffusion process X^D and sharp two-sided estimates on the Green function of X^D when D is a bounded C^{1,1} domain; proved a scale invariant boundary Harnack principle for the positive harmonic functions of X in bounded Lipschitz domains; and identified the Martin boundary of X^D in bounded Lipschitz domains.
In this paper, we will first establish sharp two-sided estimates for the Green function of X D when D is a bounded Lipschitz domain. As consequences of these estimates, we get a 3G type theorem and a conditional gauge theorem for X in bounded Lipschitz domains. We also establish two-sided estimates for the heat kernels of Schrödinger-type operators in bounded C 1,1 -domains and a scale invariant boundary Harnack principle for the positive harmonic functions with respect to Schrödinger-type operators in bounded Lipschitz domains. The results of this paper will be used in proving the intrinsic ultracontractivity of the Schrödinger semigroup of X D in [15].
Throughout this paper, for two real numbers a and b, we write a ∧ b := min{a, b} and a ∨ b := max{a, b}. The distance between x and ∂D is denoted by ρ_D(x). In this paper we will use the following convention: the values of the constants r_i, i = 1, . . . , 6, C_0, C_1, M, M_i, i = 1, . . . , 5, and ε_1 will remain the same throughout this paper, while the values of the constants c, c_1, c_2, · · · may change from one appearance to another. In this paper, we use ":=" to denote a definition, which is read as "is defined to be".
2 Green function estimates and 3G theorem
In this section we will establish sharp two-sided estimates for the Green function and a 3G theorem for X in bounded Lipschitz domains. We will first establish some preliminary results for the Green function G D (x, y) of X D . Once we have these results, the proof of the Green function estimates is similar to the ones in [3], [5] and [10]. The main difference is that the Green function G D (x, y) is not (quasi-) symmetric.
For any bounded domain D, we use τ_D to denote the first exit time of D, i.e., τ_D = inf{t > 0 : X_t ∉ D}. Given a bounded domain D ⊂ R^d, we define X^D_t(ω) = X_t(ω) if t < τ_D(ω) and X^D_t(ω) = ∂ if t ≥ τ_D(ω),
where ∂ is a cemetery state. The process X D is called a killed diffusion with drift µ in D. Throughout this paper, we use the convention f (∂) = 0. It is shown in [12] that, for any bounded domain D, X D has a jointly continuous and strictly positive transition density function q D (t, x, y) (see Theorem 2.4 in [12]). In [12], we also showed that there exist positive constants c 1 and c 2 depending on D via its diameter such that for any (t, x, y) ∈ (0, ∞) × D × D,
q_D(t, x, y) ≤ c_1 t^{−d/2} e^{−c_2 |x−y|²/(2t)}   (2.1)
(see Lemma 2.5 in [12]). Let G D (x, y) be the Green function of X D , i.e.,
G_D(x, y) := ∫_0^∞ q_D(t, x, y) dt.

By (2.1), G_D(x, y) is finite for x ≠ y and

G_D(x, y) ≤ c / |x − y|^{d−2}   (2.2)
for some c = c(diam(D)) > 0. From Theorem 3.7 in [12], we see that there exist constants r_1 = r_1(d, µ) > 0 and c = c(d, µ) > 1, depending on µ only via the rate at which max_{1≤i≤d} M^1_{µ^i}(r) goes to zero, such that for r ≤ r_1 and z ∈ R^d,

c^{−1} |x − y|^{−d+2} ≤ G_{B(z,r)}(x, y) ≤ c |x − y|^{−d+2},   x, y ∈ B(z, 2r/3).   (2.3)

Definition 2.1 Suppose U is an open subset of R^d.
(1) A Borel function u defined on U is said to be harmonic with respect to X in U if
u(x) = E_x[u(X_{τ_B})],   x ∈ B,   (2.4)
for every bounded open set B with B ⊂ U ;
(2) A Borel function u defined on U is said to be regular harmonic with respect to X in U if u is harmonic with respect to X in U and (2.4) is true for B = U .
Every positive harmonic function in a bounded domain D is continuous in D (see Proposition 2.10 in [12]). Moreover, for every open subset U of D, we have

E_x[G_D(X_{T_U}, y)] = G_D(x, y),   (x, y) ∈ D × U,   (2.5)

where T_U := inf{t > 0 : X_t ∈ U}. In particular, for every y ∈ D and ε > 0, G_D(·, y) is regular harmonic in D \ B(y, ε) with respect to X (see Theorem 2.9 (1) in [12]). We recall here the scale invariant Harnack inequality from [11].
Theorem 2.2 (Corollary 5.8 in [11]) There exist r_2 = r_2(d, µ) > 0 and c = c(d, µ) > 0, depending on µ only via the rate at which max_{1≤i≤d} M^1_{µ^i}(r) goes to zero, such that for every positive harmonic function f for X in B(x_0, r) with r ∈ (0, r_2), we have

sup_{y∈B(x_0,r/2)} f(y) ≤ c inf_{y∈B(x_0,r/2)} f(y).
Recall that r 1 > 0 is the constant from (2.3).
Lemma 2.3 There exists c > 1 such that for every r ≤ r_1 ∧ r_2, every z with B(z, r) ⊂ D and every x ∈ D \ B(z, r),

sup_{y∈B(z,r/2)} G_D(y, x) ≤ c inf_{y∈B(z,r/2)} G_D(y, x)   (2.6)

and

sup_{y∈B(z,r/2)} G_D(x, y) ≤ c inf_{y∈B(z,r/2)} G_D(x, y).   (2.7)

Proof. Fix x ∈ D \ B(z, r). Since G_D(·, x) is harmonic for X in B(z, r), (2.6) follows from Theorem 2.2. To prove (2.7), note that by (2.3), for w, y ∈ B(z, 2r/3),

c_1^{−1} |w − y|^{−(d−2)} ≤ G_{B(z,r)}(w, y) ≤ G_D(w, y) ≤ c_1 |w − y|^{−(d−2)}.

Thus for w ∈ ∂B(z, 3r/4) and y_1, y_2 ∈ B(z, r/2), we have

G_D(w, y_1) ≤ c_1 (|w − y_2| / |w − y_1|)^{d−2} |w − y_2|^{−(d−2)} ≤ 4^{d−2} c_1^2 G_D(w, y_2).   (2.8)

On the other hand, by (2.5), we have

G_D(x, y) = E_x[ G_D(X_{T_{B(z,3r/4)}}, y) ],   y ∈ B(z, r/2).   (2.9)

Since X_{T_{B(z,3r/4)}} ∈ ∂B(z, 3r/4), combining (2.8)-(2.9), we get

G_D(x, y_1) ≤ 4^{d−2} c_1^2 E_x[ G_D(X_{T_{B(z,3r/4)}}, y_2) ] = 4^{d−2} c_1^2 G_D(x, y_2),   y_1, y_2 ∈ B(z, r/2).
In fact, (2.7) is true for every x ∈ D. ✷
Recall that a bounded domain D is said to be Lipschitz if there is a localization radius R 0 > 0 and a constant Λ 0 > 0 such that for every Q ∈ ∂D, there is a Lipschitz function φ Q : R d−1 → R satisfying |φ Q (x) − φ Q (z)| ≤ Λ 0 |x − z|, and an orthonormal coordinate system CS Q with origin at Q such that
B(Q, R 0 ) ∩ D = B(Q, R 0 ) ∩ {y = (y 1 , · · · , y d−1 , y d ) =: (ỹ, y d ) in CS Q : y d > φ Q (ỹ)}.
The pair (R 0 , Λ 0 ) is called the characteristics of the Lipschitz domain D.
Any bounded Lipschitz domain satisfies the κ-fat property: there exists κ_0 ∈ (0, 1/2], depending on Λ_0, such that for each Q ∈ ∂D and r ∈ (0, R_0) (by choosing R_0 smaller if necessary), D ∩ B(Q, r) contains a ball B(A_r(Q), κ_0 r).
In this section, we fix a bounded Lipschitz domain D with its characteristics (R 0 , Λ 0 ) and κ 0 . Without loss of generality, we may assume that the diameter of D is less than 1.
We recall here the scale invariant boundary Harnack principle for X D in bounded Lipschitz domains from [12].
Theorem 2.4 (Theorem 4.6 in [12]) Suppose D is a bounded Lipschitz domain. Then there exist constants M 1 , c > 1 and r 3 > 0, depending on µ only via the rate at which max 1≤i≤d M 1 µ i (r) goes to zero such that for every Q ∈ ∂D, r < r 3 and any nonnegative functions u and v which are harmonic with respect to X D in D ∩ B(Q, M 1 r) and vanish continuously on
∂D ∩ B(Q, M_1 r), we have

u(x)/v(x) ≤ c u(y)/v(y)   for any x, y ∈ D ∩ B(Q, r).   (2.10)
For any Q ∈ ∂D, we define

∆_Q(r) := {y in CS_Q : φ_Q(ỹ) + 2r > y_d > φ_Q(ỹ), |ỹ| < 2(M_1 + 1)r},
∂_1∆_Q(r) := {y in CS_Q : φ_Q(ỹ) + 2r ≥ y_d > φ_Q(ỹ), |ỹ| = 2(M_1 + 1)r},
∂_2∆_Q(r) := {y in CS_Q : φ_Q(ỹ) + 2r = y_d, |ỹ| ≤ 2(M_1 + 1)r},

where CS_Q is the coordinate system with origin at Q in the definition of Lipschitz domains and φ_Q is the Lipschitz function there. Let M_2 := 2(1 + M_1)√(1 + Λ_0²) + 2 and r_4 := M_2^{−1}(R_0 ∧ r_1 ∧ r_2 ∧ r_3). If z ∈ ∆_Q(r) with r ≤ r_4, then

|Q − z| ≤ |(z̃, φ_Q(z̃)) − (z̃, 0)| + 2r ≤ 2r(1 + M_1)√(1 + Λ_0²) + 2r = M_2 r ≤ M_2 r_4 ≤ R_0.

So ∆_Q(r) ⊂ B(Q, M_2 r) ∩ D ⊂ B(Q, R_0) ∩ D.
Lemma 2.5 There exists a constant c > 1 such that for every Q ∈ ∂D, r < r_4, and any nonnegative functions u and v which are harmonic in D \ B(Q, r) and vanish continuously on ∂D \ B(Q, r), we have

u(x)/u(y) ≤ c v(x)/v(y)   for any x, y ∈ D \ B(Q, M_2 r).   (2.11)
Proof. Throughout this proof, we fix a point Q on ∂D, r < r_4, ∆_Q(r), ∂_1∆_Q(r) and ∂_2∆_Q(r). Fix a ỹ_0 ∈ R^{d−1} with |ỹ_0| = 2(M_1 + 1)r. Since |(ỹ_0, φ_Q(ỹ_0))| > r, u and v are harmonic with respect to X in D ∩ B((ỹ_0, φ_Q(ỹ_0)), 2M_1 r) and vanish continuously on ∂D ∩ B((ỹ_0, φ_Q(ỹ_0)), 2M_1 r). Therefore by Theorem 2.4,

u(x)/u(y) ≤ c_1 v(x)/v(y)   for any x, y ∈ ∂_1∆_Q(r) with x̃ = ỹ = ỹ_0,   (2.12)

for some constant c_1 > 0. Since dist(D ∩ B(Q, r), ∂_2∆_Q(r)) > cr for some c := c(Λ_0), the Harnack inequality (Theorem 2.2) and a Harnack chain argument imply that there exists a constant c_2 > 1 such that

c_2^{−1} < u(x)/u(y), v(x)/v(y) < c_2,   for any x, y ∈ ∂_2∆_Q(r).   (2.13)

In particular, (2.13) is true with y := (ỹ_0, φ_Q(ỹ_0) + 2r), which is also in ∂_1∆_Q(r). Thus (2.12) and (2.13) imply that

c_3^{−1} u(x)/u(y) ≤ v(x)/v(y) ≤ c_3 u(x)/u(y),   x, y ∈ ∂_1∆_Q(r) ∪ ∂_2∆_Q(r),   (2.14)

for some constant c_3 > 0. Now, by applying the maximum principle (Lemma 7.2 in [11]) twice, we get that (2.14) is true for every x ∈ D \ ∆_Q(r) ⊃ D \ B(Q, M_2 r). ✷
Combining Theorem 2.4 and Lemma 2.5, we get a uniform boundary Harnack principle for G_D(x, y) in both variables. Recall that κ_0 is the κ-fat constant of D.

Lemma 2.6 There exist constants c > 1, M > 1/κ_0 and r_0 ≤ r_4 such that for every Q ∈ ∂D and r < r_0, we have, for x, y ∈ D \ B(Q, r) and z_1, z_2 ∈ D ∩ B(Q, r/M),

G_D(x, z_1)/G_D(y, z_1) ≤ c G_D(x, z_2)/G_D(y, z_2)   and   G_D(z_1, x)/G_D(z_1, y) ≤ c G_D(z_2, x)/G_D(z_2, y).   (2.15)

Fix z_0 ∈ D with r_0/M < ρ_D(z_0) < r_0 and let ε_1 := r_0/(12M). For x, y ∈ D, we let r(x, y) := ρ_D(x) ∨ ρ_D(y) ∨ |x − y| and

B(x, y) := {A ∈ D : ρ_D(A) > r(x, y)/M, |x − A| ∨ |y − A| < 5 r(x, y)}   if r(x, y) < ε_1,

and B(x, y) := {z_0} otherwise.
By a Harnack chain argument we get the following from (2.2) and (2.3).
Lemma 2.7 There exists a positive constant C_0 such that G_D(x, y) ≤ C_0 |x − y|^{−d+2} for all x, y ∈ D, and G_D(x, y) ≥ C_0^{−1} |x − y|^{−d+2} if 2|x − y| ≤ ρ_D(x) ∨ ρ_D(y). Let C_1 := C_0 2^{d−2} ρ_D(z_0)^{2−d}.
The above lemma implies that G D (·, z 0 ) and G D (z 0 , ·) are bounded above by C 1 on D \ B(z 0 , ρ D (z 0 )/2). Now we define
g 1 (x) := G D (x, z 0 ) ∧ C 1 and g 2 (y) := G D (z 0 , y) ∧ C 1 .
Using Lemma 2.3 and a Harnack chain argument, we get the following.
Lemma 2.8 For every y ∈ D and x 1 , x 2 ∈ D \ B(y, ρ D (y)/2) with |x 1 − x 2 | ≤ k(ρ D (x 1 ) ∧ ρ D (x 2 )), there exists c := c(D, k) independent of y and x 1 , x 2 such that G D (x 1 , y) ≤ c G D (x 2 , y) and G D (y, x 1 ) ≤ c G D (y, x 2 ).
(2.16)
The next two lemmas follow easily from the result above.
Lemma 2.9
There exists c = c(D) > 0 such that for every x, y ∈ D,
c −1 g 1 (A 1 ) ≤ g 1 (A 2 ) ≤ c g 1 (A 1 ) and c −1 g 2 (A 1 ) ≤ g 2 (A 2 ) ≤ c g 2 (A 1 ), A 1 , A 2 ∈ B(x, y). Lemma 2.10 There exists c = c(D) > 0 such that for every x ∈ {y ∈ D; ρ D (y) ≥ ε 1 /(8M 3 )}, c −1 ≤ g i (x) ≤ c, i = 1, 2.
Using Lemma 2.3, the proof of the next lemma is routine (for example, see Lemma 6.7 in [8]). So we omit the proof.
Lemma 2.11 For any given
c 1 > 0, there exists c 2 = c 2 (D, c 1 , µ) > 0 such that for every |x − y| ≤ c 1 (ρ D (x) ∧ ρ D (y)), G D (x, y) ≥ c 2 |x − y| −d+2 .
In particular, there exists c = c(D, µ) > 0 such that for every |x − y| ≤ (
8M 3 /ε 1 )(ρ D (x) ∧ ρ D (y)), c −1 |x − y| −d+2 ≤ G D (x, y) ≤ c |x − y| −d+2 .
With the preparations above, the following two-sided estimates for G_D are a direct generalization of the estimates of the Green function for symmetric processes (see [5] for a symmetric jump process case).
Theorem 2.12 There exists c := c(D) > 0 such that for every x, y ∈ D,

c^{−1} [g_1(x)g_2(y) / (g_1(A)g_2(A))] |x − y|^{−d+2} ≤ G_D(x, y) ≤ c [g_1(x)g_2(y) / (g_1(A)g_2(A))] |x − y|^{−d+2}   (2.17)
for every A ∈ B(x, y).
Proof. Since the proof is an adaptation of the proofs of Proposition 6 in [3] and Theorem 2.4 in [10], we only give a sketch of the proof for the case ρ_D(x) ≤ ρ_D(y) ≤ (1/(2M))|x − y|. In this case, we have r(x, y) = |x − y|. Let r := (1/2)(|x − y| ∧ ε_1). Choose Q_x, Q_y ∈ ∂D with |Q_x − x| = ρ_D(x) and |Q_y − y| = ρ_D(y). Pick points x_1 = A_{r/M}(Q_x) and y_1 = A_{r/M}(Q_y), so that x, x_1 ∈ B(Q_x, r/M) and y, y_1 ∈ B(Q_y, r/M). Then one can easily check that |z_0 − Q_x| ≥ r and |y − Q_x| ≥ r. So by the first inequality in (2.15), we have

c_1^{−1} G_D(x_1, y)/g_1(x_1) ≤ G_D(x, y)/g_1(x) ≤ c_1 G_D(x_1, y)/g_1(x_1),

for some c_1 > 1. On the other hand, since |z_0 − Q_y| ≥ r and |x_1 − Q_y| ≥ r, applying the second inequality in (2.15),

c_1^{−1} G_D(x_1, y_1)/g_2(y_1) ≤ G_D(x_1, y)/g_2(y) ≤ c_1 G_D(x_1, y_1)/g_2(y_1).

Putting the four inequalities above together, we get

c_1^{−2} G_D(x_1, y_1)/(g_1(x_1)g_2(y_1)) ≤ G_D(x, y)/(g_1(x)g_2(y)) ≤ c_1^2 G_D(x_1, y_1)/(g_1(x_1)g_2(y_1)).

Moreover, (1/3)|x − y| < |x_1 − y_1| < 2|x − y| and |x_1 − y_1| ≤ (8M^3/ε_1)(ρ_D(x_1) ∧ ρ_D(y_1)). Thus by Lemma 2.11, we have

(1/(2^{d−2} c_2 c_1^2)) |x − y|^{−d+2}/(g_1(x_1)g_2(y_1)) ≤ G_D(x, y)/(g_1(x)g_2(y)) ≤ 3^{d−2} c_2 c_1^2 |x − y|^{−d+2}/(g_1(x_1)g_2(y_1)),

for some c_2 > 1.

If r = ε_1/2, then r(x, y) = |x − y| ≥ ε_1. Thus g_1(A) = g_2(A) = g_1(z_0) = g_2(z_0) = C_1 and ρ_D(x_1), ρ_D(y_1) ≥ r/M = ε_1/(2M). So by Lemma 2.10,

C_1^{−2} c_3^{−2} ≤ g_1(A)g_2(A)/(g_1(x_1)g_2(y_1)) ≤ C_1^2 c_3^2,

for some c_3 > 1.

If r < ε_1/2, then r(x, y) = |x − y| < ε_1 and r = (1/2)r(x, y). Hence ρ_D(x_1), ρ_D(y_1) ≥ r/M = r(x, y)/(2M). Moreover, |x_1 − A|, |y_1 − A| ≤ 6r(x, y). So by applying the first inequality in (2.16) to g_1 and the second inequality in (2.16) to g_2 (with k = 12M),

c_4^{−1} ≤ g_1(A)/g_1(x_1) ≤ c_4   and   c_4^{−1} ≤ g_2(A)/g_2(y_1) ≤ c_4,

for some constant c_4 = c_4(D) > 0. ✷

Lemma 2.13 (Carleson's estimate) For any given 0 < N < 1, there exists a constant c > 1 such that for every Q ∈ ∂D, r < r_0, x ∈ D \ B(Q, r) and z_1, z_2 ∈ D ∩ B(Q, r/M) with B(z_2, N r) ⊂ D ∩ B(Q, r/M),

G_D(x, z_1) ≤ c G_D(x, z_2)   and   G_D(z_1, x) ≤ c G_D(z_2, x).   (2.18)
Proof. Recall that CS_Q is the coordinate system with origin at Q in the definition of Lipschitz domains. Let y := (0̃, r). Since z_1, z_2 ∈ D ∩ B(Q, r/M), by (2.2),

G_D(y, z_1) ≤ c_1 r^{−d+2}   and   G_D(z_1, y) ≤ c_1 r^{−d+2},

for some constant c_1 > 0. On the other hand, since ρ_D(y) ≥ c_2 r for some constant c_2 > 0 and ρ_D(z_2) ≥ N r, by Lemma 2.11,

G_D(y, z_2) ≥ c_3 |y − z_2|^{−d+2} ≥ c_4 r^{−d+2}   and   G_D(z_2, y) ≥ c_3 |y − z_2|^{−d+2} ≥ c_4 r^{−d+2},

for some constants c_3, c_4 > 0. Thus from (2.15) with y = y, we get

G_D(x, z_1) ≤ c_5 (c_1/c_4) G_D(x, z_2)   and   G_D(z_1, x) ≤ c_5 (c_1/c_4) G_D(z_2, x),

for some constant c_5 > 0. ✷

Recall that, for r ∈ (0, R_0), A_r(Q) is a point in D ∩ B(Q, r) such that B(A_r(Q), κ_0 r) ⊂ D ∩ B(Q, r). For every x, y ∈ D, we denote by Q_x, Q_y points on ∂D such that ρ_D(x) = |x − Q_x| and ρ_D(y) = |y − Q_y| respectively. It is easy to check that if r(x, y) < ε_1, then

A_{r(x,y)}(Q_x), A_{r(x,y)}(Q_y) ∈ B(x, y).   (2.19)

In fact, by the definition of A_{r(x,y)}(Q_x), ρ_D(A_{r(x,y)}(Q_x)) ≥ κ_0 r(x, y) > r(x, y)/M. Moreover, |x − A_{r(x,y)}(Q_x)| ≤ |x − Q_x| + |Q_x − A_{r(x,y)}(Q_x)| ≤ ρ_D(x) + r(x, y) ≤ 2r(x, y) and |y − A_{r(x,y)}(Q_x)| ≤ |x − y| + |x − A_{r(x,y)}(Q_x)| ≤ 3r(x, y).
Lemma 2.14 There exists c > 0 such that the following holds:
(1) If Q ∈ ∂D, 0 < s ≤ r < ε 1 and A = A r (Q), then
g_i(x) ≤ c g_i(A)   for every x ∈ D ∩ B(Q, Ms) ∩ {y ∈ D : ρ_D(y) > s/M},   i = 1, 2.
(2) If x, y, z ∈ D satisfy |x − z| ≤ |y − z|, then
g i (A) ≤ c g i (B) for every (A, B) ∈ B(x, y) × B(y, z), i = 1, 2.
Proof. This is an easy consequence of the Carleson estimate (Lemma 2.13), (2.19) and Lemmas 2.9-2.11 (see page 467 in [10]). Since the proof is similar to the proof on page 467 in [10], we omit the details. ✷ The next result is called a generalized triangle property.
Theorem 2.15
There exists a constant c > 0 such that for every x, y, z ∈ D,
G_D(x, y) G_D(y, z) / G_D(x, z) ≤ c [ (g_1(y)/g_1(x)) G_D(x, y) ∨ (g_2(y)/g_2(z)) G_D(y, z) ]   (2.20)
Proof. Let A_{x,y} ∈ B(x, y), A_{y,z} ∈ B(y, z) and A_{z,x} ∈ B(z, x). If |x − y| ≤ |y − z|, then |x − z| ≤ |x − y| + |y − z| ≤ 2|y − z|. So by (2.17) and Lemma 2.14 (2), we have

G_D(y, z)/G_D(x, z) ≤ c_1^2 [g_1(A_{x,z})g_2(A_{x,z}) / (g_1(A_{y,z})g_2(A_{y,z}))] (|x − z|^{d−2}/|y − z|^{d−2}) (g_1(y)/g_1(x)) ≤ c_1^2 c_2^2 2^{d−2} g_1(y)/g_1(x),

for some c_1, c_2 > 0. Similarly, if |x − y| ≥ |y − z|, then

G_D(x, y)/G_D(x, z) ≤ c_1^2 [g_1(A_{x,z})g_2(A_{x,z}) / (g_1(A_{x,y})g_2(A_{x,y}))] (|x − z|^{d−2}/|x − y|^{d−2}) (g_2(y)/g_2(z)) ≤ c_1^2 c_2^2 2^{d−2} g_2(y)/g_2(z).

Thus

G_D(x, y) G_D(y, z) / G_D(x, z) ≤ c_1^2 c_2^2 2^{d−2} [ (g_1(y)/g_1(x)) G_D(x, y) ∨ (g_2(y)/g_2(z)) G_D(y, z) ].
✷ Lemma 2.16 There exists c > 0 such that for every x, y ∈ D and A ∈ B(x, y),
g i (x) ∨ g i (y) ≤ c g i (A), i = 1, 2.
Proof. If r(x, y) ≥ ε_1, the lemma is clear. If r(x, y) < ε_1, from Lemma 2.14 (1) it is easy to see that

g_i(x) ≤ c g_i(A_{r(x,y)}(Q_x))

for some c > 0, where Q_x is a point on ∂D such that ρ_D(x) = |x − Q_x|. Thus the lemma follows from Lemma 2.9 and (2.19). ✷

Now we are ready to prove the 3G theorem.
Theorem 2.17
There exists a constant c > 0 such that for every x, y, z ∈ D,
G_D(x, y) G_D(y, z) / G_D(x, z) ≤ c |x − z|^{d−2} / (|x − y|^{d−2} |y − z|^{d−2}).
(2.21)
Proof. Let A_{x,y} ∈ B(x, y), A_{y,z} ∈ B(y, z) and A_{z,x} ∈ B(z, x). By (2.17), the left-hand side of (2.21) is less than or equal to

[g_1(y)g_1(A_{x,z}) / (g_1(A_{x,y})g_1(A_{y,z}))] [g_2(y)g_2(A_{x,z}) / (g_2(A_{x,y})g_2(A_{y,z}))] |x − z|^{d−2} / (|x − y|^{d−2} |y − z|^{d−2}).

If |x − y| ≤ |y − z|, by Lemma 2.14 and Lemma 2.16, we have

g_1(y)/g_1(A_{x,y}) ≤ c_1,   g_2(y)/g_2(A_{x,y}) ≤ c_1,   g_1(A_{x,z})/g_1(A_{y,z}) ≤ c_2   and   g_2(A_{x,z})/g_2(A_{y,z}) ≤ c_2,

for some constants c_1, c_2 > 0. Similarly, if |x − y| ≥ |y − z|, then

g_1(y)/g_1(A_{y,z}) ≤ c_1,   g_2(y)/g_2(A_{y,z}) ≤ c_1,   g_1(A_{x,z})/g_1(A_{x,y}) ≤ c_2   and   g_2(A_{x,z})/g_2(A_{x,y}) ≤ c_2.
✷ Combining the main results of this section, we get the following inequality.
Theorem 2.18 There exist constants c 1 , c 2 > 0 such that for every x, y, z ∈ D,
G_D(x, y) G_D(y, z) / G_D(x, z) ≤ c_1 [ (g_1(y)/g_1(x)) G_D(x, y) ∨ (g_2(y)/g_2(z)) G_D(y, z) ] ≤ c_2 [ |x − y|^{−d+2} ∨ |y − z|^{−d+2} ].   (2.22)
Proof. We only need to prove the second inequality. Applying Theorem 2.12, we get that there exists c_1 > 0 such that

(g_1(y)/g_1(x)) G_D(x, y) ≤ c_1 [g_1(y)g_2(y) / (g_1(A)g_2(A))] |x − y|^{−d+2}   and   (g_2(y)/g_2(z)) G_D(y, z) ≤ c_1 [g_1(y)g_2(y) / (g_1(B)g_2(B))] |y − z|^{−d+2},
for every (A, B) ∈ B(x, y) × B(y, z). Applying Lemma 2.16, we arrive at the desired assertion.
3 Schrödinger semigroups for X^D
In this section, we will assume that D is a bounded Lipschitz domain. We first recall some notions from [14]. A measure ν on D is said to be a smooth measure of X D if there is a positive continuous additive functional (PCAF in abbreviation) A of X D such that for any x ∈ D, t > 0 and bounded nonnegative function f on D,
E_x[ ∫_0^t f(X^D_s) dA_s ] = ∫_0^t ∫_D q_D(s, x, y) f(y) ν(dy) ds.   (3.1)
The additive functional A is called the PCAF of X^D with Revuz measure ν. For a signed measure ν, we use ν^+ and ν^− to denote the positive and negative parts of ν respectively. A signed measure ν is called smooth if both ν^+ and ν^− are smooth. For a signed smooth measure ν, if A^+ and A^− are the PCAFs of X^D with Revuz measures ν^+ and ν^− respectively, the additive functional A := A^+ − A^− is called the CAF of X^D with (signed) Revuz measure ν. When ν(dx) = c(x)dx, A_t is given by A_t = ∫_0^t c(X^D_s) ds. We recall now the definition of the Kato class.
sup_{(x,z)∈(D×D)\d} ∫_{D\K} [G_D(x, y) G_D(y, z) / G_D(x, z)] |ν|(dy) ≤ ε   (3.2)
and for all measurable set B ⊂ K with |ν|(B) < δ,
sup_{(x,z)∈(D×D)\d} ∫_B [G_D(x, y) G_D(y, z) / G_D(x, z)] |ν|(dy) ≤ ε.   (3.3)
A function q is said to be in the class S_∞(X^D) if q(x)dx is in S_∞(X^D).
It follows from Proposition 7.1 of [14] and Theorem 2.17 above that K_{d,2} is contained in S_∞(X^D). In fact, by Theorem 2.18 we have the following result. Recall that g_1(x) = G_D(x, z_0) ∧ C_1 and g_2(y) = G_D(z_0, y) ∧ C_1.

Proposition 3.2 If

lim_{r↓0} sup_{x∈D} ∫_{D∩B(x,r)} [ (g_1(y)/g_1(x)) G_D(x, y) + (g_2(y)/g_2(x)) G_D(y, x) ] |ν|(dy) = 0,

then ν ∈ S_∞(X^D).
Proof. This is a direct consequence of Theorem 2.18. ✷
In the remainder of this section, we will fix a signed measure ν ∈ S ∞ (X D ) and we will use A to denote the CAF of X D with Revuz measure ν. For simplicity, we will use e A (t) to denote exp(A t ). The CAF A gives rise to a Schrödinger semigroup:
\[ Q^D_t f(x) := \mathbb{E}_x\big[ e_A(t)\, f(X^D_t) \big]. \]
The function x ↦ \mathbb{E}_x[e_A(\tau_D)] is called the gauge function of ν. We say ν is gaugeable if \mathbb{E}_x[e_A(\tau_D)] is finite for some x ∈ D.
In the remainder of this section we will assume that ν is gaugeable. It is shown in [14], by using the duality and the gauge theorems in [4] and [7], that the gauge function x ↦ \mathbb{E}_x[e_A(\tau_D)] is bounded on D (see Section 7 in [14]). For y ∈ D, let X^{D,y} denote the h-conditioned process obtained from X^D with h(·) = G_D(·,y), and let \mathbb{E}^y_x denote the expectation for X^{D,y} starting from x ∈ D. We will use \tau^y_D to denote the lifetime of X^{D,y}. We know from [14] that \mathbb{E}^y_x[e_A(\tau^y_D)] is continuous in D × D (also see Theorem 3.4 in [6]) and
\[ \sup_{(x,y)\in(D\times D)\setminus d} \mathbb{E}^y_x\big[ |A|_{\tau^y_D} \big] < \infty \tag{3.4} \]
(also see [4] and [7]) and therefore by Jensen's inequality
\[ \inf_{(x,y)\in(D\times D)\setminus d} \mathbb{E}^y_x\big[ e_A(\tau^y_D) \big] > 0, \tag{3.5} \]
where d is the diagonal of the set D × D. We also know from section 7 in [14] that
\[ V_D(x,y) := \mathbb{E}^y_x\big[ e_A(\tau^y_D) \big]\, G_D(x,y) \tag{3.6} \]
is the Green function of {Q^D_t}; that is, for any nonnegative function f on D,
\[ \int_D V_D(x,y)\, f(y)\, dy = \int_0^\infty Q^D_t f(x)\, dt \]
(also see Lemma 3.5 of [4]). Moreover, (3.4)-(3.6) and the continuity of \mathbb{E}^y_x[e_A(\tau^y_D)] imply that V_D(x,y) is comparable to G_D(x,y) and that V_D is continuous on (D × D) \ d. Thus there exists a constant c > 0 such that for every x, y, z ∈ D,
\[ \frac{V_D(x,y)\,V_D(y,z)}{V_D(x,z)} \le c\, \frac{|x-z|^{d-2}}{|x-y|^{d-2}\,|y-z|^{d-2}}. \tag{3.7} \]
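To make the comparability step fully explicit: if c_0^{-1} G_D ≤ V_D ≤ c_0 G_D on (D × D) \ d for some constant c_0 > 1 (the constant is not named in the text; it is introduced here only for this verification), then Theorem 2.17 transfers directly to V_D:
\[ \frac{V_D(x,y)\,V_D(y,z)}{V_D(x,z)} \le \frac{c_0 G_D(x,y)\cdot c_0 G_D(y,z)}{c_0^{-1} G_D(x,z)} = c_0^3\, \frac{G_D(x,y)\,G_D(y,z)}{G_D(x,z)} \le c_0^3\, c\, \frac{|x-z|^{d-2}}{|x-y|^{d-2}\,|y-z|^{d-2}}, \]
which is (3.7) with the constant c_0^3 c.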
4 Two-sided heat kernel estimates for {Q D t }
In this section, we will establish two-sided estimates for the heat kernel of Q D t in bounded C 1,1 domains.
Recall that a bounded domain D in R d is said to be a C 1,1 domain if there is a localization radius r 0 > 0 and a constant Λ > 0 such that for every Q ∈ ∂D, there is a C 1,1 -function φ = φ Q : R d−1 → R satisfying φ(0) = ∇φ(0) = 0, ∇φ ∞ ≤ Λ, |∇φ(x) − ∇φ(z)| ≤ Λ|x − z|, and an orthonormal coordinate system y = (y 1 , · · · , y d−1 , y d ) := (ỹ, y d ) such that B(Q, r 0 ) ∩ D = B(Q, r 0 ) ∩ {y : y d > φ(ỹ)}.
We will always assume in this section that D is a bounded C^{1,1} domain. Since we will follow the method in [11] (see also [19]), the proofs in this section will be somewhat sketchy.
First, we recall some results from [11]. For every bounded C^{1,1} domain D and any T > 0, there exist positive constants C_i, i = 1, …, 4, such that
\[ C_1\, \psi_D(t,x,y)\, t^{-d/2} e^{-C_2|x-y|^2/t} \le q_D(t,x,y) \le C_3\, \psi_D(t,x,y)\, t^{-d/2} e^{-C_4|x-y|^2/t} \tag{4.1} \]
for all (t,x,y) ∈ (0,T] × D × D, where
\[ \psi_D(t,x,y) := \Big(1 \wedge \frac{\rho_D(x)}{\sqrt{t}}\Big)\Big(1 \wedge \frac{\rho_D(y)}{\sqrt{t}}\Big) \]
(see (4.27) in [11]). For any z ∈ R^d and 0 < r ≤ 1, let
\[ D^z_r := z + rD, \qquad \psi_{D^z_r}(t,x,y) := \Big(1 \wedge \frac{\rho_{D^z_r}(x)}{\sqrt{t}}\Big)\Big(1 \wedge \frac{\rho_{D^z_r}(y)}{\sqrt{t}}\Big), \quad (t,x,y) \in (0,\infty) \times D^z_r \times D^z_r, \]
where ρ_{D^z_r}(x) is the distance between x and ∂D^z_r. Then, for any T > 0, there exist positive constants t_0 and c_j, 5 ≤ j ≤ 8, independent of z and r, such that
\[ c_5\, t^{-d/2}\, \psi_{D^z_r}(t,x,y)\, e^{-c_6|x-y|^2/(2t)} \le q_{D^z_r}(t,x,y) \le c_7\, t^{-d/2}\, \psi_{D^z_r}(t,x,y)\, e^{-c_8|x-y|^2/(2t)} \tag{4.2} \]
for all (t, x, y) ∈ (0, t 0 ∧ (r 2 T )] × D z r × D z r (see (5.1) in [11]). We will sometimes suppress the indices from D z r when there is no possibility of confusion. For the remainder of this paper, we will assume that ν is in the Kato class K d,2 . Using the estimates above and the joint continuity of the densities q D (t, x, y) (Theorem 2.4 in [12]), it is routine (For example, see Theorem 3.17 [8], Theorem 3.1 [2] and page 4669 in [4].) to show that Q D t has a jointly continuous density r D (t, ·, ·) (also see Theorem 2.4 in [12]). So we have
\[ \mathbb{E}_x\big[ e_A(t)\, f(X^D_t) \big] = \int_D f(y)\, r_D(t,x,y)\, dy \tag{4.3} \]
where A is the CAF of X^D with Revuz measure ν in D.

Theorem 4.1 The density r_D(t,x,y) satisfies the equation
\[ r_D(t,x,y) = q_D(t,x,y) + \int_0^t \int_D r_D(s,x,z)\, q_D(t-s,z,y)\, \nu(dz)\, ds \tag{4.4} \]
for all (t,x,y) ∈ (0,∞) × D × D.

Proof. Recall that A is the CAF of X^D with Revuz measure ν in D, and let θ be the usual shift operator for Markov processes. Since for any t > 0,
\[ e_A(t) = e^{A_t} = 1 + \int_0^t e^{A_{t-s}\circ\theta_s}\, dA_s, \]
we have
\[ \mathbb{E}_x\big[ e_A(t)\, f(X^D_t) \big] = \mathbb{E}_x\big[ f(X^D_t) \big] + \mathbb{E}_x\Big[ f(X^D_t) \int_0^t e^{A_{t-s}\circ\theta_s}\, dA_s \Big] \tag{4.5} \]
for all (t,x) ∈ (0,∞) × D and all bounded Borel-measurable functions f on D. By the Markov property and Fubini's theorem, we have
\[ \mathbb{E}_x\Big[ f(X^D_t) \int_0^t e^{A_{t-s}\circ\theta_s}\, dA_s \Big] = \int_0^t \mathbb{E}_x\big[ f(X^D_t)\, e^{A_{t-s}\circ\theta_s}\, dA_s \big] = \int_0^t \mathbb{E}_x\Big[ \mathbb{E}_{X^D_s}\big[ f(X^D_{t-s})\, e_A(t-s) \big]\, dA_s \Big]. \]
Thus by (3.1) and (4.3),
\[ \mathbb{E}_x\Big[ f(X^D_t) \int_0^t e^{A_{t-s}\circ\theta_s}\, dA_s \Big] = \int_D f(y) \int_0^t \int_D r_D(s,x,z)\, q_D(t-s,z,y)\, \nu(dz)\, ds\, dy. \tag{4.6} \]
Since r D (s, ·, ·) and q D (t − s, ·, ·) are jointly continuous, combining (4.5)-(4.6), we have proved the theorem. ✷
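As an illustration of how the perturbation equation (4.4) can be used in practice, the following is a minimal numerical sketch (not from the paper) that builds the Schrödinger heat kernel on a one-dimensional interval by iterating the Duhamel series. The Dirichlet heat kernel, the grid sizes and the potential are illustrative assumptions; ν(dz) = V(z)dz is taken absolutely continuous so that the integrals become sums.

```python
import numpy as np

# Minimal sketch of the Duhamel series behind (4.4):
#   r_D = sum_k I_k,  I_{k+1}(t,x,y) = int_0^t int_D I_k(s,x,z) q_D(t-s,z,y) V(z) dz ds,
# for D = (0,1), nu(dz) = V(z) dz, and X = Brownian motion (generator (1/2) d^2/dx^2).
# Grid sizes and the potential below are illustrative assumptions, not values from the paper.

def q_D(t, x, y, n_terms=200):
    """Dirichlet heat kernel on (0,1) for (1/2) d^2/dx^2, via eigenfunction expansion."""
    n = np.arange(1, n_terms + 1)[:, None, None]
    lam = 0.5 * (n * np.pi) ** 2                      # eigenvalues of -(1/2) d^2/dx^2
    return (2.0 * np.sin(n * np.pi * x[None, :, None])
                * np.sin(n * np.pi * y[None, None, :])
                * np.exp(-lam * t)).sum(axis=0)

x = np.linspace(0.0, 1.0, 81)                # spatial grid (q_D vanishes on the boundary)
dx = x[1] - x[0]
ts = np.linspace(1e-3, 0.2, 41)              # time grid, shifted off 0 to avoid the singularity
dt = ts[1] - ts[0]
V = 5.0 * np.exp(-((x - 0.5) / 0.1) ** 2)    # hypothetical bounded (hence Kato-class) potential

Q = np.array([q_D(t, x, x) for t in ts])     # Q[i] = q_D(ts[i], x, x')

r = Q.copy()
I_k = Q.copy()
for k in range(12):                          # iterate the series until it (visibly) converges
    I_next = np.zeros_like(Q)
    for i in range(len(ts)):
        acc = np.zeros((x.size, x.size))
        for j in range(i + 1):               # crude Riemann sums over s and z, fine for a sketch
            acc += I_k[j] @ (V[:, None] * Q[i - j]) * dx * dt
        I_next[i] = acc
    r += I_next
    if np.abs(I_next).max() < 1e-10 * np.abs(r).max():
        break
    I_k = I_next

# Sanity check suggested by (4.8): r stays comparable to q_D for a bounded potential.
print("max ratio r/q at final time:",
      np.nanmax(r[-1] / np.where(Q[-1] > 1e-12, Q[-1], np.nan)))
```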
The proof of the next lemma is almost identical to that of Lemma 3.1 in [20]; we omit the proof.

Lemma 4.2 For any a > 0, there exists a positive constant c depending only on a and d such that for any (t,x,y) ∈ (0,∞) × R^d × R^d,
\[ \int_0^t \int_{\mathbb{R}^d} s^{-d/2} e^{-a|x-z|^2/(2s)}\, (t-s)^{-d/2} e^{-a|z-y|^2/(t-s)}\, |\nu|(dz)\, ds \le c\, t^{-d/2} e^{-a|x-y|^2/(2t)} \sup_{u\in\mathbb{R}^d} \int_0^t \int_{\mathbb{R}^d} s^{-d/2} e^{-a|u-z|^2/(4s)}\, |\nu|(dz)\, ds \]
and
\[ \int_0^t \int_{\mathbb{R}^d} s^{-(d+1)/2} e^{-a|x-z|^2/(2s)}\, (t-s)^{-d/2} e^{-a|z-y|^2/(t-s)}\, |\nu|(dz)\, ds \le c\, t^{-(d+1)/2} e^{-a|x-y|^2/(2t)} \sup_{u\in\mathbb{R}^d} \int_0^t \int_{\mathbb{R}^d} s^{-d/2} e^{-a|u-z|^2/(4s)}\, |\nu|(dz)\, ds. \]

With Lemma 4.2 in hand, we can follow the proof of Theorem 2.1 (pages 389-391) in [17] to obtain the next lemma; we skip the details.

Lemma 4.3 For any a > 0, there exists a positive constant c depending only on a and d such that for any (t,x,y) ∈ (0,∞) × D × D,
\[ \int_0^t \int_{D} \Big(1 \wedge \frac{\rho(x)}{\sqrt{s}}\Big)\Big(1 \wedge \frac{\rho(z)}{\sqrt{s}}\Big) s^{-d/2} e^{-a|x-z|^2/(2s)} \Big(1 \wedge \frac{\rho(y)}{\sqrt{t-s}}\Big) (t-s)^{-d/2} e^{-a|z-y|^2/(t-s)}\, |\nu|(dz)\, ds \]
\[ \le\; c \Big(1 \wedge \frac{\rho(x)}{\sqrt{t}}\Big)\Big(1 \wedge \frac{\rho(y)}{\sqrt{t}}\Big) t^{-d/2} e^{-a|x-y|^2/(2t)} \sup_{u\in\mathbb{R}^d} \int_0^t \int_{\mathbb{R}^d} s^{-d/2} e^{-a|u-z|^2/(4s)}\, |\nu|(dz)\, ds. \tag{4.7} \]
Recall that
\[ M^1_{\mu_i}(r) = \sup_{x\in\mathbb{R}^d} \int_{|x-y|\le r} \frac{|\mu_i|(dy)}{|x-y|^{d-1}} \qquad\text{and}\qquad M^2_{\nu}(r) = \sup_{x\in\mathbb{R}^d} \int_{|x-y|\le r} \frac{|\nu|(dy)}{|x-y|^{d-2}}, \qquad r>0,\; i=1,\dots,d. \]
Theorem 4.4 (1) For each T > 0, there exist positive constants c j , 1 ≤ j ≤ 4, depending on µ and ν only via the rate at which max 1≤i≤d M 1 µ i (r) and M 2 ν (r) go to zero such that
\[ c_1\, t^{-d/2}\, \psi_D(t,x,y)\, e^{-c_2|x-y|^2/(2t)} \le r_D(t,x,y) \le c_3\, t^{-d/2}\, \psi_D(t,x,y)\, e^{-c_4|x-y|^2/(2t)} \tag{4.8} \]
for all (t,x,y) ∈ (0,T] × D × D.
(2) There exist T 1 = T 1 (D) > 0 such that for any T > 0, there exist positive constants t 1 and c j , 5 ≤ j ≤ 8, independent of z and r such that
\[ c_5\, t^{-d/2}\, \psi_{D^z_r}(t,x,y)\, e^{-c_6|x-y|^2/(2t)} \le r_{D^z_r}(t,x,y) \le c_7\, t^{-d/2}\, \psi_{D^z_r}(t,x,y)\, e^{-c_8|x-y|^2/(2t)} \tag{4.9} \]
for all r ∈ (0, 1] and (t, x, y) ∈ (0, t 1 ∧ (r 2 (T ∧ T 1 ))] × D z r × D z r .
Proof. We only give the proof of (4.9); the proof of (4.8) is similar. Fix T > 0 and z ∈ R^d. Let D_r := D^z_r, ρ_r(x) := ρ_{D^z_r}(x) and ψ_r(t,x,y) := ψ_{D^z_r}(t,x,y). We define I^r_k(t,x,y) recursively for k ≥ 0 and (t,x,y) ∈ (0,∞) × D_r × D_r:
\[ I^r_0(t,x,y) := q_{D_r}(t,x,y), \qquad I^r_{k+1}(t,x,y) := \int_0^t \int_{D_r} I^r_k(s,x,z)\, q_{D_r}(t-s,z,y)\, \nu(dz)\, ds. \]
Then iterating the above gives
\[ r_{D_r}(t,x,y) = \sum_{k=0}^{\infty} I^r_k(t,x,y), \qquad (t,x,y) \in (0,\infty) \times D_r \times D_r. \tag{4.10} \]
Let
\[ N^2_\nu(t) := \sup_{u\in\mathbb{R}^d} \int_0^t \int_{\mathbb{R}^d} s^{-d/2} e^{-|u-z|^2/(2s)}\, |\nu|(dz)\, ds, \qquad t > 0. \]
It is well-known (See, for example, Proposition 2.1 in [11].) that for any r > 0, there exist c 1 = c 1 (d, r) and c 2 = c 2 (d) such that
\[ N^2_\nu(t) \le (c_1 t + c_2)\, M^2_\nu(r) \qquad\text{for every } t \in (0,1). \tag{4.11} \]
We claim that there exist positive constants c_3, c_4 and A, depending only on the constants in (4.2) and (4.7), such that for k = 0, 1, … and (t,x,y) ∈ (0, t_0 ∧ (r^2 T)] × D_r × D_r,
\[ |I^r_k(t,x,y)| \le c_3\, \psi_r(t,x,y)\, t^{-d/2} e^{-A|x-y|^2/(2t)} \Big( c_4 N^2_\nu\big(\tfrac{2t}{A}\big) \Big)^k, \qquad 0 < r \le 1. \tag{4.12} \]
We will prove the above claim by induction. By (4.2), there exist constants t 0 , c 3 and A such that
\[ |I^r_0(t,x,y)| = q_{D_r}(t,x,y) \le c_3\, \psi_r(t,x,y)\, t^{-d/2} e^{-A|x-y|^2/(2t)} \tag{4.13} \]
for (t, x, y) ∈ (0, t 0 ∧ (r 2 T )] × D r × D r . On the other hand, by Lemma 4.3, there exists a positive constant c 5 depending only on A and d such that
\[ \int_0^t \int_{D_r} \psi_r(s,x,z)\, s^{-d/2} e^{-A|x-z|^2/(2s)} \Big(1 \wedge \frac{\rho_r(y)}{\sqrt{t-s}}\Big) (t-s)^{-d/2} e^{-A|z-y|^2/(t-s)}\, |\nu|(dz)\, ds \]
\[ \le\; c_5\, \psi_r(t,x,y)\, t^{-d/2} e^{-A|x-y|^2/(2t)} \sup_{u\in\mathbb{R}^d} \int_0^t \int_{\mathbb{R}^d} s^{-d/2} e^{-A|u-z|^2/(4s)}\, |\nu|(dz)\, ds. \tag{4.14} \]
So there exists c 6 = c 6 (d) such that
\[ |I^r_1(t,x,y)| \le c_3^2 c_5\, \psi_r(t,x,y)\, t^{-d/2} e^{-A|x-y|^2/(2t)} \sup_{u\in\mathbb{R}^d} \int_0^t \int_{\mathbb{R}^d} s^{-d/2} e^{-A|u-z|^2/(4s)}\, |\nu|(dz)\, ds \;\le\; c_3^2 c_5 c_6 A^{d/2}\, \psi_r(t,x,y)\, t^{-d/2} e^{-A|x-y|^2/(2t)}\, N^2_\nu\big(\tfrac{2t}{A}\big) \]
for (t,x,y) ∈ (0, t_0 ∧ (r^2 T)] × D_r × D_r. Therefore (4.12) is true for k = 0, 1 with c_4 := c_3^2 c_5 c_6 A^{d/2}. Now we assume (4.12) is true up to k. Then by (4.13)-(4.14), we have
\[ |I^r_{k+1}(t,x,y)| \le \int_0^t \int_{D_r} |I^r_k(s,x,z)|\, q_{D_r}(t-s,z,y)\, |\nu|(dz)\, ds \]
\[ \le \int_0^t \int_{D_r} c_3\, \psi_r(s,x,z)\, s^{-d/2} e^{-A|x-z|^2/(2s)} \Big( c_4 N^2_\nu\big(\tfrac{2s}{A}\big) \Big)^k \cdot c_3 \Big(1 \wedge \frac{\rho_r(y)}{\sqrt{t-s}}\Big) (t-s)^{-d/2} e^{-A|z-y|^2/(t-s)}\, |\nu|(dz)\, ds \]
\[ \le c_3^2 \Big( c_4 N^2_\nu\big(\tfrac{2t}{A}\big) \Big)^k \int_0^t \int_{D_r} \psi_r(s,x,z)\, s^{-d/2} e^{-A|x-z|^2/(2s)} \Big(1 \wedge \frac{\rho_r(y)}{\sqrt{t-s}}\Big) (t-s)^{-d/2} e^{-A|z-y|^2/(t-s)}\, |\nu|(dz)\, ds \]
\[ \le c_3^2 \Big( c_4 N^2_\nu\big(\tfrac{2t}{A}\big) \Big)^k c_5 c_6 A^{d/2}\, \psi_r(t,x,y)\, t^{-d/2} e^{-A|x-y|^2/(2t)}\, N^2_\nu\big(\tfrac{2t}{A}\big) \;\le\; c_3\, \psi_r(t,x,y)\, t^{-d/2} e^{-A|x-y|^2/(2t)} \Big( c_4 N^2_\nu\big(\tfrac{2t}{A}\big) \Big)^{k+1}. \]
So the claim is proved.
Choose t_1 < (1 ∧ t_0) small so that
\[ c_4 N^2_\nu\big(\tfrac{2t_1}{A}\big) < \tfrac12. \tag{4.15} \]
By (4.11), t_1 depends on ν only via the rate at which M^2_\nu(r) goes to zero. Now (4.10) and (4.12) imply that for (t,x,y) ∈ (0, t_1 ∧ (r^2 T)] × D_r × D_r,
\[ r_{D_r}(t,x,y) \le \sum_{k=0}^{\infty} |I^r_k(t,x,y)| \le 2c_3\, \psi_r(t,x,y)\, t^{-d/2} e^{-A|x-y|^2/(2t)}. \tag{4.16} \]
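For clarity, the series bound in (4.16) is just the geometric series estimate made available by (4.15): since N^2_\nu is nondecreasing,
\[ \sum_{k=0}^{\infty} \Big( c_4 N^2_\nu\big(\tfrac{2t}{A}\big) \Big)^k \le \sum_{k=0}^{\infty} \Big( c_4 N^2_\nu\big(\tfrac{2t_1}{A}\big) \Big)^k \le \sum_{k=0}^{\infty} 2^{-k} = 2 \qquad\text{for } t \le t_1, \]
and multiplying by the common factor c_3 ψ_r(t,x,y) t^{-d/2} e^{-A|x-y|^2/(2t)} from (4.12) gives (4.16).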
Now we are going to prove the lower estimate of r Dr (t, x, y). Combining (4.10), (4.12) and (4.15) we have for every (t, x, y) ∈ (0, t 1 ∧ (r 2 T )] × D r × D r ,
\[ |r_{D_r}(t,x,y) - q_{D_r}(t,x,y)| \le \sum_{k=1}^{\infty} |I^r_k(t,x,y)| \le c_3 c_4 N^2_\nu\big(\tfrac{2t_1}{A}\big)\, \psi_r(t,x,y)\, t^{-d/2} e^{-A|x-y|^2/(2t)}. \]
Since there exist c 7 and c 8 ≤ 1 depending on T such that
\[ q_{D_r}(t,x,y) \ge 2c_8\, \psi_r(t,x,y)\, t^{-d/2} e^{-c_7|x-y|^2/(2t)}, \]
we have for |x − y| ≤ √t and (t,x,y) ∈ (0, t_1 ∧ (r^2 T)] × D_r × D_r,
\[ r_{D_r}(t,x,y) \ge \big( 2c_8 e^{-2c_7} - c_3 c_4 N^2_\nu\big(\tfrac{2t_1}{A}\big) \big)\, \psi_r(t,x,y)\, t^{-d/2}. \tag{4.17} \]
Now we choose t 2 ≤ t 1 small so that
\[ c_3 c_4 N^2_\nu\big(\tfrac{2t_2}{A}\big) < c_8 e^{-2c_7}. \tag{4.18} \]
Note that t_2 depends on ν only via the rate at which M^2_\nu(r) goes to zero. So for (t,x,y) ∈ (0, t_2 ∧ (r^2 T)] × D_r × D_r with |x − y| ≤ √t, we have
\[ r_{D_r}(t,x,y) \ge c_8 e^{-2c_7}\, \psi_r(t,x,y)\, t^{-d/2}. \tag{4.19} \]
It is easy to check (see pages 420-421 of [21]) that there exists a positive constant T_0, depending only on the characteristics of the bounded C^{1,1} domain D, such that for any \tilde t ≤ T_0 and x, y ∈ D with ρ_D(x) ≥ √\tilde t and ρ_D(y) ≥ √\tilde t, one can find an arclength-parameterized curve l ⊂ D connecting x and y such that the length |l| of l is equal to λ_1|x − y| with λ_1 ≤ λ_0, a constant depending only on the characteristics of the bounded C^{1,1} domain D. Moreover, l can be chosen so that
\[ \rho_D(l(s)) \ge \lambda_2 \sqrt{\tilde t}, \qquad s \in [0, |l|], \]
for some positive constant λ_2 depending only on the characteristics of the bounded C^{1,1} domain D. Thus for any t = r^2\tilde t ≤ r^2 T_0 and x, y ∈ D_r with ρ_r(x) ≥ √t and ρ_r(y) ≥ √t, one can find an arclength-parameterized curve l ⊂ D_r connecting x and y such that the length |l| of l is equal to λ_1|x − y| and ρ_r(l(s)) ≥ λ_2√t, s ∈ [0, |l|].
Using this fact and (4.19), and following the proof of Theorem 2.7 in [9], we can show that there exists a positive constant c_9, depending only on d and the characteristics of the bounded C^{1,1} domain D, such that
\[ r_{D_r}(t,x,y) \ge \tfrac12\, c_8 e^{-2c_7}\, \psi_r(t,x,y)\, t^{-d/2} e^{-c_9|x-y|^2/t} \tag{4.20} \]
for all t ∈ (0, t_2 ∧ r^2(T ∧ T_0)] and x, y ∈ D_r with ρ_r(x) ≥ √t, ρ_r(y) ≥ √t.
It is easy to check that there exists a positive constant T_1 ≤ T_0, depending only on the characteristics of the bounded C^{1,1} domain D, such that for \tilde t ≤ T_1 and arbitrary x, y ∈ D, one can find x_1, y_1 ∈ D such that ρ_D(x_1) ≥ √\tilde t, ρ_D(y_1) ≥ √\tilde t and |x − x_1| ≤ √\tilde t, |y − y_1| ≤ √\tilde t. Thus for any t = r^2\tilde t ≤ r^2 T_1 and arbitrary x, y ∈ D_r, one can find x_1, y_1 ∈ D_r such that ρ_r(x_1) ≥ √t, ρ_r(y_1) ≥ √t and |x − x_1| ≤ √t, |y − y_1| ≤ √t. Now, using (4.17) and (4.20), one can repeat the last paragraph of the proof of Theorem 2.1 in [17] to show that there exists a positive constant c_{10}, depending only on d and the characteristics of the bounded C^{1,1} domain D, such that
\[ r_{D_r}(t,x,y) \ge c_8 c_{10} e^{-2c_7}\, \psi_r(t,x,y)\, t^{-d/2} e^{-2c_9|x-y|^2/t} \tag{4.21} \]
for all (t,x,y) ∈ (0, t_2 ∧ r^2(T ∧ T_1)] × D_r × D_r. This proves (4.9).

The proof of (4.8) up to t ≤ t_3, for some t_3 depending on T and D, is similar to (and simpler than) the proof of (4.9), using (4.1) instead of (4.2). To prove (4.8) for a general T > 0, we can apply the Chapman-Kolmogorov equation and use the argument in the proof of Theorem 3.9 in [18]. We omit the details. ✷

Remark 4.5 Theorem 4.4 (2) will be used in [15] to prove the parabolic Harnack inequality, the parabolic boundary Harnack inequality and the intrinsic ultracontractivity for the semigroup Q^D_t.
5 Uniform 3G type estimates for small Lipschitz domains
Recall that r_1 > 0 is the constant from (2.3) and r_3 > 0 is the constant from Theorem 2.2. The next lemma is a scale invariant version of Lemma 2.3, and its proof is similar.

Lemma 5.1 There exists c = c(d, µ) > 0 such that for every r ∈ (0, r_1 ∧ r_3], Q ∈ R^d and open subset U with B(z, l) ⊂ U ⊂ B(Q, r), we have for every x ∈ U \ B(z, 3l/4),
\[ \sup_{y \in B(z, l/2)} G_U(x,y) \le c \inf_{y \in B(z, l/2)} G_U(x,y). \tag{5.2} \]

Proof. (5.1) follows from Theorem 2.2, so we only need to show (5.2). Since r < r_1, by (2.3) there exists c_1 = c_1(d) > 1 such that for every x, w ∈ B(z, 3l/4),
\[ c_1^{-1} |w-x|^{2-d} \le G_{B(z,l)}(w,x) \le G_U(w,x) \le G_{B(Q,r)}(w,x) \le c_1 |w-x|^{2-d}. \]
Thus for w ∈ ∂B(z, 3l/4) and y_1, y_2 ∈ B(z, l/2), we have
\[ G_U(w, y_1) \le c_1 \Big( \frac{|w-y_2|}{|w-y_1|} \Big)^{d-2} \frac{1}{|w-y_2|^{d-2}} \le 4^{d-2} c_1^2\, G_U(w, y_2). \tag{5.3} \]
On the other hand, from (2.5), we have
\[ G_U(x, y) = \mathbb{E}_x\big[ G_U(X_{T_{B(z,3l/4)}}, y) \big], \qquad y \in B(z, l/2). \tag{5.4} \]
Since X_{T_{B(z,3l/4)}} ∈ ∂B(z, 3l/4), combining (5.3)-(5.4) we get
\[ G_U(x, y_1) \le 4^{d-2} c_1^2\, \mathbb{E}_x\big[ G_U(X_{T_{B(z,3l/4)}}, y_2) \big] = 4^{d-2} c_1^2\, G_U(x, y_2), \qquad y_1, y_2 \in B(z, l/2). \]
✷

In the remainder of this section, we fix a bounded Lipschitz domain D with characteristics (R_0, Λ_0). For every Q ∈ ∂D we put
∆ Q (r) := {y in CS Q : φ Q (ỹ) + r > y d > φ Q (ỹ), |ỹ| < r}
where CS Q is the coordinate system with origin at Q in the definition of Lipschitz domains and φ Q is the Lipschitz function there. Define
\[ r_5 := \frac{R_0}{\sqrt{1+\Lambda_0^2} + 1} \wedge r_1 \wedge r_3. \tag{5.5} \]
If z ∈ ∆ Q (r) with r ≤ r 5 , we have
\[ |Q - z| \le |(\tilde z, \varphi_Q(\tilde z)) - (\tilde Q, 0)| + r \le \big(\sqrt{1+\Lambda_0^2} + 1\big) r \le R_0. \]
So ∆_Q(r) ⊂ B(Q, R_0) ∩ D.
For any Lipschitz function ψ : R d−1 → R with Lipschitz constant Λ 0 , let ∆ ψ := {y : r 5 > y d − ψ(ỹ) > 0, |ỹ| < r 5 } .
so that ∆_ψ ⊂ B(0, R_0). We observe that, for any Lipschitz function ϕ : R^{d−1} → R with Lipschitz constant Λ_0, its dilation ϕ_r(x) := rϕ(x/r) is also Lipschitz with the same Lipschitz constant Λ_0. For any r > 0, put η = r/r_5 and ψ = (φ_Q)_η. Then it is easy to see that for any Q ∈ ∂D and r ≤ r_5, ∆_Q(r) = η∆_ψ.
Thus by choosing appropriate constants Λ 1 > 1, R 1 < 1 and d 1 > 0, we can say that for every Q ∈ ∂D and r ≤ r 5 , the ∆ Q (r)'s are bounded Lipschitz domains with the characteristics (rR 1 , Λ 1 ) and the diameters of ∆ Q (r)'s are less than rd 1 . Since r 5 ≤ r 1 ∧r 3 , Lemma 5.1 works for G ∆ Q (r) (x, y) with Q ∈ ∂D and r ≤ r 5 . Moreover, we can restate the scale invariant boundary Harnack principle in the following way.
Theorem 5.2 There exist constants M 3 , c > 1 and s 1 > 0, depending on µ, ν and D such that for every Q ∈ ∂D, r < r 5 , s < rs 1 , w ∈ ∂∆ Q (r) and any nonnegative functions u and v which are harmonic with respect to X D in ∆ Q (r)∩B(w, M 3 s) and vanish continuously on ∂∆ Q (r)∩B(w, M 3 s),
we have
\[ \frac{u(x)}{v(x)} \le c\, \frac{u(y)}{v(y)} \qquad\text{for any } x, y \in \Delta_Q(r) \cap B(w, s). \tag{5.6} \]
In the remainder of this section we will fix the above constants r 5 , M 3 , s 1 , Λ 1 , R 1 and d 1 > 0, and consider the Green functions of X in ∆ Q (r) with Q ∈ ∂D and r > 0. We will prove a scale invariant 3G type estimates for these Green functions for small r. The main difficulties of the scale invariant 3G type estimates for X are the facts that X does not have rescaling property and that the Green function G ∆ Q (r) (x, · ) is not harmonic for X. To overcome these difficulties, we first establish some results for the Green functions of X in ∆ Q (r) with Q ∈ ∂D and r small.
Let δ Q r (x) := dist(x, ∂∆ Q (r)). Using Lemma 5.1 and a Harnack chain argument, the proof of the next lemma is almost identical to the proof of Lemma 6.7 in [8]. So we omit the proof.
Lemma 5.3 For any given c_1 > 0, there exists c_2 = c_2(D, c_1, µ) > 0 such that for every Q ∈ ∂D, r < r_5 and |x − y| ≤ c_1 (δ^Q_r(x) ∧ δ^Q_r(y)), we have
\[ G_{\Delta_Q(r)}(x,y) \ge c_2\, |x-y|^{-d+2}. \]
Recall that M_3 > 0 and s_1 > 0 are the constants from Theorem 5.2. Let M_4 := 2(1+M_3)\sqrt{1+\Lambda_1^2} + 2 and R_4 := R_1/M_4. The next lemma is a scale invariant version of Lemma 2.5. The proof is similar to the proof of Lemma 2.5; we spell out the details for the reader's convenience.
Lemma 5.4 There exists constant c > 1 such that for every Q ∈ ∂D, r < r 5 , s < rR 4 , w ∈ ∂∆ Q (r) and any nonnegative functions u and v which are harmonic in ∆ Q (r) \ B(w, s) and vanish continuously on ∂∆ Q (r) \ B(w, s), we have
\[ \frac{u(x)}{u(y)} \le c\, \frac{v(x)}{v(y)} \qquad\text{for any } x, y \in \Delta_Q(r) \setminus B(w, M_4 s). \tag{5.7} \]
Proof. We fix a point Q on ∂D, r < r 5 , s < rR 4 and w ∈ ∂∆ Q (r) throughout this proof. Let ∆ s := {y in CS w : ϕ w (ỹ) + 2s > y d > ϕ w (ỹ), |ỹ| < 2(M 3 + 1)s} , ∂ 1 ∆ s := {y in CS w : ϕ w (ỹ) + 2s ≥ y d > ϕ w (ỹ), |ỹ| = 2(M 3 + 1)s} , ∂ 2 ∆ s := {y in CS w : ϕ w (ỹ) + 2s = y d , |ỹ| ≤ 2(M 3 + 1)s} ,
where CS_w is the coordinate system with origin at w in the definition of the Lipschitz domain ∆_Q(r) and ϕ_w is the Lipschitz function there. If z ∈ ∆_s, then
\[ |w - z| \le |(\tilde z, \varphi_w(\tilde z)) - (\tilde z, 0)| + 2s \le 2s(1+M_3)\sqrt{1+\Lambda_1^2} + 2s = M_4 s \le rR_1. \]
So ∆_s ⊂ B(w, M_4 s) ∩ D ⊂ B(w, rR_1) ∩ D. For |ỹ| = 2(M_3+1)s, we have |(ỹ, ϕ_w(ỹ))| > s. So u and v are harmonic with respect to X in ∆_Q(r) ∩ B((ỹ, ϕ_w(ỹ)), 2M_3 s) and vanish continuously on ∂∆_Q(r) ∩ B((ỹ, ϕ_w(ỹ)), 2M_3 s), where |ỹ| = 2(M_3+1)s. Therefore by Theorem 5.2,
\[ \frac{u(x)}{u(y)} \le c\, \frac{v(x)}{v(y)} \qquad\text{for any } x, y \in \partial_1\Delta_s \text{ with } \tilde x = \tilde y. \tag{5.8} \]
Since dist(∆_Q(r) ∩ B(w, s), ∂_2∆_s) > c_1 s for some c_1 = c_1(D), if x ∈ ∂_2∆_s, the Harnack inequality (Theorem 2.2) and a Harnack chain argument give that there exists a constant c_2 > 1 such that
\[ c_2^{-1} < \frac{u(x)}{u(y)},\; \frac{v(x)}{v(y)} < c_2. \tag{5.9} \]
In particular, (5.9) is true with x = x_s := (\tilde x, \varphi_w(\tilde x) + 2s), which is also in ∂_1∆_s. Thus (5.8) and (5.9) imply that
\[ c_3^{-1}\, \frac{u(x)}{u(y)} \le \frac{v(x)}{v(y)} \le c_3\, \frac{u(x)}{u(y)}, \qquad x, y \in \partial_1\Delta_s \cup \partial_2\Delta_s. \tag{5.10} \]
Hence by Khasminskii's lemma,
\[ \sup_{x,y \in \Delta_Q(r)} u^Q_r(x,y) \le 2, \qquad r < r_6,\; Q \in \partial D. \]
By Jensen's inequality, we also have the corresponding positive lower bound. Therefore, we have proved the following lemma.

Lemma 6.1 For r < r_6, ν|_{∆_Q(r)} ∈ S_∞(X^{∆_Q(r)}) and ν|_{∆_Q(r)} is gaugeable. Moreover, there exists a constant c such that c^{-1} ≤ u^Q_r(x,y) ≤ c for x, y ∈ ∆_Q(r) and r < r_6.

Theorem 6.2 (Boundary Harnack principle) Suppose D is a bounded Lipschitz domain in R^d with the Lipschitz characteristic (R_0, Λ_0) and let M_5 := \sqrt{1+\Lambda_0^2} + 1. Then there exists N > 1 such that for any r ∈ (0, r_6) and Q ∈ ∂D, there exists a constant c > 1 such that for any nonnegative functions u, v which are ν-harmonic in D ∩ B(Q, rM_5) with respect to X^D and vanish continuously on ∂D ∩ B(Q, rM_5), we have
\[ \frac{u(x)}{v(x)} \le c\, \frac{u(y)}{v(y)}, \qquad x, y \in D \cap B(Q, r/N). \]

Proof. Note that, with M_5 = \sqrt{1+\Lambda_0^2} + 1, ∆_Q(r) ⊂ D ∩ B(Q, M_5 r), so u and v are ν-harmonic in ∆_Q(r). For the remainder of the proof, we fix Q ∈ ∂D, r ∈ (0, r_5) and a point x^Q_r ∈ ∆_Q(r). Since u, v are ν-harmonic with respect to X^{∆_Q(r)}, by Theorem 7.7 in [6] and our Lemma 6.1, there exist finite measures µ_1 and ν_1 on ∂∆_Q(r) such that
\[ u(x) = \int_{\partial\Delta_Q(r)} K(x,z)\, \mu_1(dz) \qquad\text{and}\qquad v(x) = \int_{\partial\Delta_Q(r)} K(x,z)\, \nu_1(dz), \qquad x \in \Delta_Q(r). \]
Let
\[ u_1(x) := \int_{\partial\Delta_Q(r)} M(x,z)\, \mu_1(dz) \qquad\text{and}\qquad v_1(x) := \int_{\partial\Delta_Q(r)} M(x,z)\, \nu_1(dz), \qquad x \in \Delta_Q(r). \]
By Theorem 7.3 (2) in [6] and our Lemma 6.1, we have for every x ∈ ∆_Q(r)
\[ \frac{u(x)}{v(x)} = \frac{\int_{\partial\Delta_Q(r)} K(x,z)\,\mu_1(dz)}{\int_{\partial\Delta_Q(r)} K(x,z)\,\nu_1(dz)} \le c_1^2\, \frac{\int_{\partial\Delta_Q(r)} M(x,z)\,\mu_1(dz)}{\int_{\partial\Delta_Q(r)} M(x,z)\,\nu_1(dz)} = c_1^2\, \frac{u_1(x)}{v_1(x)} \le c_1^4\, \frac{u(x)}{v(x)}. \]
Since u 1 , v 1 are harmonic for X U and vanish continuously on ∂∆ Q (r) ∩ ∂D, by the boundary Harnack principle (Theorem 4.6 in [12]), there exist N and c 2 such that
\[ \frac{u_1(x)}{v_1(x)} \le c_2\, \frac{u_1(y)}{v_1(y)}, \qquad x, y \in D \cap B(Q, r/N). \]
Thus for every x, y ∈ D ∩ B(Q, r/N),
\[ \frac{u(x)}{v(x)} \le c_1^2\, \frac{u_1(x)}{v_1(x)} \le c_2 c_1^2\, \frac{u_1(y)}{v_1(y)} \le c_2 c_1^4\, \frac{u(y)}{v(y)}. \]
✷
References

[1] R. F. Bass and Z.-Q. Chen, Brownian motion with singular drift. Ann. Probab. 31(2) (2003), 791-817.
[2] Ph. Blanchard and Z. M. Ma, Semigroup of Schrödinger operators with potentials given by Radon measures. In Stochastic Processes, Physics and Geometry, 160-195, World Sci. Publishing, Teaneck, NJ, 1990.
[3] K. Bogdan, Sharp estimates for the Green function in Lipschitz domains. J. Math. Anal. Appl. 243 (2000), 326-337.
[4] Z.-Q. Chen, Gaugeability and conditional gaugeability. Trans. Amer. Math. Soc. 354 (2002), 4639-4679.
[5] Z.-Q. Chen and P. Kim, Green function estimate for censored stable processes. Probab. Theory Relat. Fields 124 (2002), 595-610.
[6] Z.-Q. Chen and P. Kim, Stability of Martin boundary under non-local Feynman-Kac perturbations. Probab. Theory Relat. Fields 128 (2004), 525-564.
[7] Z.-Q. Chen and R. Song, General gauge and conditional gauge theorems. Ann. Probab. 30 (2002), 1313-1339.
[8] K. L. Chung and Z. X. Zhao, From Brownian Motion to Schrödinger's Equation. Springer-Verlag, Berlin, 1995.
[9] E. B. Fabes and D. W. Stroock, A new proof of Moser's parabolic Harnack inequality using the old ideas of Nash. Arch. Rational Mech. Anal. 96(4) (1986), 327-338.
[10] W. Hansen, Uniform boundary Harnack principle and generalized triangle property. J. Funct. Anal. 226(2) (2005), 452-484.
[11] P. Kim and R. Song, Two-sided estimates on the density of Brownian motion with singular drift. To appear in the Doob memorial volume of the Illinois J. Math., 2006.
[12] P. Kim and R. Song, Boundary Harnack principle for Brownian motions with measure-valued drifts in bounded Lipschitz domains. Preprint, 2006.
[13] P. Kim and R. Song, Intrinsic ultracontractivity of non-symmetric diffusion semigroups in bounded domains. Preprint, 2006.
[14] P. Kim and R. Song, On dual processes of non-symmetric diffusions with measure-valued drifts. Preprint, 2006.
[15] P. Kim and R. Song, Intrinsic ultracontractivity of non-symmetric diffusions with measure-valued drifts and potentials. Preprint, 2006.
[16] D. Revuz, Mesures associées aux fonctionnelles additives de Markov. I. Trans. Amer. Math. Soc. 148 (1970), 501-531.
[17] L. Riahi, Comparison of Green functions and harmonic measure of parabolic operators. Potential Anal. 23(4) (2005), 381-402.
[18] R. Song, Sharp bounds on the density, Green function and jumping function of subordinate killed BM. Probab. Theory Related Fields 128 (2004), 606-628.
[19] Q. S. Zhang, A Harnack inequality for the equation ∇(a∇u) + b∇u = 0, when |b| ∈ K_{n+1}. Manuscripta Math. 89 (1996), 61-77.
[20] Q. S. Zhang, Gaussian bounds for the fundamental solutions of ∇(A∇u) + B∇u − u_t = 0. Manuscripta Math. 93 (1997), 381-390.
[21] Q. S. Zhang, The boundary behavior of heat kernels of Dirichlet Laplacians. J. Differential Equations 182 (2002), 416-430.
| []
|
[
"The origin of the infrared emission in radio galaxies. III. Analysis of 3CRR objects",
"The origin of the infrared emission in radio galaxies. III. Analysis of 3CRR objects"
]
| [
"D Dicken \nDepartment of Physics and Astronomy\nRochester Institute of Technology\n84 Lomb Memorial Drive14623RochesterNYUSA\n",
"C Tadhunter [email protected] \nDepartment of Physics and Astronomy\nUniversity of Sheffield\nHounsfield RoadS3 7RHSheffieldUK\n",
"D Axon \nDepartment of Physics and Astronomy\nRochester Institute of Technology\n84 Lomb Memorial Drive14623RochesterNYUSA\n\nDepartment of Physics and Astronomy\nUniversity of Sussex\nPevensey 2\n\nUniversity of Sussex\nBN1 9QHFalmer, BrightonUK\n",
"A Robinson \nDepartment of Physics and Astronomy\nRochester Institute of Technology\n84 Lomb Memorial Drive14623RochesterNYUSA\n",
"R Morganti [email protected] \nASTRON\nP.O. Box 27990 AADwingelooThe Netherlands\n\nKapetyn Astronmical Institute\nUniversity of Groningen\n9700 AV Gronin-gen, The Netherlands -2Postbuss 800\n",
"P Kharb \nDepartment of Physics and Astronomy\nRochester Institute of Technology\n84 Lomb Memorial Drive14623RochesterNYUSA\n",
"Received ; Accepted "
]
| [
"Department of Physics and Astronomy\nRochester Institute of Technology\n84 Lomb Memorial Drive14623RochesterNYUSA",
"Department of Physics and Astronomy\nUniversity of Sheffield\nHounsfield RoadS3 7RHSheffieldUK",
"Department of Physics and Astronomy\nRochester Institute of Technology\n84 Lomb Memorial Drive14623RochesterNYUSA",
"Department of Physics and Astronomy\nUniversity of Sussex\nPevensey 2",
"University of Sussex\nBN1 9QHFalmer, BrightonUK",
"Department of Physics and Astronomy\nRochester Institute of Technology\n84 Lomb Memorial Drive14623RochesterNYUSA",
"ASTRON\nP.O. Box 27990 AADwingelooThe Netherlands",
"Kapetyn Astronmical Institute\nUniversity of Groningen\n9700 AV Gronin-gen, The Netherlands -2Postbuss 800",
"Department of Physics and Astronomy\nRochester Institute of Technology\n84 Lomb Memorial Drive14623RochesterNYUSA"
]
| []
| We present Spitzer photometric data for a complete sample of 19 low redshift (z < 0.1) 3CRR radio galaxies as part of our efforts to understand the origin of the prodigious mid- to far-infrared (MFIR) emission from radio-loud AGN. Our results show a correlation between AGN power (indicated by [OIII]λ5007 emission line luminosity) and 24µm luminosity. This result is consistent with the 24µm thermal emission originating from warm dust heated directly by AGN illumination. Applying the same correlation test for 70µm luminosity against [OIII] luminosity, we find this relation to suffer from increased scatter compared to that of 24µm. In line with our results for the higher-radio-frequency-selected 2Jy sample, we are able to show that much of this increased scatter is due to heating by starbursts, which boost the far-infrared emission at 70µm in a minority of objects (17−35%). Overall this study supports previous work indicating AGN illumination as the dominant heating mechanism for MFIR emitting dust in the majority of low to intermediate redshift radio galaxies (0.03 < z < 0.7), with the advantage of strong statistical evidence. However, we find evidence that the low redshift broad-line objects (z < 0.1) are distinct in terms of their positions on the MFIR vs. [OIII] correlations.
"https://arxiv.org/pdf/1008.2719v1.pdf"
]
| 8,042,005 | 1008.2719 | 04f742d4fd38a0a0dad12ecbde140928340cf3d7 |
The origin of the infrared emission in radio galaxies. III. Analysis of 3CRR objects
D Dicken
Department of Physics and Astronomy
Rochester Institute of Technology
84 Lomb Memorial Drive14623RochesterNYUSA
C Tadhunter [email protected]
Department of Physics and Astronomy
University of Sheffield
Hounsfield RoadS3 7RHSheffieldUK
D Axon
Department of Physics and Astronomy
Rochester Institute of Technology
84 Lomb Memorial Drive14623RochesterNYUSA
Department of Physics and Astronomy
University of Sussex
Pevensey 2
University of Sussex
BN1 9QHFalmer, BrightonUK
A Robinson
Department of Physics and Astronomy
Rochester Institute of Technology
84 Lomb Memorial Drive14623RochesterNYUSA
R Morganti [email protected]
ASTRON
P.O. Box 2, 7990 AA Dwingeloo, The Netherlands
Kapteyn Astronomical Institute
University of Groningen
Postbus 800, 9700 AV Groningen, The Netherlands
P Kharb
Department of Physics and Astronomy
Rochester Institute of Technology
84 Lomb Memorial Drive14623RochesterNYUSA
Received ; Accepted
The origin of the infrared emission in radio galaxies. III. Analysis of 3CRR objects
Subject headings: galaxies: active - infrared: galaxies
We present Spitzer photometric data for a complete sample of 19 low redshift (z < 0.1) 3CRR radio galaxies as part of our efforts to understand the origin of the prodigious mid-to far-infrared (MFIR) emission from radio-loud AGN.Our results show a correlation between AGN power (indicated by [OIII]λ5007 emission line luminosity) and 24µm luminosity. This result is consistent with the 24µm thermal emission originating from warm dust heated directly by AGN illumination. Applying the same correlation test for 70µm luminosity against[OIII] luminosity we find this relation to suffer from increased scatter compared to that of 24µm. In line with our results for the higher-radio-frequency-selected 2Jy sample, we are able to show that much of this increased scatter is due to heating by starbursts which boost the far-infrared emission at 70µm in a minority of objects (17−35%). Overall this study supports previous work indicating AGN illumination as the dominant heating mechanism for MFIR emitting dust in the majority of low to intermediate redshift radio galaxies (0.03 < z < 0.7), with the advantage of strong statistical evidence. However, we find evidence that the low redshift broad-line objects (z < 0.1) are distinct in terms of their positions on the MFIR vs.[OIII] correlations.
Introduction
Identifying the origin of the prodigious thermal mid- to far-infrared (MFIR) emission is a key component of a comprehensive understanding of Active Galactic Nuclei (AGN).
However, this task is not trivial, because the thermal MFIR emitting dust structures cannot be resolved in most of these galaxies. Therefore, past studies have favored a statistical approach to investigations, focusing on samples of radio galaxies, which can be selected without bias with respect to orientation (Golombek et al. 1988; Impey & Gregorini 1993; Heckman et al. 1992, 1994; Hes et al. 1995; Haas et al. 2004; Shi et al. 2005; Cleary et al. 2007). Although these studies suggested that the mid-IR (5−30µm) emitting structures are heated by AGN illumination, the lack of sample completeness and the low mid-IR detection rate meant that the AGN heating hypothesis could not be supported with a full statistical analysis.
Additionally, in the past, linking the active nucleus with the origin of the far-IR (>30µm) emission from cool dust components proved difficult. The failure of uniform compact dust torus models to produce the observed far-IR SEDs of AGN (Pier & Krolik 1992) led to the proposal of clumpy dust torus geometries (Nenkova et al. 2002, 2008), which produce the required dust temperatures through cloud shadowing. Alternatively, other studies argued that the cool dust producing the far-infrared emission is predominantly heated by starbursts (Rowan-Robinson 1995; Schweitzer et al. 2006). However, the idea that starbursts dominate the heating of the far-IR emitting dust in AGN has yet to be firmly established with solid observational evidence.
To address the problems associated with previous MFIR investigations of radio-loud AGN, that suffered from biased, incomplete and/or inhomogeneous samples, we carried out a program of deep Spitzer/MIPS MFIR photometric observations for a complete sample of 47 2Jy radio galaxies with redshifts 0.05 < z < 0.7 (Program 20233: PI Tadhunter).
The results from these data are published in T07, D08 and D09. The results have shown that [OIII] optical emission line luminosity (L [OIII] ) is significantly correlated with both the mid-(24µm) and far-infrared (70µm) luminosities (L 24µm and L 70µm respectively).
The AGN-photoionised narrow-line region (NLR) is emitted on a small scale (≤5 kpc), therefore the [OIII]λ5007 emission from the NLR is likely to provide a good indication of the intrinsic power of the illuminating AGN (e.g. Rawlings & Saunders 1991;Tadhunter et al. 1998;Simpson 1998 and discussion in D09). Consequently, the correlations between isotropic MFIR luminosity and [OIII] optical emission line luminosity provide strong empirical evidence to support AGN illumination as the dominant heating mechanism of the thermal MFIR emitting dust. Moreover, since radio-loud quasars, broad-line and narrow-line galaxies follow similar correlations between MFIR and [OIII] luminosities, without significant offsets between the two groups, the results also provide strong support for the orientation-based unified schemes for powerful, radio-loud AGN (Barthel 1989).
In addition, we carefully considered the contribution of starbursts to the heating of the dust. We found that the objects showing optical evidence for starburst activity from spectral synthesis modeling of their spectra appear to have enhanced far-IR emission compared to the general sample. Our interpretation of these results is that, while AGN illumination is the primary heating mechanism for both the warm (mid-IR emitting, 24µm) and cool (far-IR emitting, 70µm) dust in most powerful radio galaxies, heating by starbursts acts to substantially boost the 70µm luminosity in the 20−30% of objects in the 2Jy sample with optical evidence for star formation activity.
The above results support the conclusions of previous studies of powerful radio galaxies (Heckman et al. 1994; Hes et al. 1995; Haas et al. 2004; Shi et al. 2005; Cleary et al. 2007), with the advantage of a thorough statistical analysis afforded to us by the complete and well-detected sample. Having established these results for the 2Jy sample, which represents radio-loud AGN at intermediate redshifts (0.05 < z < 0.7), it is natural to investigate whether we find similar results for other samples of radio-loud AGN.
The low frequency (≈170 MHz) selected 3C sample of radio-loud AGN has been favored by many previous investigators, because the low selection frequency means that it is unlikely to be affected by an orientation bias. Recently, deep optical spectroscopic data at both high and low resolution have been published for 3CR sources (Buttiglione et al. 2009), allowing us to create a sample that is complete in both MFIR and [OIII] observations. Furthermore, the 3CR objects make an ideal comparison to the higher frequency selected (2.7 GHz) 2Jy sample. Investigation of a low selection frequency sample allows us to test whether the selection frequency of the 2Jy sample leads to any biases that may affect our understanding of the MFIR emission from radio-loud AGN.
We present here the analysis of Spitzer photometric observations for a complete sample of 3CRR radio galaxies (Laing et al. 1983) with z < 0.1. The following investigation serves to test our previous conclusions concerning the origin of the thermal MFIR, based on the southern 2Jy sample, using a sample of radio-loud objects with, on average, lower redshifts and radio powers, as well as a different selection frequency.
Samples and data reduction
This paper presents results for a complete sub-sample of 19 3CRR radio galaxies selected from the sample of Laing et al. (1983) (see Table 1). We have limited these data for completeness to objects with FRII radio morphologies and redshifts z ≤ 0.1. This leads to a sample with a high level of completeness in both Spitzer/MIPS detections and [OIII]λ5007 emission line flux measurements. In the following discussion we will refer to this sample as the 3CRR sample. Note that, although two objects in the sample (3C277.3, 3C293) have uncertain radio morphological classifications, and cannot be confidently characterized as either FRI or FRII types, they are included here for completeness. Note also that, as well as the evidence based on optical spectroscopy, the presence of energetically significant star formation activity in 3C285, 3C293, 3C321 and 3C305 is supported by the detection of PAH features in their mid-IR Spitzer/IRS spectra (Dicken et al. 2010, in preparation; Shi et al. 2007).
All of the 3CRR sample objects have been previously observed with Spitzer/MIPS.
These data were downloaded as raw MIPS images from the Spitzer Reserve Observation Catalogue (ROC) and reduced in an identical way to the 2Jy sample, as discussed in detail in D08. The MFIR fluxes and associated errors presented in Table 2 were extracted using aperture photometry, again using methods identical to those used for the 2Jy sample described in D08. The [OIII] fluxes were obtained from published deep optical spectra at both high and low resolution taken using Dolores on the TNG (Buttiglione et al. 2009), except for DA240, 4C73.08, 3C321 and 3C445 a (see note in Table 2). We detect 100% of the 3CRR sample at 24µm and 89% at 70µm.
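For readers unfamiliar with this type of measurement, the following is a minimal sketch of aperture photometry on a reduced MIPS mosaic using Astropy and photutils. It does not reproduce the D08 pipeline: the file name, source position, aperture radii, pixel scale and aperture correction are hypothetical placeholders.

```python
import numpy as np
from astropy.io import fits
from photutils.aperture import CircularAperture, CircularAnnulus, aperture_photometry

# Hypothetical reduced 24um MIPS mosaic (placeholder file name).
with fits.open("mips24_mosaic.fits") as hdul:
    image = hdul[0].data

pix_arcsec = 2.45                                     # assumed MIPS 24um pixel scale
mjysr_to_jy_per_pix = 1e6 * (pix_arcsec / 206265.0) ** 2   # MJy/sr -> Jy per pixel

xy_source = [(512.0, 512.0)]                          # placeholder source position (pixels)
aper = CircularAperture(xy_source, r=13.0 / pix_arcsec)            # ~13" aperture (assumed)
annulus = CircularAnnulus(xy_source, r_in=20.0 / pix_arcsec,
                          r_out=32.0 / pix_arcsec)                 # sky annulus (assumed)

phot = aperture_photometry(image, aper)
sky = aperture_photometry(image, annulus)
sky_per_pix = sky["aperture_sum"][0] / annulus.area
flux_counts = phot["aperture_sum"][0] - sky_per_pix * aper.area    # background-subtracted sum

flux_jy = flux_counts * mjysr_to_jy_per_pix
flux_jy *= 1.15    # hypothetical aperture correction for the MIPS 24um PSF
print(f"F(24um) ~ {1e3 * flux_jy:.1f} mJy")
```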
For comparison we also discuss here the 2Jy sample from our previous study. This sample consists of 46 powerful radio galaxies and steep-spectrum quasars (F_ν ∝ ν^{−α}, α^{4.8}_{2.7} > 0.5) selected from the 2Jy sample of Wall & Peacock (1985) with redshifts 0.05 < z < 0.7.
A full discussion of the selection and MFIR data reduction for this sample is published in D08 along with tables of MFIR fluxes and luminosities (D09). Note that two objects overlap between the 3CRR and 2Jy samples (3C403, 3C445).
In addition, published deep optical spectra have allowed us to identify the objects in the two samples with evidence for young stellar populations at optical wavelengths; the references for the stellar population analysis are given in Table 1.

Notes to Table 2: the [OIII] fluxes for DA240, 4C73.08 and 3C321 were taken from Saunders et al. (1989), and that for 3C445 from an average of the data presented in Osterbrock et al. (1976), Tadhunter (1986), Morris & Ward (1988) and Buttiglione et al. (2009). Column 10 presents the 5 GHz radio luminosities taken from Laing et al. (1983). Luminosities were again calculated using H_0 = 71 km s^{-1} Mpc^{-1}, Ω_m = 0.27 and Ω_Λ = 0.73, along with spectral indices derived from the F(70)/F(24) flux ratios for the MFIR data, and the high frequency radio spectral index α^{4.8GHz}_{2.7GHz} for the radio data. (a) Also in the 2Jy sample.
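As an illustration of the luminosity calculation described in the note above, a minimal sketch (assuming the stated cosmology and a power-law k-correction for F_ν ∝ ν^{−α}) might look as follows; the example fluxes and redshift are placeholders, not sample values.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=71.0, Om0=0.27)   # Ho = 71 km/s/Mpc, Om = 0.27, OL = 0.73

def monochromatic_luminosity(flux_jy, z, alpha):
    """L_nu = 4 pi D_L^2 S_nu (1+z)^(alpha-1) in W/Hz, for S_nu ~ nu^-alpha."""
    d_l = cosmo.luminosity_distance(z).to(u.m).value
    s_nu = flux_jy * 1e-26                  # Jy -> W m^-2 Hz^-1
    return 4.0 * np.pi * d_l**2 * s_nu * (1.0 + z) ** (alpha - 1.0)

# Placeholder values: a 50 mJy source at z = 0.05, with the MFIR spectral index
# derived from the F(70)/F(24) flux ratio: F ~ nu^-alpha gives alpha = log(F70/F24)/log(70/24).
f24, f70, z = 0.050, 0.120, 0.05
alpha_mfir = np.log(f70 / f24) / np.log(70.0 / 24.0)
print(f"alpha(70/24) = {alpha_mfir:.2f}")
print(f"L(24um) = {monochromatic_luminosity(f24, z, alpha_mfir):.3e} W/Hz")
```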
The Origin of the mid- to far-IR emission
We first consider the 3CRR sample alone. In Figures 1 (a) and (b) we plot L 24µm and L 70µm against L [OIII] for the 19 objects in the 3CRR sample. A visual inspection of Figure 1 (a) identifies a correlation between L 24µm and L [OIII]λ5007 . This correlation is statistically confirmed in Section 4 and is consistent with the results found previously for the 2Jy sample (D09). Such a correlation supports the hypothesis that the warm, 24µm-emitting dust is heated by direct AGN illumination, assuming that the [OIII] luminosity is a good indicator of intrinsic radiative AGN power. Secondly, a visual inspection of Figure 1 (b), plotting L 70µm vs. L [OIII]λ5007 , reveals less correlation in the 3CRR data compared to the result at L 24µm , also consistent with the results for the 2Jy sample (D09).
We compare these results with the 2Jy sample in Figures 1 (c) and (d) which show L 24µm and L 70µm plotted against L [OIII] for the 3CRR and 2Jy samples plotted together.
Firstly, Figure 1 (c) (L [OIII] vs. L 24µm ) reveals a strong correlation for the combined sample, with good continuity between the 3CRR and 2Jy samples at the low luminosity end of the correlation. Secondly, plotting L 70µm vs. L [OIII] for the combined sample also reveals a strong correlation that is not apparent when plotting the 3CRR sample alone.
Again these correlations are confirmed, statistically, in Section 4.
However, there is notable additional scatter in the L 70µm vs. L [OIII] correlation compared to that involving L 24µm . The crosses in the bottom right corners of Figures 1 (c) and (d) show the maximum error for the points, demonstrating that the scatter is real and not purely a consequence of observational uncertainties (discussed further in Section 5). In this context, the apparent lack of a correlation between L [OIII] and L 70µm for the 3CRR sample alone is plausibly explained in terms of a combination of the high intrinsic scatter of the L 70µm vs. L [OIII] correlation and the small redshift and [OIII] luminosity range of the 3CRR sample.
The high rate of Spitzer detections at MFIR wavelengths for the two samples allows us to conduct statistical tests on the significance of the correlations, discussed in the previous section and presented in Figure 1, using the Spearman rank correlation coefficient.
However, although the overall detection rate is high for the observations of the two samples The results of the Spearman rank tests for the correlations shown in Figure 1 are presented in Table 3. As well as the percentage levels of significance, we also present the r s statistic, where a value of r s close to 1 is rated highly significant.
First, considering the 3CRR sample alone (Columns 2-3), we find that the L 24µm vs. L [OIII] correlation is statistically significant, whereas the L 70µm vs. L [OIII] correlation is not formally significant for this small sample.
Origin of the Far-Infrared Emission
We now consider the cause of the additional scatter in the far-IR (70µm) luminosity correlation. In particular, it is important to consider whether starbursts heat the cool dust that radiates at far-infrared wavelengths, since morphological evidence suggests that at least some powerful radio galaxies are triggered in major gas-rich galaxy mergers (e.g. Heckman et al. 1986). Such mergers are predicted to be associated with powerful starbursts (e.g. di Matteo et al. 2005). Moreover, understanding the connection between starbursts and AGN is important for the interpretation of sub-millimeter observations in the context of the star formation history of radio-loud AGN at high redshift (Archibald et al. 2001).
Evidence for starburst heating in the far-IR continuum
By using results from our own spectral synthesis modeling work, as well as the literature, we have identified objects in both the samples that show clear evidence for recent star formation activity at optical wavelengths 1 (see Table 1 for 3CRR and D09 for the 2Jy sample). Therefore, in Figure 2 we plot the L 24µm and L 70µm data against L [OIII] for the combined 3CRR and 2Jy sample, in this case highlighting the 12 objects in the two samples with optical evidence for starbursts. It is clear from Figure 2 that the majority of these optical starburst objects lie above the main correlation at 70µm, but not at 24µm; note that the regression fitting described below does not include the optical starburst objects. This confirms the result for the 2Jy sample presented in T07 and D09, using an increased sample of starburst radio galaxies (a total of 12 optical starburst objects compared with the 7 in the 2Jy sample alone).
In order to evaluate the degree of enhancement in the far-IR emission above the main correlation, we have fitted regression lines to both plots (a) and (b) in Figure 2. The lines shown are the bisectors of linear least squares fits of x on y and y on x for the objects without optical starburst evidence (see Figure 2 for details). On the 70µm plot in Figure 2 (b) it can be seen that 11 out of 12 of the optical starburst objects lie more than 0.3 dex (i.e. a factor of 2) above the regression line.
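The bisector fit can be reproduced schematically as follows; the data here are synthetic stand-ins, and the bisector-slope formula for the two ordinary least squares fits is the standard one (e.g. Isobe et al. 1990), which we assume matches the fitting used for Figure 2.

```python
import numpy as np

def ols_bisector(x, y):
    """Bisector of the OLS(y|x) and OLS(x|y) regression lines (Isobe et al. 1990)."""
    sxy = np.cov(x, y, bias=True)[0, 1]
    b1 = sxy / np.var(x)          # slope of the y-on-x fit
    b2 = np.var(y) / sxy          # slope of the x-on-y fit, rewritten in y = a + b x form
    b3 = (b1 * b2 - 1.0 + np.sqrt((1.0 + b1**2) * (1.0 + b2**2))) / (b1 + b2)
    a3 = np.mean(y) - b3 * np.mean(x)
    return a3, b3

# Synthetic stand-ins for log L_[OIII] and log L_70um of the non-starburst objects.
rng = np.random.default_rng(1)
log_oiii = rng.uniform(33.0, 36.5, size=50)
log_l70 = 0.8 * log_oiii + 8.0 + rng.normal(0.0, 0.3, size=50)

a, b = ols_bisector(log_oiii, log_l70)
resid = log_l70 - (a + b * log_oiii)      # vertical offsets from the bisector line
print(f"bisector slope = {b:.2f}; objects >0.3 dex above line: {(resid > 0.3).sum()}")
```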
We have also used a one-dimensional Kolmogorov-Smirnov (K-S) two-sample test to compare the cumulative distributions of the vertical displacements from the fitted regression line in the L 70µm vs. L [OIII] plot (Figure 2). The test calculates the probability that the starburst and non-starburst objects are drawn from the same distribution. The null hypothesis that the optical starburst and non-starburst objects are drawn from the same parent population is rejected at better than the 0.01% level. This result further supports our interpretation that the far-infrared emission is boosted in the optical starburst objects.
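The K-S comparison of the residual distributions can be sketched as follows, continuing the synthetic example above; scipy's two-sample implementation is assumed, and the offsets are illustrative, not the measured ones.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)

# Vertical displacements from the fitted regression line (synthetic stand-ins):
resid_non_sb = rng.normal(0.0, 0.25, size=51)     # 51 non-starburst objects in the combined sample
resid_sb = rng.normal(0.55, 0.25, size=12)        # 12 optical starburst objects, boosted at 70um

stat, p_value = ks_2samp(resid_sb, resid_non_sb)
print(f"K-S statistic = {stat:.2f}, p = {p_value:.1e}")
# A p-value below 1e-4 rejects, at better than the 0.01% level, the null hypothesis
# that both groups are drawn from the same parent population.
```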
Color differences at low redshifts
The objects that fall significantly below the L 70µm vs. L [OIII] correlation in Figure 1 (d) all have low redshift (z < 0.09), and all (apart from 3C192) have much warmer colors than the rest of the objects in the 3CRR and 2Jy samples. Intriguingly, this group includes all four of the objects in the original papers that defined BLRG as a class (Miller 1975 and Osterbrock et al. 1976), and notably all three of the broad-line objects in the 3CRR sample.

In Figure 3 we present the 70µm/24µm color vs. redshift, L 5GHz and L [OIII] for the 3CRR and 2Jy samples. In this figure we have marked in red the 4 BLRG that lie 1−2σ below the correlation between L 70µm and L [OIII]. These 4 objects also have warm colors: 0.6 < F(70)/F(24) < 0.8, compared to a median of F(70)/F(24) = 2.3. To begin with, Figure 3 shows clearly that the 3CRR sample objects have, on average, lower redshifts and radio powers than the 2Jy sample. However, it is interesting that the 4 BLRG with warm colors all tend to higher L [OIII] luminosity than all but two (3C321, 3C403) of the low redshift 3CRR objects. Indeed, the fact that these objects tend to higher [OIII] emission than objects of similar redshift and radio luminosity explains their position under the correlation between L [OIII] and L 70µm. In addition, the fact that the BLRG do not fall below the L 24µm vs. L [OIII] correlation is explained by an enhancement in their 24µm emission as well as that in their [OIII] emission. This is consistent with their warm colors (see Fig 3), and the tendency of their Spitzer IRS spectra to peak at around 24µm (Dicken et al. 2010 in prep).
6. Discussion
Origin of the mid- and far-infrared emission
From the plots of [OIII] emission line vs. MFIR luminosities, as well as from the thorough statistical analysis, we have shown that the MFIR emission in 3CRR radio galaxies most likely originates in thermally emitting dust heated by AGN illumination for the majority of objects.
Considering the far-IR (cool dust), there are two main candidate heating mechanisms: AGN illumination and starburst heating. However, the similar slopes of the 24 and 70µm correlations (gradients of the fitted regression lines are 0.83 ± 0.05 and 0.82 ± 0.08 for 24µm and 70µm respectively 2 ) presented in Figure 2 indicate a common heating mechanism for the warm and cool MFIR emitting dust components, i.e. AGN illumination. This is consistent with models that are capable of producing the broad MFIR SEDs by AGN illumination of near-nuclear clumpy tori (Nenkova et al. 2002, 2008). Alternatively, we showed in D09 that it is possible to account for the observed far-IR emission from the re-radiation of AGN illuminated narrow line clouds. Such a scenario is attractive as it does not require special or complex torus geometries for the circum-nuclear dust structures.
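The consistency of the two quoted slopes can be quantified with simple error propagation:
\[ \frac{|0.83 - 0.82|}{\sqrt{0.05^2 + 0.08^2}} \;\approx\; \frac{0.01}{0.094} \;\approx\; 0.1\sigma, \]
i.e. the 24µm and 70µm slopes are statistically indistinguishable.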
Many previous studies have acknowledged the benefits of a statistical approach to understanding the origin of the thermal MFIR emission from radio-loud AGN, given the impossibility of resolving the MFIR emitting dust structures in the majority of objects. Such investigations began with exploratory IRAS studies investigating the contributions of thermal and non-thermal emission (Neugebauer et al. 1986; Golombek et al. 1988; Knapp et al. 1990; Impey & Gregorini 1993). However, it was the studies of Heckman et al. (1994) and Hes et al. (1995) that first provided evidence for a link between the MFIR emission and the AGN, finding the 60µm and extended radio luminosities to be correlated over 3 orders of magnitude. However, the detection rate in the far-IR of these IRAS observations was low (<30%). Therefore Heckman et al. (1994) based their results on groups of objects averaged in redshift bins, and Hes et al. (1995) plotted only the objects that were detected.
In further work, Haas et al. (1998) suggested that the broad range of temperatures of the MFIR emitting dust argues in favor of AGN heating, although this cannot rule out a contribution from starburst heating. Subsequently, with ISO data, Haas et al. (2004) found that the ratios of mid- to far-infrared emission were higher for radio-loud AGN compared with ULIRGs. As AGN are likely to have hotter dust temperatures, this is consistent with, but does not prove, an AGN origin for the MFIR emission.
One of the first studies to take advantage of sensitive MFIR data from Spitzer was that by Shi et al. (2005). They found that a subset of the radio-loud AGN in their heterogeneous sample fall in the region of the MFIR color vs. [OIII]/Hβ diagnostic diagram normally occupied by AGN (i.e. relatively warm colors and large [OIII]/Hβ, see Kewley et al. (2001)), thus providing evidence that the cool dust is heated by AGN illumination rather than by starbursts. In addition, Cleary et al. (2007) found a correlation between MFIR luminosity (corrected for non-thermal contamination) and low frequency radio luminosity, suggesting AGN heating of the dust, based on a sample of 33 intermediate-redshift 3CR radio galaxies with a relatively low detection rate at far-IR wavelengths (60%). The results we have presented are based on the combined 3CRR and 2Jy complete sample of 63 objects with a 92% detection rate at 70µm. Along with our previous work, these results strongly reinforce the idea that the heating of the cool, far-IR emitting dust in the majority of radio galaxies is dominated by AGN illumination.
Starburst contribution to the far-infrared emission
We have shown that the additional scatter above the main correlation between L 70µm and L [OIII] seen in the 3CRR sample is accounted for by starburst boosting of the 70µm far-IR emission. This enhancement is not seen for the optical starburst objects at 24µm. The results from the 3CRR sample add statistical weight to our previous study, increasing the number of optical starburst objects from 7 in the 2Jy sample alone to 12 in the combined 3CRR and 2Jy sample.
It is possible to estimate the rate of energetically significant starburst activity in the combined 3CRR and 2Jy sample: based on the objects with optical evidence for starbursts, and allowing for the uncertain cases, starbursts are energetically significant in only 17−35% of objects. This result is also consistent with the study of Shi et al. (2007), who find starburst-tracing PAH features in only 2 of the 10 3CRR objects that overlap with the sample presented in this paper, and that of Fu & Stockton (2009), who do not find evidence for PAH features in any of the 12 FRII radio galaxies in their sample.
It is generally accepted that the dust producing the continuum emission at 24µm in AGN is heated almost exclusively by AGN illumination. In order for the cool dust emitting in the far-IR to be dominated by starburst rather than AGN heating, a remarkable degree of coordination between AGN and starburst activity would be implied, given the strong correlations between L 24µm , L 70µm and L [OIII] , and the similarity between the slopes of the L 24µm vs. L [OIII] and L 70µm vs. L [OIII] correlations. Although such coordination cannot be entirely ruled out, we consider it less likely. It is a fact that only a minority of objects in the 2Jy and 3CRR samples show any evidence for recent star formation activity; therefore, the current phase of AGN activity seen in these objects is unlikely to be fueled by the gas flows that occur at the peaks of major gas-rich mergers.
Unified schemes
Thermal MFIR continuum emission can be used to test the orientation-based unified schemes for powerful radio sources (e.g. Barthel 1989), under the assumption that the MFIR emission is isotropic. For orientation-based unification to hold, correlations between MFIR emission and other isotropic emission, such as the low frequency radio or [OIII] emission, should reveal no differences between the relative positions of the different optical classes of objects. This tests the hypothesis that all the objects contain the same type of central engine; the different optical classes arise because the optical emission is not emitted isotropically and can be obscured. In Figure 1 we have labelled the objects by their optical class as broad-line radio galaxies and quasars (BLRG/Q), narrow-line radio galaxies (NLRG) and weak-line radio galaxies (WLRG) 3 . Applying such a test to the combined 3CRR and 2Jy sample in Figure 1 (c) and (d), in general, little difference between the MFIR luminosities of BLRG/Q compared to NLRG is found. This result is in contrast to studies based on lower sensitivity IRAS data (Heckman et al. 1994; Hes et al. 1995), which suggested that BLRG/Q have enhanced MFIR emission compared with NLRG objects of similar radio power.
However, as discussed above, it is apparent that all three 3CRR BLRGs and one of the two BLRG in the 2Jy sample with z < 0.1 4 lie below the correlation between L 70µm and L [OIII] . On the basis of Figure 3 it is likely that this displacement is due to their relatively strong [OIII] emission and not sub-luminous 70µm emission. The fact that this group of low-z BLRG falls within the main body of the points in the L 24µm vs. L [OIII] correlation is consistent with anisotropic [OIII] emission, provided that the 24µm emission is also enhanced in these objects by a similar degree to the [OIII]. Such enhancement is consistent with the warm colors of the low-z BLRG, as well as the tendency of their Spitzer IRS spectra to peak at around 24µm (Dicken et al. 2010 in prep).
In order to reconcile the position of the low-z BLRG in the MFIR correlation plots with the orientation-based unified schemes, both the [OIII] emission and the 24µm emission must be anisotropic, and subject to significant dust extinction by the torus. Such anisotropy in mid-infrared emission above 15µm has been seen for Seyfert galaxies when comparing the data for type 1 and type 2 objects (Buchanan et al. 2006). In earlier work, van Bemmel & Barthel (2001) found BLRG to be relatively weak in the far-infrared compared with their emission at around 24µm, and explained this in terms of the BLRG objects lacking a cool dust component in the far-infrared, an explanation supported by the apparent lack of morphological features due to dust in the BLRG relative to the NLRG in optical HST images 5 . If correct, this explanation would be inconsistent with the simplest versions of the orientation-based unified schemes, since it would imply that BLRG represent a separate class of radio-loud AGN that lack dust or, alternatively, a later evolutionary phase in the evolution of the radio-loud AGN population. Based on the results presented in this paper (in particular, Figures 1 and 3),
we feel that anisotropic [OIII] and 24µm emission is a more plausible explanation for the differences between the properties of the low-z BLRG and NLRG.
Unfortunately, the number of low-z BLRG in our sample is small. To further investigate the apparently unusual MFIR properties of such objects, it will be important in the future to obtain observations of a larger sample, and also to examine their mid-infrared spectra in more detail.

5 Note, however, that the NLRG sample of van Bemmel & Barthel (2001) is heterogeneous and, even with the spatial resolution of the HST, it is difficult to detect near-nuclear dust features in the BLRG because, unlike the NLRG, they have luminous point-like nuclei at optical wavelengths.
Conclusions
In this paper we have investigated MFIR observations of a sample of 19 3CRR radio galaxies. The main conclusions are as follows.
• From the statistical analysis of the 3CRR sample, correlating MFIR luminosities with the AGN power indicator [OIII], we conclude that the dominant heating mechanism for mid-IR emitting dust is AGN illumination. This result is consistent with our previous work based on the 2Jy sample of southern radio galaxies. Moreover, based on our analysis of the combined 2Jy and 3CRR sample, we conclude that the dominant heating mechanism for the cooler, far-IR emitting dust is also likely to be AGN illumination in the majority of radio-loud AGN.
-25for mid-IR emitting dust is AGN illumination. This result is consistent with our previous work based on the 2Jy sample of southern radio galaxies. Moreover, based on our analysis of the combined 2Jy and 3CRR sample, we conclude that the dominant heating mechanism for the cooler, far-IR emitting dust is also likely to be AGN illumination in the majority of radio-loud AGN.
• Following the indications of previous work, we have investigated whether the additional scatter in the [OIII] vs. 70µm luminosity correlation for 3CRR objects is a consequence of starburst heating which boosts the far-IR emission in some objects. We find that this is indeed the case for the 12 optically identified starburst objects in the combined 2Jy and 3CRR sample. We conclude that starburst heating of the far-IR emitting dust is important in only 17−35% of objects.
• Although we find no statistically significant differences between the properties of the BLRG/Q and NLRG for the joint 2Jy and 3CRR sample, or the 2Jy sample alone, we note that all the classical BLRG in our 3CRR sample at z < 0.1 show evidence for enhanced [OIII] emission and warmer MFIR colors compared with the majority of NLRG at similar redshifts. This suggests that torus-induced anisotropy in [OIII] and 24µm emission may be more significant in powerful radio galaxies at low redshifts than in their higher redshift counterparts. However, larger samples, along with more detailed comparisons between the mid-IR spectra of BLRG and NLRG, are required to put this result on a firmer footing.

5 Note, however, that the NLRG sample of van Bemmel & Barthel (2001) is heterogeneous and, even with the spatial resolution of the HST, it is difficult to detect near-nuclear dust features in the BLRG because, unlike the NLRG, they have luminous point-like nuclei at optical wavelengths.
Fig. 1. - Luminosity correlation plots: (a) L 24µm vs. L [OIII]λ5007 and (b) L 70µm vs. L [OIII]λ5007 for the 3CRR sample alone; (c) L 24µm vs. L [OIII]λ5007 and (d) L 70µm vs. L [OIII]λ5007 for the combined 3CRR and 2Jy sample.
When assessing the correlations, it is important to consider the effect of the 7 remaining upper limits in 70µm luminosity and 4 upper limits in the [OIII] emission line luminosity. In order to remove the effect of the 4 upper limits in [OIII] on the statistical tests, the 2Jy sample is limited to z > 0.06 and one object with an upper limit in [OIII] (PKS1839-48) was removed, leaving 38 of the original 46 2Jy objects. In addition, we applied a bootstrap method for dealing with the six remaining 70µm upper limits. For this we replaced the upper limits with 70µm fluxes derived using the measured 24µm flux of each object and a 70µm/24µm flux ratio chosen at random from the distribution of measured flux ratios for the detected sample objects. These 70µm estimates were then converted to luminosities and included in the rank correlation test. This process was repeated 1000 times and the median of the correlation coefficients for those cycles was used for the correlation statistics involving 70µm (see D09 for further details). Also, for the purposes of comparison, we investigated the correlations with upper limits using the ASURV (Isobe et al. 1986; Lavalley et al. 1992) package implemented in IRAF, including all upper limits in [OIII]. The survival analysis statistics used in ASURV have been acknowledged as a powerful tool for analyzing samples with upper or lower limits.
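To make the bootstrap treatment of the 70µm upper limits concrete, the following minimal Python sketch reproduces its logic; the array names and the use of scipy.stats.spearmanr are our own illustrative choices, not the pipeline actually used for D09 or this paper.

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

def bootstrap_spearman(l_oiii, l24, l70, is_limit, n_cycles=1000):
    # Median Spearman rho between L_[OIII] and L_70um over n_cycles
    # realizations, replacing each 70um upper limit with the object's
    # 24um value scaled by a ratio drawn at random (with replacement)
    # from the measured 70um/24um ratios of the detected objects.
    detected = ~is_limit
    ratios = l70[detected] / l24[detected]
    rhos = np.empty(n_cycles)
    for i in range(n_cycles):
        l70_boot = l70.copy()
        l70_boot[is_limit] = l24[is_limit] * rng.choice(ratios, is_limit.sum())
        rhos[i] = spearmanr(l_oiii, l70_boot).correlation
    return np.median(rhos)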
Fig. 2. - Plots showing the correlations between MFIR and [OIII] luminosity for the combined 3CRR and 2Jy sample at (a) 24µm and (b) 70µm, with optical starbursts marked with separate symbols (blue stars). The regression line is fitted to the entire 3CRR sample as well as the 2Jy sample objects with z > 0.06 in order to avoid most of the objects with upper limits in [OIII].

It is evident from a visual inspection of Figure 2 that much of the additional scatter in the L 70µm vs. L [OIII] correlation compared to the L 24µm vs. L [OIII] correlation is a consequence of enhanced far-IR emission in the objects that have been identified as having optical starbursts.
Fig. 3. - Plots of 70µm/24µm MFIR color vs. redshift, 5 GHz total radio power and [OIII] luminosity (from top to bottom respectively). Symbols are the same as in Figure 1. The four BLRG lying below the L 70µm vs. L [OIII] correlation with the warmest colors are marked in red.

A detailed inspection of the L 70µm vs. L [OIII] plots in Figures 1 (d) and 2 (b) identifies the six objects that lie on the bottom edge of the correlation (NLRG: 3C98, 3C192; BLRG: 3C227, 3C382, 3C390.3, 3C445); the latter are displaced by 1−3 σ based on the distribution of the residuals from the fitted regression line. The amount by which these objects lie below the regression line is much less than that by which the starburst objects are boosted above the correlation. However, it is interesting that these objects are all at
We have estimated the incidence of starbursts in the combined 2Jy and 3CRR samples by considering the main optical and infrared indicators of starbursts. The results are then: 12 (19%) of the objects in the combined sample show unambiguous spectroscopic evidence for recent star formation activity at optical wavelengths; 12 (19%) have cool MFIR colors (L 70µm /L 24µm > 5); 22 (35%) of the objects lie more than 0.3 dex (factor ×2) above the regression line in the L 70µm vs. L [OIII] correlation in Figure 2; and 22 objects (35%) show at least one of these indicators. Therefore an estimate of the proportion of powerful radio-loud AGN showing evidence for energetically significant recent star formation activity in the combined 2Jy plus 3CRR sample is in the range 19−35%.
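As a quick arithmetic cross-check of the percentages quoted above, note that the combined sample contains 19 (3CRR) + 46 (2Jy) − 2 (objects in common) = 63 objects; the sample sizes are taken from elsewhere in the text, and the check itself is ours.

n_combined = 19 + 46 - 2                    # 63 objects in the combined sample
for n in (12, 12, 22, 22):
    print(n, round(100 * n / n_combined))   # -> 19, 19, 35, 35 per cent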
Table 1. 3CRR Sample Data

Name      z      RA(J2000)   Dec(J2000)  Opt. Class  Rad. Class  SB  SB ref
3C33      0.060  01 08 52.8  +13 20 14   NLRG        FRII        No  6
3C35      0.067  01 12 02.2  +49 28 35   WLRG        FRII        No  7
3C98      0.030  03 58 54.4  +10 26 03   NLRG        FRII        No  2
DA240     0.036  07 48 36.9  +55 48 58   WLRG        FRII        No  2
3C192     0.060  08 05 35.0  +24 09 50   NLRG        FRII        No  7
4C73.08   0.058  09 49 45.9  +73 14 23   NLRG        FRII        No  -
3C236     0.101  10 06 01.7  +34 54 10   WLRG        FRII        SB  1,4
3C277.3   0.085  12 54 11.7  +27 37 33   WLRG        FRI/FRII    No  8
3C285     0.079  13 21 17.8  +42 35 15   NLRG        FRII        SB  1,2
3C293     0.045  13 52 17.8  +31 26 46   WLRG        FRI/FRII    SB  5
3C305     0.042  14 49 21.6  +63 16 14   NLRG        FRII/CSS    SB  5
3C321     0.096  15 31 43.4  +24 04 19   NLRG        FRII        SB  1,3
3C326     0.090  15 52 09.1  +20 05 24   NLRG        FRII        No  9
3C382     0.058  18 35 03.4  +32 41 47   BLRG        FRII        U   10
3C388     0.092  18 44 02.4  +45 33 30   WLRG        FRII        No  9
3C390.3   0.056  18 42 09.0  +79 46 17   BLRG        FRII        U   10
3C403 a   0.059  19 52 15.7  +02 30 23   NLRG        FRII        No  9
3C445 a   0.057  22 23 49.6  −02 06 12   BLRG        FRII        U   10
3C452     0.081  22 45 48.8  +39 41 16   NLRG        FRII        No  7

Note. - The basic parameters for the 3CRR sample are presented. Note that 2 of the objects in the 3CRR sample are in common with the 2Jy sample: 3C403 (PKS1949+02) and 3C445 (PKS2221−02). Fluxes were measured from Spitzer observations downloaded from the Spitzer archive. Definitions for column 5 are: NLRG - narrow-line radio galaxy, BLRG - broad-line radio galaxy, WLRG - weak-line radio galaxy. Definitions for column 7 are: No - no optical starburst, U - uncertain starburst object, SB - optical starburst object. SB references are: (1) Holt et al. (2007), (2) Aretxaga et al. (2001), (3) Tadhunter et al. (1996), (4) O'Dea et al. (2001), (5) Tadhunter et al. (2005), (6) Robinson (2001), (7) Wills et al. (2002), (8) Clark (1996),
Table 2. 3CRR Sample Luminosities

Name     z      S24µm (mJy)  σ    L24 (W/Hz)    S70µm (mJy)  σ    L70 (W/Hz)     L[OIII] (W)   L5GHz radio (W/Hz)
3C33     0.060  99.4         0.2  9.0 × 10^23   145.5        3.4  1.3 × 10^24    2.0 × 10^34   3.4 × 10^25
3C35     0.067  0.9          0.2  1.1 × 10^22   18.7         6.4  2.3 × 10^23    1.0 × 10^33   6.5 × 10^24
3C98     0.030  45.5         0.6  9.2 × 10^22   36.4         3.5  7.4 × 10^22    1.0 × 10^34   7.0 × 10^24
DA240    0.036  3.9          0.4  1.2 × 10^22   32.1         4.6  9.7 × 10^22    6.0 × 10^32   5.2 × 10^24
3C192    0.060  6.3          0.4  5.2 × 10^22   15.1         6.7  1.3 × 10^23    2.2 × 10^34   1.6 × 10^25
4C73.08  0.058  44.6         0.4  3.2 × 10^23   23.2         2.3  1.7 × 10^23    9.4 × 10^33   4.5 × 10^24
3C236    0.101  17.3         0.3  4.5 × 10^23   64.6         5.5  1.7 × 10^24    8.1 × 10^33   4.1 × 10^25
3C277.3  0.085  9.0          0.3  1.6 × 10^23   18.8         3.3  3.4 × 10^23    8.6 × 10^33   2.1 × 10^25
3C285    0.079  46.2         0.4  7.2 × 10^23   200.6        2.7  3.2 × 10^24    3.6 × 10^33   9.2 × 10^24
3C293    0.045  31.1         0.3  1.5 × 10^23   303.0        6.7  1.5 × 10^24    6.4 × 10^32   9.0 × 10^24
3C305    0.042  44.0         0.1  1.8 × 10^23   311.5        2.3  1.3 × 10^24    1.1 × 10^34   4.3 × 10^24
3C321    0.096  264.0        0.1  6.1 × 10^24   897.1        5.7  2.1 × 10^25    2.1 × 10^35   2.7 × 10^25
3C326    0.090  0.7          0.1  1.5 × 10^22   <9.0         -    < 2.0 × 10^23  2.5 × 10^33   9.5 × 10^24
3C382    0.058  98.8         0.2  7.2 × 10^23   56.3         4.3  4.1 × 10^23    6.0 × 10^34   1.8 × 10^25
3C388    0.092  2.6          0.2  5.4 × 10^22   <11.1        -    < 2.5 × 10^23  5.2 × 10^33   3.8 × 10^25
3C390.3  0.056  217.1        0.2  1.5 × 10^24   162.9        3.1  1.1 × 10^24    1.2 × 10^35   3.2 × 10^25
3C403 a  0.059  193.0        0.2  1.6 × 10^24   348.4        3.7  2.7 × 10^24    7.2 × 10^34   1.9 × 10^25
3C445 a  0.057  232.1        0.3  1.7 × 10^24   186.4        5.2  1.3 × 10^24    1.7 × 10^35   1.7 × 10^25
3C452    0.081  55.6         0.1  4.3 × 10^23   55.7         4.7  4.3 × 10^23    1.1 × 10^34   1.9 × 10^25

Note. - Columns 3, 4, 5, 6, 7 and 8 present the 24 and 70µm fluxes, errors and luminosities for the 3CRR sample calculated from the fluxes. Column 9 presents the [OIII] luminosities calculated from fluxes taken from Buttiglione et al. (2009), except for the cases of 3C321, DA240 and 4C73.08, which were taken from
The L 24µm vs. L [OIII] correlation is highly significant: we reject the null hypothesis that the variables are unrelated at a >99.5% level. On the other hand, the L 70µm vs. L [OIII] correlation for the 3CRR sample alone is the least significant correlation we have examined: we only reject the null hypothesis that the variables are unrelated at a >80% level. The additional scatter in the correlation between L [OIII] and L 70µm is further discussed in Sections 5.1 and 6. Inspecting the results for the combined 3CRR and 2Jy sample presented in Columns 4 and 6 of Table 3, we find that all the tests show a correlation significance of better than 99.9%, and the ASURV results reinforce those obtained using the bootstrap technique outlined above. These combined sample statistical tests strongly support the relation between the thermal MFIR emission and the [OIII] emission in radio galaxies for a broad range of redshifts and radio powers. Because we believe that the [OIII] emission is a good indicator of AGN power, the combined 3CRR and 2Jy sample statistical results provide some of the strongest empirical evidence to date that the dominant heating mechanism for the MFIR continuum emitting dust is AGN illumination.

Moreover, the second part of Table 3 (rows 3 and 4) shows the results of a Spearman partial rank correlation test. This tests the hypothesis that the correlations are not intrinsic but arise because L [OIII] and L MFIR are independently correlated with redshift. However, in both cases the null hypothesis that the variables are unrelated is still rejected at the >99.5% level of significance.
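For illustration, such a partial Spearman rank correlation controlling for redshift can be computed with the standard first-order partial-correlation formula applied to the pairwise rank correlations; this short sketch is our own and is not the code used for the published analysis.

import numpy as np
from scipy.stats import spearmanr

def partial_spearman(x, y, z):
    # First-order partial correlation of rank correlations:
    # r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2)(1 - r_yz^2))
    r_xy = spearmanr(x, y).correlation
    r_xz = spearmanr(x, z).correlation
    r_yz = spearmanr(y, z).correlation
    return (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))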
This work is based [in part] on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. Based on observations made with ESO Telescopes at the Paranal Observatory. D. D. acknowledges support from NASA grant based on observations from Spitzer program 50588.

Facilities: Spitzer (MIPS).
Table 3. 3CRR and 2Jy Sample Statistical Analysis

Rank Correlation       rs (3CRR)  significance  rs (3CRR+2Jy)  significance  rs (ASURV)  significance
(1) L24 vs. L[OIII]    0.74       > 99.5%       0.89           > 99.9%       0.90        > 99.9%
(2) L70 vs. L[OIII]    0.3        > 80%         0.77           > 99.9%       0.76        > 99.9%

Partial rank correlation with z
(3) L24 vs. L[OIII]    0.78       > 99.9%
(4) L70 vs. L[OIII]    0.47       > 99.9%

Note. - Results of various Spearman rank correlation statistics for the 3CRR sample and the combined 3CRR and 2Jy sample for the correlations presented in Figure 1. Values of 0 < rs < 1 are given for each test, where a value close to 1 is highly significant. Columns 2 and 3 present the statistics for the 3CRR sample alone. Columns 4 and 5 present the statistics for the combined 2Jy and 3CRR sample, undertaken with a redshift-limited sample (z > 0.06 for the 2Jy sample, but including all of the 3CRR sample); in this test the remaining upper limits were handled with the bootstrap method described in Section 4. Columns 6 and 7 present statistics for all the objects in the combined sample, handling the upper limits using survival analysis statistics.
In the following discussion we will label these objects as optical starbursts. However, we emphasize that while some of these objects do contain genuine, current, starburst activity, others are in fact in a post-starburst phase. See Tadhunter et al. (2005); Holt et al. (2007); Wills et al. (2008) for further discussion.
The slope calculations do not include the identified optical starburst objects.
3 See Tadhunter et al. (1998) for the definition and D09 for further detailed discussion.
4 Of the 5 low-redshift (z < 0.1) BLRG in the combined sample, only PKS1733-56 (z = 0.098) falls above the correlation. However, this object has a significant starburst component, as indicated by the detection of strong PAH features in its mid-IR spectrum (Dicken et al. 2010).
. E N Archibald, J S Dunlop, D H Hughes, S Rawlings, S A Eales, R J Ivison, MNRAS. 323417Archibald, E. N., Dunlop, J. S., Hughes, D. H., Rawlings, S., Eales, S. A., & Ivison, R. J. 2001, MNRAS, 323, 417
. I Aretxaga, E Terlevich, R J Terlevich, G Cotter, Á I Díaz, MNRAS. 325636Aretxaga, I., Terlevich, E., Terlevich, R. J., Cotter, G., & Díaz,Á. I. 2001, MNRAS, 325, 636
. P D Barthel, ApJ. 336606Barthel, P. D. 1989, ApJ, 336, 606
. C L Buchanan, J F Gallimore, C P O'dea, S A Baum, D J Axon, A Robinson, M Elitzur, M Elvis, AJ. 132401Buchanan, C. L., Gallimore, J. F., O'Dea, C. P., Baum, S. A., Axon, D. J., Robinson, A., Elitzur, M., & Elvis, M. 2006, AJ, 132, 401
. S Buttiglione, A Capetti, A Celotti, D J Axon, M Chiaberge, F D Macchetto, W B Sparks, A&A. 4951033Buttiglione, S., Capetti, A., Celotti, A., Axon, D. J., Chiaberge, M., Macchetto, F. D., & Sparks, W. B. 2009, A&A, 495, 1033
Clark, N. 1996, PhD thesis, University of Sheffield
Clavel, J., & Wamsteker, W. 1987, ApJ, 320, L9
. K Cleary, C R Lawrence, J A Marshall, L Hao, D Meier, ApJ. 660117Cleary, K., Lawrence, C. R., Marshall, J. A., Hao, L., & Meier, D. 2007, ApJ, 660, 117
. P Di Matteo, R Capuzzo Dolcetta, P Miocchi, Celestial Mechanics and Dynamical Astronomy. 9159di Matteo, P., Capuzzo Dolcetta, R., & Miocchi, P. 2005, Celestial Mechanics and Dynamical Astronomy, 91, 59
. H Fu, A Stockton, ApJ. 6961693Fu, H., & Stockton, A. 2009, ApJ, 696, 1693
. D Golombek, G K Miley, G Neugebauer, AJ. 9526Golombek, D., Miley, G. K., & Neugebauer, G. 1988, AJ, 95, 26
. M Haas, R Chini, K Meisenheimer, M Stickel, D Lemke, U Klaas, E Kreysa, ApJ. 503109Haas, M., Chini, R., Meisenheimer, K., Stickel, M., Lemke, D., Klaas, U., & Kreysa, E. 1998, ApJ, 503, L109+
. M Haas, S A H Müller, F Bertoldi, R Chini, S Egner, W Freudling, U Klaas, O Krause, D Lemke, K Meisenheimer, R Siebenmorgen, I Van Bemmel, A&A. 424531Haas, M., Müller, S. A. H., Bertoldi, F., Chini, R., Egner, S., Freudling, W., Klaas, U., Krause, O., Lemke, D., Meisenheimer, K., Siebenmorgen, R., & van Bemmel, I. 2004, A&A, 424, 531
. T M Heckman, K C Chambers, M Postman, ApJ. 39139Heckman, T. M., Chambers, K. C., & Postman, M. 1992, ApJ, 391, 39
. T M Heckman, C P O'dea, S A Baum, E Laurikainen, ApJ. 42865Heckman, T. M., O'Dea, C. P., Baum, S. A., & Laurikainen, E. 1994, ApJ, 428, 65
. T M Heckman, E P Smith, S A Baum, W J M Van Breugel, G K Miley, G D Illingworth, G D Bothun, B Balick, ApJ. 311526Heckman, T. M., Smith, E. P., Baum, S. A., van Breugel, W. J. M., Miley, G. K., Illingworth, G. D., Bothun, G. D., & Balick, B. 1986, ApJ, 311, 526
. R Hes, P D Barthel, H Hoekstra, A&A. 3038Hes, R., Barthel, P. D., & Hoekstra, H. 1995, A&A, 303, 8
. J Holt, C N Tadhunter, R M Delgado, K J Inskip, J Rodriguez, B H C Emonts, R Morganti, K A Wills, MNRAS. 381611Holt, J., Tadhunter, C. N., González Delgado, R. M., Inskip, K. J., Rodriguez, J., Emonts, B. H. C., Morganti, R., & Wills, K. A. 2007, MNRAS, 381, 611
. C Impey, L Gregorini, AJ. 105853Impey, C., & Gregorini, L. 1993, AJ, 105, 853
. T Isobe, E D Feigelson, P I Nelson, ApJ. 306490Isobe, T., Feigelson, E. D., & Nelson, P. I. 1986, ApJ, 306, 490
. L J Kewley, C A Heisler, M A Dopita, S Lumsden, ApJS. 13237Kewley, L. J., Heisler, C. A., Dopita, M. A., & Lumsden, S. 2001, ApJS, 132, 37
. G R Knapp, W E Bies, J H Van Gorkom, AJ. 99476Knapp, G. R., Bies, W. E., & van Gorkom, J. H. 1990, AJ, 99, 476
. R A Laing, J M Riley, M S Longair, MNRAS. 204151Laing, R. A., Riley, J. M., & Longair, M. S. 1983, MNRAS, 204, 151
M Lavalley, T Isobe, E Feigelson, Astronomical Society of the Pacific Conference Series. D. M. Worrall, C. Biemesderfer, & J. Barnes25245Astronomical Data Analysis Software and Systems ILavalley, M., Isobe, T., & Feigelson, E. 1992, in Astronomical Society of the Pacific Conference Series, Vol. 25, Astronomical Data Analysis Software and Systems I, ed. D. M. Worrall, C. Biemesderfer, & J. Barnes, 245-+
. S L Morris, M J Ward, MNRAS. 230639Morris, S. L., & Ward, M. J. 1988, MNRAS, 230, 639
. M Nenkova, Ž Ivezić, M Elitzur, ApJ. 5709Nenkova, M., Ivezić,Ž., & Elitzur, M. 2002, ApJ, 570, L9
. M Nenkova, M M Sirocky, R Nikutta, Ž Ivezić, M Elitzur, ApJ. 685160Nenkova, M., Sirocky, M. M., Nikutta, R., Ivezić,Ž., & Elitzur, M. 2008, ApJ, 685, 160
. G Neugebauer, G K Miley, B T Soifer, P E Clegg, ApJ. 308815Neugebauer, G., Miley, G. K., Soifer, B. T., & Clegg, P. E. 1986, ApJ, 308, 815
. C P O'dea, A M Koekemoer, S A Baum, W B Sparks, A R Martel, M G Allen, F D Macchetto, G K Miley, AJ. 1211915O'Dea, C. P., Koekemoer, A. M., Baum, S. A., Sparks, W. B., Martel, A. R., Allen, M. G., Macchetto, F. D., & Miley, G. K. 2001, AJ, 121, 1915
. D E Osterbrock, A T Koski, M M Phillips, ApJ. 206898Osterbrock, D. E., Koski, A. T., & Phillips, M. M. 1976, ApJ, 206, 898
. D E Osterbrock, J S Miller, ApJ. 197535Osterbrock, D. E., & Miller, J. S. 1975, ApJ, 197, 535
. E A Pier, J H Krolik, ApJ. 40199Pier, E. A., & Krolik, J. H. 1992, ApJ, 401, 99
. S Rawlings, R Saunders, Nature. 349138Rawlings, S., & Saunders, R. 1991, Nature, 349, 138
Robinson, A. 2001, PhD thesis, University of Sheffield
Rowan-Robinson, M. 1995, MNRAS, 272, 737
. R Saunders, J E Baldwin, S Rawlings, P J Warner, L Miller, MNRAS. 238777Saunders, R., Baldwin, J. E., Rawlings, S., Warner, P. J., & Miller, L. 1989, MNRAS, 238, 777
. M Schweitzer, D Lutz, E Sturm, A Contursi, L J Tacconi, M D Lehnert, K M Dasyra, R Genzel, S Veilleux, D Rupke, D.-C Kim, A J Baker, H Netzer, A Sternberg, J Mazzarella, S Lord, ApJ. 64979Schweitzer, M., Lutz, D., Sturm, E., Contursi, A., Tacconi, L. J., Lehnert, M. D., Dasyra, K. M., Genzel, R., Veilleux, S., Rupke, D., Kim, D.-C., Baker, A. J., Netzer, H., Sternberg, A., Mazzarella, J., & Lord, S. 2006, ApJ, 649, 79
. Y Shi, P Ogle, G H Rieke, R Antonucci, D C Hines, P S Smith, F J Low, J Bouwman, C Willmer, ApJ. 669841Shi, Y., Ogle, P., Rieke, G. H., Antonucci, R., Hines, D. C., Smith, P. S., Low, F. J., Bouwman, J., & Willmer, C. 2007, ApJ, 669, 841
. Y Shi, G H Rieke, D C Hines, G Neugebauer, M Blaylock, J Rigby, E Egami, K D Gordon, A Alonso-Herrero, ApJ. 62988Shi, Y., Rieke, G. H., Hines, D. C., Neugebauer, G., Blaylock, M., Rigby, J., Egami, E., Gordon, K. D., & Alonso-Herrero, A. 2005, ApJ, 629, 88
. C Simpson, MNRAS. 29739Simpson, C. 1998, MNRAS, 297, L39
. C Tadhunter, University of SussexPhD thesisTadhunter, C. 1986, PhD thesis, University of Sussex
Tadhunter, C., Robinson, T. G., González Delgado, R. M., Wills, K., & Morganti, R. 2005, MNRAS, 356, 480
Tadhunter, C. N., Dickson, R. C., & Shaw, M. A. 1996, MNRAS, 281, 591
. C N Tadhunter, R Morganti, A Robinson, R Dickson, M Villar-Martin, R A E Fosbury, MNRAS. 2981035Tadhunter, C. N., Morganti, R., Robinson, A., Dickson, R., Villar-Martin, M., & Fosbury, R. A. E. 1998, MNRAS, 298, 1035
. I Van Bemmel, P Barthel, A&A. 37921van Bemmel, I., & Barthel, P. 2001, A&A, 379, L21
. J V Wall, J A Peacock, MNRAS. 216173Wall, J. V., & Peacock, J. A. 1985, MNRAS, 216, 173
. K A Wills, C Tadhunter, J Holt, R González Delgado, K J Inskip, J Rodríguez Zaurín, R Morganti, MNRAS. 385136Wills, K. A., Tadhunter, C., Holt, J., González Delgado, R., Inskip, K. J., Rodríguez Zaurín, J., & Morganti, R. 2008, MNRAS, 385, 136
. K A Wills, C N Tadhunter, T G Robinson, R Morganti, MNRAS. 333211Wills, K. A., Tadhunter, C. N., Robinson, T. G., & Morganti, R. 2002, MNRAS, 333, 211
Zheng, W., Perez, E., Grandi, S. A., & Penston, M. V. 1995, AJ, 109, 2355

This manuscript was prepared with the AAS LaTeX macros v5.2.
| []
|
[
"Is DAMA Bathing in a Sea of Radioactive Argon?",
"Is DAMA Bathing in a Sea of Radioactive Argon?"
]
| [
"D N Mckinsey \nDepartment of Physics\nUniversity of California Berkeley\n94720BerkeleyCAUSA\n\nLawrence Berkeley National Laboratory\n1 Cyclotron Rd94720BerkeleyCAUSA\n"
]
| [
"Department of Physics\nUniversity of California Berkeley\n94720BerkeleyCAUSA",
"Lawrence Berkeley National Laboratory\n1 Cyclotron Rd94720BerkeleyCAUSA"
]
| []
| A hypothesis is proposed to explain the long-standing DAMA/LIBRA puzzle. Introduced into the DAMA/LIBRA shielding is a purge gas of nominally high-purity nitrogen, which under this hypothesis contains argon impurities. Argon is introduced into the nitrogen purge gas either through leaks in the purge gas plumbing, or through commercially-supplied bottled nitrogen, diffuses through materials in the detector housings, and then comes in direct contact with the DAMA/LIBRA detectors. These argon impurities can then lead to a modulating 2.8 keV background under two scenarios. Scenario 1): These impurities include the isotope 37 Ar, which decays by electron capture, emitting a 2.8 keV x-ray. These decays appear as single-site, monoenergetic events in DAMA/LIBRA, and produce an annual modulation due to the variation of neutron flux in the atmosphere and at the Earth's surface, which in turn leads to a seasonal variation in 37 Ar production from the reactions 40 Ca(n,α) 37 Ar and 36 Ar(n,γ) 37 Ar. Scenario 2): Radon is also in the DAMA/LIBRA purge gas, modulating seasonally at a rate below the current DAMA/LIBRA limits. When radon or its short-lived daughters decay, the resulting beta, gamma, and bremsstrahlung radiation cause stable 40 Ar to be ionized within the copper housings surrounding the NaI(Tl) detectors, resulting in characteristic 2.8 keV x-rays. Modulating backgrounds might also result from radon-induced neutron or gamma-ray flux from the surrounding cavern, leading to a small modulating background enhanced at low energy by the presence of 40 Ar within the copper housings. These two scenarios are straightforward to test through assay of the purge gas as well as Monte Carlo and laboratory study of the DAMA/LIBRA copper housings when excited by ionizing radiation. | null | [
"https://arxiv.org/pdf/1803.10110v1.pdf"
]
| 4,947,675 | 1803.10110 | 48628664cab3a24070999394386d81331c359e10 |
Is DAMA Bathing in a Sea of Radioactive Argon?
27 Mar 2018
D N Mckinsey
Department of Physics
University of California Berkeley
94720BerkeleyCAUSA
Lawrence Berkeley National Laboratory
1 Cyclotron Rd94720BerkeleyCAUSA
Is DAMA Bathing in a Sea of Radioactive Argon?
27 Mar 2018(Dated: March 28, 2018)
A hypothesis is proposed to explain the long-standing DAMA/LIBRA puzzle. Introduced into the DAMA/LIBRA shielding is a purge gas of nominally high-purity nitrogen, which under this hypothesis contains argon impurities. Argon is introduced into the nitrogen purge gas either through leaks in the purge gas plumbing, or through commercially-supplied bottled nitrogen, diffuses through materials in the detector housings, and then comes in direct contact with the DAMA/LIBRA detectors. These argon impurities can then lead to a modulating 2.8 keV background under two scenarios. Scenario 1): These impurities include the isotope 37 Ar, which decays by electron capture, emitting a 2.8 keV x-ray. These decays appear as single-site, monoenergetic events in DAMA/LIBRA, and produce an annual modulation due to the variation of neutron flux in the atmosphere and at the Earth's surface, which in turn leads to a seasonal variation in 37 Ar production from the reactions 40 Ca(n,α) 37 Ar and 36 Ar(n,γ) 37 Ar. Scenario 2): Radon is also in the DAMA/LIBRA purge gas, modulating seasonally at a rate below the current DAMA/LIBRA limits. When radon or its short-lived daughters decay, the resulting beta, gamma, and bremsstrahlung radiation cause stable 40 Ar to be ionized within the copper housings surrounding the NaI(Tl) detectors, resulting in characteristic 2.8 keV x-rays. Modulating backgrounds might also result from radon-induced neutron or gamma-ray flux from the surrounding cavern, leading to a small modulating background enhanced at low energy by the presence of 40 Ar within the copper housings. These two scenarios are straightforward to test through assay of the purge gas as well as Monte Carlo and laboratory study of the DAMA/LIBRA copper housings when excited by ionizing radiation.
The DAMA/LIBRA experiment is a low-background search for direct dark matter interactions using about 232 kg of high-purity NaI(Tl) crystals viewed by photomultiplier tubes, contained within a copper shield, and located deep underground in the Gran Sasso National Laboratory (LNGS). An experimental description of DAMA/LIBRA may be found in [1]. Since the first report of a positive annual modulation signal [2] in 1998, DAMA/LIBRA has observed a significant seasonal variation in its event rate within an energy range of 2-6 keV electron equivalent [3]. This variation is measured to be 0.0110 ± 0.0012 cpd/kg/keV, with a statistical significance of 9.3σ. Many other experiments have not observed concomitant dark matter interaction rates, and it is difficult to construct dark matter models that can explain DAMA yet are consistent with the various null results. Several experiments are in operation or under construction to test the DAMA/LIBRA claim.
Numerous instrumental explanations for this annual modulation have been proffered, largely based on the variation of the muon rate underground, which is correlated with temperature variations in the stratosphere. These explanations include modulation of muon flux causing phosphorescence [4], modulation of neutrons activating 128 I [5], or a combination of solar neutrinos and atmospheric muons [6]. These explanations all have their difficulties (see [7][8][9]), largely consisting of the small muon flux underground, the single-hit nature of the DAMA/LIBRA excess, and the challenge of confining the excess to the 2-6 keV energy range.
Here two scenarios are proposed that could explain the DAMA anomaly, both involving the generation of 2.8 keV x-rays from argon atoms in the DAMA nitrogen purge gas. The argon might enter the DAMA/LIBRA purge system through leaks in its plumbing, or simply from the tanks of commercially produced high-purity nitrogen that feed the purge system. Unfortunately, it is very common for nominally high-purity nitrogen gas to actually have substantial levels of argon contamination, as for most applications a bit of argon in the nitrogen is of no consequence. When specifying nitrogen purity, many vendors do not distinguish between noble gases and the nitrogen itself, preferring to only specify impurity levels in terms of chemically reactive impurities such as O 2 , H 2 O, CO, CO 2 , and hydrocarbons.
Scenario 1):
A model which parsimoniously explains the features of the DAMA/LIBRA excess involves the isotope 37 Ar, which has a 35.04-day half-life and is produced through the reactions 36 Ar(n, γ) 37 Ar (thermal neutron capture in the atmosphere) and 40 Ca(n,α) 37 Ar (fast neutron interactions with calcium in the soil). The latter reaction has a neutron cross-section peaking in the 5-6 MeV range, at approximately 200 mb [10]. Upon its decay by electron capture, 37 Ar + e − → 37 Cl + ν e , x-rays can be released following the capture of K-shell, L-shell, and M-shell electrons at 2.8 keV, 0.270 keV, and 0.0175 keV with branching ratios of 0.90, 0.09, and 0.009 [11]. Of interest here is the 2.8 keV x-ray from K-shell capture.
Famously used [12] by Ray Davis and collaborators to detect the solar neutrino flux through neutrino capture on 37 Cl, and later used as a neutrino calibration source for the SAGE experiment [13,14], 37 Ar has also been employed to calibrate gaseous neon [15], two-phase Ar [16] and two-phase Xe [17,18] detectors at 0.27 keV and 2.8 keV, as it provides a convenient, single-site, low-energy source. It has also been seen as a background in dark matter experiments. In the re-analysis of the LUX 2013 data set [19], a weak line was seen at 2.8 keV, consistent with 37 Ar decay, and 37 Ar decay events have been seen in DarkSide data [20]. Measurement of 37 Ar may also be used for detection of underground nuclear explosions [21], and its calibrated detection is well-established.
It is well known that the neutron flux at the Earth's surface varies with the season, peaking in the spring or summer and falling in the winter. This variation is due to reduced atmospheric density and, in many locations, also to winter snow cover and higher water content in the soil, which helps to moderate neutrons. The neutron flux is increased at higher altitudes, and it is known that this in turn increases the local 37 Ar content [22] in soil air. At Pic-du-Midi in the French Pyrenees, a 25% variation in the 0.1 to 20 MeV neutron flux is observed [23], with a ratio between thermal, epithermal, and fast neutron fluxes that varies with the season. Because 37 Ar is produced predominantly by neutron interactions, one may reasonably expect that its content in the atmosphere also depends on the season, with especially high production at high elevations, such as on the slopes of Gran Sasso (elevation 2912 m) and other mountains of Abruzzo. At LNGS, the well-ventilated experimental halls may be expected to have 37 Ar levels similar to those of the surrounding countryside. If argon manages to enter the DAMA/LIBRA purge system, then 37 Ar may diffuse through sealing materials in the DAMA/LIBRA detector housings and come in direct contact with its NaI(Tl) crystals, creating 2.8 keV single-site events.
For a purge gas containing 1% argon, a 37 Ar rate of 3 mBq per liter of argon gas, 25000 cm 2 of NaI surface area exposed to the purge gas, and a purge gas-filled 1-cm gap between the copper housing and the NaI, one may expect a total 37 Ar x-ray rate of order 40 per day entering the NaI. If this were to modulate by 25% over the year, then this would correspond to a modulation of 10 cpd in the full 232 kg of NaI(Tl). In the 2-6 keV energy window this would be 0.01 cpd/kg/keV, comparable to the observed DAMA/LIBRA modulation.
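The quoted numbers can be checked with a few lines of back-of-envelope arithmetic; the sketch below is ours, and the neglected geometric acceptance (which would reduce the K x-ray rate toward the quoted ~40 per day) is an assumption.

seconds_per_day = 86400.0
gap_volume_l    = 25000.0 * 1.0 / 1000.0      # 25000 cm^2 x 1 cm gap = 25 liters
ar37_activity   = 0.01 * gap_volume_l * 3e-3  # 1% argon at 3 mBq/l -> Bq of 37Ar
k_xrays_per_day = ar37_activity * seconds_per_day * 0.90  # 2.8 keV branching ratio
print(k_xrays_per_day)                        # ~58/day before acceptance, i.e. of order 40

modulated_cpd = 40.0 * 0.25                   # 25% seasonal modulation of ~40/day
print(modulated_cpd / (232.0 * 4.0))          # ~0.011 cpd/kg/keV across the 2-6 keV window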
The key assumption behind Scenario 1 is that the purge gas can find its way into the copper housing surrounding the NaI detectors, as the 2.8 keV x-rays will not penetrate the copper assuming it is 1-2 mm thick. These copper housings were sealed at St. Gobain, encapsulating the NaI crystals so as to avoid crystal degradation due to moisture. But if the seals were to open to any degree, this might not be obvious since the DAMA/LIBRA purge gas is free of moisture. Dedicated checking of the DAMA/LIBRA detectors could test whether the copper housings have remained intact. With more technical details about the nature of the seals and any epoxy or o-rings used, one could study the permeation of argon through the seal.
The above hypothesis may be easily tested, first by measuring the argon content of the high-purity nitrogen purge gas used in DAMA, with samples taken from the bottles themselves, from just before entering the experiment, and on the output nitrogen stream. This will help to diagnose any ways that argon might be entering the purge gas. Second, the purge gas could be measured specifically for 37 Ar using facilities built specifically for this purpose [24]. When the 37 Ar contamination level is established, it would be informative to perform a Monte Carlo study of the degree to which 37 Ar contributes to the DAMA/LIBRA excess, using detailed and accurate models of the NaI crystals, fused silica light guides, photomultipliers, and copper housings, to calculate the fraction of 37 Ar x-rays that could reach a NaI crystal. In addition, measurement of the magnitude and seasonal variation of 37 Ar backgrounds in LNGS would in any case be of great interest for the overall dark matter program.
Scenario 2): A second possibility, also involving argon 2.8 keV x-rays, is that argon gas inside the copper housings (directly surrounding the NaI crystals) is being excited due to seasonally varying 222 Rn atoms or the associated neutron and gamma-ray flux, as has been seen in the Soudan laboratory [25]. The argon may remain from the original encapsulation of the NaI detectors if they are very well sealed, or it might come from argon impurities in the nitrogen purge gas, equilibrating in argon content, through diffusion, with the gas inside the copper housing. Gamma or beta rays (from e.g. radon daughters covering the copper housings) that scatter or are absorbed by passive material in the housing may also deposit some energy in the gas inside the housing, either directly or through bremsstrahlung radiation. If this gas contains some argon, then the ionized argon will emit 2.8 keV x-rays, which may then penetrate the Tetratec-teflon tape wrapping the NaI(Tl) detectors. This mechanism is similar to that used in the particle-induced x-ray emission (PIXE) method, which is widely used for measurement of trace elements [26]. In this case the exciting particles are beta and gamma radiation emitted by 222 Rn and its daughters or by neutron-activated materials, while the element being measured is 40 Ar, as its characteristic 2.8 keV x-ray is readily detected by the NaI(Tl) crystals in DAMA/LIBRA. DAMA sets an upper limit of 5.8 × 10 −2 Bq/m 3 (∼ 5000 cpd/m 3 ) for 222 Rn decays in its nitrogen purge gas [27], based on an analysis of double coincidences of gamma-rays from the decay of 214 Bi. The 222 Rn atom, with half-life 3.8 days, decays through a chain of four short-lived daughters before reaching 210 Pb, which has a half-life of 22 years. If this combined rate of ∼ 25,000 cpd/m 3 exhibits a small annual modulation, then this may be passed on to 40 Ar excitation inside the copper housing, resulting in an annual modulation at 2.8 keV. The required 0.0110 cpd/kg/keV corresponds to about 10 cpd within the 2-6 keV energy range, well within the realm of possibility given the above scenario.
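Again, the rates quoted in this scenario are easy to verify with a short, self-contained check (ours, not from [27]).

rn_limit_bq_m3 = 5.8e-2              # DAMA upper limit on 222Rn in the purge gas
print(rn_limit_bq_m3 * 86400.0)      # ~5000 222Rn decays per day per m^3
print(5 * rn_limit_bq_m3 * 86400.0)  # ~25000 cpd/m^3 incl. the 4 short-lived daughters
print(0.0110 * 232 * 4.0)            # required modulation: ~10 cpd over 2-6 keV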
This mechanism can evade the current DAMA/LIBRA background limits because even a small rate of background events, depositing most of their energy into passive materials and thereby evading detection at higher energies, can efficiently channel their signal into the 2-6 keV energy range through argon K-shell excitation. The limit on the overall modulation across the energy spectrum above 90 keV, denoted R 90 , is consistent with zero but with an uncertainty ranging from 0.17 to 0.19 cpd/kg, or about 40 cpd. This limit is then weaker than the observed 10 cpd modulated event rate in the 2-6 keV energy region. So if the rate of argon x-ray production is roughly 25% of the integrated event rate above 90 keV or greater, then these 2.8 keV x-rays could produce the observed excess in the 2-6 keV energy window.
In principle, such decay events could also create multiple-hit events in DAMA/LIBRA; these are events in which more than one NaI detector sees an energy deposition above threshold. The rate quoted by DAMA for multiple-hit modulation is consistent with zero within the 2-6 keV window, with an uncertainty of ±0.0004 cpd/keV/kg, or about 4% of the 2-6 keV modulation signal. In order to create the DAMA/LIBRA excess while circumventing this limit, event types would be needed that excite argon atoms and have a probability less than 4% of causing a simultaneous hit in a different NaI(Tl) detector. This seems plausible, given that many of these events will be from beta particles or low-energy gamma-rays, which would be unlikely to interact in more than one NaI(Tl) detector.
This second scenario doesn't require any 37 Ar, just a bit of argon impurities in the gas in which the NaI crystals are sealed. The Rn decay rate, the Rn modulation amplitude, the neutron modulation amplitude, the amount of argon excitation, and the efficiency of 2.8 keV x-ray detection may each be tuned, so that their product gives a total detection rate compatible with the DAMA result. By design, this scenario only creates argon x-ray events at 2.8 keV, since the chance of double x-ray detection (leading to events at 5.6 keV) is small by comparison. The K-shell x-rays from nitrogen, which would also be produced in this scenario, are at 0.392 keV and invisible to the DAMA/LIBRA experiment.
The scenarios described above have other noteworthy properties. They can help to explain why, relative to a measured nat K contamination of 13 ppb [28], an increased amount (20 ppb) of nat K is needed to fit the constant 3 keV peak that is traditionally ascribed to x-rays from 40 K decay [29]. The difference may be explained by a constant (non-modulating) argon x-ray background, which will not affect fits to data at higher energies. In addition, the observed amplitude of the modulation signal substantially decreased after the original DAMA data gathered from 1995 to 2001, for which the published modulation amplitude in the 2-6 keV bin was reported to be 0.0200 ± 0.0032 cpd/kg/keV. One can certainly expect that the amplitude of the argon x-ray modulation will be geometry-dependent, and it could easily have changed between DAMA/NaI and DAMA/LIBRA.
In conclusion, it is proposed that the annual modulation in the DAMA/LIBRA single-site event rate is due to argon gas, a Trojan horse within the copper housing surrounding the NaI(Tl) detectors. This hypothesis may be tested by monitoring of the DAMA/LIBRA purge gas for argon impurities, by measuring the purge gas and atmosphere in LNGS for 37 Ar and 222 Rn content and their seasonal variations, testing the copper housings for leaks and argon diffusion rates, and through studies of the x-ray production of argon-contaminated nitrogen gas when excited by ionizing radiation. Once the argon content of the purge gas has been determined, it would be informative for DAMA to use different sources of purge gas with varying levels of argon, measuring the effect of argon concentration on the event rate in the 2-6 keV energy range. Low levels of argon concentration in nitrogen may be achieved through use of boiloff gas from liquid nitrogen (see [30] and references therein). Given a detailed model of the detector housings and all materials used in their construction, NaI(Tl) detectors within, and argon content of the enclosed gas, a Monte Carlo study of this system could be performed to simulate the decays of Rn daughters and neutron activated-materials, so as to determine whether Scenario 2 is a plausible explanation for the 2-6 keV excess.
Useful conversations are acknowledged with Gilles Gerbier, Wick Haxton, Bill Holzapfel, Dave Nygren, Reina Maruyama, Bernard Sadoulet, and members of the LUX-ZEPLIN collaboration.
. R Bernabei, The European Physical Journal C. 56333R. Bernabei et al., The European Physical Journal C 56, 333 (2008).
. R Bernabei, Phys. Lett B. 424195R. Bernabei et al., Phys. Lett B 424, 195 (1998).
. R Bernabei, arXiv:1308.5109Eur.Phys.J. 732648R. Bernabei et al., Eur.Phys.J. C73, 2648 (2013), arXiv:1308.5109.
. D Nygren, arXiv:1102.0815D. Nygren, arXiv:1102.0815.
. J Ralston, arXiv:1006.5255J. Ralston, arXiv:1006.5255.
. J Davis, Phys. Rev. Lett. 11381302J. Davis et al., Phys. Rev. Lett. 113, 081302 (2014).
. P Barbeau, arXiv:1409.3185Phys. Rev. Lett. 113229001P. Barbeau et al., Phys. Rev. Lett. 113, 229001 (2014), arXiv:1409.3185.
. R Bernabei, arXiv:1409.3516The European Physical Journal C. 743196R. Bernabei et al., The European Physical Journal C 74, 3196 (2014), arXiv:1409.3516.
. J Klinger, V Kudryavtsev, arXiv:1503.07225J. Klinger and V. Kudryavtsev, (2015), arXiv:1503.07225.
. J Barnes, J. Inorg. Nucl. Chem. 37399J. Barnes et al., J. Inorg. Nucl. Chem 37, 399 (1974).
. V Barsanov, Nucl. Exp. Tech. 49454V. Barsanov et al., Nucl. Exp. Tech. 49, 454 (2006).
. B Cleveland, The Astrophysics Journal. 496505B. Cleveland et al., The Astrophysics Journal 496, 505 (1998).
. W C Haxton, Phys. Rev. C. 382474W. C. Haxton, Phys. Rev. C 38, 2474 (1988).
. J N Abdurashitov, J. Phys.: Conf. Ser. 39284J. N. Abdurashitov et al., J. Phys.: Conf. Ser. 39, 284 (2006).
. Q Arnaud, arXiv:1706.04934Astroparticle Physics. 97Q. Arnaud et al., Astroparticle Physics 97, 54 (2018), arXiv:1706.04934.
. S Sangiorgio, Nucl. Instrum. Meth. A. 72869S. Sangiorgio et al., Nucl. Instrum. Meth. A 728, 69 (2013).
. D Y Akimov, arXiv:1408.1823Journal of Instrumentation. 911014D. Y. Akimov et al., Journal of Instrumentation 9, P11014 (2014), arXiv:1408.1823.
. E Boulton, Journal of Instrumentation. 128004E. Boulton et al., Journal of Instrumentation 12, P08004 (2017).
. D S Akerib, LUXarXiv:1608.07648Phys. Rev. Lett. 11821303D. S. Akerib et al. (LUX), Phys. Rev. Lett. 118, 021303 (2017), arXiv:1608.07648.
. P Agnes, arXiv:1510.00702Phys. Rev. D. 9381101P. Agnes et al., Phys. Rev. D 93, 081101 (2016), arXiv:1510.00702.
. C Aalseth, Nuclear Instruments and Methods A. 6258C. Aalseth et al., Nuclear Instruments and Methods A 62, 58 (2011).
. R Riedmann, R Purtschert, Environmental Science and Technology. 458656R. Riedmann and R. Purtschert, Environmental Science and Technology 45, 8656 (2011).
. A Cheminet, Radiat Prot Dosimetry. 161284A. Cheminet et al., Radiat Prot Dosimetry 161, 284 (2014).
. R M Williams, Applied Radiation and Isotopes. 109430R. M. Williams et al., Applied Radiation and Isotopes 109, 430 (2016).
. A Tiwari, Phys. Rev. C. 9644609A. Tiwari et al., Phys. Rev. C 96, 044609.
. S Johansson, T Johansson, Nuclear Instruments and Methods. 137473S. Johansson and T. Johansson, Nuclear Instruments and Methods 137, 473 (1976).
. R Bernabei, arXiv:1306.1411International Journal of Modern Physics A. 281330022R. Bernabei et al., International Journal of Modern Physics A 28, 1330022 (2013), arXiv:1306.1411.
. R Bernabei, arXiv:1210.6199R. Bernabei et al., arXiv:1210.6199.
. V Kuryavtsev, M Robinson, N Spooner, arXiv:0912.2983Astroparticle Physics. 33V. Kuryavtsev, M. Robinson, and N. Spooner, Astroparticle Physics 33, 91 (2009), arXiv:0912.2983.
. H Simgen, International Journal of Modern Physics A. 291442009H. Simgen et al., International Journal of Modern Physics A 29, 1442009 (2016).
| []
|
[
"An update on Fermi-LAT transients in the Galactic plane, including strong activity of Cygnus X-3 in mid-2020",
"An update on Fermi-LAT transients in the Galactic plane, including strong activity of Cygnus X-3 in mid-2020"
]
| [
"D A Prokhorov \nGRAPPA\nAnton Pannekoek Institute for Astronomy\nUniversity of Amsterdam\nScience Park 9041098 XHAmsterdamThe Netherlands\n",
"A Moraghan \n11F of AS/NTU Astronomy-Mathematics Building\nAcademia Sinica Institute of Astronomy and Astrophysics\nNo.1, Sec. 4, Roosevelt Rd10617TaipeiTaiwan\n"
]
| [
"GRAPPA\nAnton Pannekoek Institute for Astronomy\nUniversity of Amsterdam\nScience Park 9041098 XHAmsterdamThe Netherlands",
"11F of AS/NTU Astronomy-Mathematics Building\nAcademia Sinica Institute of Astronomy and Astrophysics\nNo.1, Sec. 4, Roosevelt Rd10617TaipeiTaiwan"
]
| [
"Mon. Not. R. Astron. Soc"
]
| We present a search for Galactic transient γ-ray sources using 13 years of the Fermi Large Area Telescope data. The search is based on a recently developed variable-size sliding-time-window (VSSTW) analysis and aimed at studying variable γ-ray emission from binary systems, including novae, γ-ray binaries, and microquasars. Compared to the previous search for transient sources at random positions in the sky with 11.5 years of data, we included γ rays with energies down to 500 MeV, increased a number of test positions, and extended the data set by adding data collected between February 2020 and July 2021. These refinements allowed us to detect additional three novae, V1324 Sco, V5855 Sgr, V357 Mus, and one γ-ray binary, PSR B1259-63, with the VSSTW method. Our search revealed a γ-ray flare from the microquasar, Cygnus X-3, occurred in 2020. When applied to equal quarters of the data, the analysis provided us with detections of repeating signals from PSR B1259-63, LS I +61 • 303, PSR J2021+4026, and Cygnus X-3. While the Cygnus X-3 was bright in γ rays in mid-2020, it was in a soft X-ray state and we found that its γ-ray emission was modulated with the orbital period. | 10.1093/mnras/stac3453 | [
"https://export.arxiv.org/pdf/2209.12461v3.pdf"
]
| 252,531,402 | 2209.12461 | 36d667ab684dd0dd755531d321d0ce7391291e16 |
An update on Fermi-LAT transients in the Galactic plane, including strong activity of Cygnus X-3 in mid-2020
Jan 2023. January 2023
D A Prokhorov
GRAPPA
Anton Pannekoek Institute for Astronomy
University of Amsterdam
Science Park 9041098 XHAmsterdamThe Netherlands
A Moraghan
11F of AS/NTU Astronomy-Mathematics Building
Academia Sinica Institute of Astronomy and Astrophysics
No.1, Sec. 4, Roosevelt Rd10617TaipeiTaiwan
An update on Fermi-LAT transients in the Galactic plane, including strong activity of Cygnus X-3 in mid-2020
Mon. Not. R. Astron. Soc
000Jan 2023. January 2023(MN L A T E X style file v2.2)binaries: general -methods: data analysis -gamma-rays: general
We present a search for Galactic transient γ-ray sources using 13 years of the Fermi Large Area Telescope data. The search is based on a recently developed variable-size sliding-time-window (VSSTW) analysis and aimed at studying variable γ-ray emission from binary systems, including novae, γ-ray binaries, and microquasars. Compared to the previous search for transient sources at random positions in the sky with 11.5 years of data, we included γ rays with energies down to 500 MeV, increased the number of test positions, and extended the data set by adding data collected between February 2020 and July 2021. These refinements allowed us to detect three additional novae, V1324 Sco, V5855 Sgr, V357 Mus, and one γ-ray binary, PSR B1259-63, with the VSSTW method. Our search revealed a γ-ray flare from the microquasar Cygnus X-3 that occurred in 2020. When applied to equal quarters of the data, the analysis provided us with detections of repeating signals from PSR B1259-63, LS I +61°303, PSR J2021+4026, and Cygnus X-3. While Cygnus X-3 was bright in γ rays in mid-2020, it was in a soft X-ray state, and we found that its γ-ray emission was modulated with the orbital period.
INTRODUCTION
Transients in the Milky Way comprise a variety of sources. Gamma-ray emitting binaries include four classes of γ-ray sources: (i) so-called γ-ray binaries, high-mass X-ray binary systems whose spectral energy distributions peak at MeV energies or above; (ii) novae, thermonuclear explosions in binaries following accretion on to white dwarfs; (iii) microquasars, binaries powered by accretion on to a compact object that display relativistic jets; and (iv) colliding wind binaries, binaries in which stellar outflows develop shocks giving rise to γ-ray emission. There are a variety of physical processes responsible for the γ-ray emission of these different source classes (for a review, see Dubus 2013; Paredes & Bordas 2019). The detections of binaries in the GeV γ-ray band were based on the data collected by the Large Area Telescope (LAT; Atwood et al. 2009) on-board the Fermi Gamma-ray Space Telescope launched in June 2008, and we highlight some notable sources as follows:

• Gamma-ray binaries. During the first years of the Fermi mission, three γ-ray binaries, LS I +61°303, LS 5039, and 1FGL J1018.6-5856, were detected (Abdo et al. 2009a,b; Fermi LAT Collaboration et al. 2012). In these papers, the γ-ray binaries were identified on the basis of their modulated emission with periods from a few to several days; see also the blind all-sky search for cyclic γ-ray sources by Prokhorov & Moraghan (2017) and the most recently detected γ-ray binary, 4FGL J1405.1-6119 (Corbet et al. 2019). In addition, γ-ray binaries, PSR B1259-63 and HESS J0632+057, with periods longer than a hundred days were detected by Tam et al. (2011); Caliandro et al. (2015) and by Li et al. (2017), respectively. Long-term GeV γ-ray variability of LS I +61°303 with a superorbital period of 1667 days was reported by Ackermann et al. (2013b) and Prokhorov, Moraghan & Vink (2021).
• Novae. Fermi-LAT detected 17 Galactic novae. The first of these sources, V407 Cyg, was observed in March 2010 (Abdo et al. 2010b). The brightest novae, such as V407 Cyg 2010 and V1324 Sco 2012, produce γ-ray fluxes about 10 times higher than the faintest ones, such as V549 Vel (see Li et al. 2020b; Franckowiak et al. 2018). Multi-wavelength observations of novae are useful for their identification, and their discoveries are made in the optical band. The duration of γ-ray emission from a nova is about two weeks (e.g., Franckowiak et al. 2018).
• Microquasars. The first microquasar detected with Fermi-LAT, Cygnus X-3, produced γ-ray flares in 2008 and 2009 (Fermi LAT Collaboration et al. 2009). These γ-ray flaring events of Cygnus X-3 were also detected with AGILE (Tavani et al. 2009). The γ-ray flares of the source were observed during a soft X-ray state and allowed the Fermi-LAT collaboration to establish that the corresponding γ-ray flux was modulated with a period of 4.7917 ± 0.0011 hours (Fermi LAT Collaboration et al. 2009), which is ascribed to the orbital period of the binary system. Another microquasar, Cygnus X-1, was detected at GeV energies during hard X-ray states (Zanin et al. 2016). Gamma-ray studies of other microquasars, including SS 433 and V404 Cyg, in the GeV band are an active research field (e.g., Li et al. 2020a; Harvey et al. 2021).
• Colliding wind binaries. Gamma-ray emitters belonging to this class include η Carinae (Abdo et al. 2010a) and γ2 Velorum (Pshirkov 2016). A Fermi-LAT analysis of two full orbits and the third periastron of η Carinae and hints of γ-ray orbital variability from γ2 Velorum were reported by Martí-Devesa & Reimer (2021) and Martí-Devesa et al. (2020), respectively.
Other sources in the Milky Way also produce transient γ-ray signals. Among these sources are transitional millisecond pulsars switching from accretion to a radio pulsar stage, e.g., the low-mass X-ray binary transition system PSR J1023+0038 (Stappers et al. 2014), and young pulsar wind nebulae, e.g. the Crab nebula (Tavani et al. 2011; Abdo et al. 2011a). In addition, flaring, nearby, young M-dwarf stars are potential transient γ-ray sources in the Milky Way; see Ohm & Hoischen (2018) and the discussion by Loh et al. (2017) on a high Galactic latitude transient event detected at a position consistent with DG CVn, but likely associated with a flaring background blazar.
The Fermi-LAT provides unprecedented sensitivity for all-sky monitoring of γ-ray activity. Analysis techniques applied to searches for transient sources have different levels of detail and coverage. Searches for variable γ-ray emission at different positions inside a large region of the sky, e.g., the Galactic plane (Neronov et al. 2012) or the entire sky (the Fermi all-sky variability analysis by Ackermann et al. 2013a; Abdollahi et al. 2017), on the time scale of weeks or months use a measure of variability computed as, e.g., the maximum deviation of the flux from the average value. The reduced χ 2 of the fit of the light curve with a constant flux is another technique, which is adopted in the Fermi-LAT catalogue (Abdollahi et al. 2020) for testing about 5,000 γ-ray sources. Both these statistics allow tests of a large number of positions or sources and are not computationally expensive. However, these techniques have a predetermined time interval at which variability is searched. In Prokhorov, Moraghan & Vink (2021), the authors developed a variable-size sliding-time-window (VSSTW) technique and applied it, in addition to a search for γ-ray emission from supernovae, to a search for transient γ-ray sources at random positions in the sky. The search by means of the VSSTW technique accounts for the start and duration of the emission, which serve as two free variables. The VSSTW method allowed the authors to confirm the presence of transient γ-ray emission from transitional pulsars, solar flares, γ-ray bursts, novae, and the Crab Nebula, and was successful in finding both short (e.g. solar flares) and long (e.g. transitional pulsars) high-γ-ray-flux states of sources.
The most recent search for transient γ-ray sources with full coverage of the Galactic plane was based on 7.4 years of Fermi-LAT data (Abdollahi et al. 2017). Given that Fermi-LAT accumulated 13 years of data by August 2021, a new search was warranted. In Prokhorov, Moraghan & Vink (2021), the authors performed an all-sky VSSTW test search for transients in Fermi-LAT data and found a strong new transient γ-ray source projected onto the Galactic plane, which lies near PSR J0205+6449 and flared in 2017. In addition to this new source, Prokhorov, Moraghan & Vink (2021) confirmed 7 flares of novae and the superorbital modulation of γ-ray emission from LS I +61°303. Since the area of the sky probabilistically covered by that initial search is ≈63% (that is, (1 − 1/e) × 100%) of the sky, there are known transient sources that escaped detection in that test search, notably the bright nova V1324 Sco 2012. To increase the number of transient γ-ray sources, and in particular γ-ray emitting binaries, detected with the VSSTW technique in the Galactic plane, we performed a new search using more Fermi-LAT data as well as full coverage and finer pixelization of the Galactic plane.
OBSERVATIONS AND ANALYSIS
In this section, we describe Fermi -LAT observations, data reduction, and the VSSTW analysis.
Fermi -LAT observations and data reduction
Fermi-LAT is a pair-conversion telescope and has been scanning the sky continuously since August 2008. Due to its large detector area and field of view (≈ 20% of the sky), Fermi-LAT allows efficient monitoring of the γ-ray sky. The telescope provides an angular resolution per single event of 1.5° at 0.5 GeV, 1.0° at 0.8 GeV, narrowing to 0.5° at 2 GeV, and further narrowing to 0.1° above 10 GeV. At energies below ∼10 GeV, the accuracy of the directional reconstruction of γ rays is limited by multiple scattering in the converter foils of Fermi-LAT. Given both the angular resolution dependence on energy and the broadband sensitivity to sources with power-law spectra 1 , we selected the optimal lower energy limit of 0.5 GeV. We made this selection with two purposes: to tighten the point spread function (PSF) and to include γ rays with energies between 0.5 GeV and 0.8 GeV. The latter allows a more detailed study of γ-ray sources with soft spectra, such as Cygnus X-3 (Fermi LAT Collaboration et al. 2009) and PSR B1259-63 (Tam et al. 2011; H. E. S. S. Collaboration et al. 2020), than in Prokhorov, Moraghan & Vink (2021). We selected the upper energy limit of 500 GeV because of the small number of detected γ-ray events above this energy.
We downloaded the Fermi-LAT Pass 8 (P8R3) data from the Fermi Science Support Center, consisting of 680 weeks of the SOURCE class data (evclass=128) collected between 2008-08-04 and 2021-08-12 and including 80 weeks of data additional to the data set analysed by Prokhorov, Moraghan & Vink (2021). We performed the data analysis using the FERMITOOLS v1.2.23 package. We rejected γ-ray events with zenith angles larger than 90° to reduce contamination by albedo γ rays from the Earth and applied the recommended cuts on the data quality (DATA_QUAL > 0 && LAT_CONFIG == 1). We binned the data into time intervals of one week and into four energy bands, namely 0.5-0.8 GeV, 0.8-2.0 GeV, 2.0-5.0 GeV, and 5.0-500.0 GeV. The treatment of these four energy bands provided us with an analysis independent of the photon index. We further binned the Fermi-LAT events using the HEALPIX package² (Górski et al. 2005) into a map of resolution N_side = 512 in Galactic coordinates with 'RING' pixel ordering. With these settings, the total number of pixels is equal to 12 × 512² = 3145728 and the area of each pixel is 4π × (180/π)²/(12 × 512²) = 0.0131 deg², given that HEALPix subdivides the sphere into 12 equal-area base pixels at the lowest resolution. To compute the exposure, we used the standard tools gtltcube and gtexpcube2. To correct the livetime for the zenith angle cut, we used the 'zmax' option on the command line. For this analysis, we selected 68608 positions whose centers coincide with those of the HEALPix grid of resolution N_side = 256 in Galactic coordinates with 'RING' pixel ordering and lie between −5° and +5° Galactic latitude. On top of that, we excluded regions of 1.5° radius around the Geminga and Vela pulsars, since systematic errors, which are not taken into account in our analysis, might exceed the statistical ones for these two brightest γ-ray sources (Pshirkov & Rubtsov 2013). We used events within a circle of 0.5° radius centered on each of the remaining 68476 positions. Between −5° and +5° Galactic latitude, the number of γ-ray sources in the 4FGL-DR2 catalogue (Abdollahi et al. 2020) is 1101, which is larger than the number of regions with the radius of the PSF 68% containment in the lowest energy band, that is (360° × 10°/(π × 1.5° × 1.5°)) = 509. The 0.5°-radius aperture covers an area twice as large as the aperture used in Prokhorov, Moraghan & Vink (2021): it is sufficient to accumulate a significant number of events from a potential source in the four energy bands, but still small enough to suppress the contamination from blazars with variable γ-ray fluxes or other neighbouring sources, especially in the two lowest energy bands.
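As an aside, the pixel bookkeeping above is easy to reproduce with the healpy package (the Python interface to HEALPix). The following is a minimal sketch of ours, not part of the original analysis chain; it only rebuilds the two grids and the |b| ≤ 5° selection, with variable names of our own choosing.

import numpy as np
import healpy as hp

# Event-map resolution and test-position grid resolution quoted in the text.
NSIDE_MAP, NSIDE_POS = 512, 256

# Total pixel count and per-pixel area of the event map:
# 12 * 512**2 = 3145728 pixels of about 0.0131 deg^2 each.
npix_map = hp.nside2npix(NSIDE_MAP)
pix_area = hp.nside2pixarea(NSIDE_MAP, degrees=True)
print(npix_map, round(pix_area, 4))

# Test positions: centres of the Nside=256 RING-ordered grid with |b| <= 5 deg
# (about 68608 pixels before excluding the Geminga and Vela regions).
ipix = np.arange(hp.nside2npix(NSIDE_POS))
lon, lat = hp.pix2ang(NSIDE_POS, ipix, lonlat=True)  # Galactic l, b in degrees
positions = ipix[np.abs(lat) <= 5.0]
print(positions.size)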
Time sliding window search
Using the publicly available Python code that employs a likelihood analysis (for a review of likelihood theory, see Mattox et al. 1996) for finding the most statistically significant time interval of a high flux at a given position in the sky³, developed by Prokhorov, Moraghan & Vink (2021), we performed a search for transient γ-ray sources in the Galactic plane. Following their approach, we compared a model with the presence of a temporary bright state above a steady flux level and a model assuming a source with a steady flux in time. For this purpose, we computed a likelihood for each of these models, taking into account the numbers of expected and observed counts from the source during each week and in each energy band, and multiplying Poisson probabilities for all weeks. To evaluate the significance of evidence for a bright state, we used the Test Statistic (TS), which is defined as twice the difference in the log-likelihood maxima of the two models. Since the two considered models are nested, the probability distribution of the TS is approximately a chi-square distribution with four degrees of freedom (one degree for each energy band), according to Wilks' theorem (Wilks 1938). In this search, a sliding-time window can start at any of the 680 weeks and have any duration until the 680th week is reached.
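The actual implementation is the publicly available code cited in footnote 3; the sketch below is our own simplified illustration of the core statistic, with a brute-force O(n²) window scan, one free rate per energy band inside and outside the window, and without the constraint of the full method that the window state be brighter than the baseline.

import numpy as np
from scipy.stats import poisson

def vsstw_ts(counts, expected):
    """counts, expected: arrays of shape (n_weeks, 4 energy bands), where
    'expected' holds the counts predicted by a steady-flux baseline."""
    n = counts.shape[0]
    # Null model: steady flux, i.e. one rescaling of 'expected' per band.
    scale0 = counts.sum(0) / expected.sum(0)
    ll0 = poisson.logpmf(counts, expected * scale0).sum()
    best_ts = 0.0
    for start in range(n):                    # window start: any week ...
        for stop in range(start + 1, n + 1):  # ... window end: any later week
            inside = np.zeros(n, bool)
            inside[start:stop] = True
            s_in = counts[inside].sum(0) / expected[inside].sum(0)
            s_out = counts[~inside].sum(0) / np.maximum(
                expected[~inside].sum(0), 1e-12)
            # Alternative model: one rate per band inside the window,
            # another outside; multiply Poisson probabilities over all weeks.
            ll1 = poisson.logpmf(counts[inside], expected[inside] * s_in).sum() \
                + poisson.logpmf(counts[~inside], expected[~inside] * s_out).sum()
            best_ts = max(best_ts, 2.0 * (ll1 - ll0))
    return best_ts  # approximately chi-square with 4 dof under the null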
Both the number of weeks (680) and the number of uncorrelated positions tested in the Galactic plane are high. We estimated that the average value of the statistical penalty for signals in a cleaned sample is 3.5σ, corresponding to a trial factor of 2457 due to the choice of time interval. We estimated the number of uncorrelated test positions as 3600/(π × 2²) = 287, where 3600 deg² is the total area covered by the region of the Galactic plane selected for this analysis and the radius of 2° minimises a position correlation of signals. We adopted a global significance level, i.e., we quote the significance after taking the "look elsewhere effect" into account. This effect is quantified in terms of a trial factor that is the ratio of the probability of observing the excess in the obtained time interval to the probability of observing it with the same local significance level anywhere in the allowed range at any of the uncorrelated test positions. Accounting for the trial factor, that is 2457 × 287 ≈ 7.0 × 10⁵, a global significance level above 5.0σ translates to a local significance level higher than 7.0σ. The criterion for classifying a high flux time interval at a given position as a transient signal thus satisfies the convention of a 5σ global significance level being required to qualify as a discovery.
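For illustration, this post-trial conversion can be reproduced in a few lines with scipy (a sketch of ours, not the analysis code; the trial count is the estimate quoted above):

from scipy.stats import norm

TRIALS = 2457 * 287  # time-window trials x uncorrelated test positions (~7.0e5)

def local_for_global(global_sigma, n_trials=TRIALS):
    # One-sided conversion: p_local = p_global / n_trials.
    return norm.isf(norm.sf(global_sigma) / n_trials)

print(round(local_for_global(5.0), 1))  # ~7.1: a 5-sigma global discovery
                                        # requires a local excess above ~7 sigma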
The performed time sliding window search showed that apart from the two brightest sources, the Vela and Geminga pulsars, systematic uncertainties should be taken into account for another bright γ-ray source, PSR J1826-1256, previously discussed by Neronov et al. (2012), Abdollahi et al. (2017), and Prokhorov, Moraghan & Vink (2021). We checked and found that the classification of this source as a transient source is not robust if systematic uncertainties in exposures are present at a level of 3%.
We also found that some high flux time intervals with a global significance above 5.0σ correspond to γ-ray flares of known blazars from the Fermi-LAT catalogue (Abdollahi et al. 2020). Both blazars located close to a tested position and those at a distance larger than 0.5° from it can affect the search: e.g., NRAO 676, which is ≃0.1° from a tested position, and PKS 1830-21, located at a Galactic latitude of −5.7° and ≃0.8° from another tested position, correspond to highly significant signals. Therefore, we performed a source-by-source check and removed signals corresponding to activity in known γ-ray blazars.
SEARCH FOR GALACTIC TRANSIENTS
In this section, we present a search for both non-repeating and repeating transient γ-ray sources in the Galactic plane using the VSSTW method.
Transient γ-ray sources in the Galactic plane
We present the results in Table 1, which contains the list of transient γ-ray sources whose high flux states are revealed by the performed VSSTW analysis at local and global significance levels above 7σ and ≈5σ, respectively. Twelve transient γ-ray signals, shown above the horizontal line in Table 1, are at local significance levels higher than 7σ. To these signals, we added two signals at lower local significances of 6.9σ and 6.5σ, given their robust identification with V357 Mus and the Crab pulsar nebula, respectively. The identifiers of γ-ray signals that are not present in the paper by Prokhorov, Moraghan & Vink (2021) are shown in bold. Among the γ-ray signals that occurred during coincident time intervals and at neighbouring positions in the sky, we listed the one with the highest significance in Table 1.
Gamma-ray binaries. Table 1 contains two γ-ray binaries, LS I +61°303 and PSR B1259-63. The former binary, with an orbital period of 26.5 days, is known for its long-term γ-ray variability associated with a superorbital period of 1667 days (Gregory 2002). The start of a high-γ-ray-flux state on the date indicated in this Table is in agreement with that reported by Prokhorov, Moraghan & Vink (2021) and corresponds to the maximum occurring 1667 days after the previous maximum reported by Ackermann et al. (2013b). The other binary reported in Table 1, PSR B1259-63, has an orbital period of 1237 days. Its high-γ-ray-flux state started in October 2017, following the periastron passage on 22 September 2017. This result confirms that the GeV flaring state only started 40 days after the 2017 periastron and lasted approximately 50 days (Johnson et al. 2018). The GeV γ-ray emission associated with the 2017 periastron passage of PSR B1259-63 showed a very different behaviour than those in the 2010 and 2014 flaring events (e.g., Johnson et al. 2018). The spectrum reported by Johnson et al. (2018) exhibits a cut-off at ≃800 MeV during the flare in 2017. The inclusion of Fermi-LAT data between 500 MeV and 800 MeV in the VSSTW search increases the local significance of this post-periastron flare from 6.2σ to 10.3σ, explaining why this high flux state was not reported by Prokhorov, Moraghan & Vink (2021).

Novae. Our search resulted in the detection of 8 novae: V906 Car, V407 Cyg, V1324 Sco, V959 Mon, V392 Per, V1369 Cen, V5855 Sgr, and V357 Mus. Four of these 8 novae, namely V407 Cyg, V1324 Sco, V959 Mon, and V1369 Cen, were included in the second Fermi all-sky variability analysis catalogue (2FAV; Abdollahi et al. 2017). Three novae, V906 Car, V5855 Sgr, and V357 Mus, which happened in March 2018, October 2016, and December 2018, respectively, are not present in the 2FAV catalogue, which covers Fermi-LAT observations until January 2016. Five of these 8 novae, namely V906 Car, V407 Cyg, V959 Mon, V392 Per, and V1369 Cen, were confirmed using the VSSTW method by Prokhorov, Moraghan & Vink (2021). In that paper, the other two confirmed novae, V339 Del and V5856 Sgr, are at higher Galactic latitudes than those covered by the current search. In addition to the novae reported by Prokhorov, Moraghan & Vink (2021), we confirmed V1324 Sco, V5855 Sgr, and V357 Mus and increased the sample of novae detected with this analysis technique by 60%. The novae V5855 Sgr and V357 Mus have fainter γ-ray fluxes (Gordon et al. 2021) than the other novae in Table 1. Although their global significance levels are below 5σ, these levels, 4.8σ and 4.0σ, are sufficiently high to detect these signals and to identify them with these novae owing to the localization in both time and the sky.
Two novae at low Galactic latitudes detected in Fermi-LAT data, but not present in Table 1, are V549 Vel (Li et al. 2020b) and V1707 Sco (Li et al. 2019), the faintest novae in γ rays.
Microquasars. We detected a microquasar, Cygnus X-3, using the VSSTW method. Given that the detected high flux state is in 2020 and beyond the time interval covered by Prokhorov, Moraghan & Vink (2021), we present a detailed analysis of this signal in Section 4. Figure 1 shows the significance map including transient signals from three γ-ray sources of different classes, namely the microquasar Cygnus X-3, the nova V407 Cyg, and the pulsar PSR J2021+4026.

Table 1. The list of transient γ-ray signals obtained from the performed VSSTW analysis. The second and third columns show the Right Ascension and the Declination of a transient γ-ray source. The fourth and fifth columns show the start date and the length of a high-γ-ray-flux state. The sixth and seventh columns show the local and global significances at which the high flux state is detected. The eighth and ninth columns show the name and class of a γ-ray source associated with a transient signal. The tenth column shows the distance between the positions of the transient and associated sources.
Pulsars, pulsar nebula, and the source G05. Four γ-ray signals in Table 1 are localized in the directions of pulsars, including PSR J2021+4026 (G03), PSR J0205+6449 (G05), and the Crab pulsar (G13). The signal G03 is at a distance of 0.24° from the near pulsar, which corresponds to the mean spacing in the HEALPix grid of resolution N_side = 256, and its association is robust. The pulsar PSR J2021+4026 is indeed a variable γ-ray pulsar whose flux decreased on 2011 October 16; see Allafort et al. (2013); Prokhorov, Moraghan & Vink (2021). The start date and the length of the high-γ-ray-flux state of PSR J2021+4026 reported in Table 1 are compatible with those reported by Prokhorov, Moraghan & Vink (2021). The Crab pulsar nebula γ-ray 'superflare' that occurred on 2011 April 12 (Buehler et al. 2012) was confirmed as a transient γ-ray signal at a local significance above 8σ by means of the VSSTW method by Prokhorov, Moraghan & Vink (2021). The Crab pulsar, located at a Galactic latitude of −5.8°, is at a large distance (≃1.1°) from the associated pixel (in Table 1) because of the separation of the Crab from the region used in this search; see also the case of PKS 1830-21 described above. This separation also explains a lower significance of the Crab nebula in Table 1 compared to that previously reported. The other transient γ-ray source at a significant distance from the associated pixel, N08, is near PSR J0205+6449, a 65-millisecond young rotation-powered pulsar. At the position of this pulsar the local significance is 7.8σ, while at the associated pixel the local significance is 13.2σ. Figure 2 shows the significance map obtained from the VSSTW analysis with the positions of PSR J0205+6449 and a radio source, TXS 0205+643, and illustrates this fact. The position of PSR J0205+6449 is shown by a cross. This difference in significances suggests that another γ-ray source can be responsible for this transient γ-ray signal. The Fermi-LAT count map shown in Figure 4 of Prokhorov, Moraghan & Vink (2021) also indicates a similar offset from the position of PSR J0205+6449 to the east. The flare of the source N08 was in 2017, and the γ-ray flux was 3.7 times higher (Prokhorov, Moraghan & Vink 2021) than that of the nearest source, PSR J0205+6449, in the Fermi-LAT 10-year source catalogue (4FGL-DR2). While our manuscript was in preparation, the Fermi-LAT collaboration released a new catalogue (4FGL-DR3) based on 12 years of data (Abdollahi et al. 2022). In the 4FGL-DR3 catalogue, a new source, namely 4FGL J0209.7+6437, to the east of PSR J0205+6449 is added and identified with a blazar of uncertain type, TXS 0205+643 or NVSS J020935+643725. Figure 2 also shows the γ-ray binary LS I +61°303, which is one of the γ-ray sources expected to exhibit repeating signals on the time scale of years.
Repeating high-γ-ray-flux transient signals
We presented above the results obtained from the VSSTW analysis applied to 13 years of Fermi-LAT data with the purpose of identifying the strongest γ-ray flare of each source. There exist, however, γ-ray sources showing modulated emission with a period longer than 3 years, such as PSR B1259-63 and η Carinae (e.g., Johnson et al. 2018; Martí-Devesa & Reimer 2021). To search for repeating flares from such sources during time intervals shorter than 13 years, we divided the entire data set into four equal time intervals of 170 weeks each (that is, about three and a quarter years) and performed a search for γ-ray transient signals in each of these time intervals by means of the VSSTW method. We repeated the analysis of the data divided into 170-week intervals, selecting data with an 85-week shift, to improve the sensitivity for sources whose flaring activity is at the edges of the time intervals. The 68476 positions in the Galactic plane used in this search are the same as those used in the previous section.
The performed search resulted in 8 additional flaring events: two of them are during the time interval from August 2008 to November 2011, another one is during the time interval from June 2013 to September 2016, the next two are during the time interval from September 2016 to January 2020, and the remaining three are in the time interval from May 2018 to July 2021. These 8 events are from four γ-ray sources, namely PSR B1259-63 (two events), LS I +61°303 (two events), PSR J2021+4026 (three events), and Cygnus X-3.

PSR B1259-63. Two high-γ-ray-flux events of PSR B1259-63 started in January 2011 and March 2021 and lasted for 3 weeks and 8 weeks, respectively. The local significances of these transient γ-ray signals are 7.1σ and 6.4σ. These high-γ-ray-flux events are associated with the periastron passages of PSR B1259-63 in December 2010 (Abdo et al. 2011b) and February 2021 (Chang et al. 2021; Chernyakova et al. 2021), respectively.

LS I +61°303. Two high-γ-ray-flux events of LS I +61°303 started in March 2009 and April 2019 and lasted for a few years (about 136 and 95 weeks, respectively). These events are at local significances of 10.2σ and 8.5σ and are most likely associated with superorbital modulation; they correspond to the preceding (Ackermann et al. 2013b) and succeeding high-γ-ray-flux states with respect to those reported in Table 1.
The selected 170-week intervals partially or significantly overlap with one of these three high-flux intervals, and the VSSTW method applied to these intervals provides evidence for superorbital modulation. Meanwhile, the VSSTW method does not show a high-flux state lasting 4 weeks or less in duration corresponding to the orbital modulation with the period of 26.5 days. This is most likely due to the fact that each 170-week interval comprises many (45) periods. The orbital period modulation of LS I +61°303 in γ rays was established by other methods (see, e.g., Prokhorov & Moraghan 2017).
PSR J2021+4026. This pulsar experienced a decrease in flux near 2011 October 16 (Allafort et al. 2013), which corresponds to the end of the high-γ-ray-flux state reported in Table 1. In addition to that dimming event in 2011, further γ-ray state changes of PSR J2021+4026 in 2014 and 2018 were reported by Fiori et al. (2022). We found the flux changes in December 2014 and February 2018 with significances of 10.3σ and 10.4σ, confirming the result reported by Fiori et al. (2022). They also reported that PSR J2021+4026 was in its low flux state from 2018 February 2 to 2020 May 26. The performed VSSTW analysis allowed us to reveal a new high-γ-ray-flux event, at a significance level of 8.5σ, that started in June 2020. Figure 3 shows the significance map centered on the position of Cygnus X-3, obtained from the VSSTW analysis and based on the data accumulated during the most recent time interval. Two transient signals, from Cygnus X-3 and PSR J2021+4026, are significantly detected, while no transient signal is detected from the pulsar PSR J2032+4127, which is located at a distance of 0.50° from Cygnus X-3. PSR J2032+4127 is a binary with a period of 45-50 years (e.g., Lyne et al. 2015). The non-detection of a transient GeV signal from PSR J2032+4127 is consistent with no change in flux at GeV energies during the 2017 periastron passage of PSR J2032+4127, and also between August 2008 and December 2019, reported by Chernyakova et al. (2020). We performed a sanity check by comparing the fluxes from a test position located at the same distance from Cygnus X-3 as PSR J2021+4026 (and at a distance of 3.1° from PSR J2021+4026) before and during the γ-ray bright state of Cygnus X-3 in 2020, and found that the variation in γ-ray flux at that test position is significantly smaller than the change in flux of PSR J2021+4026 in June 2020, ensuring that the γ-ray flux changes of Cygnus X-3 and PSR J2021+4026 in 2020 are different events.
Cygnus X-3. In addition to the γ-ray flaring event in mid-2020, which lasted for about six months as reported in the next section, we found another flaring event from Cygnus X-3, at a statistical significance of 7.0σ, when we applied the analysis to the time interval from September 2016 to January 2020. This additional flaring event started in the beginning of August 2018 and lasted for two weeks.
FLARE OF CYGNUS X-3 IN MID-2020
In this section, we present the results of Fermi-LAT observations of Cygnus X-3 during the first 13 years of the Fermi mission and a search for modulated γ-ray emission from Cygnus X-3 during the flare in 2020.
Likelihood analysis with FERMITOOLS
We binned the data into 181 28-day intervals in order to produce a light curve and analysed the Fermi-LAT data collected during these time intervals using the binned maximum likelihood mode of gtlike, which is part of FERMITOOLS. We employed the TS (Mattox et al. 1996) to evaluate the significance of the γ-ray emission coming from the source located at the position of Cygnus X-3 during each of these 28-day intervals. Along with Cygnus X-3, the model includes 4FGL sources within a region of 20° radius around Cygnus X-3. We took their spectral shapes from the 4FGL catalogue (Abdollahi et al. 2020). We selected the energy range for this analysis from 0.5 GeV to 300 GeV, with 25 logarithmic energy bins, and used spatial binning with a pixel size of 0.05°. We allowed the power-law normalisation and photon index of Cygnus X-3 and the normalisations of bright 4FGL γ-ray sources, in particular PSR J2021+4026, PSR J2032+4127, and the Cygnus cocoon, to vary, while keeping the normalisations of the other 4FGL sources fixed. We adopted a background model that includes components describing the diffuse Galactic and isotropic γ-ray emission. We used the standard templates gll_iem_v07.fits and iso_P8R3_SOURCE_V2_v1.txt for the Galactic diffuse and isotropic components, respectively, and allowed their normalisations to vary as well.
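A roughly equivalent binned analysis can be scripted with the fermipy wrapper around FERMITOOLS. This is only an illustrative sketch under our own assumptions (the analysis above ran gtlike directly); the file paths and the target name are placeholders, and the energies, pixel size, and diffuse templates mirror the parameters quoted in the text.

from fermipy.gtanalysis import GTAnalysis

# GTAnalysis also accepts a path to a YAML file with the same content.
config = {
    'data':      {'evfile': 'events.txt', 'scfile': 'SC.fits'},   # placeholders
    'selection': {'ra': 308.107, 'dec': 40.958,  # approx. Cygnus X-3 position
                  'emin': 500, 'emax': 300000, 'zmax': 90,
                  'evclass': 128, 'evtype': 3},
    'binning':   {'roiwidth': 20.0, 'binsz': 0.05,
                  'binsperdec': 9},  # ~25 bins between 0.5 and 300 GeV
    'model':     {'galdiff': 'gll_iem_v07.fits',
                  'isodiff': 'iso_P8R3_SOURCE_V2_v1.txt',
                  'catalogs': ['4FGL']},
}

gta = GTAnalysis(config)
gta.setup()                    # event selection, binning, exposure
gta.free_sources(free=False)   # fix all parameters first ...
gta.free_source('CygX3')       # ... placeholder name: substitute the
                               # 4FGL identifier of Cygnus X-3
gta.free_source('galdiff')
gta.free_source('isodiff')
fit_result = gta.fit()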
We computed the light curve and found that 23 of the 181 data points have TS values greater than 16, each corresponding to a detection of the source at a significance level greater than 4σ. The highest, second-, fourth-, and seventh-highest TS values are 168, 115, 95, and 67 (that is, 13.0σ, 10.7σ, 9.7σ, and 8.2σ), respectively, and correspond to four 28-day intervals in the course of the flaring event detected by means of the VSSTW search in Section 2.3. Figure 4 shows the obtained light curve including only the γ-ray data points with TS > 16. The time interval of the flaring event in 2020 lies between the two vertical lines in this figure. Figure 4 also shows the soft (2-4 keV) and hard (15-50 keV) X-ray light curves based on data points from MAXI (Hori et al. 2018) and Swift-BAT (Krimm et al. 2013), taken from the webpages http://maxi.riken.jp/star_data/J2032+409/J2032+409.html and https://swift.gsfc.nasa.gov/results/transients/CygX-3/, respectively. It is well established that Cygnus X-3 exhibits an overall anticorrelation between soft and hard X-ray fluxes (e.g., Szostek et al. 2008) and also an overall anticorrelation between hard X-ray and γ-ray emission (Fermi LAT Collaboration et al. 2009; Tavani et al. 2009). The light curves in Figure 4 show that γ-ray flaring events of Cygnus X-3 correspond to deep local minima of its hard X-ray emission (with a Swift-BAT count rate of 0.02 cts cm⁻² s⁻¹ as suggested by Piano et al. 2012). The horizontal line shown in Figure 4 corresponds to this threshold Swift-BAT count rate and indicates that Cygnus X-3 was in a soft X-ray state during the γ-ray flaring event in 2020. For the sake of illustration, we selected the start date of 2016-04-22 (MJD 57500) while showing these light curves.
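The interval selection behind Figure 4 amounts to two thresholds; the sketch below uses synthetic placeholder arrays, since the measured TS values and Swift-BAT rates are not reproduced here.

import numpy as np

# Placeholder arrays standing in for the measured values behind Figure 4:
# ts[i] is the gtlike TS of Cygnus X-3 in the i-th 28-day interval, and
# bat[i] is the mean Swift-BAT 15-50 keV rate (cts cm^-2 s^-1) in that interval.
rng = np.random.default_rng(0)
ts = rng.exponential(2.0, 181)
bat = rng.uniform(0.0, 0.1, 181)

detected = ts > 16.0   # TS > 16 corresponds to a > 4 sigma detection
soft = bat < 0.02      # hard-X-ray-quenched (soft) state threshold
print(detected.sum(), (detected & soft).sum())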
We also performed a likelihood analysis of Fermi-LAT data accumulated during 5 consecutive 28-day intervals of the flaring event in 2020. The corresponding data points are shown in Figure 4 between the two vertical lines. Each of these intervals allows a significant detection of γ-ray emission from Cygnus X-3. The joint analysis of these 5 data sets resulted in a TS value for Cygnus X-3 of 448, which corresponds to a 21σ significance. The likelihood maximisation yields a photon index of Γ = 3.12 ± 0.10 and an integral flux of (6.74 ± 0.41) × 10⁻⁸ ph cm⁻² s⁻¹ above 0.5 GeV.
In addition to the strongest γ-ray flaring event, Figure 4 shows that there are other significantly detected γ-ray flaring events observed during the intervals of faint hard and powerful soft X-ray emission. The performed analysis resulted in 18 additional detections in 28-day intervals. Five of these 18 γ-ray detections occurred before 2016-04-22. The results of Fermi-LAT observations corresponding to the first four of them and to the fifth one were reported by Fermi LAT Collaboration et al. (2009) and Corbel et al. (2012), respectively. The data points provided by the 13 detections in 28-day intervals occurring after 2016-04-22 are shown in Figure 4. These 13 data points correspond to 6 soft X-ray states of Cygnus X-3. The γ-ray flaring events corresponding to these soft X-ray states were in August-September 2016, February-April 2017, June-August 2018, February-June 2019, January-February 2020, and March-May 2021. All these 6 γ-ray flaring events of Cygnus X-3 have been reported to be present in AGILE data, and the first two of them in Fermi-LAT data; see the Astronomer's Telegrams⁴. In this section, the last four of these γ-ray flaring events are for the first time reported to be present in Fermi-LAT data. Note that the γ-ray flare revealed by means of the VSSTW analysis, that is G04 from Table 1, has not been reported in Astronomer's Telegrams by the Fermi-LAT or AGILE collaborations.
4.8-hour gamma-ray pulsations of Cygnus X-3
To assess the significance of γ-ray modulation during the flaring event in 2020, we binned γ rays into 1000-second time bins and performed a Poisson maximum likelihood analysis of data extracted from the 0.35°-radius aperture around Cygnus X-3. We chose the aperture size smaller than the distance from Cygnus X-3 to PSR J2032+4127. We modelled the expected number of counts in a bin centered on time t_i as N(t_i) = ε_i × (F_0 + F_1 × (1 + cos(2π(t_i − T_0)/P + δ))), where ε_i, T_0, P, and δ are the exposure, start time, period, and phase. Compared to the equation in the Supporting Online Material for Fermi LAT Collaboration et al. (2009), we included the factor 2π (its absence there appears to be a misprint) and renormalised F_1 by adding 1 to the cosine function, making the modelled flux always non-negative.
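A minimal version of this fit can be written with numpy/scipy. The sketch below is ours, not the analysis code: the variable names and the Nelder-Mead optimiser are assumptions, and the 1σ period uncertainty then follows from scanning P until ln L drops by 0.5 from its maximum, as described below.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2, norm

def neg_log_like(theta, t, counts, exposure):
    # N(t_i) = eps_i * (F0 + F1 * (1 + cos(2*pi*(t_i - T0)/P + delta))),
    # with T0 fixed to the start of the data, as in the text.
    f0, f1, period, delta = theta
    mu = exposure * (f0 + f1 * (1.0 + np.cos(2.0 * np.pi * (t - t[0]) / period
                                             + delta)))
    mu = np.maximum(mu, 1e-30)
    return -np.sum(counts * np.log(mu) - mu)  # Poisson log L up to a constant

def modulation_sigma(t, counts, exposure, p0=4.79 / 24.0):
    # Null model: constant flux, with an analytic maximum-likelihood rate.
    f0 = counts.sum() / exposure.sum()
    mu0 = np.maximum(f0 * exposure, 1e-30)
    ll_null = np.sum(counts * np.log(mu0) - mu0)
    res = minimize(neg_log_like, x0=[f0, 0.5 * f0, p0, 0.0],
                   args=(t, counts, exposure), method='Nelder-Mead')
    delta_lnl = -res.fun - ll_null
    # Three extra parameters (F1, P, delta): 2*delta_lnl ~ chi2 with 3 dof;
    # e.g. delta_lnl = 31.0 maps to about 7.3 sigma, as quoted below.
    return norm.isf(chi2.sf(2.0 * delta_lnl, df=3))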
When considering emission from the entire flaring event consisting of 5 28-day intervals, we found that a modulated flux model improves the log likelihood over a constant flux model by Δln L = 31.0. Twice the difference in log likelihood follows a χ² distribution with 3 degrees of freedom (the additional degrees of freedom are F_1, P, and δ; see Fermi LAT Collaboration et al. 2009). This change in the log likelihood corresponds to a significance level of 7.3σ. We derived the period using a profile likelihood method. The best-fit period is P = 4.79298 ± 0.00045 hours, in reasonable agreement with the period, P = 4.7917 ± 0.0011 hours, obtained by Fermi LAT Collaboration et al. (2009) from the active states of Cygnus X-3 in 2008-2009, and with the orbital period of Cygnus X-3. We computed the statistical uncertainties using the likelihood profile, for which we changed the period, P, until the maximum likelihood decreases by 0.5 in logarithm. We report the statistical uncertainties; however, they should be treated with caution, since systematic uncertainties are also present due to the assumed cosine shape of the modulated signal and the assumption of a constant flux amplitude during the modulated signal. The uncertainty due to the latter assumption can be reduced by dividing the entire interval into similar flux intervals. We show the folded γ-ray light curve obtained from the Fermi-LAT data accumulated during the active state of Cygnus X-3 in mid-2020 in Figure 5. The maximum γ-ray flux in mid-2020 occurs just before superior conjunction. Applying a sinusoidal model to the phase-folded data, we estimated that the phase of maximum γ-ray emission is at φ = 0.84. Fermi LAT Collaboration et al. (2009) reported that the maximum γ-ray flux during the active states in 2008 and 2009 also occurs just before superior conjunction, at φ = 0.88 and φ = 0.76, respectively. Therefore, γ-ray emission is produced by energetic particles propagating from the compact object when it is seen behind the Wolf-Rayet star. The presence of modulated γ-ray emission during the flaring event in 2020 strengthens the previous conclusion about modulation drawn from the first two years of Fermi-LAT observations of Cygnus X-3. We also searched for the presence of periodic emission from the source in each of the 5 28-day intervals of the flaring event. The start and end times of these intervals are listed in Table 2. We found that the source emits periodic γ-ray emission during the fifth 28-day interval, when the source is the brightest. There is some weak evidence for periodic emission during three of the other four 28-day intervals. We found that a modulated flux model improves the log likelihood over a constant model by 5.1, 5.9, 2.8, 4.3, and 15.9 for the first to the fifth 28-day intervals. The corresponding significances for the presence of periodic γ-ray emission are 2.4σ, 2.6σ, 1.5σ, 2.1σ, and 5.0σ.
The best-fit period corresponding to the γ-ray brightest 28-day interval is P = 4.7972 ± 0.0033 hours. The period reported by Fermi LAT Collaboration et al. (2009) is well within the 90% confidence interval, and the orbital period, P = 4.7926 hours, extrapolated from Singh et al. (2002), is in even better agreement. We performed a combined analysis of the first four 28-day intervals. The combined analysis yields Δln L = 19.1, corresponding to a 5.5σ significance level. The period obtained from the combined analysis is P = 4.79388 ± 0.00083 hours, in agreement with the period derived from the entire flaring event and with the orbital period. We also compared the flux amplitudes of modulated emission, F_1, corresponding to the combined four and to the fifth 28-day intervals, and found that the amplitude F_1 is higher by a factor of about 2 for the fifth interval. This fact supports the conclusion that the flux increase during the fifth interval was due to the modulated emission. Both the consistency of the derived period with the orbital period and the fact that each of the studied time intervals spans many cycles of the period (see Vaughan et al. 2016 for the false positive rate of few-cycle periodicities) strongly support the reliability of the obtained results.
Outlook
The detection of modulated γ-ray emission during the flare from Cygnus X-3 in 2020 opens new opportunities to study the conditions under which γ-ray activity in this microquasar is produced. A search for modulated emission during the other 6 flaring events shown in Figure 4 is required. To increase the statistics, one can also perform a stacking analysis (see, e.g., Barbiellini et al. 2014) of the Fermi-LAT observations during these flaring events. Another way to increase the statistics is to use a larger aperture and to subtract the contribution of PSR J2032+4127 from the γ-ray signal using the pulsar gating technique (see Fermi LAT Collaboration et al. 2009). The pulsar gating technique can also be useful to further boost the significance of the detected modulation during the 2020 flaring event, allowing a more detailed study.
Multi-wavelength studies of Cygnus X-3 from the lowest to the highest frequencies, during different X-ray states and during the γ-ray activity in mid-2020, can provide further information about this binary system. Cygnus X-3 exhibits occasional giant radio flares as intense as a few thousand times the quiescent emission level in the radio band, first seen by Gregory & Kronberg (1972). Giant radio flares in 2011, 2016, and 2017 marked the transitions between different X-ray states, accompanied by rising non-thermal hard X-ray emission (Corbel et al. 2012; Trushkin et al. 2017). Most recently, a giant radio flare was observed in February 2020 (Green & Elwood 2020). Examples of major and minor flares and a comparison of their physical parameters are given by Spencer et al. (2022). Meanwhile, a comparison of the results obtained with TeV γ-ray observations between 2006 and 2011 by the MAGIC and VERITAS telescopes (Aleksić et al. 2010; Archambault et al. 2013) with those by the SHALON telescope (Sinitsyna & Sinitsyna 2018) can provide insights into whether TeV γ-ray emission is created under specific conditions.
SUMMARY
We performed a VSSTW analysis with the purpose of searching for transient γ-ray signals in the Galactic plane using 13 years of Fermi-LAT data. Compared to the previous search by means of this technique by Prokhorov, Moraghan & Vink (2021), besides using more years of data, we restored full coverage, used finer pixelization of the Galactic plane, and also broadened the energy range down to 0.5 GeV. The detected sources are listed in Table 1. This refined search increased the number of transient γ-ray sources in the Galactic plane by more than 50% from the number reported in the previously published search. Among these sources are a γ-ray binary (PSR B1259-63), three more novae (V1324 Sco, V5855 Sgr, and V357 Mus), and, in particular, a microquasar, Cygnus X-3. The γ-ray binary PSR B1259-63 is a soft γ-ray source, and the inclusion of lower energy γ rays was crucial for its detection. The nova V1324 Sco is a strong transient γ-ray source that had avoided detection in the previous VSSTW search due to the sparser coverage of the Galactic plane. The novae V5855 Sgr and V357 Mus are faint in γ rays, and the twofold increase of the aperture area (and therefore of the statistics) is a relevant factor for their detection. The refined search also allowed us to study in more detail the transient signal N08 (or G05 in Table 1) that is near PSR J0205+6449. The significance map shown in Figure 2 illustrates that the position of the highest significance is at an offset from PSR J0205+6449 to the east. This fact suggests that another γ-ray source may be responsible for this transient signal.
We also performed a VSSTW analysis of data subsets consisting of 170 weeks of data each and searched for repeating high-γ-ray-flux transient signals. This search reveals repeating signals from PSR B1259-63, LS I +61°303, PSR J2021+4026, and Cygnus X-3, in addition to those in Table 1. The repeating signals from PSR B1259-63 are identified with periastron passages of the binary system, with a period of about 1237 days. The repeating signals from LS I +61°303 are most likely associated with the 1667-day superorbital modulation of the binary system. The latest of these three high states of LS I +61°303 started in 2019, and its detection had not yet been reported at GeV energies. The search showed that, in addition to the dimming event in 2011 listed in Table 1, the pulsar PSR J2021+4026 experienced two flaring events: one was between 2014 and 2018, and the other started in 2020. The former had recently been reported by Fiori et al. (2022), and the latter had not been reported. The second flaring event of Cygnus X-3, which lasted for two weeks, corresponds to a soft X-ray state in August 2018.
The VSSTW analysis revealed a high-γ-ray-flux state of the microquasar Cygnus X-3. This flaring event happened in 2020. By comparing the light curves in the γ-ray, soft X-ray, and hard X-ray bands, we found that Cygnus X-3 was in a soft X-ray state, confirming the previously noted trend (Fermi LAT Collaboration et al. 2009; Tavani et al. 2009). We performed a binned likelihood analysis and found that the γ-ray spectrum of Cygnus X-3 corresponding to this high-γ-ray-flux state has a relatively soft photon index of 3.1 and that the integral γ-ray flux above 0.5 GeV is as high as that reported by Fermi LAT Collaboration et al. (2009) for the flaring events in 2008 and 2009. Given the high γ-ray flux and the long duration of the γ-ray flaring event in 2020, this event provided us with an opportunity to search for orbital modulation of the γ-ray emission. We found that the modulation during this flaring event has a high significance and a best-fit period of 4.793 hours. The obtained period is in agreement with that derived from Fermi-LAT observations of the previous flaring events from Cygnus X-3 (Fermi LAT Collaboration et al. 2009) and with the orbital period of this binary system. We also found that the phase of maximum γ-ray emission in mid-2020 is around superior conjunction. This conclusion suggests that the γ-ray emission seen in 2008-2009 and in mid-2020 is likely to be produced by the same mechanism.
ACKNOWLEDGEMENTS
We are grateful to Jacco Vink for valuable suggestions and discussions and thank the referee for the constructive comments that helped us to improve the manuscript.
Computations were performed on the computational facilities belonging to the ALMA Regional Center Taiwan, Academia Sinica, Taiwan. D.P. is supported by funding from the European Union's Horizon 2020 research and innovation program under grant agreement No. 101004131 (SHARP).
DATA AVAILABILITY
Fermi-LAT data analysed in this paper are publicly distributed by the LAT team and can be downloaded from the LAT Data Server. The Python code used to produce the results of the paper is publicly available. The significance maps obtained from the VSSTW analysis applied to the entire data set and to each of the analysed subsets of data are publicly available in the FITS format at https://zenodo.org/record/7348674.
A NOTE IN PROOF
After acceptance of our manuscript for publication, we became aware of the paper by Zdziarski et al. (2018), who provided an extensive study of Cygnus X-3 using Fermi-LAT data collected between 2008 August and 2017 August. Note that the two flaring events of Cygnus X-3 detected with the VSSTW method are after that time range and that the other newly reported flaring events of Cygnus X-3 in Section 4.1 are indeed reported for the first time.
Figure 1. The significance map of γ-ray transient emission in σ showing the microquasar Cygnus X-3, the nova V407 Cyg, and the pulsar PSR J2021+4026.

Figure 2. The significance map of γ-ray transient emission in σ showing the binary LS I +61°303, the γ-ray source N08/G05, and the source TXS 0205+643.

Figure 3. The significance map of γ-ray transient emission in σ showing the Cygnus region and corresponding to the 170-week data subset from May 2018 to July 2021.

Figure 4. Fermi-LAT, MAXI, and Swift-BAT light curves of Cygnus X-3 showing data points from April 2016 to July 2021.

Figure 5. The light curve folded on the time interval corresponding to the four orbital periods for the Fermi-LAT data in the time range JD 2458967.2871-2459107.1494. The length of 1 unit of phase is equal to the orbital period, and the compact object is behind the Wolf-Rayet star at phase = 0 (superior conjunction).
Table 2. The start and end times of the five time intervals during the γ-ray flare of Cygnus X-3 in 2020, shown in Fermi mission elapsed time (MET; in seconds).

Interval #   Start        End
1            609695017    612114217
2            612114217    614533417
3            614533417    616952617
4            616952617    619371817
5            619371817    621791017
¹ http://www.slac.stanford.edu/exp/glast/groups/canda/lat_Performance.htm
² http://healpix.sourceforge.net
³ https://zenodo.org/record/4739389
⁴ https://www.astronomerstelegram.org/; these Astronomer's Telegrams include ATel #9429 (AGILE) and ATel #9502 (Fermi-LAT) for the first of these 6 intervals, ATels #10109 and #10243 (both Fermi-LAT) and ATels #10138 and #10179 (both AGILE) for the second interval, ATels #11804 and #11814 (both AGILE) for the third interval, ATels #12677, #12678, and #12894 (all AGILE) for the fourth interval, ATels #13423 and #13458 (both AGILE) for the fifth interval, and ATels #14662 and #14780 (both AGILE) for the sixth interval. See ATel #15009 for γ-ray activity detected by AGILE in October 2021 (i.e., started after the time interval included in this paper).
REFERENCES

Abdo A. A., Ackermann M., Ajello M., Allafort A., Baldini L., Ballet J., Barbiellini G., Bastieri D., et al., 2010a, ApJ, 723, 649
Abdo A. A., Ackermann M., Ajello M., Allafort A., Baldini L., Ballet J., Barbiellini G., et al., 2011a, Science, 331, 739
Abdo A. A., Ackermann M., Ajello M., Allafort A., Ballet J., Barbiellini G., Bastieri D., Bechtol K., et al., 2011b, ApJ, 736, L11
Abdo A. A., et al., 2009a, ApJ, 701, L123
Abdo A. A., Ackermann M., Ajello M., Atwood W. B., Axelsson M., Baldini L., Ballet J., Barbiellini G. V., et al., 2009b, ApJ, 706, L56
Abdo A. A., Ackermann M., Ajello M., Atwood W. B., Baldini L., Ballet J., Barbiellini G., Bastieri D., et al., 2010b, Science, 329, 817
Abdollahi S., et al., 2020, ApJS, 247, 33
Abdollahi S., Acero F., Baldini L., Ballet J., Bastieri D., Bellazzini R., Berenji B., Berretta A., et al., 2022, ApJS, 260, 53
Abdollahi S., Ackermann M., Ajello M., Albert A., Baldini L., Ballet J., Barbiellini G., Bastieri D., et al., 2017, ApJ, 846, 34
Ackermann M., Ajello M., Albert A., Allafort A., Antolini E., Baldini L., Ballet J., Barbiellini G., et al., 2013a, ApJ, 771, 57
Ackermann M., Ajello M., Ballet J., Barbiellini G., Bastieri D., Bellazzini R., Bonamente E., Brandt T. J., et al., 2013b, ApJ, 773, L35
Aleksić J., Antonelli L. A., Antoranz P., Backes M., Baixeras C., Barrio J. A., Bastieri D., et al., 2010, ApJ, 721, 843
Allafort A., Baldini L., Ballet J., Barbiellini G., Baring M. G., Bastieri D., Bellazzini R., Bonamente E., et al., 2013, ApJ, 777, L2
Archambault S., Beilicke M., Benbow W., Berger K., Bird R., Bouvier A., Buckley J. H., et al., 2013, ApJ, 779, 150
Atwood W. B., Abdo A. A., Ackermann M., Althouse W., Anderson B., Axelsson M., Baldini L., Ballet J., et al., 2009, ApJ, 697, 1071
Barbiellini G., Bastieri D., Bechtol K., Bellazzini R., Blandford R. D., Borgland A. W., Bregeon J., Bruel P., et al., 2014, ApJ, 784, 118
Buehler R., et al., 2012, ApJ, 749, 26
Caliandro G. A., Cheung C. C., Li J., Scargle J. D., Torres D. F., Wood K. S., Chernyakova M., 2015, ApJ, 811, 68
Chang Z., Zhang S., Chen Y.-P., Ji L., Kong L.-D., Wang P.-J., 2021, Universe, 7, 472
Chernyakova M., Malyshev D., Blay P., van Soelen B., Tsygankov S., 2020, MNRAS, 495, 365
Chernyakova M., et al., 2021, Universe, 7, 242
Corbel S., et al., 2012, MNRAS, 421, 2947
Corbet R. H. D., et al., 2019, ApJ, 884, 93
Dubus G., 2013, A&A Rev., 21, 64
Fermi LAT Collaboration et al., 2009, Science, 326, 1512
Fermi LAT Collaboration et al., 2012, Science, 335, 189
Fiori A., Razzano M., Saz Parkinson P., Mignani R., Fermi-LAT Collaboration, 2022, in 37th International Cosmic Ray Conference, 12-23 July 2021, Berlin, p. 609
Franckowiak A., Jean P., Wood M., Cheung C. C., Buson S., 2018, A&A, 609, A120
Gordon A. C., Aydi E., Page K. L., Li K.-L., Chomiuk L., Sokolovsky K. V., Mukai K., Seitz J., 2021, ApJ, 910, 134
Górski K. M., Hivon E., Banday A. J., Wandelt B. D., Hansen F. K., Reinecke M., Bartelmann M., 2005, ApJ, 622, 759
Green D. A., Elwood P., 2020, Research Notes of the American Astronomical Society, 4, 36
Gregory P. C., 2002, ApJ, 575, 427
Gregory P. C., Kronberg P. P., 1972, Nature, 239, 440
H. E. S. S. Collaboration et al., 2020, A&A, 633, A102
Harvey M., Rulten C. B., Chadwick P. M., 2021, MNRAS, 506, 6029
Hori T., et al., 2018, ApJS, 235, 7
Johnson T. J., Wood K. S., Kerr M., Corbet R. H. D., Cheung C. C., Ray P. S., Omodei N., 2018, ApJ, 863, 27
Krimm H. A., et al., 2013, ApJS, 209, 14
Li J., Torres D. F., Cheng K. S., de Oña Wilhelmi E., Kretschmar P., Hou X., Takata J., 2017, ApJ, 846, 169
Li J., Torres D. F., Liu R.-Y., Kerr M., de Oña Wilhelmi E., Su Y., 2020a, Nature Astronomy, 4, 1177
Li K.-L., et al., 2019, The Astronomer's Telegram, 13116, 1
Li K.-L., Hambsch F.-J., Munari U., Metzger B. D., Chomiuk L., Frigo A., Strader J., 2020b, ApJ, 905, 114
Loh A., Corbel S., Dubus G., 2017, MNRAS, 467, 4462
Lyne A. G., Stappers B. W., Keith M. J., Ray P. S., Kerr M., Camilo F., Johnson T. J., 2015, MNRAS, 451, 581
Martí-Devesa G., Reimer O., 2021, A&A, 654, A44
Martí-Devesa G., Reimer O., Li J., Torres D. F., 2020, A&A, 635, A141
Mattox J. R., et al., 1996, ApJ, 461, 396
Neronov A., Malyshev D., Chernyakova M., Lutovinov A., 2012, A&A, 543, L9
Ohm S., Hoischen C., 2018, MNRAS, 474, 1335
Paredes J. M., Bordas P., 2019, Rendiconti Lincei. Scienze Fisiche e Naturali, 30, 107
Piano G., et al., 2012, A&A, 545, A110
Prokhorov D. A., Moraghan A., 2017, MNRAS, 471, 3036
Prokhorov D. A., Moraghan A., Vink J., 2021, MNRAS, 505, 1413
Pshirkov M. S., 2016, MNRAS, 457, L99
Pshirkov M. S., Rubtsov G. I., 2013, Soviet Journal of Experimental and Theoretical Physics, 116, 59
Singh N. S., Naik S., Paul B., Agrawal P. C., Rao A. R., Singh K. Y., 2002, A&A, 392, 161
Sinitsyna V. G., Sinitsyna V. Y., 2018, Astronomy Letters, 44, 162
Spencer R. E., Garrett M., Bray J. D., Green D. A., 2022, MNRAS, 512, 2618
Stappers B. W., et al., 2014, ApJ, 790, 39
Szostek A., Zdziarski A. A., McCollough M. L., 2008, MNRAS, 388, 1001
Tam P. H. T., Huang R. H. H., Takata J., Hui C. Y., Kong A. K. H., Cheng K. S., 2011, ApJ, 736, L10
Tavani M., et al., 2009, Nature, 462, 620
Tavani M., et al., 2011, Science, 331, 736
Trushkin S., McCollough M., Nizhelskij N., Tsybulev P., 2017, Galaxies, 5, 86
Vaughan S., Uttley P., Markowitz A. G., Huppenkothen D., Middleton M. J., Alston W. N., Scargle J. D., Farr W. M., 2016, MNRAS, 461, 3145
Wilks S. S., 1938, The Annals of Mathematical Statistics, 9, 60
Zanin R., Fernández-Barral A., de Oña Wilhelmi E., Aharonian F., Blanch O., Bosch-Ramon V., Galindo D., 2016, A&A, 596, A55
Zdziarski A. A., et al., 2018, MNRAS, 479, 4399
NON-UNIFORM DEPENDENCE FOR THE NOVIKOV EQUATION IN BESOV SPACES

Jinlu Li, Min Li, and Weipeng Zhu

2 Feb 2020

Abstract. In this paper, we investigate the dependence on initial data of solutions to the Novikov equation. We show that the solution map is not uniformly continuous with respect to the initial data in Besov spaces $B^s_{p,r}(\mathbb{R})$, $s > \max\{1+\frac{1}{p}, \frac{3}{2}\}$.

2010 Mathematics Subject Classification: 35Q35.
DOI: 10.1007/s00021-020-00511-9 · arXiv: 2002.00321 · https://arxiv.org/pdf/2002.00321v1.pdf · corpusid: 211010720 · pdfsha: fac4d548417d7ac887b2732e0372b0fb17ea2a16
Introduction and main result
In this paper, we consider the Cauchy problem for the Novikov equation
$$\begin{cases}(1-\partial_x^2)u_t=3uu_xu_{xx}+u^2u_{xxx}-4u^2u_x,& t>0,\\ u(x,0)=u_0(x).\end{cases} \tag{1.1}$$
What we are most concerned about is the issue of non-uniform dependence on the initial data. This equation was discovered quite recently by Novikov in a symmetry classification of nonlocal PDEs with cubic nonlinearity. He showed that Eq. (1.1) is integrable, using as a definition of integrability the existence of an infinite hierarchy of quasi-local higher symmetries [35]. It has a bi-Hamiltonian structure and admits exact peakon solutions $u(t,x)=\pm\sqrt{c}\,e^{-|x-ct|}$ with $c>0$ [27]. The Novikov equation has been studied by many authors. Indeed, it is locally well-posed in certain Sobolev spaces and Besov spaces [40,41,43,44]. Moreover, it has global strong solutions [40], finite-time blow-up solutions [44], and global weak solutions [29,39].
The Novikov equation can be thought of as a generalization of the well-known Camassa-Holm (CH) equation
$$(1-\partial_x^2)u_t=-3uu_x+2u_xu_{xx}+uu_{xxx}.$$
This equation is known as the shallow water wave equation [2,13]. It is completely integrable and has been studied extensively by many authors [2,6,14]. The CH equation also has a Hamiltonian structure [4,22] and admits exact peaked solitons of the form $ce^{-|x-ct|}$ with $c>0$, which are orbitally stable [15]. These peaked solutions also mimic the pattern specific to the waves of greatest height [7,11,37].

The local well-posedness for the Cauchy problem of the CH equation in Sobolev spaces and Besov spaces was established in [9,10,16,36]. Moreover, the CH equation has global strong solutions [5,9,10], finite-time blow-up strong solutions [5,8,9,10], and a unique global weak solution [42], and its solutions depend continuously on the initial data [31].
The CH equation was the only known integrable equation having peakon solutions until 2002, when another such equation was discovered by Degasperis and Procesi [18]:
$$(1-\partial_x^2)u_t=-4uu_x+3u_xu_{xx}+uu_{xxx}.$$
The DP equation can be regarded as a model for nonlinear shallow water dynamics, and its asymptotic accuracy is the same as for the CH shallow water equation [19]; it is also integrable with a bi-Hamiltonian structure [17,12]. Similar to the CH equation, the DP equation has travelling wave solutions [30,38]. The Cauchy problem of the DP equation is locally well-posed in certain Sobolev spaces and Besov spaces [23,24,45]. In addition, it has global strong solutions [33,45,47], finite-time blow-up solutions [20,21], and global weak solutions [3,20,46,47], and its solutions depend continuously on the initial data [31]. Different from the CH equation, the DP equation has not only peakon solutions [17] and periodic peakon solutions [46], but also shock peakons [34] and periodic shock waves [21].
The issue of non-uniform dependence has attracted a lot of attention after Kenig et al.'s study of some dispersive equations [28]. For the non-uniform continuity of the CH and DP equations in Sobolev spaces, we refer to [26,25]. For the Novikov equation, Himonas and Holliman proved that the data-to-solution map is not uniformly continuous in Sobolev spaces $H^s$, $s>\frac32$; they used the method of approximate solutions in conjunction with well-posedness estimates [24]. Up to now, to the best of our knowledge, there is no paper concerning the non-uniform dependence on initial data for the Novikov equation in the framework of Besov spaces, which is what we shall investigate in this paper.
The Novikov equation (1.1) can be changed into the transport-like form
$$u_t+u^2u_x=-(1-\partial_x^2)^{-1}\Big[\frac{1}{2}u_x^3+\partial_x\Big(\frac{3}{2}uu_x^2+u^3\Big)\Big],\qquad u(0,x)=u_0. \tag{1.2}$$
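For completeness, we note that the equivalence of (1.1) and (1.2) can be verified directly: since
$$(1-\partial_x^2)(u^2u_x)=u^2u_x-2u_x^3-6uu_xu_{xx}-u^2u_{xxx},$$
adding $(1-\partial_x^2)(u^2u_x)$ to both sides of (1.1) gives
$$(1-\partial_x^2)(u_t+u^2u_x)=-3u^2u_x-2u_x^3-3uu_xu_{xx}=-\Big[\frac{1}{2}u_x^3+\partial_x\Big(\frac{3}{2}uu_x^2+u^3\Big)\Big],$$
and applying $(1-\partial_x^2)^{-1}$ yields (1.2).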
For simplicity, we denote $R(u)=R_1(u)+R_2(u)+R_3(u)$, where
$$R_1(u)=-\frac{1}{2}(1-\partial_x^2)^{-1}u_x^3,\quad R_2(u)=-\partial_x(1-\partial_x^2)^{-1}u^3,\quad R_3(u)=-\frac{3}{2}\partial_x(1-\partial_x^2)^{-1}\big(uu_x^2\big).$$
Then, we have the following result.
Theorem 1.1. Let $1\le p,r\le\infty$ and $s>\max\{1+\frac1p,\frac32\}$. The data-to-solution map for the Novikov equation (1.2) is not uniformly continuous from any bounded subset of $B^s_{p,r}(\mathbb{R})$ into $C([0,T];B^s_{p,r}(\mathbb{R}))$. That is, there exist two sequences of solutions $u^n$ and $v^n$ such that
$$\|u^n_0\|_{B^s_{p,r}(\mathbb{R})}+\|v^n_0\|_{B^s_{p,r}(\mathbb{R})}\lesssim1,\qquad \lim_{n\to\infty}\|u^n_0-v^n_0\|_{B^s_{p,r}(\mathbb{R})}=0,$$
$$\liminf_{n\to\infty}\|u^n(t)-v^n(t)\|_{B^s_{p,r}(\mathbb{R})}\gtrsim t,\quad t\in[0,T_0],$$
with a small positive time $T_0$.
Our paper is organized as follows. In Section 2, we give some preliminaries which will be used in the sequel. In Section 3, we give the proof of our main theorem.
Notations. Given a Banach space $X$, we denote its norm by $\|\cdot\|_X$. The symbol $A\lesssim B$ means that there is a uniform positive constant $c$, independent of $A$ and $B$, such that $A\le cB$.
Littlewood-Paley analysis
In this section, we will recall some facts about the Littlewood-Paley decomposition, the nonhomogeneous Besov spaces, and some of their useful properties. For more details, the reader can refer to [1].
There exists a couple of smooth functions $(\chi,\varphi)$ valued in $[0,1]$, such that $\chi$ is supported in the ball $\mathcal{B}\triangleq\{\xi\in\mathbb{R}^d:|\xi|\le\frac43\}$, $\varphi$ is supported in the ring $\mathcal{C}\triangleq\{\xi\in\mathbb{R}^d:\frac34\le|\xi|\le\frac83\}$, and $\varphi\equiv1$ for $\frac43\le|\xi|\le\frac32$. Moreover,
$$\forall\,\xi\in\mathbb{R}^d,\quad \chi(\xi)+\sum_{j\ge0}\varphi(2^{-j}\xi)=1,$$
$$\forall\,\xi\in\mathbb{R}^d\setminus\{0\},\quad \sum_{j\in\mathbb{Z}}\varphi(2^{-j}\xi)=1,$$
$$|j-j'|\ge2\ \Rightarrow\ \operatorname{Supp}\varphi(2^{-j}\cdot)\cap\operatorname{Supp}\varphi(2^{-j'}\cdot)=\emptyset,$$
$$j\ge1\ \Rightarrow\ \operatorname{Supp}\chi(\cdot)\cap\operatorname{Supp}\varphi(2^{-j}\cdot)=\emptyset.$$
Then, we can define the nonhomogeneous dyadic blocks $\Delta_j$ and the nonhomogeneous low-frequency cut-off operator $S_j$ as follows:
$$\Delta_ju=0\ \text{ if }j\le-2,\qquad \Delta_{-1}u=\chi(D)u=\mathcal{F}^{-1}(\chi\mathcal{F}u),$$
$$\Delta_ju=\varphi(2^{-j}D)u=\mathcal{F}^{-1}(\varphi(2^{-j}\cdot)\mathcal{F}u)\ \text{ if }j\ge0,\qquad S_ju=\sum_{j'=-\infty}^{j-1}\Delta_{j'}u.$$

Definition 2.1 ([1]). Let $s\in\mathbb{R}$ and $1\le p,r\le\infty$. The nonhomogeneous Besov space $B^s_{p,r}(\mathbb{R}^d)$ consists of all tempered distributions $u$ such that
$$\|u\|_{B^s_{p,r}(\mathbb{R}^d)}\triangleq\Big\|\big(2^{js}\|\Delta_ju\|_{L^p(\mathbb{R}^d)}\big)_{j\in\mathbb{Z}}\Big\|_{\ell^r(\mathbb{Z})}<\infty.$$
Then, we have the following product laws.

Lemma 2.2 ([1] and Lemma 2.9, [32]). (1) Let $1\le p,r\le\infty$ and $s>0$. Then we have
$$\|uv\|_{B^s_{p,r}(\mathbb{R}^d)}\le C\big(\|u\|_{L^\infty(\mathbb{R}^d)}\|v\|_{B^s_{p,r}(\mathbb{R}^d)}+\|v\|_{L^\infty(\mathbb{R}^d)}\|u\|_{B^s_{p,r}(\mathbb{R}^d)}\big).$$
(2) Let $1\le p,r\le\infty$ and $s>\max\{\frac32,1+\frac dp\}$. Then, we have
$$\|uv\|_{B^{s-2}_{p,r}(\mathbb{R}^d)}\le C\|u\|_{B^{s-1}_{p,r}(\mathbb{R}^d)}\|v\|_{B^{s-2}_{p,r}(\mathbb{R}^d)}.$$

Lemma 2.3 (Theorem 3.38, [1]). Let $1\le p,r\le\infty$ and
$$\sigma>-d\min\Big(\frac1p,\frac1{p'}\Big)\quad\text{or}\quad \sigma>-1-d\min\Big(\frac1p,\frac1{p'}\Big)\ \text{ if }\operatorname{div}v=0. \tag{2.1}$$
There exists a constant $C = C(d,p,r,\sigma)$ such that for any smooth solution to the following linear transport equation:
$$\partial_t f + v\partial_x f = g, \qquad f|_{t=0} = f_0,$$
we have
$$\sup_{s\in[0,t]}\|f(s)\|_{B^\sigma_{p,r}(\mathbb{R}^d)} \le Ce^{CV_p(v,t)}\Big(\|f_0\|_{B^\sigma_{p,r}(\mathbb{R}^d)} + \int_0^t\|g(\tau)\|_{B^\sigma_{p,r}(\mathbb{R}^d)}\,d\tau\Big), \tag{2.2}$$
with
$$V_p(v,t) = \begin{cases} \displaystyle\int_0^t\|\nabla v(s)\|_{B^{d/p}_{p,\infty}(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)}\,ds, & \text{if } \sigma < 1+\frac{d}{p},\\[2mm] \displaystyle\int_0^t\|\nabla v(s)\|_{B^\sigma_{p,r}(\mathbb{R}^d)}\,ds, & \text{if } \sigma = 1+\frac{d}{p} \text{ and } r > 1,\\[2mm] \displaystyle\int_0^t\|\nabla v(s)\|_{B^{\sigma-1}_{p,r}(\mathbb{R}^d)}\,ds, & \text{if } \sigma > 1+\frac{d}{p} \text{ or } \{\sigma = 1+\frac{d}{p} \text{ and } r = 1\}. \end{cases}$$
If $f = v$, then for all $\sigma > 0$ ($\sigma > -1$ if $\operatorname{div} v = 0$), the estimate (2.2) holds with $V_p(t) = \int_0^t\|\nabla v(s)\|_{L^\infty(\mathbb{R}^d)}\,ds$.
Non-uniform continuous dependence
In this section, we give the proof of our main theorem. Firstly, we recall the local well-posedness result.

Lemma 3.1 ([41]). For $1 \le p, r \le \infty$, $s > \max\{1+\frac{1}{p}, \frac{3}{2}\}$, and initial data $u_0 \in B^s_{p,r}(\mathbb{R})$, there exists a time $T = T(s,p,r,\|u_0\|_{B^s_{p,r}(\mathbb{R})}) > 0$ such that equation (1.2) has a unique solution $u \in C([0,T]; B^s_{p,r}(\mathbb{R}))$. Moreover, for all $t \in [0,T]$, there holds $\|u(t)\|_{B^s_{p,r}(\mathbb{R})} \le C\|u_0\|_{B^s_{p,r}(\mathbb{R})}$.

Next, we give two technical lemmas to estimate the error. Let $\phi$ be a smooth function whose Fourier transform $\hat\phi \in C_0^\infty(\mathbb{R})$ satisfies
$$\hat\phi(\xi) = \begin{cases} 1, & |\xi| \le \frac{1}{4},\\ 0, & |\xi| \ge \frac{1}{2}. \end{cases}$$
We choose the velocity $u^n_0$ of the following form:
$$u^n_0 = f_n, \qquad\text{with}\qquad f_n(x) = 2^{-ns}\phi(x)\sin\Big(\frac{17}{12}2^n x\Big), \quad n \in \mathbb{Z}.$$
Notice that
$$\mathrm{supp}\,\hat f_n \subset \Big\{\xi \in \mathbb{R} : \frac{17}{12}2^n - \frac{1}{2} \le |\xi| \le \frac{17}{12}2^n + \frac{1}{2}\Big\},$$
so we can deduce that $\Delta_j f_n = 0$ for $j \ne n$ and $\Delta_n f_n = f_n$. Moreover, there holds $\|f_n\|_{B^\sigma_{p,r}} \le C2^{(\sigma-s)n}$. Assume that $u^n$ is the solution of (1.2) with initial data $u^n_0$. Then, we have the following estimate between $u^n_0$ and $u^n$.

Lemma 3.2. Under the assumptions of Theorem 1.1, there holds
$$\|u^n - u^n_0\|_{L^\infty_T(B^s_{p,r})} \le C2^{-n\varepsilon_s}.$$

Proof. By the well-posedness result (see Lemma 3.1), the solutions $u^n$ belong to $C([0,T]; B^s_{p,r})$ and have a common lifespan $T \simeq 1$. In fact, it is easy to show that for $k \ge -1$,
$$\|u^n_0\|_{B^{s+k}_{p,r}} \le C2^{kn} \qquad\text{and}\qquad \|u^n\|_{L^\infty_T(B^{s+k}_{p,r})} \le C2^{kn}.$$
Since $s > \max\{1+\frac{1}{p}, \frac{3}{2}\}$, we may fix a small $\varepsilon_s \in (0,1)$ with $s - 1 - \varepsilon_s > \frac{1}{p}$, so that
$$\|f\|_{L^\infty} \le C\|f\|_{B^{s-1-\varepsilon_s}_{p,r}}. \tag{3.1}$$
Hence, we obtain for all $t \in [0,T]$
$$\begin{aligned} \|u^n - u^n_0\|_{B^s_{p,r}} &\le \int_0^t\|\partial_\tau u^n\|_{B^s_{p,r}}\,d\tau\\ &\le \int_0^t\Big\|(1-\partial_x^2)^{-1}\Big[\frac{1}{2}(u^n_x)^3 + \partial_x\Big(\frac{3}{2}u^n(u^n_x)^2 + (u^n)^3\Big)\Big]\Big\|_{B^s_{p,r}}\,d\tau + \int_0^t\|(u^n)^2u^n_x\|_{B^s_{p,r}}\,d\tau\\ &\le C\big(\|u^n\|^2_{B^s_{p,r}}\|u^n\|_{B^{s-1}_{p,r}} + \|u^n\|^3_{B^{s-1}_{p,r}} + \|u^n\|_{L^\infty}\|u^n\|_{B^s_{p,r}}\|u^n_x\|_{B^s_{p,r}}\big)\\ &\le C2^{-n} + C2^{-3n} + C2^{(-1-\varepsilon_s)n}2^n \le C2^{-n\varepsilon_s}. \end{aligned}$$
This completes the proof of this lemma.
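As a quick worked check, the bound $\|f_n\|_{B^\sigma_{p,r}} \le C2^{(\sigma-s)n}$ used above follows directly from Definition 2.1 and the frequency localization of $f_n$: since $\Delta_j f_n = 0$ for every $j \ne n$,
$$\|f_n\|_{B^\sigma_{p,r}} = 2^{n\sigma}\|\Delta_n f_n\|_{L^p} = 2^{n(\sigma-s)}\Big\|\phi(x)\sin\Big(\frac{17}{12}2^n x\Big)\Big\|_{L^p} \le 2^{(\sigma-s)n}\|\phi\|_{L^p} = C2^{(\sigma-s)n}.$$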
Next, we choose the velocity $v^n_0$ of the following form:
$$v^n_0 = f_n + g_n, \qquad\text{with}\qquad g_n(x) = 2^{-\frac{1}{2}n}\phi(x), \quad n \in \mathbb{Z}.$$
Assume that $v^n$ is the solution of (1.2) with initial data $v^n_0$. Then, we have the following estimate between $v^n_0$ and $v^n$.

Lemma 3.3. Under the assumptions of Theorem 1.1, there holds for all $t \in [0,T]$,
$$\|v^n - v^n_0 + t(v^n_0)^2\partial_x v^n_0\|_{B^s_{p,r}} \le Ct^2 + C2^{-n\varepsilon_s}.$$
Here, $C$ is a positive constant independent of $t$.
Proof. By the well-posedness result (see Lemma 3.1), the solutions $v^n$ belong to $C([0,T]; B^s_{p,r})$ and have a common lifespan $T \simeq 1$. For simplicity, we denote
$$w^n = v^n - v^n_0 - tV^n_0 \qquad\text{with}\qquad V^n_0 = -(v^n_0)^2\partial_x v^n_0.$$
It is easy to check that for $\sigma \ge -\frac{1}{2}$,
$$\|v^n_0\|_{B^{s+\sigma}_{p,r}} \le C2^{\sigma n}, \tag{3.2}$$
and then
$$\|V^n_0\|_{B^{s+\sigma}_{p,r}} \le C\|v^n_0\|^2_{L^\infty}\|\partial_x v^n_0\|_{B^{s+\sigma}_{p,r}} + C\|v^n_0\|_{L^\infty}\|\partial_x v^n_0\|_{L^\infty}\|v^n_0\|_{B^{s+\sigma}_{p,r}} \le C2^{-\frac{1}{2}n}2^{-\frac{1}{2}n}2^{(\sigma+1)n} + C2^{-\frac{1}{2}n}2^{-\frac{1}{2}n}2^{\sigma n} \le C2^{\sigma n}, \quad\text{for } \sigma \ge -\frac{1}{2}, \tag{3.3}$$
and
$$\|v^n\|_{L^\infty_T(B^{s+\sigma}_{p,r})} \le C2^{\sigma n}, \quad\text{for } \sigma \ge -\frac{1}{2}. \tag{3.4}$$
Note that
$$\|v^n_0, V^n_0, v^n\|_{B^{s-1}_{p,r}} \le C\|v^n_0, V^n_0, v^n\|_{B^{s-\frac{1}{2}}_{p,r}} \le C2^{-\frac{1}{2}n},$$
which will be used frequently in the sequel. By (1.2), we can deduce that
$$\partial_t w^n + (v^n)^2\partial_x w^n = -t(v^n)^2\partial_x V^n_0 - t(v^n + v^n_0)V^n_0\partial_x v^n_0 - w^n(v^n + v^n_0)\partial_x v^n_0 + R(v^n).$$
For the term $R_2(v^n)$, we have from Lemma 2.2 and (3.2)-(3.4) that
$$\|R_2(v^n)\|_{B^s_{p,r}} \le C\|(v^n)^3\|_{B^{s-1}_{p,r}} \le C\|v^n\|_{B^{s-1}_{p,r}}\|v^n\|^2_{L^\infty} \le C2^{-\frac{3}{2}n} \le C2^{-(1+\varepsilon_s)n}. \tag{3.5}$$
For the term $R_3(v^n)$, we obtain from Lemma 2.2 and (3.1)-(3.4) that
$$\|R_3(v^n)\|_{B^{s-1}_{p,r}} \le \|v^n(\partial_x v^n)^2\|_{B^{s-2}_{p,r}} \le C\|v^n\|_{B^{s-1}_{p,r}}\|(\partial_x v^n)^2\|_{B^{s-2}_{p,r}} \le C\|v^n\|_{B^{s-1}_{p,r}}\|(\partial_x v^n)^2\|_{B^{s-\frac{3}{2}}_{p,r}} \le C\|v^n\|_{B^{s-1}_{p,r}}\|\partial_x v^n\|_{B^{s-\frac{3}{2}}_{p,r}}\|\partial_x v^n\|_{L^\infty} \le C2^{-(1+\varepsilon_s)n}, \tag{3.6}$$
and
$$\|R_3(v^n)\|_{B^s_{p,r}} \le \|v^n(\partial_x v^n)^2\|_{B^{s-1}_{p,r}} \le C\|v^n\|_{B^s_{p,r}}\|v^n\|_{L^\infty}\|\partial_x v^n\|_{L^\infty} + C\|v^n\|_{B^{s-1}_{p,r}}\|\partial_x v^n\|^2_{L^\infty} \le C2^{-(\frac{1}{2}+\varepsilon_s)n} \le C2^{-n\varepsilon_s}. \tag{3.7}$$
For the term $R_1(v^n)$, we rewrite it as
$$R_1(v^n) = \underbrace{-(1-\partial_x^2)^{-1}\big[\partial_x v^n(\partial_x v^n + \partial_x v^n_0)\partial_x w^n\big]}_{R_{1,1}} \;\underbrace{-\,(1-\partial_x^2)^{-1}\big[t(\partial_x v^n)(\partial_x v^n + \partial_x v^n_0)\partial_x V^n_0\big]}_{R_{1,2}} \;\underbrace{-\,(1-\partial_x^2)^{-1}\big[\partial_x v^n(\partial_x v^n_0)^2\big]}_{R_{1,3}},$$
then we have from Lemma 2.2 that
$$\|R_{1,1}\|_{B^s_{p,r}} \le C\|\partial_x v^n(\partial_x v^n + \partial_x v^n_0)\partial_x w^n\|_{B^{s-2}_{p,r}} \le C\|\partial_x w^n\|_{B^{s-2}_{p,r}}\|\partial_x v^n(\partial_x v^n + \partial_x v^n_0)\|_{B^{s-1}_{p,r}} \le C\|w^n\|_{B^{s-1}_{p,r}}, \tag{3.8}$$
$$\|R_{1,2}\|_{B^s_{p,r}} \le Ct\|\partial_x v^n(\partial_x v^n + \partial_x v^n_0)\partial_x V^n_0\|_{B^{s-2}_{p,r}} \le Ct\|\partial_x V^n_0\|_{B^{s-2}_{p,r}}\|\partial_x v^n(\partial_x v^n + \partial_x v^n_0)\|_{B^{s-1}_{p,r}} \le Ct2^{-\frac{1}{2}n}. \tag{3.9}$$
Using the following inequality
$$\begin{aligned} \|(\partial_x v^n_0)^3\|_{B^{s-2}_{p,r}} &\le C\|(\partial_x g_n)^2\partial_x f_n\|_{B^{s-2}_{p,r}} + C\|(\partial_x f_n)^2\partial_x g_n\|_{B^{s-1}_{p,r}} + C\|(\partial_x f_n)^3\|_{B^{s-1}_{p,r}} + C\|(\partial_x g_n)^3\|_{B^{s-1}_{p,r}}\\ &\le C2^{n(s-2)}\|\partial_x g_n\|^2_{L^\infty}\|\partial_x f_n\|_{L^p} + C\|g_n\|_{B^s_{p,r}}\|f_n\|_{B^s_{p,r}}\|\partial_x f_n\|_{L^\infty} + C\|g_n\|^3_{B^s_{p,r}} + C\|f_n\|_{B^s_{p,r}}\|\partial_x f_n\|^2_{L^\infty}\\ &\le C2^{-2n} + C2^{-\frac{3}{2}n} + C2^{-2n(s-1)} + C2^{-n(s-1)-\frac{1}{2}n} \le C2^{-n(1+\varepsilon_s)}, \end{aligned}$$
we have
$$\|R_{1,3}\|_{B^s_{p,r}} \le C\|\partial_x v^n(\partial_x v^n_0)^2\|_{B^{s-2}_{p,r}} \le C\|\partial_x w^n(\partial_x v^n_0)^2\|_{B^{s-2}_{p,r}} + Ct\|\partial_x V^n_0(\partial_x v^n_0)^2\|_{B^{s-2}_{p,r}} + C\|\partial_x v^n_0(\partial_x v^n_0)^2\|_{B^{s-2}_{p,r}} \le Ct2^{-\frac{1}{2}n} + C\|w^n\|_{B^{s-1}_{p,r}} + C2^{-n(1+\varepsilon_s)}. \tag{3.10}$$
Combining (3.5)-(3.10), we obtain
$$\|R(v^n)\|_{B^{s-1}_{p,r}} \le Ct2^{-\frac{1}{2}n} + C\|w^n\|_{B^{s-1}_{p,r}} + C2^{-n(1+\varepsilon_s)}, \tag{3.11}$$
and
$$\|R(v^n)\|_{B^s_{p,r}} \le Ct2^{-\frac{1}{2}n} + C\|w^n\|_{B^{s-1}_{p,r}} + C2^{-n\varepsilon_s}. \tag{3.12}$$
By Lemma 2.2, we have
$$\|(v^n)^2\partial_x V^n_0\|_{B^{s-1}_{p,r}} \le C\|v^n\|^2_{B^{s-1}_{p,r}}\|V^n_0\|_{B^s_{p,r}} \le C2^{-n}, \tag{3.13}$$
$$\|(v^n)^2\partial_x V^n_0\|_{B^s_{p,r}} \le C\|v^n\|^2_{L^\infty}\|V^n_0\|_{B^{s+1}_{p,r}} + C\|v^n\|^2_{B^s_{p,r}}\|\partial_x V^n_0\|_{L^\infty} \le C, \tag{3.14}$$
$$\|(v^n + v^n_0)V^n_0\partial_x v^n_0\|_{B^{s-1}_{p,r}} \le C2^{-n}, \tag{3.15}$$
$$\|(v^n + v^n_0)V^n_0\partial_x v^n_0\|_{B^s_{p,r}} \le C. \tag{3.16}$$
Applying Lemma 2.2 again yields
$$\|w^n(v^n + v^n_0)\partial_x v^n_0\|_{B^{s-1}_{p,r}} \le C\|w^n\|_{B^{s-1}_{p,r}}, \tag{3.17}$$
and
$$\|w^n(v^n + v^n_0)\partial_x v^n_0\|_{B^s_{p,r}} \le C\|w^n\|_{B^{s-1}_{p,r}}\|v^n_0\|_{B^{s+1}_{p,r}}\|v^n + v^n_0\|_{B^{s-1}_{p,r}} + C\|w^n\|_{B^s_{p,r}} \le C2^{\frac{1}{2}n}\|w^n\|_{B^{s-1}_{p,r}} + C\|w^n\|_{B^s_{p,r}}. \tag{3.18}$$
According to Lemma 2.3 and combining (3.11), (3.13), (3.15), (3.17), we obtain
$$\|w^n\|_{B^{s-1}_{p,r}} \le C\int_0^t\|w^n\|_{B^{s-1}_{p,r}}\,d\tau + Ct^22^{-\frac{1}{2}n} + C2^{-n(1+\varepsilon_s)},$$
which implies
$$\|w^n\|_{B^{s-1}_{p,r}} \le Ct^22^{-\frac{1}{2}n} + C2^{-n(1+\varepsilon_s)}. \tag{3.19}$$
Using Lemma 2.3 again and combining (3.12), (3.14), (3.16), (3.18), (3.19), we have
$$\|w^n\|_{B^s_{p,r}} \le C\int_0^t\|w^n\|_{B^s_{p,r}}\,d\tau + \int_0^t 2^{\frac{1}{2}n}\|w^n\|_{B^{s-1}_{p,r}}\,d\tau + Ct^2 + C2^{-n\varepsilon_s} \le C\int_0^t\|w^n\|_{B^s_{p,r}}\,d\tau + Ct^2 + C2^{-n\varepsilon_s}, \tag{3.20}$$
which implies
$$\|w^n\|_{B^s_{p,r}} \le Ct^2 + C2^{-n\varepsilon_s}.$$
This completes the proof of this lemma.

Proof of the main theorem. Now, we prove Theorem 1.1. It is easy to show that
$$\|u^n_0 - v^n_0\|_{B^s_{p,r}} \le \|g_n\|_{B^s_{p,r}} \le C2^{-\frac{1}{2}n}, \tag{3.21}$$
which tends to 0 as $n$ tends to infinity. According to Lemmas 3.2-3.3, we have
$$\|u^n - v^n\|_{B^s_{p,r}} \ge c\|t(v^n_0)^2\partial_x v^n_0\|_{B^s_{p,r}} - Ct^2 - C2^{-n\varepsilon_s}. \tag{3.22}$$
Notice that
$$(v^n_0)^2\partial_x v^n_0 = (v^n_0)^2\partial_x g_n + (f_n)^2\partial_x f_n + 2g_nf_n\partial_x f_n + (g_n)^2\partial_x f_n.$$
By Lemma 2.2, we have
$$\|(v^n_0)^2\partial_x g_n\|_{B^s_{p,r}} \le C\|v^n_0\|_{L^\infty}\|v^n_0\|_{B^s_{p,r}}\|g_n\|_{B^{s+1}_{p,r}} \le C2^{-n\varepsilon_s},$$
$$\|(f_n)^2\partial_x f_n\|_{B^s_{p,r}} \le C\|f_n\|^2_{L^\infty}\|f_n\|_{B^{s+1}_{p,r}} + C\|f_n\|_{L^\infty}\|\partial_x f_n\|_{L^\infty}\|f_n\|_{B^s_{p,r}} \le C2^{-n\varepsilon_s},$$
and
$$\|g_nf_n\partial_x f_n\|_{B^s_{p,r}} \le C\|f_n\|_{L^\infty}\|g_n\|_{L^\infty}\|f_n\|_{B^{s+1}_{p,r}} + C\|f_n\|_{B^s_{p,r}}\|\partial_x f_n\|_{L^\infty}\|g_n\|_{B^s_{p,r}} \le C2^{-n\varepsilon_s}.$$
This along with (3.22) implies
$$\|u^n - v^n\|_{B^s_{p,r}} \ge ct\|(g_n)^2\partial_x f_n\|_{B^s_{p,r}} - Ct^2 - C2^{-n\varepsilon_s}. \tag{3.23}$$
Using the facts $\Delta_j\big((g_n)^2\partial_x f_n\big) = 0$ for $j \ne n$ and $\Delta_n\big((g_n)^2\partial_x f_n\big) = (g_n)^2\partial_x f_n$, a direct calculation shows that
$$\|(g_n)^2\partial_x f_n\|_{B^s_{p,r}} = 2^{ns}\|(g_n)^2\partial_x f_n\|_{L^p} \ge c\|\phi^3(x)\|_{L^p} > 0 \quad\text{for } n \text{ large enough}, \tag{3.24}$$
by Riemann-Lebesgue's Lemma. Hence, combining (3.21), (3.23), (3.24) and choosing the positive time $T_0$ small enough, we can obtain our result.
References

[1] H. Bahouri, J. Y. Chemin and R. Danchin, Fourier Analysis and Nonlinear Partial Differential Equations, Grundlehren der Mathematischen Wissenschaften, vol. 343, Springer-Verlag, Berlin, Heidelberg, 2011.
[2] R. Camassa and D. D. Holm, An integrable shallow water equation with peaked solitons, Phys. Rev. Lett., 71 (1993), 1661-1664.
[3] G. M. Coclite and K. H. Karlsen, On the well-posedness of the Degasperis-Procesi equation, J. Funct. Anal., 233 (2006), 60-91.
[4] A. Constantin, The Hamiltonian structure of the Camassa-Holm equation, Expositiones Mathematicae, 15 (1997), no. 1, 53-85.
[5] A. Constantin, Existence of permanent and breaking waves for a shallow water equation: a geometric approach, Ann. Inst. Fourier (Grenoble), 50 (2000), 321-362.
[6] A. Constantin, On the scattering problem for the Camassa-Holm equation, Proceedings of the Royal Society of London, Series A, 457 (2001), 953-970.
[7] A. Constantin, The trajectories of particles in Stokes waves, Invent. Math., 166 (2006), no. 3, 523-535.
[8] A. Constantin and J. Escher, Wave breaking for nonlinear nonlocal shallow water equations, Acta Math., 181 (1998), 229-243.
[9] A. Constantin and J. Escher, Global existence and blow-up for a shallow water equation, Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4), 26 (1998), 303-328.
[10] A. Constantin and J. Escher, Well-posedness, global existence and blowup phenomena for a periodic quasi-linear hyperbolic equation, Comm. Pure Appl. Math., 51 (1998), 475-504.
[11] A. Constantin and J. Escher, Analyticity of periodic traveling free surface water waves with vorticity, Ann. of Math. (2), 173 (2011), no. 1, 559-568.
[12] A. Constantin, R. Ivanov and J. Lenells, Inverse scattering transform for the Degasperis-Procesi equation, Nonlinearity, 23 (2010), no. 10, 2559-2575.
[13] A. Constantin and D. Lannes, The hydrodynamical relevance of the Camassa-Holm and Degasperis-Procesi equations, Arch. Ration. Mech. Anal., 192 (2009), no. 1, 165-186.
[14] A. Constantin and H. P. McKean, A shallow water equation on the circle, Comm. Pure Appl. Math., 52 (1999), no. 8, 949-982.
[15] A. Constantin and W. A. Strauss, Stability of peakons, Comm. Pure Appl. Math., 53 (2000), 603-610.
[16] R. Danchin, A few remarks on the Camassa-Holm equation, Differential and Integral Equations, 14 (2001), 953-988.
[17] A. Degasperis, D. D. Holm and A. N. W. Hone, A new integral equation with peakon solutions, Theor. Math. Phys., 133 (2002), 1463-1474.
[18] A. Degasperis and M. Procesi, Asymptotic integrability, in Symmetry and Perturbation Theory (Rome, 1998), World Sci. Publ., River Edge, NJ, 1999, pp. 23-37.
[19] H. R. Dullin, G. A. Gottwald and D. D. Holm, On asymptotically equivalent shallow water wave equations, Phys. D, 190 (2004), 1-14.
[20] J. Escher, Y. Liu and Z. Yin, Global weak solutions and blow-up structure for the Degasperis-Procesi equation, J. Funct. Anal., 241 (2006), 457-485.
[21] J. Escher, Y. Liu and Z. Yin, Shock waves and blow-up phenomena for the periodic Degasperis-Procesi equation, Indiana Univ. Math. J., 56 (2007), 87-177.
[22] A. Fokas and B. Fuchssteiner, Symplectic structures, their Bäcklund transformation and hereditary symmetries, Physica D, 4 (1981/82), no. 1, 47-66.
[23] G. Gui and Y. Liu, On the Cauchy problem for the Degasperis-Procesi equation, Quart. Appl. Math., 69 (2011), 445-464.
[24] A. A. Himonas and C. Holliman, The Cauchy problem for the Novikov equation, Nonlinearity, 25 (2012), 449-479.
[25] A. Himonas and C. Holliman, On well-posedness of the Degasperis-Procesi equation, Discrete Contin. Dyn. Syst. A, 31 (2011), 469-484.
[26] A. Himonas and G. Misiołek, Non-uniform dependence on initial data of solutions to the Euler equations of hydrodynamics, Comm. Math. Phys., 296 (2010), 285-301.
[27] A. N. W. Hone and J. Wang, Integrable peakon equations with cubic nonlinearity, Journal of Physics A: Mathematical and Theoretical, 41 (2008), 372002, 10 pp.
[28] C. Kenig, G. Ponce and L. Vega, On the ill-posedness of some canonical dispersive equations, Duke Math. J., 106 (2001), 617-633.
[29] S. Lai, Global weak solutions to the Novikov equation, J. Funct. Anal., 265 (2013), 520-544.
[30] J. Lenells, Traveling wave solutions of the Degasperis-Procesi equation, J. Math. Anal. Appl., 306 (2005), 72-82.
[31] J. Li and Z. Yin, Remarks on the well-posedness of Camassa-Holm type equations in Besov spaces, J. Differential Equations, 261 (2016), 6125-6143.
[32] J. Li and Z. Yin, Well-posedness and analytic solutions of the two-component Euler-Poincaré system, Monatsh. Math., 183 (2017), 509-537.
[33] Y. Liu and Z. Yin, Global existence and blow-up phenomena for the Degasperis-Procesi equation, Commun. Math. Phys., 267 (2006), 801-820.
[34] H. Lundmark, Formation and dynamics of shock waves in the Degasperis-Procesi equation, J. Nonlinear Sci., 17 (2007), 169-198.
[35] V. Novikov, Generalization of the Camassa-Holm equation, J. Phys. A, 42 (2009), 342002, 14 pp.
[36] G. Rodriguez-Blanco, On the Cauchy problem for the Camassa-Holm equation, Nonlinear Anal., 46 (2001), 309-327.
[37] J. F. Toland, Stokes waves, Topol. Methods Nonlinear Anal., 7 (1996), no. 1, 1-48.
[38] V. O. Vakhnenko and E. J. Parkes, Periodic and solitary-wave solutions of the Degasperis-Procesi equation, Chaos Solitons Fractals, 20 (2004), 1059-1073.
[39] X. Wu and Z. Yin, Global weak solutions for the Novikov equation, Journal of Physics A: Mathematical and Theoretical, 44 (2011), 055202, 17 pp.
[40] X. Wu and Z. Yin, Well-posedness and global existence for the Novikov equation, Annali della Scuola Normale Superiore di Pisa, Classe di Scienze, Serie V, 11 (2012), 707-727.
[41] X. Wu and Z. Yin, A note on the Cauchy problem of the Novikov equation, Applicable Analysis, 92 (2013), 1116-1137.
[42] Z. Xin and P. Zhang, On the weak solutions to a shallow water equation, Comm. Pure Appl. Math., 53 (2000), 1411-1433.
[43] W. Yan, Y. Li and Y. Zhang, The Cauchy problem for the integrable Novikov equation, J. Differential Equations, 253 (2012), 298-318.
[44] W. Yan, Y. Li and Y. Zhang, The Cauchy problem for the Novikov equation, Nonlinear Differential Equations and Applications NoDEA, 20 (2013), 1157-1169.
[45] Z. Yin, Global existence for a new periodic integrable equation, J. Math. Anal. Appl., 49 (2003), 129-139.
[46] Z. Yin, Global weak solutions to a new periodic integrable equation with peakon solutions, J. Funct. Anal., 212 (2004), 182-194.
[47] Z. Yin, Global solutions to a new integrable equation with peakons, Indiana Univ. Math. J., 53 (2004), 1189-1210.
| []
|
[
"Probabilistic Forecasting of Patient Waiting Times in an Emergency Department Probabilistic Forecasting of Patient Waiting Times in an Emergency Department",
"Probabilistic Forecasting of Patient Waiting Times in an Emergency Department Probabilistic Forecasting of Patient Waiting Times in an Emergency Department"
]
| [
"Siddharth Arora [email protected] \nSaїd Business School\nUniversity of Oxford\nPark End StreetOX1 1HPOxfordU.K. † Siddharth Arora\n",
"James W Taylor [email protected] \nSaїd Business School\nUniversity of Oxford\nPark End StreetOX1 1HPOxfordU.K. † Siddharth Arora\n",
"Ho-Yin Mak [email protected] \nSaїd Business School\nUniversity of Oxford\nPark End StreetOX1 1HPOxfordU.K. † Siddharth Arora\n",
"James W Taylor \nSaїd Business School\nUniversity of Oxford\nPark End StreetOX1 1HPOxfordU.K. † Siddharth Arora\n",
"§ Ho \nSaїd Business School\nUniversity of Oxford\nPark End StreetOX1 1HPOxfordU.K. † Siddharth Arora\n",
"Yin Mak \nSaїd Business School\nUniversity of Oxford\nPark End StreetOX1 1HPOxfordU.K. † Siddharth Arora\n"
]
| [
"Saїd Business School\nUniversity of Oxford\nPark End StreetOX1 1HPOxfordU.K. † Siddharth Arora",
"Saїd Business School\nUniversity of Oxford\nPark End StreetOX1 1HPOxfordU.K. † Siddharth Arora",
"Saїd Business School\nUniversity of Oxford\nPark End StreetOX1 1HPOxfordU.K. † Siddharth Arora",
"Saїd Business School\nUniversity of Oxford\nPark End StreetOX1 1HPOxfordU.K. † Siddharth Arora",
"Saїd Business School\nUniversity of Oxford\nPark End StreetOX1 1HPOxfordU.K. † Siddharth Arora",
"Saїd Business School\nUniversity of Oxford\nPark End StreetOX1 1HPOxfordU.K. † Siddharth Arora"
]
| []
| Problem definition:We study the estimation of the probability distribution of individual patient waiting times in an emergency department (ED). Our feature-rich modelling allows for dynamic updating and refinement of waiting time estimates as patient-and ED-specific information (e.g., patient condition, ED congestion levels) is revealed during the waiting process. Aspects relating to communicating forecast uncertainty to patients, and implementing this methodology in practice, are also discussed.Academic/Practical Relevance: While it is known that waiting time estimates can help improve patients' overall satisfaction and prevent abandonment, existing methods focus on point forecasts. By providing personalized probabilistic forecasts, our approach gives patients and first responders a more comprehensive picture of the possible waiting trajectory, and provides more reliable inputs to inform prescriptive modelling of ED operations. For example, we demonstrate that publishing probabilistic waiting time estimates can inform patients in selecting an ED from a network of EDs, which can lead to more uniform spread of patient load across the network.Methodology:We use the machine learning approach of quantile regression forest (QRF) to produce the probabilistic forecasts. Using a large patient-level dataset we extract the following categories of predictor variables: (1) calendar effects, (2) demographics, (3) staff levels, (4) ED workload due to patient volumes, and (5) the severity of patient condition. Rankings of predictor importance are derived from regression trees.Results:The proposed approach generates more accurate probabilistic and point forecasts, when compared with methods proposed in the literature for modelling waiting times, and rolling average benchmarks typically used in practice. Patient workflow and calendar effects were identified as key predictors of ED waiting times.Managerial Implications: For emergency healthcare service providers, probabilistic waiting time estimates could assist in ambulance diversion, staff allocation, and managing patient-flow, which could facilitate efficient operations and cost-savings while aiding better patient care and outcomes. | 10.1287/msom.2023.1210 | null | 219,176,876 | 2006.00335 | db356789a985dcc591584299f9d3fd38528493d2 |
Probabilistic Forecasting of Patient Waiting Times in an Emergency Department

Siddharth Arora [email protected]
James W Taylor [email protected]
Ho-Yin Mak [email protected]

Saïd Business School, University of Oxford, Park End Street, OX1 1HP, Oxford, U.K.

Keywords: low-acuity; machine learning; quantile regression forest; managing patient-flow
Problem definition: We study the estimation of the probability distribution of individual patient waiting times in an emergency department (ED). Our feature-rich modelling allows for dynamic updating and refinement of waiting time estimates as patient- and ED-specific information (e.g., patient condition, ED congestion levels) is revealed during the waiting process. Aspects relating to communicating forecast uncertainty to patients, and implementing this methodology in practice, are also discussed. Academic/Practical Relevance: While it is known that waiting time estimates can help improve patients' overall satisfaction and prevent abandonment, existing methods focus on point forecasts. By providing personalized probabilistic forecasts, our approach gives patients and first responders a more comprehensive picture of the possible waiting trajectory, and provides more reliable inputs to inform prescriptive modelling of ED operations. For example, we demonstrate that publishing probabilistic waiting time estimates can inform patients in selecting an ED from a network of EDs, which can lead to a more uniform spread of patient load across the network. Methodology: We use the machine learning approach of quantile regression forest (QRF) to produce the probabilistic forecasts. Using a large patient-level dataset, we extract the following categories of predictor variables: (1) calendar effects, (2) demographics, (3) staff levels, (4) ED workload due to patient volumes, and (5) the severity of patient condition. Rankings of predictor importance are derived from regression trees. Results: The proposed approach generates more accurate probabilistic and point forecasts, when compared with methods proposed in the literature for modelling waiting times, and rolling average benchmarks typically used in practice. Patient workflow and calendar effects were identified as key predictors of ED waiting times. Managerial Implications: For emergency healthcare service providers, probabilistic waiting time estimates could assist in ambulance diversion, staff allocation, and managing patient-flow, which could facilitate efficient operations and cost-savings while aiding better patient care and outcomes.
Introduction
Emergency departments (EDs) are coming under increasing pressure to provide safe and quality care to patients in a timely manner. It was estimated that between 2006 and 2016, there were 1.4 billion ED visits in the US alone, whereby the number of visits increased by around 2.3 million per year (Singer et al. 2019). Hospital staffing and infrastructure have not grown at the same rate, which has resulted in longer waiting times. In England, the National Health Service (NHS) has a pledge set out in the handbook of its constitution stating that 95% of patients attending the ED should be treated, admitted or discharged within four hours (NHS 2019). However, for the first time in 2019, all major ED units in England missed their waiting time targets, while in the US, from 2003 to 2009, the mean waiting time increased by 25% (Hing and Bhuiya 2012). This is a matter of growing concern, as long ED waiting times are associated with increased morbidity and mortality, and are one of the leading causes of patient dissatisfaction (Bernstein et al. 2009). Waiting times and perceived queue length can influence patients to drop out from the ED (Batt and Terwiesch 2015). Patients that drop out from the ED are linked with having a higher likelihood of re-presentation and poor outcomes (Carter et al. 2014). Providing delay estimates can help lower the perceived waiting time (Jouini et al. 2011). Growing the capacity in EDs to eradicate congestion and minimize waiting times requires long-term planning and funding; meanwhile, providing patients with an estimate of their waiting times can be an inexpensive and immediate way forward to managing patient expectations and reducing abandonment rates.
For emergency healthcare service providers, long waiting times can have significant economic implications. In private healthcare systems (e.g., in the US), shorter and more transparent waiting times could lead to higher revenues for hospitals, since about 10% of total US healthcare cost is spent on emergency care (Galarraga and Pines 2016). In many OECD countries with public healthcare systems, service providers incur performance fines, including financial penalties, if they exceed the waiting time targets (OECD Executive Summary 2013). It is estimated that prolonging waiting time in the ED by just 10 minutes increases the cost of care by an average of 6% for a high-acuity patient, and 3% for a low-acuity patient (Woodworth and Holmes 2020). To deal with long waiting times (or congestion), EDs sometimes rely on external agencies to provide temporary workforce. However, temporary staff cost 20% more on average than the permanent staff (Buchan et al. 2019). Waiting time estimates could potentially assist hospitals in making more informed staffing decisions, thereby reducing the dependency on costly surge capacities. Given the impact of long waiting times on patient outcomes and its economic implications for service providers; streamlining patient-flow, minimizing waiting times, and optimizing resource allocation, while providing quality care, is at the heart of reforming emergency healthcare services.
Modelling of ED operations has been an active area of research in operations management. A significant stream of literature takes descriptive and prescriptive views on patient flows in the ED by use of queueing models (Armony 2015;Batt and Terwiesch 2015;Bayati 2017;Xu and Chan 2016), as well as discrete-event simulations (Baril et al. 2019). For a detailed review of literature in this area, see, for example (Hu et al. 2018;Keskinocak and Savva 2020;Misic and Perakis 2020;Singh and Terwiesch 2012), and references therein.
The literature on predicting patient waiting times in the ED is relatively scarce. Accurate predictive models for waiting times can provide valuable information to patients, assist service providers in planning and operations, as well as support prescriptive studies of ED operations (e.g., for calibrating queueing and simulation models). Ang et al. (2016) generate point forecasts for ED waiting times (from registration to start-of-treatment) based on the least absolute shrinkage and selection operator (Lasso), using predictor variables inspired by fluid model estimators. They report a reduction of over 30% in mean squared error, compared to a rolling average model, which is the standard method adopted in practice in US hospitals (Dong et al. 2019). Ding et al. (2010) generate estimates of treatment, boarding, and waiting room time using quantile regression, focussing on the 10, 50, and 90% quantiles. Using data available at triage, Sun et al. (2012) use quantile regression to predict the median and 95% quantile of the waiting time (from triage to consultation). We are, however, not aware of any existing study that models and evaluates the whole probability distribution of ED waiting times.
Our proposed approach differs from the aforementioned studies in that it is: (1) probabilistic, and (2) personalized. Waiting time is inherently uncertain and the distribution is asymmetric (see Section 2.2), and so a point forecast can be uninformative and even potentially misleading, which could risk greater dissatisfaction among patients. Given that patients can feel increasingly dissatisfied if they end up waiting longer than the published estimate, it is imperative that the uncertainty associated with forecasts is adequately conveyed to the patients, so that patient expectations can be better managed. Moreover, a significant majority of people prefer forecasts that express uncertainty, while in cases when only a deterministic forecast is provided, people tend to make their inferences about the forecast uncertainty (Morss et al. 2008). This underscores the importance of communicating the forecast uncertainty to the patients. Thus, in this study, we generate and evaluate probabilistic forecasts, rather than focussing on just a point estimate or predefined quantiles. Patient waiting times depend on a multitude of complex features (or predictor variables), some of which are also time-varying (such as, staff levels, queue length) and/or patient-specific. Incorporating such individual and time-varying features enables our method to produce personalized forecasts that are updated over time, as new information regarding the ED's utilization and the patient's conditions are observed. The potential nonlinear relationship between high-dimensional ED features and waiting times motivates our decision to adopt a machine learning approach. Specifically, we employ a quantile regression forest (QRF) to estimate conditional probability distributions of the waiting times in a nonlinear and nonparametric framework (Meinshausen 2006).
Moreover, to investigate key predictors of ED waiting times, we use a random forest (RF) to derive rankings of feature importance. This information could provide useful insights into patient flow and potential bottlenecks in the ED.
Using a large patient-level dataset, we extract five different categories of features that quantify: (1) calendar effects (time of arrival), (2) demographics (age, sex), (3) staff levels, (4) ED workload due to patient volumes (attendances), and (5) the severity of patient condition. We provide a comparison of the QRF-based approach with Q-Lasso (Ang et al. 2016), quantile regression (Ding et al. 2010; Sun et al. 2012), k-nearest neighbour, and rolling average benchmarks that are typically used in practice. Model evaluation is based on a comparison of distributional, quantile, and point forecast accuracy.
We further discuss three aspects related to the practical implementation of the proposed methodology.
Firstly, we propose and evaluate a colour-coded (categorical) scheme to effectively communicate forecast uncertainty to the patients. Secondly, we show that our modelling scheme could be used to provide updates of waiting time estimates, by incorporating additional information regarding triage, which becomes available at the time of initial assessment. Finally, we demonstrate that personalized probability distribution estimates of waiting times, when used in conjunction with travel time estimates, could help patients select an ED site from a geographic network of EDs, and could result in a more uniform spread of workload among EDs. Patients could access waiting and travel time estimates on, say, a smartphone application. The findings of this study could have direct implications for both patients and emergency service providers.
The paper is arranged as follows. Section 2 describes patient-flow and presents our ED dataset.
Section 3 presents features and the quantile regression forest. Empirical results based on a comparison of forecast accuracy are provided in Section 4. Illustrative examples demonstrating the practical applications of the modelling framework are presented in Section 5. Section 6 summarizes the paper and discusses future work.
ED Patient-flow and Data
Understanding patient-flow in the ED
This study employs data from a major public hospital in the UK. This hospital treats both minor and major injuries, and is operational 24 hours a day. We refer to it as Hospital 1. The policies at Hospital 1 are determined by the NHS, and hence the operations at this hospital are similar to those of other hospitals in the UK. Figure 1 presents a basic schematic diagram of ED patient flow at Hospital 1. Patients arrive at the ED either via ambulance or any other mode of transport. Upon arrival, patients register at the reception desk, where they provide the following information: name and address, date of birth, reason for visit, and the name of the general practitioner (doctor) with whom the patient is registered (we denote the time of registration by $t_{reg}$). After registration, patients are requested to take a seat in the ED waiting room until they are called for an initial assessment. At the time of initial assessment (denoted by $t_{assess}$), patients are categorized by a nurse using: (1) a patient group number (a code denoting the reason for the presenting complaint), (2) a human resource group (a code denoting the use of resources), and (3) a triage category (to prioritize patients depending upon their severity; the triage categories are 'minor injury', 'major injury', 'urgent care', and 'resuscitation'). Patients who are triaged as 'minor injury' might need to wait longer in the queue than other patients who arrive later but have a more serious health condition. Patients with critical medical needs are triaged as 'urgent care' or 'resuscitation'; these patients are seen with priority by a doctor upon arrival at the ED. Thus, in this study, we only generate waiting time estimates for patients that are triaged as either 'minor injury' or 'major injury' (we refer to these as low-acuity patients). Following the initial assessment, a patient returns to the waiting area until they are called by a nurse to start treatment. At the time of treatment (denoted by $t_{treat}$), the patient is seen by a doctor. Depending upon the outcome of the treatment, the patient departs from the ED (patients either leave the hospital, or are admitted to the ICU or some other ward within the hospital).

Studies have suggested that patients are more sensitive to their start-of-treatment time than to their time-of-departure (Boudreaux et al. 2000). Once with the physician, the patient is typically far more tolerant of the passage of time (Anderson et al. 2007). This motivates us to focus on the time spent waiting until the start of treatment. In addition to receiving, when they arrive, an estimate of their waiting time to treatment, it is helpful for patients to receive updates to this estimate during their wait. We thus update the waiting time estimate for each patient. Specifically, for each low-acuity patient, we model the following two waiting time metrics: (1) $W_1$: the time elapsed from registration to start-of-treatment ($W_1 = t_{treat} - t_{reg}$); this prediction is generated at the time of registration. (2) $W_2$: the time elapsed from initial assessment to start-of-treatment ($W_2 = t_{treat} - t_{assess}$); this prediction is generated at the time of initial assessment. Note that $W_1$ provides an initial estimate of the waiting time to treatment, and $W_2$ updates this estimate at the time of the initial assessment, whereby information regarding triage and changes in patient-flow are incorporated in the modelling.
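To make these two metrics concrete, consider a purely illustrative (hypothetical) patient who registers at 10:00 ($t_{reg}$), is assessed at 10:30 ($t_{assess}$), and starts treatment at 11:30 ($t_{treat}$). Then
$$W_1 = t_{treat} - t_{reg} = 90 \text{ minutes}, \qquad W_2 = t_{treat} - t_{assess} = 60 \text{ minutes},$$
so the estimate published at registration targets $W_1$, and the update issued at assessment targets $W_2$.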
Patient-level ED data
We employ five years of patient-level ED data from 1 January 2014 to 31 December 2018. The data from Hospital 1 is feature-rich. Recording multiple data fields for each patient while working in a high-pressure environment such as an ED, however, also increases the likelihood of having incomplete entries. It is rather unsurprising that EDs are particularly vulnerable to data quality issues (Ward et al. 2015). In practice, domain experts may employ several criteria to deal with data incompleteness. In this study, to avoid any potential bias resulting from data imputation, we adopt stringent criteria for preprocessing, discarding entries with incomplete or missing data fields (null entries), faulty timestamps (negative waiting times), or highly unlikely values (waiting times ≥ 14 hours, age ≥ 110 years). During the modelling, we first generate a probability distribution estimate of the waiting time from registration to the start-of-treatment ($W_1$) for each of the 334,635 low-acuity patients. We then update this prediction by estimating the waiting time from initial assessment to start-of-treatment ($W_2$) for the 281,910 patients that underwent an initial assessment. Note that, depending upon the severity of the condition and the utilization of the ED, some patients (52,725 patients in our dataset) start their treatment without first undergoing an initial assessment. The average start-of-treatment waiting time from the point of registration ($W_1$) and from initial assessment ($W_2$) was 86.9 minutes (standard deviation 64.9 minutes) and 72.5 minutes (standard deviation 60 minutes), respectively. We use the first four years (2014-2017) as the in-sample period to estimate model parameters, while the final year (2018) was employed as the out-of-sample period to evaluate forecast accuracy. Figure 2 presents summary plots of the attendances, staff counts, and waiting times ($W_1$ and $W_2$). Attendances at EDs exhibit strong diurnal periodicity, whereby the demand is usually low during the night and peaks around midmorning and early evening (Figure 2a). A similar diurnal pattern is observed for the median staff count, which is expected, as hospitals allocate more staff to the ED during busier periods of the day (Figure 2b). Figure 2c presents start-of-treatment waiting times ($W_1$) for a two-week period; it is evident that waiting times are highly variable, which underscores the need to generate probabilistic forecasts. Figure 2d presents the autocorrelation plot for waiting times ($W_1$), computed at different lags across consecutive patients. This figure shows that waiting times for consecutive patients are autocorrelated, which motivates our decision to employ empirical benchmarks that are based on the rolling average methods used by hospitals in practice. The waiting time distributions are right-skewed, as some patients end up waiting for long hours in the ED (Figure 2e). The diurnal waiting times are typically lower during hours of the day when more staff are available (Figure 2f). Waiting times are overall higher during weekends and Mondays, while waiting times on Thursdays are relatively lower. Figure 2 is generated using only the in-sample data.
Modelling Waiting Times Using a Quantile Regression Forest
In this section, we present our proposed modelling approach. Firstly, we describe the process of feature engineering using the ED data in Section 3.1. The features are used as input variables during the modelling. The quantile regression forest method, which is an extension of random forests, is presented in Section 3.2. Rankings of the most salient features derived from this approach are provided in Section 3.3.
Feature Engineering
Feature engineering is an integral part of modelling, as the accuracy of any statistical method or machine learning approach is conditional upon the quality of input features (Guyon et al. 2008). In the context of this study, feature engineering involves deriving potential predictors of waiting times from the raw patient-level ED data. The patient-level records used in this study are very detailed, which allows us to extract a range of features for each patient. Table 1 presents a list of features extracted from the data along with a brief description. Each feature belongs to one of the following five categories:
(Category 1) Calendar effects: These features accommodate periodic variations in waiting times across different periods of the day (diurnal periodicity) and periods of the week (weekly periodicity). Waiting times are typically longer around 8pm-2am (Figure 2f), and during weekends and Mondays ( Figure A1). Moreover, EDs often experience anomalous levels of attendances during the holiday periods (Rostami-Tabar and Ziel 2020). To incorporate the effect of anomalous load on waiting times, we use indicator variables to identify holidays (such as Christmas, New Year's Day) and winter proximity days (days around Christmas day).
(Category 2) Demographics: These features account for potential differences in waiting times across different demographics (such as age, sex).
(Category 3) Staff levels: This feature accounts for the ED service capacity due to staffing. Waiting times are typically low during periods of the day when more staff are available (Figure 2b and 2f). Due to data protection issues, information regarding staff schedules was not directly provided by the hospital. Therefore, we used the unique identifiers/codes of the staff members responsible for discharging a patient to infer the hourly staffing levels.
(Category 4) ED operations: These features reflect the state of the ED's operations at any given time, allowing us to represent changes in the ED workload at different points of patient-flow. Relevant features include the numbers of patients in the ED and in different status (e.g., registered but not yet assessed). See Table 1 for the full list of features. For a given low-acuity patient, the waiting time depends on the number of high-acuity patients that are currently in the ED queue. Thus, although we generate waiting time estimates for only the low-acuity patients, we use data for patients across all triage categories for feature engineering to allow for a more complete and accurate representation of the ED workload during the modelling. Moreover, we include features that indicate the number of patients who breached the NHS four- and 12-hour waiting time targets (over the last 24 hours). We also use hourly averages of lagged waiting times as features.
(Category 5) Patient condition: These features accommodate the severity of the patient's condition (or presenting complaint) during the modelling. At initial assessment, the following three metrics are used to quantify the patient's condition: (1) Patient group number: indicates the reason for the ED episode (e.g., road traffic accident). (2) Human resource group code: indicates the level of resources needed by the patient. (3) Triage category: patients are prioritized for treatment based on their triage level. We are not aware of any other study on predicting patient waiting times based on such detailed information on patient condition. This aspect of the modelling leads to a key advantage of our approach: forecasts can be refined over time as patient information is updated. This will be discussed further in Section 5.1.
Quantile Regression Forests
In this study, the aim of the modelling is to estimate the probability distribution function $\hat{F}(y_i\,|\,\boldsymbol{x}_i)$ for a target observation $y_i$ that is conditional on the corresponding feature vector $\boldsymbol{x}_i = \{x_{i,1}, x_{i,2}, \dots, x_{i,P}\}$, where $y_i$ denotes the waiting time for the $i$th patient, and $\boldsymbol{x}_i \in \mathbb{R}^P$ is a $P$-dimensional vector of features that quantify properties of the patient, calendar effects, and the state of the ED (i.e., the features listed in Table 1). For a given patient, $y_i$ refers to $W_1$ at the time of registration, while at the time of initial assessment, $y_i$ refers to $W_2$. Given $N$ low-acuity patients having waiting times $\boldsymbol{y} = \{y_1, y_2, \dots, y_N\}$ (the label vector with size $N \times 1$), and corresponding features represented by $\boldsymbol{X} = \{\boldsymbol{x}_1, \boldsymbol{x}_2, \dots, \boldsymbol{x}_N\}$ (the feature matrix with size $N \times P$), we aim to train a model (denoted by $\Omega$, with parameter matrix $\boldsymbol{B}$), which is a mapping from the input features to the corresponding waiting time distribution, $\hat{F}(y_i\,|\,\boldsymbol{x}_i) = \Omega(\boldsymbol{x}_i, \boldsymbol{B})$.
QRF is a generalization of the popular random forest (RF) method. RF is an ensemble machine learning method that has commonly been used to generate accurate predictions using high-dimensional features (Breiman 2001). The performance of RF has been shown to be robust under the presence of noisy or highly correlated features (Breiman 2001). Besides prediction, RF can be used to derive rankings of feature importance which can help make valuable inference from the data (Hastie et al. 2009). While RF provides an accurate approximation of the conditional mean of the target variable in a nonlinear and nonparametric framework, QRF estimates the conditional probability distribution of the target variable (Meinshausen 2006).
To construct a QRF, we grow a large set of regression trees. While growing each tree and node, randomness is incorporated during the selection of features. For a given tree, a bagged version of the training data is used. For each node, a random subset of features is used for split-point selection while approximating the target variable. A tree is grown by recursively splitting the bootstrap training sample so as to minimize the total impurity (the sum of squared deviations about a group mean). The process of splitting continues until a minimum leaf size has been reached. Given the set of trees, dropping a new data point down each tree reaches a leaf node that produces a single forecast (observation) of the target variable.
While RF estimates the conditional mean of the target variable by averaging such observations over the set of trees, QRF stores the value of all observations in the leaf nodes to estimate an empirical cumulative distribution function of the target variable.
For a single tree, denoted by $T(\theta)$, which is grown using a random feature subset $\theta$, the point forecast of the mean obtained using a new feature vector $\boldsymbol{x}$ is computed as the average of the subset of target values in the training sample associated with feature vectors $\boldsymbol{x}_i \in \ell(\boldsymbol{x}, \theta)$, where $\ell(\boldsymbol{x}, \theta)$ is the leaf node that contains $\boldsymbol{x}$. Mathematically, the forecast of the mean is given by $\sum_{i=1}^{N} w_i(\boldsymbol{x}, \theta)\,y_i$, where the weights $w_i(\boldsymbol{x}, \theta)$ are given by:
$$w_i(\boldsymbol{x}, \theta) = \frac{\mathbb{1}\{\boldsymbol{x}_i \in \ell(\boldsymbol{x}, \theta)\}}{|\{j : \boldsymbol{x}_j \in \ell(\boldsymbol{x}, \theta)\}|}.$$
For a forest of $K$ regression trees, the weights from each tree are averaged as:
$$w_i(\boldsymbol{x}) = \frac{1}{K}\sum_{k=1}^{K} w_i(\boldsymbol{x}, \theta_k),$$
where $\theta_k$ denotes the feature subset used for growing the $k$th tree. In RF, the conditional mean $E(Y\,|\,\boldsymbol{X} = \boldsymbol{x})$ can be estimated as $\sum_{i=1}^{N} w_i(\boldsymbol{x})\,y_i$. QRF provides a natural extension for probabilistic forecasting by estimating the conditional distribution as:
$$\hat{F}(y\,|\,\boldsymbol{X} = \boldsymbol{x}) = \sum_{i=1}^{N} w_i(\boldsymbol{x})\,\mathbb{1}\{y_i < y\}.$$
Instead of computing the average of observations in leaf nodes as done in RF, QRF keeps track of all observations and their weights, for all leaves and across all trees, to estimate the distribution function of the target variable. This estimate of the conditional distribution function is asymptotically consistent (Meinshausen 2006). In a recent study, using features extracted from flight- and passenger-level data, regression trees have been shown to be effective in predicting the probability distributions of connection times and number of arrivals at an airport (Guo et al. 2018). In our study, 500 regression trees are grown (Breiman 2001), and one-third of the total features are used for split-point selection at each node (Meinshausen 2006), while nodes with fewer than 5 observations were not split any further.
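To make the weighting scheme concrete, the following is a minimal QRF sketch built on scikit-learn's RandomForestRegressor; it is an illustrative implementation rather than the authors' code, and the training objects X_train and y_train are placeholders. The hyperparameter values mirror those reported above (500 trees, one-third of the features per split, minimum leaf size 5).

```python
# Minimal QRF sketch (Meinshausen 2006); illustrative, not the authors' code.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

class QuantileRegressionForest:
    def __init__(self, n_trees=500, max_features=1/3, min_leaf=5, seed=0):
        self.rf = RandomForestRegressor(n_estimators=n_trees,
                                        max_features=max_features,
                                        min_samples_leaf=min_leaf,
                                        random_state=seed)

    def fit(self, X_train, y_train):
        self.rf.fit(X_train, y_train)
        self.y_train = np.asarray(y_train, dtype=float)
        self.train_leaves = self.rf.apply(X_train)   # (N x K) leaf indices
        return self

    def predict_quantiles(self, X_new, quantiles=(0.05, 0.5, 0.95)):
        new_leaves = self.rf.apply(X_new)            # (M x K) leaf indices
        order = np.argsort(self.y_train)
        y_sorted = self.y_train[order]
        out = np.empty((len(new_leaves), len(quantiles)))
        for i, leaves in enumerate(new_leaves):
            # w_i(x): average over trees of the within-leaf weights.
            weights = np.zeros(len(self.y_train))
            for k, leaf in enumerate(leaves):
                in_leaf = self.train_leaves[:, k] == leaf
                weights[in_leaf] += 1.0 / in_leaf.sum()
            weights /= self.train_leaves.shape[1]
            cdf = np.cumsum(weights[order])          # weighted empirical CDF
            idx = np.minimum(np.searchsorted(cdf, quantiles), len(cdf) - 1)
            out[i] = y_sorted[idx]
        return out
```

Dropping a new feature vector down the forest thus yields a full predictive distribution, from which any quantile (or the median point forecast) can be read off; dedicated packages store the leaf-to-observation maps more efficiently, but the logic is the same.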
Rankings of Feature Importance
We also use the RF underlying the QRF approach to identify the most salient features associated with predicting waiting times at different stages of patient-flow in the ED (see, for example, Breiman 2001).
Teasing out the set of features with the strongest predictive power can help service providers focus their attention on a small set of factors that could influence patient flow, and inform further identification studies to uncover causal relationships. The process of finding the most salient subset of features (which typically involves removing irrelevant, redundant, and noisy features) is referred to as feature selection, and has been an area of active research (Guyon et al. 2008). Figure 3 provides the feature importance scores for the 20 most salient features obtained from the random forest (Breiman 2001), as described in Section 3.2, using only the in-sample data. For both $W_1$ and $W_2$, age was the single most salient feature. This can perhaps be explained by the fact that Hospital 1 has a dedicated children's emergency ward that prioritizes and treats only minors (age ≤ 16 years). This finding was further corroborated by an analysis of the data, which revealed that approximately a quarter of the total attendances were minors, for whom the mean waiting times were considerably shorter than for the remaining low-acuity patients (77 minutes vs. 90 minutes, for $W_1$); moreover, the two waiting time distributions were significantly different (using a Kolmogorov-Smirnov test with 1% significance level). Interestingly, the bulk of the most salient features are associated with ED operational characteristics, particularly workload due to patient volumes and calendar effects. Features based on historical rolling averages also ranked higher, which indicates the presence of correlations at multiple lags in waiting times (as evident from Figure 2d). Note that features with prefix Q in the figure denote features that were inspired by the fluid model estimators of Ang et al. (2016).
Note that staff count did not show up among the list of most salient features presented in Figure 3. Since both attendances and staff count exhibit a strong diurnal periodicity, it is possible that the diurnal variation in staff counts is largely captured by features that quantify attendances (i.e. ED workload).
We caution that, in the absence of proper identification, this observation does not imply the absence or weakness of causal effects between staffing and waiting times.
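To illustrate how such impurity-based rankings can be derived in practice, the following is a minimal sketch (not the authors' code); the feature names and the objects X_insample and y_insample are hypothetical placeholders, not the full feature set of Table 1.

```python
# Illustrative sketch of impurity-based feature-importance rankings from the
# random forest underlying the QRF. Inputs and names are placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

features = ["age", "hour_of_day", "day_of_week", "staff_count",
            "patients_registered_not_assessed", "rolling_mean_wait_1h"]
rf = RandomForestRegressor(n_estimators=500, min_samples_leaf=5, random_state=0)
rf.fit(X_insample[features], y_insample)   # in-sample data only, as in the text

ranking = pd.Series(rf.feature_importances_, index=features)
print(ranking.sort_values(ascending=False).head(20))   # cf. Figure 3
```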
Evaluating Out-of-sample Forecasting Accuracy
In this section, we compare the performances of QRF and a set of benchmarks in terms of their distributional, quantile, and point forecast accuracy. Model rankings are assessed based on the accuracy of forecasting $W_1$ and $W_2$, using data for the one-year out-of-sample period (2018).
Benchmark Methods
As naïve benchmarks, we present three empirical methods in Section 4.1.1. As sophisticated benchmarks, we present quantile regression (Section 4.1.2), Q-Lasso (Section 4.1.3), and the k-nearest neighbour method (Section 4.1.4).
Empirical Methods
The empirical methods, although simplistic in their mathematical formulation, are associated with low computational complexity, which makes them attractive for deployment in the ED. We use the following three empirical methods (based on rolling averages) for forecasting waiting times:
(1) Empirical 4-hours: the empirical distribution of waiting times observed in the previous four hours. For EDs that publish waiting time estimates in the US, a four-hour rolling average has become the conventional choice (Dong et al. 2019).
(2) Empirical $\kappa$-hours: waiting times for the last $\kappa$ consecutive hours ($h-1, h-2, \dots, h-\kappa$), where $h$ denotes the hour of arrival for the current patient.
(3) Empirical $\delta$-periods: waiting times of the previous $\delta$ periods conditional on the same period of the day ($h-24, h-2\times24, \dots, h-\delta\times24$).
To produce these three estimates of the probability distribution of $W_1$, we use historical waiting times measured from the time of registration to start-of-treatment. For $W_2$, we employ waiting times from the time of initial assessment to start-of-treatment. We estimate the optimal values of $\kappa$ and $\delta$ based on the minimization of the sum-of-squared errors over the cross-validation hold-out sample.
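A minimal sketch of the first benchmark is given below, assuming a hypothetical patient-level frame df with columns treatment_start and wait_minutes; the window definition and empty-window handling are illustrative choices, not the hospital's actual data schema.

```python
# Illustrative "Empirical 4-hours" benchmark: the forecast distribution for a
# patient arriving at time t is the empirical distribution of waits completed
# in the preceding four hours.
import numpy as np
import pandas as pd

def empirical_4h_forecast(df, t, quantiles=(0.05, 0.5, 0.95)):
    window = df[(df["treatment_start"] >= t - pd.Timedelta(hours=4))
                & (df["treatment_start"] < t)]
    if window.empty:
        return None   # no recent observations to form a distribution
    return np.quantile(window["wait_minutes"], quantiles)
```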
Quantile Regression
Least squares regression has been employed for forecasting waiting times (Asaro et al. 2007).
However, least squares regression only provides an estimate of the conditional mean, while ignoring other aspects of the conditional distribution. This is a serious limitation, as waiting time distributions are heavily right-skewed (Sun et al. 2012), as evident from Figure 2e. Quantile regression has thus been proposed for modelling waiting times (Ding et al. 2010;Sun et al. 2012), as it allows for a more detailed characterization of the ED data by quantifying the impact of features on the distribution of waiting times. The quantile regression estimator, for the τ quantile, is based on the minimization of an asymmetric absolute loss function (see, for example, Koenker and Bassett 1978). In this study, we estimate quantile regression models for the following quantiles τ = {5%, 15%, 25%, 35%, 45%, 50%, 55%, 65%, 75%, 85%, 95%}. To construct distributional forecasts using quantile regression, we linearly interpolate between the estimated quantiles and treat the minimum and maximum of the in-sample data as bounds of the forecast distribution.
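For concreteness, the asymmetric absolute ('pinball') loss underlying the $\tau$-quantile estimator can be written, with the notation of Section 3.2, as
$$\hat{\boldsymbol{\beta}}(\tau) = \arg\min_{\boldsymbol{\beta}} \sum_{i=1}^{N} \rho_\tau\big(y_i - \boldsymbol{x}_i^\top\boldsymbol{\beta}\big), \qquad \rho_\tau(u) = u\big(\tau - \mathbb{1}\{u < 0\}\big),$$
so that under-prediction is penalized with weight $\tau$ and over-prediction with weight $1-\tau$; for $\tau = 0.5$, this reduces to a scaled absolute loss, yielding the conditional median.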
Q-Least Absolute Shrinkage and Selection Operator (Q-Lasso)
This method was proposed by Ang et al. (2016) for generating point estimates of patient waiting times. Q-Lasso combines concepts from queuing theory with the statistical modelling approach of Lasso, which employs the following objective for model estimation:
$$\frac{1}{N}\sum_{i=1}^{N}\big(y_i - \boldsymbol{x}_i^\top\boldsymbol{\beta}\big)^2 + \lambda\|\boldsymbol{\beta}\|_1, \qquad\text{where}\quad \|\boldsymbol{\beta}\|_1 = \sum_{j=1}^{P}|\beta_j|$$
is the L1 norm of the coefficients ($\boldsymbol{\beta} \in \mathbb{R}^P$), and $\lambda$ is the regularization parameter. The penalty term $\lambda\|\boldsymbol{\beta}\|_1$ helps prevent model over-fitting by forcing the coefficients of less salient features to go to zero. Lasso thus makes the regression model more parsimonious. For $\lambda$ equal to 0, Lasso defaults to ordinary least squares. This method predicts waiting times as a linear function of features within a parametric modelling framework. We estimate $\lambda$ using 5-fold cross-validation (for details, see Hastie et al. 2009). We used this method to generate only point forecasts, as there are challenges associated with Lasso when estimating standard errors (Goeman 2010), and this challenge is further exacerbated by the fact that waiting time prediction intervals cannot take negative values.
The Q-Lasso method incorporates fluid model estimators as candidate predictors in Lasso. In the context of modelling patient waiting times, these estimators generalize to the ratio: workload/processing rate. Workload is associated with the number of patients that must be seen before a new low-acuity patient can start treatment (Category 4 features), whereas the processing rate depends on the number of available staff (Category 3 features). Following Ang et al. (2016), for a given workload feature (say, total attendances), an additional feature was included in the model, which was calculated by dividing that feature with the corresponding hourly staff count (e.g. total attendances/staff count).
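A rough sketch of this idea with scikit-learn's LassoCV; the synthetic features (workload, staff count, hour) and the fluid-ratio construction are illustrative assumptions, not the paper's exact feature set:

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
n = 1000
staff = rng.integers(5, 15, size=n).astype(float)      # hourly staff count
workload = rng.poisson(30, size=n).astype(float)       # patients ahead in queue
hour = rng.integers(0, 24, size=n).astype(float)

# Augment each workload feature with its fluid-model ratio workload/staff
X = np.column_stack([workload, staff, hour, workload / staff])
y = 2.5 * workload / staff + 0.5 * hour + rng.exponential(15, size=n)

model = LassoCV(cv=5).fit(X, y)                         # lambda via 5-fold CV
print(model.alpha_, np.round(model.predict(X[:5]), 1))  # point forecasts only
```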
k-Nearest Neighbour (k-NN)
Although it has not been used before in the literature for forecasting waiting times, we use k-NN as it can be interpreted as a more sophisticated adaptation of the rolling average methods that are used in practice. While rolling average methods rely on averaging the observations of waiting times (for, say, the last k hours or p periods), k-NN generates waiting time estimates for a given patient based on the waiting times of the previous k most similar patients. Similarity between any two patients is quantified by Euclidean distance in the feature space. As the five categories of features comprise different numbers of features (Table 1), Euclidean distance was computed separately for each feature category, to ensure that each feature category received equal weight during the modelling. Features were standardized to have zero mean and unit standard deviation. For a given patient, probabilistic forecasts were generated by using the empirical distribution of the waiting times for the k historical most similar patients. We estimate k by minimizing the sum-of-squared errors over the cross-validation hold-out sample.
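A minimal sketch of this per-category distance computation; the category slices and the value of k below are illustrative:

```python
import numpy as np

def knn_forecast(X_train, y_train, x_query, category_slices, k=50):
    """Forecast distribution = waits of the k most similar historical patients."""
    total_dist = np.zeros(len(X_train))
    for sl in category_slices:                  # one Euclidean distance per category
        block, q = X_train[:, sl], x_query[sl]
        mu = block.mean(axis=0)
        sd = block.std(axis=0) + 1e-9           # standardize each feature
        total_dist += np.linalg.norm((block - mu) / sd - (q - mu) / sd, axis=1)
    nearest = np.argsort(total_dist)[:k]
    return y_train[nearest]                     # empirical forecast sample

# Toy usage with two illustrative categories: columns 0-1 and 2-4
rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 5))
y = rng.exponential(70, size=2000)
sample = knn_forecast(X, y, X[0], [slice(0, 2), slice(2, 5)], k=50)
print(round(sample.mean(), 1))                  # point forecast from the sample
```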
Evaluating Distributional Forecasts
Figure 4 provides an illustration of distributional forecasts generated by QRF for two days chosen at random from the out-of-sample period. The forecast origin was midnight at the start of 8 April 2018
(Sunday). The median of the forecast distribution was issued as the point forecast. The point forecast and its uncertainty exhibit a similar diurnal pattern to those shown in Figure 2f. Encouragingly, in Figure 4, the 90% interval of the forecast distribution encompasses the vast majority of the actual observations. To quantify distributional forecast accuracy, we use the continuous ranked probability score (CRPS), which is a strictly proper scoring rule that quantifies both sharpness (concentration or peakedness of the forecast distribution) and calibration (statistical consistency between forecast distribution and actual observations). We use the expectations form of the CRPS (Gneiting and Raftery 2007), defined as:
$$\mathrm{CRPS}(F, y) = \mathbb{E}_{F}\,\lvert Y - y\rvert - \tfrac{1}{2}\,\mathbb{E}_{F}\,\lvert Y - Y'\rvert$$
where Y and Y' are independent samples drawn from the forecast probability distribution function (we draw 1000 samples), each having the same underlying distribution F, 𝔼_F is the expectation with respect to the distribution F, and y is the corresponding actual waiting time.
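For reference, a sample-based sketch of this score; the all-pairs term is a simple (slightly biased, since it includes the zero diagonal) approximation of 𝔼|Y − Y'|:

```python
import numpy as np

def crps_from_samples(samples, y_actual):
    """Expectations form of the CRPS estimated from forecast draws."""
    samples = np.asarray(samples, dtype=float)
    term1 = np.abs(samples - y_actual).mean()
    term2 = np.abs(samples[:, None] - samples[None, :]).mean()  # E|Y - Y'|
    return term1 - 0.5 * term2

rng = np.random.default_rng(3)
forecast_draws = rng.exponential(70, size=1000)  # skewed, like waiting times
print(round(crps_from_samples(forecast_draws, 73.0), 2))
```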
In Table 2, we present the CRPS values averaged across all patients in the out-of-sample period. QRF is the most accurate method for forecasting both W1 and W2. QRF provides a reduction of more than 20% in the CRPS compared to the empirical 4-hours method (typically used in practice), and a reduction of more than 10% in comparison to the best empirical method (empirical k-hours). QRF also outperformed the other benchmark methods, though the improvement over quantile regression (the best performing benchmark) was relatively modest. Surprisingly, k-NN was outperformed by the empirical k-hours method. As explained earlier, we were not able to produce probabilistic forecasts from Q-Lasso. In Table 2, the errors associated with W2 are overall lower than the corresponding errors for W1. This shows that the modelling approaches become more accurate as one gets closer to the start-of-treatment time.
This is due to three reasons: (1) for any given patient, W2 is lower in magnitude than W1 by construction, (2) features that quantify ED workload due to patient workflow are updated at the time of assessment, and (3) crucially, additional features (that quantify the severity of the patient's condition) are incorporated while modelling W2. Encouragingly, compared to the CRPS for W1, QRF provides the highest percentage reduction (9.7%) in the CRPS values while estimating W2. These results highlight that, compared to linear modelling approaches, QRF is better equipped to take advantage of information (feature) updates, as the potential relationship between such new information (features) and the target variable can be nonlinear. For W1, Figure 5 presents the out-of-sample CRPS values for each patient plotted against their corresponding actual waiting times. The magnitude of the probabilistic forecast error is notably larger for patients who wait for exceedingly long hours in the ED. The accuracy is best for patients whose waiting time was close to the median in-sample waiting time of 73 minutes.
Evaluating Categorical Forecasts
The effective and concise communication of forecast uncertainty is a challenging task, since visualising probability distributions is not necessarily an intuitive task for patients without statistical backgrounds. To address this, we propose the following interpretable colour-coded reporting scheme:
Green (for low waiting times ≤ 45 minutes), Amber (for medium waiting times: 45 minutes < waiting times ≤ 120 minutes), and Red (for high waiting times > 120 minutes). For example, a forecast issued as: Green (10%), Amber (70%), Red (20%); would imply a 10% probability that the patient would start treatment within 45 minutes, and so on. We employ this visual form of forecast representation as people are typically familiar with colour-coded information (e.g., traffic signals, air quality indices, etc.).
Converting the continuous target variable (waiting times) into a categorical variable (green, amber, red) transforms the modelling challenge from a regression problem into a classification task.
Specifically, the aim of this classification task is to predict the discrete probabilities of the three categories of waiting times. We do this using the probability distributions of the different methods compared in Section 4.2, as well as two additional methods: a three-class classifier and a multiple binary classifier. For multi-class and binary classification, we use a random forest classifier. Converting the waiting times into a categorical variable resulted in imbalanced data, i.e. a different number of training observations belonging to the three classes. Data imbalance makes the learner more prone to over-classify the majority class. To tackle imbalanced data during classification, we assign a larger weight to the underrepresented class, whereby the class weights sum to one (see, for example, He and Garcia 2009). To evaluate categorical forecasts, we use the ranked probability score (RPS), which is the score for discrete distributions analogous to the CRPS (Epstein 1969). Table 3 presents the average out-of-sample RPS. For regression-based methods (Table 2), we estimate the discrete probabilities associated with the three classes from the corresponding forecast distribution. Encouragingly, QRF outperformed the other modelling schemes in the classification task, reporting lower RPS values. QRF could thus be used for estimating discrete probabilities for the colour-coded waiting time categories.
Table 3. RPS (×100) for categorical forecasting of: (a) W1 − registration to start-of-treatment, and (b) W2 − initial assessment to start-of-treatment.
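A small sketch of how a forecast sample could be mapped to the three colour categories and scored with the RPS; the draws are synthetic, the thresholds follow the scheme above, and the un-normalized RPS form (some definitions divide by the number of categories minus one) is our choice:

```python
import numpy as np

def colour_probs(samples):
    """Map forecast draws to Green/Amber/Red probabilities."""
    samples = np.asarray(samples)
    green = np.mean(samples <= 45)
    amber = np.mean((samples > 45) & (samples <= 120))
    red = np.mean(samples > 120)
    return np.array([green, amber, red])

def rps(probs, outcome_idx):
    """Sum of squared differences between cumulative forecast and outcome."""
    obs = np.zeros_like(probs)
    obs[outcome_idx] = 1.0
    return np.sum((np.cumsum(probs) - np.cumsum(obs)) ** 2)

rng = np.random.default_rng(4)
draws = rng.exponential(70, size=1000)
p = colour_probs(draws)                                  # e.g. [0.47, 0.35, 0.18]
print(np.round(p, 2), round(rps(p, outcome_idx=1), 3))   # actual wait was Amber
```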
Evaluating Quantile Forecasts
To evaluate quantile forecasts, we use unconditional coverage, which measures the percentage of observations that are lower than the quantile forecast. Ideally, this percentage should be τ. Figure 6 presents the unconditional coverage for W1 and W2, averaged across all low-acuity patients, for τ = 5%, 15%, 25%, 35%, 45%, 50%, 55%, 65%, 75%, 85% and 95%. In Figure 6, values closer to the diagonal line (ideal coverage) are better. It can be seen from the figure that the unconditional coverage is rather impressive for the empirical (k-hours) method, quantile regression (QREG), and QRF. The performances of k-NN and the two empirical benchmarks (based on a 4-hour window and p-periods) are much poorer, which is consistent with the rankings obtained using the CRPS and RPS.
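A short sketch of this diagnostic; the "ideal" forecaster here uses the true quantiles of a synthetic exponential distribution, so the coverage should sit near the nominal levels:

```python
import numpy as np

def unconditional_coverage(quantile_forecasts, actuals):
    """quantile_forecasts: (n_patients, n_taus); actuals: (n_patients,)."""
    return (actuals[:, None] <= quantile_forecasts).mean(axis=0)

taus = np.array([0.05, 0.25, 0.50, 0.75, 0.95])
rng = np.random.default_rng(5)
actual = rng.exponential(70, size=5000)
# True quantiles of Exp(scale=70): Q(tau) = -scale * ln(1 - tau)
q_fc = np.tile(-70 * np.log(1 - taus), (5000, 1))
print(np.round(unconditional_coverage(q_fc, actual), 3))  # close to taus
```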
Evaluating Point Forecasts
In Figure 7, we plot the median of the out-of-sample distributional forecasts of QRF (for W1 and W2) across different hours of the day using the one-year out-of-sample period. This figure shows that QRF is overall able to accommodate the diurnal periodicity in waiting times. For evaluating point forecasts, we use the root mean square error (RMSE) and the mean absolute error (MAE). For a quadratic loss function, the mean is the optimal forecast, whereas the median of the forecast probability distribution is the optimal forecast if the loss function is symmetric piecewise linear (Gneiting 2011). In view of this, we issue the mean of each forecast distribution as a point forecast for evaluation based on the RMSE, and the median of each forecast distribution as a point forecast for evaluation based on the MAE. For Q-Lasso, the estimate of the conditional mean is used as the point forecast.
Figure 7. Median diurnal variations in out-of-sample actual and predicted waiting times for: (a) W1 − registration to start-of-treatment, and (b) W2 − initial assessment to start-of-treatment.
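Before turning to the results, a minimal sketch of this evaluation protocol on synthetic draws (mean scored by RMSE, median by MAE); the data and forecast distributions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
actual = rng.exponential(70, size=2000)
# Illustrative forecasts: 1000 draws per patient from an exponential
draws = rng.exponential(70, size=(2000, 1000))

mean_fc = draws.mean(axis=1)          # optimal under quadratic loss
median_fc = np.median(draws, axis=1)  # optimal under symmetric absolute loss

rmse = np.sqrt(np.mean((actual - mean_fc) ** 2))
mae = np.mean(np.abs(actual - median_fc))
print(round(rmse, 1), round(mae, 1))
```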
In Table 4, we present the RMSE and MAE results. For point forecasting, QRF is the best performing method. The performances of Q-Lasso and quantile regression are rather similar, whereby both models outperform the empirical benchmarks. The performance of k-NN is not competitive with the best empirical benchmark.
Implementation
In this section, we discuss aspects related to the practical implementation of the proposed probabilistic modelling framework. Section 5.1 focusses on publishing and updating waiting time estimates at different stages of patient-flow in the ED. Section 5.2 demonstrates that estimates of waiting times, when used in conjunction with travel times, could help patients make informed decisions while selecting an ED site from a network of neighbouring EDs.
Implementing the Forecasting Scheme at the ED with Information Updates
We demonstrate how the proposed forecasting scheme could be used in practice to communicate personalized waiting times, which patients could access using say a smartphone, as they wait in the ED.
For demonstration, we select data for an actual patient from the out-of-sample period, and for ease of explanation, we refer to this patient as John. Figure 8 demonstrates the patient-flow for John at the ED, who registered at 19:39 on 9 April 2018 (non-ambulance arrival). Using features for calendar effects (time and day of arrival, etc.), demographics (90-year-old male), staff count, and the state of the ED patient workload at John's time of registration (number of total patients, ambulance arrivals, etc.), we generate the forecasts for the waiting times.
The point forecast from QRF indicated that John would start treatment in 110 minutes from the time of registration (actual W1 was 104 minutes). The colour-coded scheme for categorical forecasts conveyed a 20% chance of waiting less than 45 minutes, a 40% likelihood of waiting between 46 and 120 minutes, and a 40% probability of waiting longer than 120 minutes, as evident from the empirical cumulative distribution function (ECDF) of the waiting time forecasts. Note that the estimate for W1 was generated at the time of registration. At 19:57, John underwent initial assessment, and at this point he was triaged as 'major injury' and assigned a patient group number '80' (code denoting the reason for the presenting complaint) and a code 'VB04Z' (code denoting use of resources). This additional triage-related information was incorporated in the model at the time of initial assessment. Moreover, since the state of patient-flow at the ED can change rather quickly, we update the feature values for the ED workload (Category 4 features) to accommodate changes in the ED from 19:39 to 19:57. The time John had already spent in the ED since registration was also fed as a feature while modelling W2. The updated point forecast from QRF indicated that John would start treatment in 79 minutes from the time of initial assessment (actual W2 was 86 minutes). The updated colour-coded scheme now reflected a 33% probability of starting treatment within the next 45 minutes, a 45% probability of starting treatment between 46 and 120 minutes, and only a 22% probability of starting treatment after two hours. Note that the initial forecast was generated at the time of registration, and then updated at the time of initial assessment. This illustration presents a possibility in which EDs could convey waiting time estimates (and associated uncertainty) to each low-acuity patient.
Implementing the Forecasting Scheme Remotely: A Demonstration
Studies have emphasized the importance of informed routing decisions to get the right patient to the right provider at the right time (Singh et al. 2020), as access to waiting times for different healthcare providers can assist patients and first responders in making better decisions when selecting a hospital site, which in turn has been shown to reduce actual wait times (Xie and Youash 2011). Estimates of waiting times, when used in conjunction with travel times, could potentially be used for ambulance routing and diversion, to facilitate the uniform spread of load within a network of neighbouring hospitals (Deo and Gurvich 2011; Xu and Chan 2016). To assist patients in selecting an ED from a network of EDs with different waiting times, smartphone applications have been proposed (such as Waitless and NHS Quicker).
However, the aforementioned studies and smartphone applications do not take into account the uncertainty in travel and waiting times.
We show that decision-makers (such as patients and first responders) could select an ED from a network of EDs based on probabilistic modelling of travel and waiting times. This analysis necessitates data for more than one ED. We therefore include additional data from another public hospital, situated in the same region as Hospital 1, which we refer to as Hospital 2. Specifically, we use five years of data from 1 January 2014 to 31 December 2018 from Hospital 2, comprising 180,715 patient-level records. As done earlier, data for the first four years were used for training, while data from the final year were used for evaluation. Encouragingly, an out-of-sample model comparison using data from Hospital 2 revealed that QRF generated the most accurate forecasts among the methods considered in Section 4, consistent with our findings for Hospital 1.
We simulate patient choices while selecting an ED site by accommodating uncertainty in travel and waiting time estimates. Due to data protection issues, we did not have access to patient locations. Thus, for demonstration purposes, we randomly assign patients to postcodes from the geographic neighbourhood of the hospital sites. To estimate travel times, using each patient's timestamps (time of selecting an ED) and postcode, we accessed the following data from Google maps for both hospitals:
distance, minimum driving time, and maximum driving time. Studies have investigated a range of travel time distributions under different traffic conditions (Guessous et al. 2014). For simplicity, we assumed the travel times to follow an exponential distribution in this study. The minimum travel time was treated as the true lower bound, and the maximum travel time was treated as the 99% quantile of the exponential distribution. We simulate data for a day in the out-of-sample period for both hospitals (n = 318 patients).
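A minimal sketch of this shifted-exponential construction, assuming illustrative minimum and maximum driving times of 20 and 45 minutes; the rate solves P(T ≤ t_max) = 0.99:

```python
import numpy as np

def sample_travel_times(t_min, t_max, size=500, rng=None):
    """Shifted exponential: lower bound t_min, 99% quantile at t_max."""
    rng = rng or np.random.default_rng()
    rate = -np.log(1 - 0.99) / (t_max - t_min)  # from the quantile condition
    return t_min + rng.exponential(1.0 / rate, size=size)

draws = sample_travel_times(t_min=20.0, t_max=45.0, size=500)
print(round(np.quantile(draws, 0.99), 1))       # close to 45 minutes
```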
Figure 9. Schematic diagram illustrating patient decision-making regarding the choice between Hospital 1 and Hospital 2. Panel A presents the demographics, time of decision, and location. Panel B depicts travel routes to the two EDs. Panel C shows the exponential distribution of travel times. Panel D shows that for each sample of travel time, we have a corresponding estimate for the time of registration and feature vector. Panel E depicts a feature vector being used as an input for a regression tree. Panel F shows the empirical cumulative distribution function (ECDF) for combined travel and waiting times.
Figure 9 presents a schematic diagram that illustrates patient decision-making while selecting an ED site from a network with two hospitals. For example, consider a patient, say Anna, who needs to decide whether she should visit Hospital 1 or Hospital 2. For her location (postcode: OX26 2JW) and time of decision while at home (13:04 on 9 April 2018, say t_0), we access traffic information regarding the travel time to both hospitals. This information is used to estimate the probability distribution of travel times (Figure 9C). We draw a random sample of travel times, denoted by T(1), T(2), ..., T(S), from the exponential distribution (S = 500). For a given travel time, say T(1), we estimate the corresponding time of registration t(1) (as t_0 + T(1)). Having estimated t(1), we calculate the corresponding feature vector x(1) (Figure 9D), whereby x(1) is used as an input for QRF (Figure 9E depicts learning using one feature and one tree). For a given hospital site, feature vectors corresponding to different times of registration are used as inputs for QRF, to generate probabilistic forecasts of waiting times. Travel time estimates are added to the corresponding waiting time estimates, to compute the probability distribution of combined travel and waiting times (Figure 9F). In this example, using first-order stochastic dominance to select between the distributions for the two hospitals, Anna would select Hospital 1. Her decision would contribute towards an increase in the overall patient-flow at Hospital 1.
Since features are dynamically extracted for each patient, changes in load and congestion from Anna's decision would be reflected in the features for the next incoming patient at Hospital 1. Feedback of patient-decisions is thus included during the modelling.
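A rough sketch of the selection rule, assuming 500 matched travel- and waiting-time draws per hospital; the fallback to the mean when neither ECDF dominates is our illustrative choice, not from the paper:

```python
import numpy as np

def dominates(total_a, total_b, grid_size=200):
    """True if A's ECDF lies at or above B's everywhere, i.e. A is
    stochastically smaller (preferable when lower totals are better)."""
    grid = np.linspace(0, max(total_a.max(), total_b.max()), grid_size)
    cdf_a = np.searchsorted(np.sort(total_a), grid, side="right") / len(total_a)
    cdf_b = np.searchsorted(np.sort(total_b), grid, side="right") / len(total_b)
    return np.all(cdf_a >= cdf_b)

rng = np.random.default_rng(7)
total_h1 = rng.exponential(28, 500) + rng.exponential(40, 500)  # travel + wait
total_h2 = rng.exponential(26, 500) + rng.exponential(60, 500)

if dominates(total_h1, total_h2):
    choice = "Hospital 1"
elif dominates(total_h2, total_h1):
    choice = "Hospital 2"
else:                                   # no dominance: fall back to the mean
    choice = "Hospital 1" if total_h1.mean() < total_h2.mean() else "Hospital 2"
print(choice)
```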
It can be envisaged that providing information on estimates of travel and waiting times can influence a patient's routing decisions, which can, in turn, affect the load (number of patients) and congestion (waiting times) at an ED. We study this impact by considering the following five alternative decision-making criteria for selecting between the two EDs: (1) shortest distance, (2) lowest mean travel time,
(3) lowest mean of the probability distribution of the sum of travel and waiting times (risk neutral criterion), (4) lowest 75% quantile of the probability distribution of the sum of travel and waiting times (risk averse criterion), and (5) lowest 95% quantile of the probability distribution of the sum of travel and waiting times (very risk averse criterion). Table 5 compares the total load and congestion for the two hospitals (Hospital 1 and Hospital 2) for each of the five ED selection criteria. The simulation results suggest that selecting an ED based solely on the shortest-distance criterion would divert the vast majority of patients (291 out of 318) to Hospital 1, which would lead to exceedingly long waiting times compared to Hospital 2. Considering mean travel times only would also result in a highly non-uniform spread of load, whereby Hospital 2 would experience relatively high attendances and prolonged waiting times. Note that although the vast majority of the simulated postcodes were geographically closer to Hospital 1 on average (mean 14.0 miles) than to Hospital 2 (mean 15.9 miles), the overall travel times to Hospital 1 were slightly longer (mean 28.9 minutes) than to Hospital 2 (mean 26.2 minutes). Interestingly, selecting an ED based on either the lowest mean, 75% quantile, or 95% quantile of the combined travel and waiting times distribution results in a more uniform spread of patient load across the two hospitals, whereby the attendances and waiting times are similar for both hospitals, and no patient ends up waiting exceedingly long (> 2 hours). Since Hospital 1 is a larger hospital with more resources than Hospital 2, the model typically ends up diverting more patients to Hospital 1. A larger load on the ED appears to be associated with longer waiting times in Table 5. We find that the empirical cumulative distribution functions of waiting times are similar for the two EDs in cases where patients consider both travel and waiting time estimates for decision-making.
These results suggest that assisting patients in making informed routing decisions can potentially facilitate the uniform spread of load on EDs and help reduce waiting times, which is in broad agreement with previous findings (Dong et al. 2019).
In the absence of actual waiting times for both hospitals for a given patient (as a given patient can visit only one ED), point forecasts obtained from the QRF were treated as the true waiting times.
Moreover, since it was not possible to estimate the time elapsed from arrival at the hospital to registration at the ED (e.g., time spent in the parking area), we treat the time of arrival at the ED as the actual time of registration for the simulation. Thus, the numbers quoted in Table 5 should be treated with caution and be used only for comparative analysis across the different criteria.
Summary and Concluding Remarks
In this study, we propose a machine learning approach using QRF to estimate the conditional quantiles of patient waiting times in an ED. The model utilized a rich set of features that were extracted from detailed patient-level records spanning five years. Rankings of predictor importance suggested that ED workload due to patient volumes and calendar effects were the most salient features. Model evaluation was based on an exhaustive comparison of distributional, quantile, and point forecast accuracy.
Encouragingly, QRF convincingly outperformed the empirical benchmarks that are typically used in practice, along with the Q-Lasso and quantile regression methods that have been proposed in the literature for modelling waiting times. The performance of QRF was consistently superior for both hospital sites, which comprised 334,635 and 180,715 patient-level records. The consistency of our findings, using a large ED dataset for modelling waiting times, helps garner confidence in the generalizability of our results to data from independent ED sites.
Arrivals of low-acuity patients in the ED significantly increase the waiting times for high-acuity patients (Bayati et al. 2017). This is an issue for high-acuity patients that need to be admitted, because delays in admitting a patient to the intensive care unit (ICU) have adverse effects on patient outcomes (Chan et al. 2017). In 2018, around 70% of US hospital inpatients were processed via EDs (Augustine 2019).
Thus, although we focus on modelling waiting times of low-acuity patients in EDs, this work could have implications for high-acuity patients and other parts of the hospital such as ICUs. The proposed modelling strategy could potentially be adapted to other applications, such as airports, railways, and call centres, where publishing waiting time estimates can help improve overall customer satisfaction.
A potentially useful line of future work would be to generate and evaluate patient waiting times conditional on the presenting complaint (e.g., cardiovascular disease, respiratory illness etc.). This would allow for more granular modelling of waiting times. The model estimates could be revised frequently to accommodate changes in the ED workload so that at any chosen moment, a patient could get an update on their waiting time distribution. Moreover, given that low-acuity patients are likely to drop out from EDs rather than wait in a crowded room for long hours, especially in times of social distancing, it can be envisaged that publishing waiting times of different service providers could be of particular benefit to patients and emergency healthcare services.
Figure 1. Schematic diagram of typical patient-flow in the ED.
Figure 2. Plots of attendances, staff count, and waiting times. Panel 2a: median diurnal attendances; 2b: median diurnal staff count; 2c: time series plot for a fortnight of start-of-treatment waiting times from the time of registration (W1); 2d: autocorrelation function of W1; 2e: relative frequencies of start-of-treatment waiting times from the time of registration and time of initial assessment (W1 and W2, respectively); and 2f: median diurnal waiting times (W1 and W2).
Figure 3. Feature importance scores for modelling: (a) W1 − registration to start-of-treatment, and (b) W2 − initial assessment to start-of-treatment. Note: a higher score denotes a higher-ranked feature.
Figure 4. Summaries of out-of-sample distributional forecasts for: (a) W1 − registration to start-of-treatment, and (b) W2 − initial assessment to start-of-treatment. Note: the shaded regions correspond to the different quantile ranges of the forecast distribution (encompassing 90% and 50% of the area of the forecast distribution, centred around the median).
Figure 5. CRPS values for all patients in the out-of-sample period plotted against their corresponding actual waiting time (W1, in minutes). The left and bottom panels present the probability distributions of the CRPS and waiting times, respectively.
Figure 6. Unconditional coverage for quantile forecasts of waiting times for: (a) W1 − registration to start-of-treatment, and (b) W2 − initial assessment to start-of-treatment. Note: values closer to the diagonal are better.
Figure 8. Schematic diagram illustrating ED patient-flow and forecasts. Panel A presents the actual timestamps of registration, initial assessment, and treatment (for a 90-year-old male, triaged as 'major injury', with patient group '80' and code 'VB04Z' assigned during the initial assessment). Panels B and C show the point forecast, probabilistic forecast, and categorical probability forecast for W1 and W2, respectively. Forecasts shown in Panel B and Panel C are generated at the time of registration and time of initial assessment, respectively.
Table 1. Category, names, and a brief description of the different features used for modelling.

Feature category and name: Brief description

Category 1: Calendar effects
Hour of day: Arrival hour of day (1 to 24)
Hour of week: Arrival hour of week (1 to 24×7)
Day of week: Arrival day of week (1 to 7)
Month of year: Arrival month of year (1 to 12)
Holiday period effects: Indicator variables to accommodate anomalous waiting times during holiday periods (0: Normal day; 1: Holiday; 2: Days around Christmas day)

Category 2: Demographics
Age: Patient age (0 to 110 years)
Sex: Patient sex (0: Male, 1: Female)

Category 3: Staffing
Staff count: Total hourly staff count (inferred via unique staff codes)

Category 4: ED operations
4-hour breach: Number of patients who waited > 4 hours (from registration to departure, over the last 24 hours)
12-hour breach: Number of patients who waited > 12 hours (from registration to departure, over the last 24 hours)
Lagged waiting times: Average hourly lagged waiting times for the same hour of the day (for the 7 previous consecutive days)
Total workload: Number of patients in the ED (total, ambulance arrivals, other mode of arrival)
Workload from registration to initial assessment: Number of patients in the ED that have registered but have not been assessed (total, ambulance arrivals, other mode of arrival)
Workload from initial assessment to start of treatment: Number of patients in the ED that have been assessed but have not started treatment (total, ambulance arrivals, other mode of arrival; number of patients triaged as minor, major, urgent care, and resuscitation; and ambulance arrivals triaged as minor, major, urgent care, and resuscitation)
Workload from start of treatment to departure: Number of patients in the ED that have started treatment but have not yet departed (total, ambulance arrivals, other mode of arrival; number of patients triaged as minor, major, urgent care, and resuscitation; and ambulance arrivals triaged as minor, major, urgent care, and resuscitation)

Category 5: Patient condition
Triage level: Category to determine the patient's priority for treatment (minor, major, urgent care, and resuscitation)
Human resource group codes: Code to reflect the level of resources needed by the patient (12 alphanumeric codes)
Patient group number: Code to identify the reason for the ED episode (defined by the NHS as: road traffic accident, assault, deliberate self-harm, sports injury, firework injury, other accident, brought in dead, and other than above)
Using our more detailed ED dataset, we were able to extract a richer set of features compared to previous studies. For example, to forecast patient waiting times, Sun et al. (2012) included information regarding the date, time of triage, time of consultation, and patient acuity category as features for a quantile regression model. To ensure a fair evaluation of the different forecasting strategies, we employ all features (as shown in Table 1) as inputs across all methods. Forecasts for W1 were generated using features from the first four categories. For forecasting W2, features from all five categories were used during the modelling.
Table 2. Mean CRPS for distributional forecasting of: (a) W1 − registration to start-of-treatment, and (b) W2 − initial assessment to start-of-treatment. Note: lower CRPS values are better (lowest values are highlighted in bold).

Forecasting Method             W1     W2
Empirical 4-hours              31.7   29.8
Empirical k-hours              28.0   26.3
Empirical p-periods            34.4   34.2
Quantile regression            25.5   24.3
k-nearest neighbour            31.3   29.2
Quantile regression forests    24.8   22.4
Table 4. RMSE and MAE for point forecasting of: (a) W1 − registration to start-of-treatment, and (b) W2 − initial assessment to start-of-treatment. Note: lower RMSE and MAE values are better (lowest values are highlighted in bold).

Forecasting Method             RMSE (W1)   RMSE (W2)   MAE (W1)   MAE (W2)
Empirical 4-hours              60.4        57.5        42.5       40.2
Empirical k-hours              55.9        53.3        39.8       37.2
Empirical p-periods            63.4        62.3        43.9       44.3
Quantile regression            50.8        49.0        36.0       33.9
Q-Lasso                        51.0        48.6        37.1       33.0
k-nearest neighbour            60.7        57.4        45.1       41.9
Quantile regression forests    50.1        46.6        35.1       31.5
Table 5. Number of patients and their waiting times for the two hospital sites (Hospital 1 and Hospital 2) for the following five alternative ED selection criteria: shortest distance, lowest mean travel time, and lowest mean, 75% quantile, and 95% quantile of the probability distribution of the sum of travel and waiting times.

Load & Congestion                  Distance          Travel time       Mean              75% Quantile      95% Quantile
                                                                       (Risk Neutral)    (Risk Averse)     (Very Risk Averse)
N_Hospital1 (low, medium, high)    291 (45,168,78)   71 (58,13,0)      194 (158,36,0)    188 (134,54,0)    190 (154,36,0)
N_Hospital2 (low, medium, high)    27 (25,2,0)       247 (6,108,133)   124 (104,20,0)    130 (114,16,0)    128 (110,18,0)
μ_W1,Hospital1 (σ_W1,Hospital1)    93.1 (43.9)       37.5 (15.6)       36.1 (14.9)       38.4 (16.3)       36.2 (16.2)
μ_W1,Hospital2 (σ_W1,Hospital2)    22.8 (10.6)       131.0 (49.8)      32.6 (15.0)       31.9 (13.5)       32.1 (13.2)

Note: N_Hospital1 denotes total attendances for Hospital 1. The numbers of patients waiting for low (< 45 minutes), medium (46 to 120 minutes), and high (> 120 minutes) times are shown in brackets. The mean and standard deviation of waiting time W1 for Hospital 1 are denoted by μ_W1,Hospital1 and σ_W1,Hospital1, respectively. Similar notation is used for Hospital 2.
References

Anderson, R. T., Camacho, F. T., and Balkrishnan, R. 2007. "Willing to Wait? The Influence of Patient Wait Time on Satisfaction with Primary Care," BMC Health Services Research (7).
Ang, E., Kwasnick, S., Bayati, M., Plambeck, E. L., and Aratow, M. 2016. "Accurate Emergency Department Wait Time Prediction," M&SOM-Manufacturing & Service Operations Management (18:1), pp. 141-156.
Armony, M., Israelit, S., Mandelbaum, A., Marmor, Y. N., Tseytlin, Y., and Yom-Tov, G. B. 2015. "On Patient Flow in Hospitals: A Data-Based Queueing-Science Perspective," Stochastic Systems (5:1), pp. 146-194.
Asaro, P. V., Lewis, L. M., and Boxerman, S. B. 2007. "The Impact of Input and Output Factors on Emergency Department Throughput," Academic Emergency Medicine (14:3), pp. 235-242.
Augustine, J. J. 2019. "Latest Data Reveal the ED's Role as Hospital Admission Gatekeeper," ACEP Now.
Baril, C., Gascon, V., and Vadeboncoeur, D. 2019. "Discrete-Event Simulation and Design of Experiments to Study Ambulatory Patient Waiting Time in an Emergency Department," Journal of the Operational Research Society (70:12), pp. 2019-2038.
Batt, R. J., and Terwiesch, C. 2015. "Waiting Patiently: An Empirical Study of Queue Abandonment in an Emergency Department," Management Science (61:1), pp. 39-59.
Bayati, M., Kwasnick, S., Luo, D., and Plambeck, E. L. 2017. "Low-Acuity Patients Delay High-Acuity Patients in an Emergency Department," Available at SSRN.
Bernstein, S. L., Aronsky, D., Duseja, R., Epstein, S., Handel, D., Hwang, U., McCarthy, M., McConnell, K. J., Pines, J. M., Rathlev, N., Schafermeyer, R., Zwemer, F., Schull, M., Asplin, B. R., and the Society for Academic Emergency Medicine Emergency Department Crowding Task Force. 2009. "The Effect of Emergency Department Crowding on Clinically Oriented Outcomes," Academic Emergency Medicine (16:1), pp. 1-10.
Boudreaux, E. D., Ary, R. D., Mandry, C. V., and McCabe, B. 2000. "Determinants of Patient Satisfaction in a Large, Municipal ED: The Role of Demographic Variables, Visit Characteristics, and Patient Perceptions," American Journal of Emergency Medicine (18:4), pp. 394-400.
Breiman, L. 2001. "Random Forests," Machine Learning (45:1), pp. 5-32.
Buchan, J., Charlesworth, A., Gershlick, B., and Seccombe, I. 2019. "A Critical Moment: NHS Staffing Trends, Retention and Attrition," The Health Foundation.
Carter, E. J., Pouch, S. M., and Larson, E. L. 2014. "The Relationship between Emergency Department Crowding and Patient Outcomes: A Systematic Review," Journal of Nursing Scholarship (46:2), pp. 106-115.
Chan, C. W., Farias, V. F., and Escobar, G. J. 2017. "The Impact of Delays on Service Times in the Intensive Care Unit," Management Science (63:7), pp. 2049-2072.
Deo, S., and Gurvich, I. 2011. "Centralized vs. Decentralized Ambulance Diversion: A Network Perspective," Management Science (57:7), pp. 1300-1319.
Ding, R., McCarthy, M. L., Desmond, J. S., Lee, J. S., Aronsky, D., and Zeger, S. L. 2010. "Characterizing Waiting Room Time, Treatment Time, and Boarding Time in the Emergency Department Using Quantile Regression," Academic Emergency Medicine (17:8), pp. 813-823.
Dong, J., Yom-Tov, E., and Yom-Tov, G. B. 2019. "The Impact of Delay Announcements on Hospital Network Coordination and Waiting Times," Management Science (65:5), pp. 1969-1994.
Epstein, E. S. 1969. "A Scoring System for Probability Forecasts of Ranked Categories," Journal of Applied Meteorology (8:6), pp. 985-987.
Galarraga, J. E., and Pines, J. M. 2016. "Costs of ED Episodes of Care in the United States," American Journal of Emergency Medicine (34:3), pp. 357-365.
Gneiting, T. 2011. "Making and Evaluating Point Forecasts," Journal of the American Statistical Association (106:494), pp. 746-762.
Gneiting, T., and Raftery, A. E. 2007. "Strictly Proper Scoring Rules, Prediction, and Estimation," Journal of the American Statistical Association (102:477), pp. 359-378.
Goeman, J. J. 2010. "L1 Penalized Estimation in the Cox Proportional Hazards Model," Biometrical Journal (52:1), pp. 70-84.
Guessous, Y., Aron, M., Bhouri, N., and Cohen, S. 2014. "Estimating Travel Time Distribution under Different Traffic Conditions," 17th Meeting of the Euro Working Group on Transportation (EWGT2014) (3), pp. 339-348.
Guo, X., Grushka-Cockayne, Y., and De Reyck, B. 2018. "Forecasting Airport Transfer Passenger Flow Using Real-Time Data and Machine Learning," Available at SSRN.
Guyon, I., Aliferis, C., and Elisseeff, A. 2008. "Causal Feature Selection," in Computational Methods of Feature Selection, pp. 63-85.
Hastie, T., Tibshirani, R., and Friedman, J. H. 2009. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. New York, NY: Springer.
He, H. B., and Garcia, E. A. 2009. "Learning from Imbalanced Data," IEEE Transactions on Knowledge and Data Engineering (21:9), pp. 1263-1284.
Hing, E., and Bhuiya, F. 2012. "Wait Time for Treatment in Hospital Emergency Departments: 2009," NCHS Data Brief (102), pp. 1-8.
Hu, X., Barnes, S., and Golden, B. 2018. "Applying Queueing Theory to the Study of Emergency Department Operations: A Survey and a Discussion of Comparable Simulation Studies," International Transactions in Operational Research (25:1), pp. 7-49.
Jouini, O., Aksin, Z., and Dallery, Y. 2011. "Call Centers with Delay Information: Models and Insights," M&SOM-Manufacturing & Service Operations Management (13:4), pp. 534-548.
Keskinocak, P., and Savva, N. 2020. "A Review of the Healthcare-Management (Modeling) Literature Published in Manufacturing & Service Operations Management," M&SOM-Manufacturing & Service Operations Management (22:1), pp. 59-72.
Koenker, R., and Bassett, G. 1978. "Regression Quantiles," Econometrica (46:1), pp. 33-50.
Meinshausen, N. 2006. "Quantile Regression Forests," Journal of Machine Learning Research (7), pp. 983-999.
Misic, V. V., and Perakis, G. 2020. "Data Analytics in Operations Management: A Review," M&SOM-Manufacturing & Service Operations Management (22:1), pp. 158-169.
Morss, R. E., Demuth, J. L., and Lazo, J. K. 2008. "Communicating Uncertainty in Weather Forecasts: A Survey of the US Public," Weather and Forecasting (23:5), pp. 974-991.
NHS. 2019. "Handbook to the NHS Constitution for England."
Rostami-Tabar, B., and Ziel, F. 2020. "Anticipating Special Events in Emergency Department Forecasting," International Journal of Forecasting (In Press).
Singer, A. J., Thode, H. C., and Pines, J. M. 2019. "US Emergency Department Visits and Hospital Discharges among Uninsured Patients before and after Implementation of the Affordable Care Act," JAMA Network Open (2:4).
Singh, K. C. D., Scholtes, S., and Terwiesch, C. 2020. "Empirical Research in Healthcare Operations: Past Research, Present Understanding, and Future Opportunities," M&SOM-Manufacturing & Service Operations Management (22:1), pp. 73-83.
Singh, K. C. D., and Terwiesch, C. 2012. "An Econometric Analysis of Patient Flows in the Cardiac Intensive Care Unit," M&SOM-Manufacturing & Service Operations Management (14:1), pp. 50-65.
OECD. 2013. "Waiting Time Policies in the Health Sector: What Works?" OECD Publishing.
Sun, Y., Teow, K. L., Heng, B. H., Ooi, C. K., and Tay, S. Y. 2012. "Real-Time Prediction of Waiting Time in the Emergency Department, Using Quantile Regression," Annals of Emergency Medicine (60:3), pp. 299-308.
Ward, M. J., Self, W. H., and Froehle, C. M. 2015. "Effects of Common Data Errors in Electronic Health Records on Emergency Department Operational Performance Metrics: A Monte Carlo Simulation," Academic Emergency Medicine (22:9), pp. 1085-1092.
Woodworth, L., and Holmes, J. F. 2020. "Just a Minute: The Effect of Emergency Department Wait Time on the Cost of Care," Economic Inquiry (58:2), pp. 698-716.
Xie, B., and Youash, S. 2011. "The Effects of Publishing Emergency Department Wait Time on Patient Utilization Patterns in a Community with Two Emergency Department Sites: A Retrospective, Quasi-Experiment Design," International Journal of Emergency Medicine (4:1), p. 29.
Xu, K., and Chan, C. W. 2016. "Using Future Information to Reduce Waiting Times in the Emergency Department via Diversion," M&SOM-Manufacturing & Service Operations Management (18:3), pp. 314-331.
| []
|
[
"Published as a conference paper at ICLR 2023 REVERSIBLE COLUMN NETWORKS MEGVII Technology 1 Beijing Academy of Artificial Intelligence",
"Published as a conference paper at ICLR 2023 REVERSIBLE COLUMN NETWORKS MEGVII Technology 1 Beijing Academy of Artificial Intelligence"
]
| [
"Yuxuan Cai [email protected] ",
"Yizhuang Zhou [email protected] ",
"Qi Han [email protected] ",
"Jianjian Sun ",
"Xiangwen Kong ",
"Jun Li ",
"Xiangyu Zhang [email protected] "
]
| []
| []
| We propose a new neural network design paradigm Reversible Column Network (RevCol). The main body of RevCol is composed of multiple copies of subnetworks, named columns respectively, between which multi-level reversible connections are employed. Such architectural scheme attributes RevCol very different behavior from conventional networks: during forward propagation, features in RevCol are learned to be gradually disentangled when passing through each column, whose total information is maintained rather than compressed or discarded as other network does. Our experiments suggest that CNN-style RevCol models can achieve very competitive performances on multiple computer vision tasks such as image classification, object detection and semantic segmentation, especially with large parameter budget and large dataset. For example, after ImageNet-22K pre-training, RevCol-XL obtains 88.2% ImageNet-1K accuracy. Given more pre-training data, our largest model RevCol-H reaches 90.0% on ImageNet-1K, 63.8% AP box on COCO detection minival set, 61.0% mIoU on ADE20k segmentation. To our knowledge, it is the best COCO detection and ADE20k segmentation result among pure (static) CNN models. Moreover, as a general macro architecture fashion, RevCol can also be introduced into transformers or other neural networks, which is demonstrated to improve the performances in both computer vision and NLP tasks. We release code and models at https://github.com/megvii-research/RevCol * | 10.48550/arxiv.2212.11696 | [
"https://export.arxiv.org/pdf/2212.11696v3.pdf"
]
| 254,974,335 | 2212.11696 | 007323e9a19faa7be415eb2122dd331b11a54989 |
Published as a conference paper at ICLR 2023 REVERSIBLE COLUMN NETWORKS MEGVII Technology 1 Beijing Academy of Artificial Intelligence
Yuxuan Cai [email protected]
Yizhuang Zhou [email protected]
Qi Han [email protected]
Jianjian Sun
Xiangwen Kong
Jun Li
Xiangyu Zhang [email protected]
Published as a conference paper at ICLR 2023 REVERSIBLE COLUMN NETWORKS MEGVII Technology 1 Beijing Academy of Artificial Intelligence
We propose a new neural network design paradigm Reversible Column Network (RevCol). The main body of RevCol is composed of multiple copies of subnetworks, named columns respectively, between which multi-level reversible connections are employed. Such architectural scheme attributes RevCol very different behavior from conventional networks: during forward propagation, features in RevCol are learned to be gradually disentangled when passing through each column, whose total information is maintained rather than compressed or discarded as other network does. Our experiments suggest that CNN-style RevCol models can achieve very competitive performances on multiple computer vision tasks such as image classification, object detection and semantic segmentation, especially with large parameter budget and large dataset. For example, after ImageNet-22K pre-training, RevCol-XL obtains 88.2% ImageNet-1K accuracy. Given more pre-training data, our largest model RevCol-H reaches 90.0% on ImageNet-1K, 63.8% AP box on COCO detection minival set, 61.0% mIoU on ADE20k segmentation. To our knowledge, it is the best COCO detection and ADE20k segmentation result among pure (static) CNN models. Moreover, as a general macro architecture fashion, RevCol can also be introduced into transformers or other neural networks, which is demonstrated to improve the performances in both computer vision and NLP tasks. We release code and models at https://github.com/megvii-research/RevCol *
INTRODUCTION
Information Bottleneck principle (IB) (Tishby et al., 2000;Tishby & Zaslavsky, 2015) rules the deep learning world. Consider a typical supervised learning network as in Fig. 1 (a): layers close to the input contain more low-level information, while features close to the output are rich in semantic meanings. In other words, information unrelated to the target is gradually compressed during the layer-by-layer propagation. Although such learning paradigm achieves great success in many practical applications, it might not be the optimal choice in the view of feature learning -down-stream tasks may suffer from inferior performances if the learned features are over compressed, or the learned semantic information is irrelevant to the target tasks, especially if a significant domain gap exists between the source and the target tasks (Zamir et al., 2018). Researchers have devoted great efforts to make the learned features to be more universally applicable, e.g. via self-supervised pre-training (Oord et al., 2018;Devlin et al., 2018;He et al., 2022; or multi-task learning (Ruder, 2017;Caruana, 1997;Sener & Koltun, 2018).
In this paper, we mainly focus on an alternative approach: building a network to learn disentangled representations. Unlike IB learning, disentangled feature learning (Desjardins et al., 2012;Bengio et al., 2013;Hinton, 2021) does not intend to extract the most related information while discard the less related; instead, it aims to embed the task-relevant concepts or semantic words into a few decoupled dimensions respectively. Meanwhile the whole feature vector roughly maintains as much information as the input. It is quite analogous to the mechanism in biological cells (Hinton, 2021;Lillicrap et al., 2020) -each cell shares an identical copy of the whole genome but has different expression intensities. Accordingly in computer vision tasks, learning disentangled features is also reasonable: for instance, high-level semantic representations are tuned during ImageNet pre-training, meanwhile the low-level information (e.g. locations of the edges) should also be maintained in other feature dimensions in case of the demand of down-stream tasks like object detection. Fig. 1 (b) sketches our main idea: Reversible Column Networks (RevCol), which is greatly inspired by the big picture of GLOM (Hinton, 2021). Our network is composed of N subnetworks (named columns) of identical structure (however whose weights are not necessarily the same), each of which receives a copy of the input and generates a prediction. Hence multi-level embeddings, i.e. from low-level to highly semantic representations, are stored in each column. Moreover, reversible transformations are introduced to propagate the multi-level features from i-th column to (i + 1)-th column without information loss. During the propagation, since the complexity and nonlinearity increases, the quality of all feature levels is expected to gradually improve. Hence the last column (Col N in Fig. 1 (b)) predicts the final disentangled representations of the input.
In RevCol, one of our key contributions is the design of the reversible transformations between adjacent columns. The concept is borrowed from the family of Reversible Networks Jacobsen et al., 2018;Mangalam et al., 2022); however, conventional reversible structures such as RevNets (Gomez et al., 2017) ( Fig. 2 (a)) usually have two drawbacks: first, feature maps within a reversible block are restricted to have the same shape * ; second, the last two feature maps in RevNets have to contain both low-level and high-level information due to the reversible nature, which may be difficult to optimize as in conflict with IB principle. In this paper, we overcome the drawbacks by introducing a novel reversible multi-level fusion module. The details are discussed in Sec. 2.
We build a series of CNN-based RevCol models under different complexity budgets and evaluate them in mainstream computer vision tasks, such as ImageNet classification, COCO object detection and instance segmentation, as well as ADE20K semantic segmentation. Our models achieve comparable or better results than sophisticated CNNs or vision transformers like ConvNeXt and Swin . For example, after ImageNet-22K pre-training, our RevCol-XL model obtains 88.2% accuracy on ImageNet-1K without using transformers or large convolutional kernels (Ding et al., 2022b;. More importantly, we find RevCol can scale up well to large models and large datasets. Given a larger private pre-training dataset, our biggest model RevCol-H obtains 90.0% accuracy on ImageNet-1K classification, 63.8% AP box on COCO detection minival set, and 61.0% mIoU on ADE20K segmentation, respectively. To our knowledge, it is the best reversible model on those tasks, as well as the best pure CNN model on COCO and ADE20K which only involves static kernels without dynamic convolutions (Dai et al., 2017;Ma et al., 2020). In the appendix, we further demonstrate RevCol can work with transformers (Dosovitskiy et al., 2020;Devlin et al., 2018) and get improved results on both computer vision and NLP tasks. Finally, similar to RevNets (Gomez et al., 2017), RevCol also shares the bonus of memory saving from the reversible nature, which is particularly important for large model training.
Relation to previous works. Although our initial idea of feature disentangling is derived from GLOM (Hinton, 2021), RevCol involves many simplifications and modifications. For example, GLOM suggests a contrastive auxiliary loss to avoid feature collapse; contrastive training needs extra pairs of positive and negative samples, which is complicated and unstable. In RevCol, the reversible transformations between columns provide lossless information propagation by nature. As for other multi-scale grid-like architectures such as HRNets (Wang et al., 2020), DEQ models (Bai et al., 2020), and FPNs (Lin et al., 2017), their design purpose is to fuse multi-scale features rather than to learn disentangled representations; in general they therefore still follow the paradigm of Fig. 1 (a): neither multiple entrances/exits nor reversible structures are employed. On top of such grid-like network topologies, NAS-based works (Ding et al., 2021; Wu et al., 2021; Ghiasi et al., 2019) search for an optimized network topology for a specific dataset, whereas the RevCol architecture is not limited to specific tasks or datasets: thanks to its reversible nature, it maintains lossless information propagation and benefits not only pre-training but also downstream tasks. Very recently, RevBiFPN (Chiley et al., 2022) proposed a reversible variant of FPN, further employed in an HRNet-like architecture. Although RevCol shares the idea of multi-scale reversible transformations with RevBiFPN, our work was done independently, is derived from a different motivation of feature disentangling, and has a much simpler architecture (e.g., free of a reversible upsampling tower) and higher performance. We compare some of those models in Sec. 3.

* More precisely, feature maps at odd indices must be equal sized, and likewise those at even indices.
2 METHOD
In this section, we introduce the design details of our Reversible Column Networks (RevCol). Fig. 1 (b) illustrates the top-level architecture. Notice that for each column in RevCol, for simplicity we directly reuse existing structures such as ConvNeXt; hence in the following subsections we mainly focus on how to build the reversible connections between columns. In addition, we introduce a plug-and-play intermediate supervision on top of each column, which further improves training convergence and feature quality.
2.1 MULTI-LEVEL REVERSIBLE UNIT
In our network, reversible transformations play a key role in feature disentangling without information loss; the insight comes from reversible neural networks (Dinh et al., 2014; Jacobsen et al., 2018; Mangalam et al., 2022). Among them, we first review a representative work, RevNet (Gomez et al., 2017). As shown in Fig. 2 (a), RevNet first partitions the input x into two groups, x_0 and x_1. For each later block, for example block t, it takes the two preceding blocks' outputs x_{t-1} and x_{t-2} as input and generates the output x_t. The mapping of block t is reversible, i.e., x_{t-2} can be reconstructed from the two subsequent states x_{t-1} and x_t. Formally, the forward and inverse computations follow the equations†:
$$\text{Forward:}\quad x_t = F_t(x_{t-1}) + \gamma x_{t-2}, \qquad \text{Inverse:}\quad x_{t-2} = \gamma^{-1}\left[x_t - F_t(x_{t-1})\right] \tag{1}$$
where F_t denotes an arbitrary non-linear operation analogous to the residual functions in standard ResNets, and γ is a simple reversible operation (e.g., channel-wise scaling) whose inverse is denoted by γ^{-1}. As mentioned in the introduction, this formulation imposes too strong a constraint on the feature dimensions, i.e., x_t, x_{t+2}, x_{t+4}, ... have to be equal sized, which is inflexible for architecture design. That is why RevNets (Gomez et al., 2017) introduce some non-reversible down-sampling blocks between reversible units, so the whole network is not fully reversible. More importantly, we find there is no clear way to directly employ Eq. 1 to bridge the columns in Fig. 1 (b).
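As a concrete illustration, the following is a minimal PyTorch sketch of the reversible unit of Eq. 1; the residual function F_t and the channel-wise scaling below are illustrative stand-ins, not the RevNet authors' implementation.

```python
import torch
import torch.nn as nn

class ReversibleUnit(nn.Module):
    """One RevNet-style step: x_t = F_t(x_{t-1}) + gamma * x_{t-2} (Eq. 1)."""
    def __init__(self, channels: int):
        super().__init__()
        # F_t: an arbitrary non-linear residual function (a tiny ConvNet here).
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.GELU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # gamma: a simple invertible channel-wise scaling.
        self.gamma = nn.Parameter(torch.ones(1, channels, 1, 1))

    def forward(self, x_prev, x_prev2):
        return self.f(x_prev) + self.gamma * x_prev2

    def inverse(self, x_t, x_prev):
        # Reconstruct x_{t-2} from the two newer states.
        return (x_t - self.f(x_prev)) / self.gamma

# Sanity check: the older state is recovered exactly from the newer two.
unit = ReversibleUnit(8).eval()
x0, x1 = torch.randn(2, 8, 16, 16), torch.randn(2, 8, 16, 16)
with torch.no_grad():
    x2 = unit(x1, x0)
    assert torch.allclose(unit.inverse(x2, x1), x0, atol=1e-5)
```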
To address the issue, we generalize Eq. 1 into the following form:
$$\text{Forward:}\quad x_t = F_t(x_{t-1}, x_{t-2}, \ldots, x_{t-m+1}) + \gamma x_{t-m}, \qquad \text{Inverse:}\quad x_{t-m} = \gamma^{-1}\left[x_t - F_t(x_{t-1}, x_{t-2}, \ldots, x_{t-m+1})\right] \tag{2}$$
where m is the order of the recursion (m ≥ 2). Clearly, the extension is still reversible. We then partition every m feature maps into a group: (x_1, x_2, ..., x_m), (x_{m+1}, x_{m+2}, ..., x_{2m}), .... Given the features within any one group, we can easily compute the features in the other groups recursively according to Eq. 2. Compared with the original form, Eq. 2 has the following two nice properties:
• The constraint on the feature map sizes is greatly relaxed if m is relatively large. Notice that Eq. 2 does not require feature maps within each group to be equal sized; such a constraint only exists between groups. Therefore, we can use tensors of different shapes to represent features of different semantic levels or different resolutions.
• Eq. 2 can easily cooperate with existing network architectures, even if the latter are not reversible. For example, we can assign m feature maps in a standard ResNet to represent the feature maps within a group (x_t, x_{t+1}, ..., x_{t+m-1}), which is still compatible with Eq. 2 since the ResNet can be viewed as a part of (F_t, F_{t+1}, ..., F_{t+m-1}) respectively. Thus the whole network is still reversible.
Therefore, we can reorganize Eq. 2 in a multi-column fashion, as shown in Fig. 2 (b). Each column is composed of the m feature maps within a group, together with their mother network. We name it the multi-level reversible unit, which is the basic component of our RevCol in Fig. 1 (b).
2.2 REVERSIBLE COLUMN ARCHITECTURE
2.2.1 MACRO DESIGN
As discussed in the introduction (see Fig. 1 (b)), our network RevCol is composed of multiple subnetworks with reversible connections to perform feature disentangling. Fig. 2 (c) elaborates the architecture design. Following the common practice of recent models (Dosovitskiy et al., 2020), the input image is first split into non-overlapping patches by a patch embedding module. The patches are then fed into each subnetwork (column). Columns can be implemented with any conventional single-column architecture, e.g., ViT (Dosovitskiy et al., 2020) or ConvNeXt (Liu et al., 2022b). We extract four levels of feature maps from each column to propagate information between columns; for example, if the columns are implemented with widely used hierarchical networks (He et al., 2016), we can simply extract multi-resolution features from the output of each stage. For classification tasks, we only use the feature map of the last level (Level 4) in the last column, for its rich semantic information. For other down-stream tasks like object detection and semantic segmentation, we use the feature maps of all four levels in the last column, as they contain both low-level and semantic information.
To implement the reversible connections between columns, we adopt the multi-level reversible unit proposed in Eq. 2, but in a simplified fashion: rather than taking (m − 1) inputs for each non-linear operation F_t(·), we use only one low-level feature x_{t-1} from the current column and one high-level feature x_{t-m+1} from the previous column as input. The simplification does not break the reversible property. We find that more inputs bring a minor accuracy gain but consume much more GPU resources. Thus Eq. 2 is simplified as:
$$\text{Forward:}\quad x_t = F_t(x_{t-1}, x_{t-m+1}) + \gamma x_{t-m}, \qquad \text{Inverse:}\quad x_{t-m} = \gamma^{-1}\left[x_t - F_t(x_{t-1}, x_{t-m+1})\right] \tag{3}$$
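A minimal sketch of this simplified unit follows, under the assumption that all levels share one shape for readability (the real model uses different resolutions per level, handled by the fusion module of Sec. 2.2.2); the fusion used for F_t below is a hypothetical placeholder.

```python
import torch
import torch.nn as nn

class LevelUpdate(nn.Module):
    """x_t = F_t(x_{t-1}, x_{t-m+1}) + gamma * x_{t-m} (Eq. 3)."""
    def __init__(self, channels: int):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)  # stands in for F_t
        self.gamma = nn.Parameter(torch.ones(1, channels, 1, 1))

    def f(self, low, high):
        # low: x_{t-1}, one level below in the current column
        # high: x_{t-m+1}, one level above in the previous column
        return self.fuse(torch.cat([low, high], dim=1))

    def forward(self, low, high, prev_same):
        # prev_same: x_{t-m}, the same level in the previous column
        return self.f(low, high) + self.gamma * prev_same

    def inverse(self, x_t, low, high):
        return (x_t - self.f(low, high)) / self.gamma

upd = LevelUpdate(8).eval()
low, high, prev_same = (torch.randn(2, 8, 16, 16) for _ in range(3))
with torch.no_grad():
    x_t = upd(low, high, prev_same)
    # The previous column's feature is reconstructable, so it need not be stored.
    assert torch.allclose(upd.inverse(x_t, low, high), prev_same, atol=1e-5)
```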
Compared with conventional architectures, the macro design of our RevCol has the following three properties or advantages:
Feature disentangling. In RevCol, the lowest level of each column maintains low-level features as it is close to the input, while the highest level in the last column is highly semantic because it is directly connected to the supervision. Therefore, information at different levels is gradually disentangled during the (lossless) propagation between columns: some feature maps become more and more semantic, while others remain low-level. Detailed analyses are presented in Appendix E. This property brings many potential advantages, for instance, more flexibility for downstream tasks that rely on both high-level and low-level features. We argue that the reversible connection plays a key role in this disentangling mechanism: some previous works like HRNet (Wang et al., 2020) involve multi-level feature fusion but without reversible connections, which may suffer from information loss and lead to inferior performance in our experiments (see Section 3.5.2).
Memory saving. The training of conventional networks requires a large memory footprint to store the activations from forward propagation, as they are needed for gradient computation. In our RevCol, since the connections between columns are explicitly reversible, during back-propagation we can reconstruct the required activations on the fly from the last column to the first, which means we only need to keep the activations of one column in memory during training. In Section 3.5.4, we demonstrate that RevCol costs roughly O(1) additional memory as the number of columns increases.
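The following self-contained sketch illustrates this idea (not the authors' actual implementation): the forward pass keeps only the two newest states, and earlier states are reconstructed on the fly by running the inverse.

```python
import torch

torch.manual_seed(0)
w = torch.randn(8, 8) * 0.1
f = lambda x: torch.tanh(x @ w)   # stands in for an arbitrary F_t; gamma = 1

def step(a, b):                   # (x_{t-2}, x_{t-1}) -> (x_{t-1}, x_t)
    return b, f(b) + a

def back(a, b):                   # (x_{t-1}, x_t) -> (x_{t-2}, x_{t-1})
    return b - f(a), a

x0, x1 = torch.randn(4, 8), torch.randn(4, 8)
a, b = x0, x1
for _ in range(16):
    a, b = step(a, b)             # only the two newest states are ever stored
for _ in range(16):
    a, b = back(a, b)             # recompute earlier activations as needed
assert torch.allclose(a, x0, atol=1e-4) and torch.allclose(b, x1, atol=1e-4)
```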
New scaling factor for big models. In the RevCol architecture, the column number serves as a new scaling dimension in addition to depth (number of blocks) and width (channels of each block) in vanilla single-column CNNs or ViTs. Within a certain range, increasing the number of columns yields gains similar to increasing both width and depth.
2.2.2 MICRO DESIGN
We employ ConvNeXt blocks to implement each column in our network by default; other architectures, such as transformers, are also applicable (see Appendix B for details). We make a few modifications to ConvNeXt to make it compatible with our macro architecture:
Fusion module. As shown in Fig. 5, in each level of the original ConvNeXt, the inputs are first down-sampled by a patch-merging block, and the outputs are then passed through a stack of residual blocks. In RevCol, we introduce a fusion module to fuse the feature maps from the current and the previous column (cf. the green and blue connections in Fig. 2 (c)). We modify the original patch-merging block of ConvNeXt by putting the LayerNorm after the patch-merging convolution rather than before; the channel number is doubled in the patch-merging convolution. We also introduce an up-sampling block composed of a linear channel-mapping layer, a LayerNorm, and a feature-map interpolation layer; the channel number is halved in the linear channel-mapping layer. The outputs of the two blocks are summed up and then passed to the residual blocks that follow.
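A minimal sketch of such a fusion module is given below, assuming ConvNeXt-style channel-last LayerNorm; the exact layer shapes are illustrative, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Fusion(nn.Module):
    """Fuses the level below (current column) with the level above (previous column)."""
    def __init__(self, c_out: int):
        super().__init__()
        # Patch-merging branch: 2x2 stride-2 conv doubles channels; LayerNorm after.
        self.down = nn.Conv2d(c_out // 2, c_out, kernel_size=2, stride=2)
        self.down_norm = nn.LayerNorm(c_out)
        # Up-sampling branch: linear mapping halves channels, LayerNorm, interpolation.
        self.up = nn.Linear(2 * c_out, c_out)
        self.up_norm = nn.LayerNorm(c_out)

    def forward(self, lower, higher):
        # lower: (B, c_out/2, 2H, 2W); higher: (B, 2*c_out, H/2, W/2)
        d = self.down(lower)                                    # -> (B, c_out, H, W)
        d = self.down_norm(d.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
        h = self.up_norm(self.up(higher.permute(0, 2, 3, 1)))   # channel mapping + norm
        h = h.permute(0, 3, 1, 2)
        h = F.interpolate(h, scale_factor=2.0, mode="nearest")  # -> (B, c_out, H, W)
        return d + h  # summed, then fed to the residual blocks of this level

y = Fusion(64)(torch.randn(1, 32, 28, 28), torch.randn(1, 128, 7, 7))
assert y.shape == (1, 64, 14, 14)
```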
Kernel size. In RevCol we reduce the 7 × 7 convolutions of the original ConvNeXt to 3 × 3 by default, mainly to speed up training. Increasing the kernel size brings further accuracy gains, but not very large ones, partly because our multi-column design already enlarges the effective receptive field. Please refer to Section 3.5.5 for more details.
Reversible operation γ. We adopt a learnable, reversible channel-wise scaling as the reversible operation γ to keep the network stable. Each time features are summed up in the forward pass of Eq. 3, their magnitude grows, which makes the training process unstable; a learnable scaling can suppress the feature magnitude. During training, we truncate the absolute value of γ so that it never falls below 1e-3, because the numerical error in the inverse computation can become large when γ is too small.
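A minimal sketch of this scaling, with the truncation expressed as a sign-preserving clamp, is shown below; it is illustrative only.

```python
import torch
import torch.nn as nn

class ScaleGamma(nn.Module):
    """Learnable channel-wise scaling with |gamma| truncated at eps = 1e-3."""
    def __init__(self, channels: int, eps: float = 1e-3):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(1, channels, 1, 1))  # initialized at 1
        self.eps = eps

    def _g(self):
        g = self.gamma
        # Keep |gamma| >= eps so the division in the inverse stays stable.
        # (gamma starts at 1, so the sign(0) corner case is not hit in practice.)
        return torch.sign(g) * g.abs().clamp(min=self.eps)

    def forward(self, x):   # y = gamma * x
        return self._g() * x

    def inverse(self, y):   # x = gamma^{-1} * y
        return y / self._g()
```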
2.3 INTERMEDIATE SUPERVISION
Although the multi-level reversible unit maintains information across column iterations, the down-sample blocks can still discard information inside a column. Moreover, features at the end of the front columns are too close to the final output, since the reversible connections perform only scaling and summation. Such information loss leads to inferior performance. A similar problem also occurs with deeply-supervised methods (Lee et al., 2015; Szegedy et al., 2015).
To mitigate this information collapse, we propose an intermediate supervision method that adds extra supervision to the front columns. For features in the front columns, we want to keep as much mutual information between the features and the input image as possible, so that the network discards less information within the columns. Considering that RevCol gradually disentangles semantic from low-level information, extracting and leveraging the task-relevant information can further boost performance. Therefore, we also maximize a lower bound of the mutual information between the features and the prediction.
Inspired by prior work, we add two auxiliary heads to the last-level features (Level 4): one is a decoder (He et al., 2022) that reconstructs the input image, the other is a linear classifier. The linear classifier is trained in the regular classification fashion with the cross-entropy (CE) loss. The parameters of the decoder are optimized by minimizing a binary cross-entropy (BCE) reconstruction loss. Compared with the commonly used L1 and L2 losses, interpreting the reconstructed logits and the input image as bit probabilities (Bernoullis) yields smoother values, which makes this loss more compatible with the CE loss.
For the intermediate supervision at one column, the compound loss is a weighted sum of the above two losses. Note that supervision heads need not be added to every column. For all variants of RevCol, we empirically set the number of compound losses to 4 (e.g., for an 8-column RevCol, the supervision heads are added to columns 2, 4, 6, and 8).
The total loss L is the sum of all compound losses:
$$L = \sum_{i=1}^{n} \left(\alpha_i L_{\mathrm{BCE}} + \beta_i L_{\mathrm{CE}}\right) \tag{4}$$
where n denotes the total number of compound losses, and L_BCE and L_CE denote the BCE and CE losses, respectively. The weights α_i and β_i change linearly with the compound-loss index: for compound losses added at earlier columns we use a larger α_i and a smaller β_i to preserve I(h, x); at later columns, α_i decreases and β_i increases, which helps boost performance.
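A minimal sketch of this compound loss is shown below, assuming one reconstruction head and one linear classifier per supervised column; the weight schedule values are those reported in Appendix D.1, while the tensor shapes are illustrative placeholders.

```python
import torch
import torch.nn.functional as F

def compound_loss(recons, logits, image, target, alphas, betas):
    """Eq. 4: L = sum_i (alpha_i * L_BCE + beta_i * L_CE) over supervised columns."""
    total = image.new_zeros(())
    for rec, logit, a, b in zip(recons, logits, alphas, betas):
        # BCE treats the reconstruction and the (0..1-normalized) input image
        # as bit probabilities (Bernoullis).
        l_bce = F.binary_cross_entropy_with_logits(rec, image)
        l_ce = F.cross_entropy(logit, target)
        total = total + a * l_bce + b * l_ce
    return total

# Earlier columns weight reconstruction more (keep I(h, x)); later columns
# weight classification more. Values from Appendix D.1.
alphas, betas = [3, 2, 1, 0], [0.18, 0.35, 0.53, 1]
image = torch.rand(2, 3, 32, 32)
target = torch.randint(0, 10, (2,))
recons = [torch.randn(2, 3, 32, 32) for _ in range(4)]
logits = [torch.randn(2, 10) for _ in range(4)]
loss = compound_loss(recons, logits, image, target, alphas, betas)
```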
3 EXPERIMENTS
We construct different RevCol variants, RevCol-T/S/B/L, with complexities similar to those of Swin transformers and ConvNeXts. We also build the larger RevCol-XL and RevCol-H to test the scaling-up capability. The variants adopt different channel dimensions C, numbers of blocks per column B, and numbers of columns COL; the configurations of the largest variants are:

• RevCol-L: C = (128, 256, 512, 1024), B = (1, 2, 6, 2), COL = 8
• RevCol-XL: C = (224, 448, 896, 1792), B = (1, 2, 6, 2), COL = 8
• RevCol-H: C = (360, 720, 1440, 2880), B = (1, 2, 6, 2), COL = 8

We conduct image classification on the ImageNet dataset (Deng et al., 2009; Ridnik et al., 2021), and test our models on downstream object detection, instance segmentation, and semantic segmentation tasks.
3.5 MORE ANALYSIS EXPERIMENTS
3.5.1 PERFORMANCE GAIN OF THE REVERSIBLE COLUMN ARCHITECTURE
In this section, we evaluate the performance gain of using reversible columns. In the first experiment, we fix a single column's structure and FLOPs and simply add more columns to scale up, then test the performance; we also plot vanilla single-column models of similar model size. As depicted in Fig. 3, compared to single-column models, the multi-column reversible architecture always achieves better performance under the same FLOPs constraint. Moreover, within a certain range, scaling up RevCol by increasing the number of columns brings gains similar to scaling up both the number of blocks (depth) and the number of channels (width) in single-column models. In the second experiment, we limit the model size to about 4.5G FLOPs and test variants with different column numbers; in other words, we gradually add more columns while scaling down the size of each column. The results are shown in Tab. 5: adopting a column number in the range of 4 to 12 maintains the model's performance, while models with even more columns suffer from performance degradation. We believe the reason is that the width and depth of a single column become too low to retain representation ability.
3.5.2 REVERSIBLE NETWORKS VS. NON-REVERSIBLE NETWORKS
In this section, we ablate different design patterns of reversible connections. First, we build a non-reversible multi-column network using the fusion module of HRNet. Second, we build a single-column reversible ConvNeXt using the RevNet design shown in Fig. 2 (a). We compare the two designs with our RevCol; the evaluation results are shown in Tab. 6. The non-reversible multi-column network suffers from information loss during propagation, which results in lower accuracy. The reversible single-column network maintains information during propagation but lacks the benefit of multi-level fusion. This experiment further indicates the effectiveness of combining the reversible design with multi-column networks. Fig. 4 plots the GPU memory consumption as the model size scales: we fix the computational complexity of a single column to 1G FLOPs and increase the number of columns, measuring the memory consumption of the training process, including forward and backward propagation. Our experiments are conducted on an Nvidia Tesla V100 GPU with batch size 64, FP16 precision, and a PyTorch implementation. As the column number increases, RevCol keeps an O(1) GPU memory consumption, while the non-reversible architecture's memory consumption increases linearly with the column number. Note that RevCol's GPU memory consumption is not strictly constant as the column number increases, since reversible networks need to back up the operation weights for gradient computation and reconstruct feature maps during backward propagation.
3.5.5 ABLATION OF KERNEL SIZE IN CONVOLUTIONS
In the original ConvNeXt, large-kernel convolutions achieve better performance. We conduct corresponding experiments on RevCol-T. As shown in Tab. 8, for 4-column models, using 5 × 5 convolutions increases the ImageNet-1k top-1 accuracy by 0.3% and the COCO AP box by 0.7 for the RevCol-T model. Further increasing the kernel size yields additional accuracy in down-stream tasks, but not much; we believe the RevCol design already enlarges the effective receptive field, which limits the gains from large-kernel convolutions. On the other hand, 3 × 3 convolutions enjoy efficiency and stability in (pre-)training. Therefore, we adopt kernel size 3 in all RevCol models.
4 RELATED WORKS
4.1 DISENTANGLED REPRESENTATION LEARNING AND PART-WHOLE HIERARCHY
A disentangled representation is generally described as one that separates the factors of variation, explicitly representing the important attributes of the data (Desjardins et al., 2012; Bengio et al., 2013). GLOM (Hinton, 2021) proposes the idea of representing a part-whole hierarchy with weight-sharing columns, providing interpretable part-whole hierarchies for deep neural networks (Garau et al., 2022). In RevCol, we adopt the design of using columns, but do not model the process of forming islands; on the contrary, our column iteration process maintains both low-level and high-level information and gradually disentangles them. Rather than relying on self-supervised methods, RevCol can be trained end-to-end with supervision.
4.2 REVERSIBLE NETWORKS
5 CONCLUSION
In this paper, we propose RevCol, a reversible-column-based foundation model design paradigm. During the lossless propagation through columns, the features in RevCol are learned to be gradually disentangled, and the total information is maintained rather than compressed. Our experiments suggest that RevCol achieves competitive performance on multiple computer vision tasks. We hope RevCol can contribute to better performance on various tasks in both the vision and language domains.

In this section, we show that the micro design of RevCol generalizes to the vanilla ViT (Dosovitskiy et al., 2020), namely RevCol-ViT, with promising experimental results.
RevCol-ViT maintains the feature resolution in the reversible columns. The patch-merging blocks and up-sample blocks in the fusion modules are thus replaced with a simple linear projection followed by a post-LayerNorm. We use the vanilla ViT building block instead of the ConvNeXt building-block variant. Post-LayerNorms and normalized dot-product attention are used in the ViT blocks to stabilize training convergence. With the property of isotropy, we evenly arrange the building blocks in each column. The configuration details of RevCol-ViT are:

• RevCol-ViT-S: C = (224, 224, 224, 224), B = (2, 2, 2, 2), HEAD = 4, COL = 4
• RevCol-ViT-B: C = (384, 384, 384, 384), B = (3, 3, 3, 3), HEAD = 6, COL = 4

We use the same training settings as for the anisotropic RevCol described in Sec. 3.1, except that intermediate supervision is discarded for simplicity and the stochastic depth rate is set to 0.2 for RevCol-B. At initialization, we scale down the weights of the last linear projection layer in each FFN according to the network depth, as in BEiT (Bao et al., 2021). In Tab. 9, we compare RevCol-ViT with the vanilla ViT and other concurrent isotropic designs. Our RevCol-ViT surpasses the vanilla vision transformer (77.9% for ViT and 81.7% for DeiT) and the convolutional ConvNeXt (82.0%) with comparable model parameters and computational overhead on ImageNet-1k classification w.r.t. top-1 accuracy.
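A minimal sketch of one plausible reading of this isotropic fusion is given below, in which each branch becomes a linear projection with a post-LayerNorm and the two are summed; the names and exact wiring are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class IsotropicFusion(nn.Module):
    """Token-resolution-preserving fusion for RevCol-ViT (a sketch)."""
    def __init__(self, dim: int):
        super().__init__()
        self.low = nn.Linear(dim, dim)      # replaces the patch-merging block
        self.high = nn.Linear(dim, dim)     # replaces the up-sample block
        self.norm_low = nn.LayerNorm(dim)   # post-LayerNorms for stability
        self.norm_high = nn.LayerNorm(dim)

    def forward(self, lower, higher):       # both: (B, N_tokens, dim)
        return self.norm_low(self.low(lower)) + self.norm_high(self.high(higher))

out = IsotropicFusion(224)(torch.randn(2, 196, 224), torch.randn(2, 196, 224))
```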
B.2 LANGUAGE MODELS
Considering the great success of applying transformers to computer vision, i.e., ViT (Dosovitskiy et al., 2020), we also explore generalizing RevCol to natural language processing (NLP). Based on the design in Appendix B.1, we can apply the isotropic RevCol to language models with minor modifications. Specifically, we replace the stem module in RevCol with the word embedding and positional encoding of the transformer. The RevCol can then be plugged into the original transformer as an encoder; the output of the last column in RevCol is used as the memory keys and values for the attention layers in the decoder, exactly as in the original transformer.
We select the translation task to evaluate the potential of RevCol in NLP. We run experiments on the WMT'16 English-German (En-De) dataset with 4.5M sentence pairs and the larger WMT'14 English-French (En-Fr) dataset with 36M sentence pairs. Each sentence is encoded by a joint source-target byte-pair encoding, following Sennrich et al. (2016). The details of the model architecture and the BLEU scores are shown in Tab. 10.
All dataset preparation and training configurations follow Ott et al. (2018).
B.3 ROBUSTNESS OF THE NUMBER OF COLUMNS
In the ablation analysis of the paper, we show that when fixing the total FLOPs and adding more columns to RevCol, the performance first increases and then saturates. When the number of columns is extremely large, such as 20, the performance drops because the representation ability of a single column is limited. For typical column numbers, such as 4 to 12, the performances are similar, which verifies the robustness of the column-number setting. To further analyze this robustness, in this section we build several RevCol-ViT-B variants (see Appendix B for details). Each variant has the same total number of residual blocks with the same channel dimension, but a different number of columns; in other words, these variants share the channel dimension but differ in the depth of each column and the number of columns. We use 32 residual blocks in total and keep the FLOPs at about 18G. Fig. 6 shows the ImageNet-1K performance of the different variants. The numbers of columns are 1, 2, 4, and 8, so the depths of each column are 32, 16, 8, and 4, respectively. The performance of the single-column variant is lower (similar to DeiT-B (Touvron et al., 2020)) because a single-column ViT cannot maintain information the way multiple reversible columns can. The performance also decreases when the number of columns becomes too large, because the depth of each column is then insufficient. This indicates that, given a target FLOPs budget, the setting of the column number is robust unless the depth of each column or the channel dimension becomes too small.

The dataset consists of around 168 million (M) images, 50M of which are labeled and the remaining 118M unlabeled. The majority of the labeled images come from public datasets, e.g., ImageNet, Places365 (Zhou et al., 2017a), and Bamboo; the others are web-crawled images annotated by in-house employees. The unlabeled images come from weakly annotated image-text datasets like YFCC-100M (Thomee et al., 2016); we do not use the text annotations.
In order to utilize images from different label domains as well as the massive number of unlabeled images, we employ a multi-target label system similar to Ding et al. (2022a) and Ghiasi et al. (2021). We adopt a semi-supervised learning strategy with ViTs, generating pseudo labels of continuously increasing quality. We only store soft predictions with confidence higher than 1% to save storage. The final version of the pseudo labels is generated by a multi-head ViT-Huge teacher with an 89.0% ImageNet-1k accuracy.
C.2 IMAGE DEDUPLICATION
Since the dataset contains a large number of unverified web-crawled images, validation or test images may have leaked into our training data. Works like Mahajan et al. (2018) and Yalniz et al. (2019) regard image deduplication as an important procedure for fair experiments.
We first iterate over the entire dataset to filter out suspicious duplicates, together with the corresponding test images, based on their pseudo-label distance. This yields more than 10,000 images with high suspicion. We inspect these image pairs and finally find about 1,200 exact duplicates and near-duplicates. Fig. 7 shows some examples of the near-duplicates, which are difficult to detect. Nevertheless, training a model without removing these duplicates gives less than a 0.1% accuracy gain on ImageNet-1k in our experiments; we attribute this to the absence of true labels for these duplicates.
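As an illustration of filtering by pseudo-label distance, the sketch below flags training images whose stored soft prediction is nearly identical to that of a test image; the cosine metric and threshold are hypothetical choices, not the values used for the actual dataset.

```python
import torch
import torch.nn.functional as F

def suspicious_pairs(train_preds, test_preds, threshold=0.95):
    """Return (train_idx, test_idx) pairs with near-identical soft predictions."""
    train = F.normalize(train_preds, dim=1)
    test = F.normalize(test_preds, dim=1)
    sim = train @ test.T                    # cosine similarity matrix
    return (sim > threshold).nonzero(as_tuple=False).tolist()

train_preds = torch.rand(1000, 1000)   # stored soft pseudo labels
test_preds = torch.rand(50, 1000)      # predictions for validation/test images
candidates = suspicious_pairs(train_preds, test_preds)  # pairs to inspect manually
```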
D MORE TRAINING DETAILS
This section gives more training details on ImageNet classification, COCO detection, and ADE20K segmentation.
D.1 INTERMEDIATE SUPERVISION SETTINGS
We add intermediate supervision in ImageNet-1k training and in ImageNet-22k and extra-data pre-training. In ImageNet-1k training we use a 3-block decoder with gradually up-sampled feature maps; the block setting remains the same as in Sec. 2.2. In ImageNet-22k and extra-data pre-training we use a single-layer decoder. For all variants of RevCol, we empirically set the number of compound losses n to 3 (e.g., for an 8-column RevCol, the intermediate supervision is added to columns 2, 4, and 6, and the original classification CE loss is added to column 8). α_i is set to 3, 2, 1, 0 and β_i to 0.18, 0.35, 0.53, 1.
D.2 HYPERPARAMETERS USED FOR TRAINING AND PRE-TRAINING
This section introduces the training details for the main experiments, i.e., the supervised training on ImageNet and extra data; the settings are shown in Tab. 11. Unless otherwise described, all ablation-study experiments are supervised training runs on ImageNet-1K and also follow the settings in this section.
D.3 HYPERPARAMETERS USED FOR FINE-TUNING
This section gives the hyperparameters used for fine-tuning on ImageNet-1K and on the downstream COCO object detection and instance segmentation and ADE20K semantic segmentation tasks, as shown in Tab. 12, Tab. 13, and Tab. 14.

D.3.1 CONVOLUTION KERNEL PADDING TRICK IN DOWN-STREAM TASKS

According to the results in Section 3.5.5, larger-kernel convolutions perform better, especially on down-stream tasks. To save pre-training cost while still achieving better performance, we pad the small 3 × 3 convolution kernels of the pre-trained weights to a larger size and then fine-tune on the detection and segmentation tasks. Inspired by the Net2Net method (Chen et al., 2015), we pad the pre-trained 3 × 3 kernels with Gaussian-initialized values; to protect the pre-trained kernel from being disturbed by the newly padded values, we initialize them with zero mean and an extremely small standard deviation (1e-7). We use this trick only with our largest model, RevCol-H: we pad the 3 × 3 kernels of the pre-trained model to 7 × 7 for the COCO detection task and to 13 × 13 for the ADE20K semantic segmentation task, then fine-tune on the corresponding dataset to obtain the final results. In general, the kernel-padding trick brings a 0.5∼0.8 AP box improvement and a 0.7∼1.0 mIoU improvement for the RevCol-H model.
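A minimal sketch of this padding is given below, assuming kernels stored as (out_channels, in_channels, k, k); the standard deviation 1e-7 follows the text, while the helper name is illustrative.

```python
import torch

def pad_kernel(weight: torch.Tensor, new_size: int, std: float = 1e-7) -> torch.Tensor:
    """Pad a pre-trained (O, I, k, k) kernel to (O, I, new_size, new_size)."""
    o, i, k, _ = weight.shape
    # New border values: zero mean, extremely small std, so they barely
    # disturb the pre-trained weights at the start of fine-tuning.
    padded = torch.randn(o, i, new_size, new_size) * std
    off = (new_size - k) // 2
    padded[:, :, off:off + k, off:off + k] = weight   # keep the center intact
    return padded

w3 = torch.randn(64, 1, 3, 3)   # pre-trained 3x3 depth-wise kernel
w7 = pad_kernel(w3, 7)          # fine-tune detection with the 7x7 kernel
w13 = pad_kernel(w3, 13)        # fine-tune segmentation with the 13x13 kernel
```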
E VISUALIZATIONS OF FEATURE DISENTANGLEMENT
In this section, we show that RevCol disentangles features across the stacked columns, unlike conventional sequential networks. We use a RevCol-S pre-trained on ImageNet-1K for the analysis. First, we visualize the class activation maps (CAMs) for the output of the last layer of each level, adopting the LayerCAM (Jiang et al., 2021) technique to generate CAMs for the predicted classes. Fig. 8 shows the activation heatmaps: as the levels and columns go deeper, the features focus on regions with more semantics. The outputs of RevCol-S are the different levels of the last column; these features carry high-level semantics and focus on different parts of the image as well as the object as a whole, achieving a disentanglement of task-relevant and task-irrelevant features. To quantify the disentanglement, we use the Centered Kernel Alignment (CKA) similarity metric (Kornblith et al., 2019) to measure the similarity between representations in RevCol-S. We calculate the CKA similarities between the intermediate features at different levels and columns and the images or labels of each category in the ImageNet val set, and then plot the similarities of the category with the highest label similarity in Fig. 9. As shown in the figure, the similarities between images and intermediate features are not clearly distinguished across levels in Columns 2-5, while in Columns 6-8 the higher-level features have lower similarity to the images. The similarities between labels and intermediate features are also more distinct in the later columns.
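For reference, a minimal sketch of the linear variant of CKA is shown below; the feature and image tensors are illustrative placeholders.

```python
import torch

def linear_cka(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Linear CKA between x: (n, d1) and y: (n, d2); returns a scalar in [0, 1]."""
    x = x - x.mean(dim=0, keepdim=True)   # center each feature dimension
    y = y - y.mean(dim=0, keepdim=True)
    hsic = (x.T @ y).norm() ** 2          # ||X^T Y||_F^2
    return hsic / ((x.T @ x).norm() * (y.T @ y).norm())

feats = torch.randn(128, 512)             # e.g. pooled level/column features
images = torch.randn(128, 3 * 32 * 32)    # flattened input images
print(float(linear_cka(feats, images)))   # near 0 for random data
```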
Figure 1: Sketch of the information propagation in (a) a vanilla single-column network and (b) our reversible column network. Yellow denotes low-level information and blue denotes semantic information.
Figure 2: (a) Reversible unit in RevNet (Gomez et al., 2017). (b) Multi-level reversible unit; all inputs for level t are highlighted. (c) Overview of the whole reversible column network architecture, with the simplified multi-level reversible unit.
Figure 3: ImageNet-1K performance when keeping the FLOPs of a single column constant and adding more columns.
Figure 4: GPU memory consumption vs. model size.
Desjardins et al. (2012), Kulkarni et al. (2015), Higgins et al. (2017), Chen et al. (2016), and Karras et al. (2019) seek to learn disentangled representations through generative models. Locatello et al. (2019) point out that unsupervised learning of disentangled representations is fundamentally impossible without inductive biases on both the learning approaches and the datasets considered; the recent proposal of GLOM (Hinton, 2021) revisits disentangling through part-whole hierarchies.
Gomez et al. (2017) first propose RevNet, which allows back-propagation without storing intermediate activations. The reversible design remarkably saves training cost, since it keeps an O(1) GPU memory consumption as the model depth scales up. Jacobsen et al. (2018) propose a fully reversible network that can be inverted back to the input without any information loss. Chang et al. (2018) develop a theoretical framework on the stability and reversibility of deep neural networks and derive reversible networks that can go arbitrarily deep. Mangalam et al. (2022) expand the reversible network scope from CNNs to transformers. RevBiFPN (Chiley et al., 2022), a concurrent work of ours, adds reversible connections to the BiFPN (Tan et al., 2020) network. Our RevCol maintains the information without loss inside each column rather than across the whole BiFPN network as in RevBiFPN.
Figure 6: ImageNet top-1 accuracy of different variants of RevCol-ViT-B. Each variant has the same total number of residual blocks and the same channel dimension.
C SEMI-LABELED PRIVATELY COLLECTED DATASET FOR LARGE MODELS
C.1 DATA COLLECTION AND PSEUDO LABEL SYSTEM
Figure 7: Top: near-duplicates found in unlabeled images. Bottom: ImageNet-1k validation images.
Figure 8: Visualizations of class activation maps using LayerCAM (Jiang et al., 2021) for different levels and columns.
Figure 9: CKA similarities (Kornblith et al., 2019) between features and images/labels for different levels and columns.
Table 1: ImageNet classification results. We compare our models with state-of-the-art vision transformers and CNNs of comparable FLOPs and parameters. ↑ denotes models fine-tuned with an image size larger than 224². We report top-1 accuracy on the ImageNet validation set as well as the number of parameters and FLOPs.

ImageNet-1K trained models
Model | Image size | Params (M) | FLOPs (G) | Top-1 Acc.
Swin-T (Liu et al.) | 224² | 28 | 4.5 | 81.3
DeiT-S (Touvron et al.) | 224² | 22 | 4.6 | 79.8
Rev-ViT-S (Mangalam et al.) | 224² | 22 | 4.6 | 79.9
RevBiFPN-S3 (Chiley et al.) | 288² | 20 | 3.3 | 81.1
EfficientNet-B4 (Tan & Le) | 380² | 19 | 4.2 | 82.9
ConvNeXt-T (Liu et al.) | 224² | 29 | 4.5 | 82.1
RevCol-T | 224² | 30 | 4.5 | 82.2
Swin-S (Liu et al.) | 224² | 50 | 8.7 | 83.0
MViTv1-B (Fan et al.) | 224² | 37 | 7.8 | 83.0
T2T-ViT-19 (Yuan et al.) | 224² | 39 | 8.4 | 81.4
RevBiFPN-S4 (Chiley et al.) | 320² | 49 | 10.6 | 83.0
EfficientNet-B5 (Tan & Le) | 456² | 30 | 9.9 | 83.6
ConvNeXt-S (Liu et al.) | 224² | 50 | 8.7 | 83.1
RevCol-S | 224² | 60 | 9.0 | 83.5
Swin-B (Liu et al.) | 224² | 89 | 15.4 | 83.5
DeiT-B (Touvron et al.) | 224² | 86 | 17.5 | 81.8
Rev-ViT-B (Mangalam et al.) | 224² | 87 | 17.6 | 81.8
RepLKNet-31B (Ding et al.) | 224² | 79 | 15.3 | 83.5
RevBiFPN-S5 (Chiley et al.) | 352² | 82 | 21.8 | 83.7
EfficientNet-B6 (Tan & Le) | 528² | 43 | 19.0 | 84.0
ConvNeXt-B (Liu et al.) | 224² | 88 | 15.4 | 83.8
RevCol-B | 224² | 138 | 16.6 | 84.1

ImageNet-22K pre-trained models (ImageNet-1K fine-tuned)
Model | Image size | Params (M) | FLOPs (G) | Top-1 Acc.
Swin-B (Liu et al.) | 224² | 88 | 15.4 | 85.2
Swin-B↑ (Liu et al.) | 384² | 88 | 47.0 | 86.4
ViT-B↑ (Dosovitskiy et al.) | 384² | 86 | 55.4 | 84.0
RepLKNet-31B (Ding et al.) | 224² | 79 | 15.3 | 85.2
RepLKNet-31B↑ (Ding et al.) | 384² | 79 | 45.1 | 86.0
ConvNeXt-B (Liu et al.) | 224² | 89 | 15.4 | 85.8
ConvNeXt-B↑ (Liu et al.) | 384² | 89 | 45.1 | 86.8
RevCol-B | 224² | 138 | 16.6 | 85.6
RevCol-B↑ | 384² | 138 | 48.9 | 86.7
Swin-L (Liu et al.) | 224² | 197 | 34.5 | 86.3
Swin-L↑ (Liu et al.) | 384² | 197 | 103.9 | 87.3
ViT-L↑ (Dosovitskiy et al.) | 384² | 307 | 190.7 | 85.2
RepLKNet-31L (Ding et al.) | 384² | 172 | 96.0 | 86.6
ConvNeXt-L (Liu et al.) | 224² | 198 | 34.4 | 86.6
ConvNeXt-L↑ (Liu et al.) | 384² | 198 | 101.0 | 87.5
RevCol-L | 224² | 273 | 39.0 | 86.6
RevCol-L↑ | 384² | 273 | 116.0 | 87.6
ConvNeXt-XL↑ (Liu et al.) | 384² | 350 | 179.0 | 87.8
RevCol-XL↑ | 384² | 834 | 350.0 | 88.2

Extra data pre-trained models (ImageNet-1K fine-tuned)
RevCol-XL↑ | 384² | 834 | 350.0 | 89.4
RevCol-H↑ | 640² | 2158 | 2537 | 90.0
Experiments are conducted on the commonly used MS-COCO (Lin et al., 2014) and ADE20K (Zhou et al., 2017b) datasets. For training and fine-tuning settings, please refer to Appendix D. Furthermore, we show the performance of RevCol with transformers on vision and language tasks in Appendix B.
3.1 IMAGE CLASSIFICATION
On the ImageNet-1K dataset (1.28M images) (Deng et al., 2009), we train RevCol for 300 epochs with intermediate supervision. Hyperparameters, augmentation, and regularization strategies follow Liu et al. (2022b). We also pre-train our models on the larger ImageNet-22K dataset (Ridnik et al., 2021), which contains 14.2 million images.
In Tab. 1, we compare our RevCol variants with commonly used recent transformers and CNNs on the ImageNet-1k validation set. Our models outperform a large number of vanilla single-column CNNs and transformers of similar complexity. For example, RevCol-S achieves 83.5% top-1 accuracy, outperforming ConvNeXt-S by 0.4 points. When pre-trained on the larger ImageNet-22K dataset, RevCol-XL achieves 88.2% top-1 accuracy. As RevCol maintains some task-irrelevant low-level information during classification pre-training, relaxing the constraints on parameters and FLOPs and enlarging the dataset size can further boost our models' performance. To further test the scaling-up effectiveness of large datasets, we build a 168-million-image semi-labeled dataset (see Appendix C). With extra-data pre-training and ImageNet-1k fine-tuning, our RevCol-H achieves 90.0% top-1 accuracy. These results further demonstrate that, with RevCol, CNN models can also share the dividends of large models and massive pre-training data.
3.2 OBJECT DETECTION
We evaluate RevCol on the object detection task. Experiments are conducted on the MS-COCO dataset using the Cascade Mask R-CNN (Cai & Vasconcelos, 2019) framework. We also fine-tune our largest model, RevCol-H, with the HTC++ (Chen et al., 2019) and DINO (Zhang et al., 2022a) frameworks.
Table 2: Object detection results on the MS-COCO dataset with different backbones. We report box AP and mask AP with single-scale testing on the COCO minival set. FLOPs are measured under an input size of (1280, 800).

In Tab. 2, we compare the AP box and AP mask with Swin/ConvNeXt at various sizes on the COCO validation set. RevCol models surpass their counterparts of similar computational complexity: the information retained during pre-training helps RevCol models achieve better results in down-stream tasks, and this advantage becomes more remarkable as the model size grows. After fine-tuning on the Objects365 (Shao et al., 2019) dataset and with the DINO framework, our largest model RevCol-H achieves 63.8% AP box on the COCO detection minival set.

Backbone | AP^box | AP^box_50 | AP^box_75 | AP^mask | AP^mask_50 | AP^mask_75 | Params | FLOPs
ImageNet-1K pre-trained
Swin-T (Liu et al.) | 50.5 | 69.3 | 54.9 | 43.7 | 66.6 | 47.1 | 86M | 745G
ConvNeXt-T (Liu et al.) | 50.4 | 69.1 | 54.8 | 43.7 | 66.5 | 47.3 | 86M | 741G
RevCol-T | 50.6 | 68.9 | 54.9 | 43.8 | 66.7 | 47.4 | 88M | 741G
Swin-S (Liu et al.) | 51.8 | 70.4 | 56.3 | 44.7 | 67.9 | 48.5 | 107M | 838G
ConvNeXt-S (Liu et al.) | 51.9 | 70.8 | 56.5 | 45.0 | 68.4 | 49.1 | 108M | 827G
RevCol-S | 52.6 | 71.1 | 56.8 | 45.5 | 68.8 | 49.0 | 118M | 833G
Swin-B (Liu et al.) | 51.9 | 70.9 | 56.5 | 45.0 | 68.4 | 48.7 | 145M | 982G
ConvNeXt-B (Liu et al.) | 52.7 | 71.3 | 57.2 | 45.6 | 68.9 | 49.5 | 146M | 964G
RepLKNet-B (Ding et al.) | 52.2 | - | - | 45.2 | - | - | 137M | 965G
RevCol-B | 53.0 | 71.4 | 57.3 | 45.9 | 69.1 | 50.1 | 196M | 988G
ImageNet-22K pre-trained
Swin-B (Liu et al.) | 53.0 | 71.8 | 57.5 | 45.8 | 69.4 | 49.7 | 145M | 982G
ConvNeXt-B (Liu et al.) | 54.0 | 73.1 | 58.8 | 46.9 | 70.6 | 51.3 | 146M | 964G
RepLKNet-B (Ding et al.) | 53.0 | - | - | 46.3 | - | - | 137M | 965G
RevCol-B | 55.0 | 73.5 | 59.7 | 47.5 | 71.1 | 51.8 | 196M | 988G
Swin-L (Liu et al.) | 53.9 | 72.4 | 58.8 | 46.7 | 70.1 | 50.8 | 253M | 1382G
ConvNeXt-L (Liu et al.) | 54.8 | 73.8 | 59.8 | 47.6 | 71.3 | 51.7 | 255M | 1354G
RepLKNet-L (Ding et al.) | 53.9 | - | - | 46.5 | - | - | 229M | 1321G
RevCol-L | 55.9 | 74.1 | 60.7 | 48.4 | 71.8 | 52.8 | 330M | 1453G
Extra data pre-trained
RevCol-H (HTC++) | 61.1 | 78.8 | 67.0 | 53.0 | 76.3 | 58.7 | 2.41G | 4417G
RevCol-H (Objects365+DINO) | 63.8 | 81.8 | 70.2 | - | - | - | 2.18G | 4012G
Table 3: Semantic segmentation results on the ADE20K dataset with different backbones. We report mIoU.
Table 4: System-level comparison of state-of-the-art visual foundation models with large-scale pre-training. We include vision transformers, CNNs, and hybrid architectures pre-trained either unsupervised or supervised on image-only and vision-language datasets. COCO scores marked with † denote intermediate fine-tuning on extra data like Objects365 (Shao et al., 2019).

Model | Params | Pre-training images | Annotation | IN-1k Acc. | COCO test-dev detector | AP^box | AP^mask | ADE20K segmenter | mIoU | +ms
SwinV2-G | 3.0 G | 70 M | labeled | 90.2 | HTC++ | 63.1† | 54.4† | UperNet | 59.3 | 59.9
BEiT3 | 1.0 G | 35 M | labeled & image-text | 89.6 | ViTDet | 63.7† | 54.8† | Mask2Former | 62.0 | 62.8
Florence | 0.9 G | 900 M | image-text | 90.1 | DyHead | 62.4 | - | - | - | -
RevCol-H | 2.1 G | 168 M | semi-labeled | 90.0 | DINO | 63.6† | - | Mask2Former | 60.4 | 61.0
3.3 SEMANTIC SEGMENTATION
We also evaluate RevCol backbones on the ADE20K semantic segmentation task with the UperNet (Xiao et al., 2018) framework. We do not use intermediate supervision in the downstream fine-tuning process. To further explore our model's capacity and reach leading performance, we utilize the recent segmentation framework Mask2Former (Cheng et al., 2022) and adopt the same training settings.

In Tab. 3, we report validation mIoU with single-scale and multi-scale flip testing. RevCol models achieve competitive performance across different model capacities, further validating the effectiveness of our architecture design. It is worth noting that, with Mask2Former and extra pre-training data, RevCol-H achieves an mIoU of 61.0%, which demonstrates feasible scalability towards large-scale vision applications.
3.4 SYSTEM-LEVEL COMPARISON WITH SOTA FOUNDATION MODELS
Foundation models (Kolesnikov et al., 2020; Radford et al., 2021; Yuan et al., 2021b) are general-purpose backbones pre-trained on massive and diverse data sources; they can adapt to various down-stream tasks with limited domain-specific data. We compare against several public state-of-the-art (SOTA) foundation models, including vision transformers and vision-language models, namely SwinV2 (Liu et al., 2022a), BEiT3 (Wang et al., 2022), and Florence (Yuan et al., 2021b). As shown in Tab. 4, although our RevCol-H is purely convolutional and pre-trained on a single-modality dataset, its results on different tasks demonstrate the remarkable generalization ability of RevCol at large parameter scales.
Table 5: ImageNet-1K performance for various numbers of columns in RevCol under a similar computational budget.

# columns | Params | FLOPs | FLOPs per col. | Top-1 Acc.
1 | 28M | 4.4G | 4.40G | 81.9
4 | 30M | 4.5G | 1.12G | 82.2
8 | 34M | 4.7G | 0.59G | 82.3
12 | 33M | 4.4G | 0.35G | 82.2
20 | 35M | 4.2G | 0.21G | 81.0
Table 6: Performance comparison on ImageNet-1K of different design patterns. Row 1 is an HRNet-style network without reversible connections; row 2 is a RevNet-style network without multi-column fusion; row 3 is our proposed RevCol.

rev. conn. | multi-col. | Params | FLOPs | Acc.
✗ | ✓ | 35M | 4.9G | 78.8
✓ | ✗ | 34M | 4.5G | 81.6
✓ | ✓ | 30M | 4.5G | 82.2
Table 7: Performance comparison between models with and without intermediate supervision. Results are reported on ImageNet-1K and COCO; we use the 1× training schedule for the COCO detection task.

Model | inter. sup. | Top-1 Acc. | AP^box | AP^mask
RevCol-T | ✗ | 81.4 | 48.3 | 41.8
RevCol-T | ✓ | 82.2 (+0.8) | 48.8 (+0.5) | 42.2 (+0.4)
RevCol-S | ✗ | 83.0 | 50.7 | 43.8
RevCol-S | ✓ | 83.5 (+0.5) | 51.1 (+0.4) | 43.8 (+0.0)
RevCol-B | ✗ | 83.2 | 51.2 | 44.2
RevCol-B | ✓ | 84.1 (+0.9) | 51.6 (+0.4) | 44.2 (+0.0)

In this experiment, we evaluate the performance of RevCol-T/S/B with and without intermediate supervision on ImageNet-1K. We also evaluate object detection performance using the 1× training schedule on MS-COCO; other settings remain the same. From the validation results in Tab. 7, intermediate supervision consistently improves both classification and detection performance.
Table 8: Performance of models with larger convolution kernels.

Kernel size | FLOPs | Top-1 Acc. | AP^box (1×) | AP^mask (1×)
3 | 4.5G | 82.2 | 48.8 | 42.2
5 | 4.5G | 82.5 | 49.5 | 42.6
7 | 4.6G | 82.5 | 49.3 | 42.4
11 | 4.6G | 82.5 | 49.9 | 42.7
Table 9: ImageNet-1K classification results. We compare our RevCol-ViT with state-of-the-art isotropic vision transformers and CNNs of comparable FLOPs and parameters.

Model | Image size | Params | FLOPs | Top-1 Acc.
DeiT-S (Touvron et al., 2020) | 224² | 22M | 4.6G | 79.8
ConvNeXt-S (iso.) (Liu et al., 2022b) | 224² | 22M | 4.3G | 79.7
RevCol-ViT-S | 224² | 16M | 4.6G | 80.6
ViT-B (Dosovitskiy et al., 2020) | 384² | 86M | 55.4G | 77.9
DeiT-B (Touvron et al., 2020) | 224² | 86M | 17.6G | 81.7
Rev-ViT-B (Mangalam et al., 2022) | 224² | 87M | 17.6G | 81.8
Rev-MViT-B (Mangalam et al., 2022) | 224² | 39M | 8.7G | 82.5
ConvNeXt-B (iso.) (Liu et al., 2022b) | 224² | 87M | 16.9G | 82.0
RevCol-ViT-B | 224² | 67M | 18.8G | 82.7
Table 10: BLEU scores on newstest2014 for the WMT English-German (En-De) and English-French (En-Fr) translation tasks. † indicates that we re-ran the experiments with fairseq.

Model | Encoder (arch, d_model, d_ff, heads) | Decoder (arch, d_model, d_ff, heads) | Params | Task | BLEU
Transformer (big)† (Vaswani et al., 2017) | N = 6, 1024, 4096, 16 | N = 6, 1024, 4096, 16 | 209M | En-De | 28.43
Transformer (big)† (Vaswani et al., 2017) | N = 6, 1024, 4096, 16 | N = 6, 1024, 4096, 16 | 221M | En-Fr | 43.07
RevCol-Transformer | B = (1,1,1,1), COL = 4, 768, 3072, 12 | N = 6, 768, 3072, 12 | 200M | En-De | 28.67
RevCol-Transformer | B = (1,1,1,1), COL = 4, 768, 3072, 12 | N = 6, 768, 3072, 12 | 209M | En-Fr | 43.40
Table 11: Hyperparameters for training and pre-training RevCol. Where two or three values are listed, they are given in the order ImageNet-1K (T/S/B) | ImageNet-22K (B/L/XL) | 168M extra data (XL/H).

Input resolution: 224²
Training epochs: 300 | 90 | 10
Warmup epochs: 20 | 5 | 0.15
Batch size: 4096 | 5120
Peak learning rate: 4e-3 | 5e-4 | 6.25e-4
Learning rate schedule: cosine
Layer-wise learning rate decay: –
AdamW momentum: (0.9, 0.999)
Weight decay: 0.05 | 0.1 | 0.05
Gradient clipping: 1.0 (element-wise)
Drop path: 0.1/0.3/0.4 | 0.3 | 0.2
EMA: 0.9999
Label smoothing ε: 0.1
Data augmentation: RandAug (9, 0.5)
Mixup: 0.8
CutMix: 1.0
Random erase: 0.25
Table 12: Hyperparameters for fine-tuning RevCol on ImageNet-1K classification. Where four values are listed, they are given in the order B/L/XL/H.

Input resolution: 384²/384²/384²/640²
Fine-tuning epochs: 30
Warmup epochs: 0
Batch size: 512
Peak learning rate: 5e-5
Layer-wise learning rate decay: 0.9/0.8/0.8/0.8
AdamW momentum: (0.9, 0.999)
Weight decay: 1e-8
Learning rate schedule: cosine
Head init scale: 0.001
Drop path: 0.2/0.3/0.4/0.5
EMA: –/–/–/0.9999
Gradient clipping: 10.0 (norm)
Label smoothing ε: 0.1
Data augmentation: RandAug (9, 0.5)
Mixup: –
CutMix: –
Random erase: 0.25
Table 13: Hyperparameters for fine-tuning RevCol on object detection with the Cascade Mask R-CNN detector. Where two values are listed, they are given in the order IN-1K pre-trained (RevCol-T/S/B) | IN-22K pre-trained (RevCol-B/L).

Fine-tuning epochs: 36
Batch size: 16
Peak learning rate: 2e-4 | 1e-4
Warmup steps: 1500
Layer-wise learning rate decay: 0.85/0.8/0.8 | 0.9/0.8
AdamW momentum: (0.9, 0.999)
Weight decay: 0.05
Drop path: 0.3/0.4/0.4 | 0.5/0.6
Table 14: Hyperparameters for fine-tuning RevCol on ADE20K semantic segmentation with the UperNet segmentation framework. Where two values are listed, they are given in the order IN-1K pre-trained (RevCol-T/S/B) | IN-22K pre-trained (RevCol-B/L).

Input resolution: 512² | 640²
Fine-tuning steps: 80k
Batch size: 16
Peak learning rate: 4e-5
Warmup steps: 1500
Layer-wise learning rate decay: 1.0 | 0.9
AdamW momentum: (0.9, 0.999)
Weight decay: 0.01
Drop path: 0.3
In this section, we provide the architecture design details for RevCol. As depicted in Fig. 2 and Section 2.2, our RevCol contains multiple columns with reversible connections. Fig. 5 (a) shows the architecture of ConvNeXt; note that we replace the 7 × 7 depth-wise convolutions in ConvNeXt with 3 × 3 ones, as described in Sec. 2.2.2. Fig. 5 (b) shows in detail how we extend ConvNeXt to RevCol. First, we replace the down-sample block with a fusion block that fuses low-level representations from the current column and high-level ones from the previous column; Fig. 5 (c) shows the details of the fusion block, which contains up-sample and down-sample operations to handle the different resolutions. Second, for each level, the same-level representation from the previous column is added to the current level's output, and the sum is propagated as a whole. Thanks to these two modifications, feature maps from different hierarchies aggregate to form the intermediate representation. In Fig. 5 (c), we use a Linear-LayerNorm followed by nearest interpolation to up-sample low-resolution features; a 2 × 2 kernel Conv2d with stride 2 down-samples the high-resolution features, followed by a LayerNorm to balance the contributions of the two inputs.

B GENERALIZATION TO TRANSFORMERS

B.1 VISION TRANSFORMER MODELS

RevCol contains multiple light-weight sub-networks with reversible connections. In this paper, we adopt the ConvNeXt micro design by default, except for the multi-column fusion and the smaller convolution kernel described in Sec. 2.2.2. However, the micro design of RevCol is not limited to convolutional networks; it is also compatible with isotropic designs such as the vanilla vision transformer (Dosovitskiy et al., 2020).
Shaojie Bai, Vladlen Koltun, and J Zico Kolter. Multiscale deep equilibrium models. Advances in Neural Information Processing Systems, 33:5238-5250, 2020.

Hangbo Bao, Li Dong, and Furu Wei. BEiT: BERT pre-training of image transformers. arXiv preprint arXiv:2106.08254, 2021.

Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798-1828, 2013.

Zhaowei Cai and Nuno Vasconcelos. Cascade R-CNN: High quality object detection and instance segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(5):1483-1498, 2019.

Rich Caruana. Multitask learning. Machine Learning, 28(1):41-75, 1997.

Bo Chang, Lili Meng, Eldad Haber, Lars Ruthotto, David Begert, and Elliot Holtham. Reversible architectures for arbitrarily deep residual neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.

Kai Chen, Jiangmiao Pang, Jiaqi Wang, Yu Xiong, Xiaoxiao Li, Shuyang Sun, Wansen Feng, Ziwei Liu, Jianping Shi, Wanli Ouyang, et al. Hybrid task cascade for instance segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4974-4983, 2019.

Tianqi Chen, Ian Goodfellow, and Jonathon Shlens. Net2Net: Accelerating learning via knowledge transfer. arXiv preprint arXiv:1511.05641, 2015.

Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. Advances in Neural Information Processing Systems, 29, 2016.

Bowen Cheng, Ishan Misra, Alexander G Schwing, Alexander Kirillov, and Rohit Girdhar. Masked-attention mask transformer for universal image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1290-1299, 2022.

Vitaliy Chiley, Vithursan Thangarasa, Abhay Gupta, Anshul Samar, Joel Hestness, and Dennis DeCoste. RevBiFPN: The fully reversible bidirectional feature pyramid network. arXiv preprint arXiv:2206.14098, 2022.

Jifeng Dai, Haozhi Qi, Yuwen Xiong, Yi Li, Guodong Zhang, Han Hu, and Yichen Wei. Deformable convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 764-773, 2017.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248-255. IEEE, 2009.

Guillaume Desjardins, Aaron Courville, and Yoshua Bengio. Disentangling factors of variation via generative entangling. arXiv preprint arXiv:1210.5474, 2012.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.

Mingyu Ding, Xiaochen Lian, Linjie Yang, Peng Wang, Xiaojie Jin, Zhiwu Lu, and Ping Luo. HR-NAS: Searching efficient high-resolution neural architectures with lightweight transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2982-2992, 2021.

Xiaohan Ding, Xiangyu Zhang, Jungong Han, and Guiguang Ding. Scaling up your kernels to 31x31: Revisiting large kernel design in CNNs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11963-11975, 2022a.

Xiaohan Ding, Xiangyu Zhang, Jungong Han, and Guiguang Ding. Scaling up your kernels to 31x31: Revisiting large kernel design in CNNs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11963-11975, 2022b.

Laurent Dinh, David Krueger, and Yoshua Bengio. NICE: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.

Haoqi Fan, Bo Xiong, Karttikeya Mangalam, Yanghao Li, Zhicheng Yan, Jitendra Malik, and Christoph Feichtenhofer. Multiscale vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6824-6835, 2021.

Nicola Garau, Niccolò Bisagno, Zeno Sambugaro, and Nicola Conci. Interpretable part-whole hierarchies and conceptual-semantic relationships in neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13689-13698, 2022.

Golnaz Ghiasi, Tsung-Yi Lin, and Quoc V Le. NAS-FPN: Learning scalable feature pyramid architecture for object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7036-7045, 2019.

Golnaz Ghiasi, Barret Zoph, Ekin D Cubuk, Quoc V Le, and Tsung-Yi Lin. Multi-task self-training for learning general representations. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 8856-8865, 2021.

Aidan N Gomez, Mengye Ren, Raquel Urtasun, and Roger B Grosse. The reversible residual network: Backpropagation without storing activations. Advances in Neural Information Processing Systems, 30, 2017.

Qi Han, Zejia Fan, Qi Dai, Lei Sun, Ming-Ming Cheng, Jiaying Liu, and Jingdong Wang. On the connection between local attention and dynamic depth-wise convolution. In International Conference on Learning Representations, 2021.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.

Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16000-16009, 2022.

Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-VAE: Learning basic visual concepts with a constrained variational framework. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=Sy2fzU9gl.

Geoffrey Hinton. How to represent part-whole hierarchies in a neural network. arXiv preprint arXiv:2102.12627, 2021.
Jörn-Henrik Jacobsen, Arnold Smeulders, Edouard Oyallon, arXiv:1802.07088Deep invertible networks. arXiv preprintJörn-Henrik Jacobsen, Arnold Smeulders, and Edouard Oyallon. i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088, 2018.
Big transfer (bit): General visual representation learning. Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby, European conference on computer vision. SpringerAlexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, and Neil Houlsby. Big transfer (bit): General visual representation learning. In European conference on computer vision, pp. 491-507. Springer, 2020.
Similarity of neural network representations revisited. Simon Kornblith, Mohammad Norouzi, Honglak Lee, Geoffrey Hinton, International Conference on Machine Learning. PMLRSimon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey Hinton. Similarity of neural network representations revisited. In International Conference on Machine Learning, pp. 3519- 3529. PMLR, 2019.
Deep convolutional inverse graphics network. D Tejas, Kulkarni, F William, Pushmeet Whitney, Josh Kohli, Tenenbaum, Advances in neural information processing systems. 28Tejas D Kulkarni, William F Whitney, Pushmeet Kohli, and Josh Tenenbaum. Deep convolutional inverse graphics network. Advances in neural information processing systems, 28, 2015.
Deeply-supervised nets. Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, Zhuowen Tu, Artificial intelligence and statistics. PMLRChen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeply-supervised nets. In Artificial intelligence and statistics, pp. 562-570. PMLR, 2015.
Backpropagation and the brain. P Timothy, Adam Lillicrap, Luke Santoro, Colin J Marris, Geoffrey Akerman, Hinton, Nature Reviews Neuroscience. 216Timothy P Lillicrap, Adam Santoro, Luke Marris, Colin J Akerman, and Geoffrey Hinton. Backprop- agation and the brain. Nature Reviews Neuroscience, 21(6):335-346, 2020.
Microsoft coco: Common objects in context. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, C Lawrence Zitnick, European conference on computer vision. SpringerTsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pp. 740-755. Springer, 2014.
Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionTsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2117-2125, 2017.
Auto-deeplab: Hierarchical neural architecture search for semantic image segmentation. Chenxi Liu, Liang-Chieh Chen, Florian Schroff, Hartwig Adam, Wei Hua, Alan L Yuille, Li Fei-Fei, Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. the IEEE/CVF conference on computer vision and pattern recognitionChenxi Liu, Liang-Chieh Chen, Florian Schroff, Hartwig Adam, Wei Hua, Alan L Yuille, and Li Fei- Fei. Auto-deeplab: Hierarchical neural architecture search for semantic image segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 82-92, 2019.
Swin transformer: Hierarchical vision transformer using shifted windows. Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo, Proceedings of the IEEE/CVF International Conference on Computer Vision. the IEEE/CVF International Conference on Computer VisionZe Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 10012-10022, 2021.
Swin transformer v2: Scaling up capacity and resolution. Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionZe Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, et al. Swin transformer v2: Scaling up capacity and resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12009-12019, 2022a.
A convnet for the 2020s. Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionZhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie. A convnet for the 2020s. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11976-11986, 2022b.
Challenging common assumptions in the unsupervised learning of disentangled representations. Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Raetsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem, international conference on machine learning. PMLRFrancesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Raetsch, Sylvain Gelly, Bernhard Schölkopf, and Olivier Bachem. Challenging common assumptions in the unsupervised learning of disentan- gled representations. In international conference on machine learning, pp. 4114-4124. PMLR, 2019.
Weightnet: Revisiting the design space of weight networks. Ningning Ma, Xiangyu Zhang, Jiawei Huang, Jian Sun, European Conference on Computer Vision. SpringerNingning Ma, Xiangyu Zhang, Jiawei Huang, and Jian Sun. Weightnet: Revisiting the design space of weight networks. In European Conference on Computer Vision, pp. 776-792. Springer, 2020.
Exploring the limits of weakly supervised pretraining. Dhruv Mahajan, Ross Girshick, Vignesh Ramanathan, Kaiming He, Manohar Paluri, Yixuan Li, Ashwin Bharambe, Laurens Van Der Maaten, Proceedings of the European conference on computer vision (ECCV). the European conference on computer vision (ECCV)Dhruv Mahajan, Ross Girshick, Vignesh Ramanathan, Kaiming He, Manohar Paluri, Yixuan Li, Ashwin Bharambe, and Laurens Van Der Maaten. Exploring the limits of weakly supervised pretraining. In Proceedings of the European conference on computer vision (ECCV), pp. 181-196, 2018.
Reversible vision transformers. Karttikeya Mangalam, Haoqi Fan, Yanghao Li, Chao-Yuan Wu, Bo Xiong, Christoph Feichtenhofer, Jitendra Malik, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionKarttikeya Mangalam, Haoqi Fan, Yanghao Li, Chao-Yuan Wu, Bo Xiong, Christoph Feichtenhofer, and Jitendra Malik. Reversible vision transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10830-10840, 2022.
Aaron Van Den Oord, Yazhe Li, Oriol Vinyals, arXiv:1807.03748Representation learning with contrastive predictive coding. arXiv preprintAaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
Scaling neural machine translation. Myle Ott, Sergey Edunov, David Grangier, Michael Auli, 10.18653/v1/W18-6301Proceedings of the Third Conference on Machine Translation: Research Papers. the Third Conference on Machine Translation: Research PapersBrussels, BelgiumAssociation for Computational LinguisticsMyle Ott, Sergey Edunov, David Grangier, and Michael Auli. Scaling neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pp. 1-9, Brussels, Belgium, October 2018. Association for Computational Linguistics. doi: 10.18653/v1/W18-6301.
Learning transferable visual models from natural language supervision. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, International Conference on Machine Learning. PMLRAlec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pp. 8748-8763. PMLR, 2021.
Tal Ridnik, Emanuel Ben-Baruch, arXiv:2104.10972Asaf Noy, and Lihi Zelnik-Manor. Imagenet-21k pretraining for the masses. arXiv preprintTal Ridnik, Emanuel Ben-Baruch, Asaf Noy, and Lihi Zelnik-Manor. Imagenet-21k pretraining for the masses. arXiv preprint arXiv:2104.10972, 2021.
An overview of multi-task learning in. Sebastian Ruder, arXiv:1706.05098deep neural networks. arXiv preprintSebastian Ruder. An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:1706.05098, 2017.
Multi-task learning as multi-objective optimization. Ozan Sener, Vladlen Koltun, Advances in neural information processing systems. 31Ozan Sener and Vladlen Koltun. Multi-task learning as multi-objective optimization. Advances in neural information processing systems, 31, 2018.
Neural machine translation of rare words with subword units. Rico Sennrich, Barry Haddow, Alexandra Birch, 54th Annual Meeting of the Association for Computational Linguistics. Association for Computational LinguisticsRico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. In 54th Annual Meeting of the Association for Computational Linguistics, pp. 1715-1725. Association for Computational Linguistics (ACL), 2016.
Objects365: A large-scale, high-quality dataset for object detection. Shuai Shao, Zeming Li, Tianyuan Zhang, Chao Peng, Gang Yu, Xiangyu Zhang, Jing Li, Jian Sun, Proceedings of the IEEE/CVF international conference on computer vision. the IEEE/CVF international conference on computer visionShuai Shao, Zeming Li, Tianyuan Zhang, Chao Peng, Gang Yu, Xiangyu Zhang, Jing Li, and Jian Sun. Objects365: A large-scale, high-quality dataset for object detection. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 8430-8439, 2019.
Going deeper with convolutions. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionChristian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Du- mitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1-9, 2015.
Efficientnet: Rethinking model scaling for convolutional neural networks. Mingxing Tan, Quoc Le, International conference on machine learning. PMLRMingxing Tan and Quoc Le. Efficientnet: Rethinking model scaling for convolutional neural networks. In International conference on machine learning, pp. 6105-6114. PMLR, 2019.
Efficientdet: Scalable and efficient object detection. Mingxing Tan, Ruoming Pang, Quoc V Le, Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. the IEEE/CVF conference on computer vision and pattern recognitionMingxing Tan, Ruoming Pang, and Quoc V Le. Efficientdet: Scalable and efficient object detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10781-10790, 2020.
Yfcc100m: The new data in multimedia research. Bart Thomee, A David, Gerald Shamma, Benjamin Friedland, Karl Elizalde, Douglas Ni, Damian Poland, Li-Jia Borth, Li, Communications of the ACM. 592Bart Thomee, David A Shamma, Gerald Friedland, Benjamin Elizalde, Karl Ni, Douglas Poland, Damian Borth, and Li-Jia Li. Yfcc100m: The new data in multimedia research. Communications of the ACM, 59(2):64-73, 2016.
Deep learning and the information bottleneck principle. Naftali Tishby, Noga Zaslavsky, ieee information theory workshop (itw). IEEENaftali Tishby and Noga Zaslavsky. Deep learning and the information bottleneck principle. In 2015 ieee information theory workshop (itw), pp. 1-5. IEEE, 2015.
Naftali Tishby, C Fernando, William Pereira, Bialek, The information bottleneck method. arXiv preprint physics/0004057. Naftali Tishby, Fernando C Pereira, and William Bialek. The information bottleneck method. arXiv preprint physics/0004057, 2000.
Training data-efficient image transformers & distillation through attention. Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou, arXiv:2012.12877arXiv preprintHugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. arXiv preprint arXiv:2012.12877, 2020.
Attention is all you need. Advances in neural information processing systems. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, Illia Polosukhin, 30Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
Deep high-resolution representation learning for visual recognition. Jingdong Wang, Ke Sun, Tianheng Cheng, Borui Jiang, Chaorui Deng, Yang Zhao, Dong Liu, Yadong Mu, Mingkui Tan, Xinggang Wang, IEEE transactions on pattern analysis and machine intelligence. 43Jingdong Wang, Ke Sun, Tianheng Cheng, Borui Jiang, Chaorui Deng, Yang Zhao, Dong Liu, Yadong Mu, Mingkui Tan, Xinggang Wang, et al. Deep high-resolution representation learning for visual recognition. IEEE transactions on pattern analysis and machine intelligence, 43(10):3349-3364, 2020.
Image as a foreign language: Beit pretraining for all vision and vision-language tasks. Wenhui Wang, Hangbo Bao, Li Dong, Johan Bjorck, Zhiliang Peng, Qiang Liu, Kriti Aggarwal, Saksham Owais Khan Mohammed, Subhojit Singhal, Som, arXiv:2208.10442arXiv preprintWenhui Wang, Hangbo Bao, Li Dong, Johan Bjorck, Zhiliang Peng, Qiang Liu, Kriti Aggarwal, Owais Khan Mohammed, Saksham Singhal, Subhojit Som, et al. Image as a foreign language: Beit pretraining for all vision and vision-language tasks. arXiv preprint arXiv:2208.10442, 2022.
Revisiting locally supervised learning: an alternative to end-to-end training. Yulin Wang, Zanlin Ni, Shiji Song, Le Yang, Gao Huang, arXiv:2101.10832arXiv preprintYulin Wang, Zanlin Ni, Shiji Song, Le Yang, and Gao Huang. Revisiting locally supervised learning: an alternative to end-to-end training. arXiv preprint arXiv:2101.10832, 2021.
Fbnetv5: Neural architecture search for multiple tasks in one run. Bichen Wu, Chaojian Li, Hang Zhang, Xiaoliang Dai, Peizhao Zhang, Matthew Yu, Jialiang Wang, Yingyan Lin, Peter Vajda, arXiv:2111.10007arXiv preprintBichen Wu, Chaojian Li, Hang Zhang, Xiaoliang Dai, Peizhao Zhang, Matthew Yu, Jialiang Wang, Yingyan Lin, and Peter Vajda. Fbnetv5: Neural architecture search for multiple tasks in one run. arXiv preprint arXiv:2111.10007, 2021.
Unified perceptual parsing for scene understanding. Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, Jian Sun, Proceedings of the European conference on computer vision (ECCV). the European conference on computer vision (ECCV)Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, and Jian Sun. Unified perceptual parsing for scene understanding. In Proceedings of the European conference on computer vision (ECCV), pp. 418-434, 2018.
Simmim: A simple framework for masked image modeling. Zhenda Xie, Zheng Zhang, Yue Cao, Yutong Lin, Jianmin Bao, Zhuliang Yao, Qi Dai, Han Hu, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionZhenda Xie, Zheng Zhang, Yue Cao, Yutong Lin, Jianmin Bao, Zhuliang Yao, Qi Dai, and Han Hu. Simmim: A simple framework for masked image modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9653-9663, 2022.
Billion-scale semisupervised learning for image classification. Hervé I Zeki Yalniz, Kan Jégou, Manohar Chen, Dhruv Paluri, Mahajan, arXiv:1905.00546arXiv preprintI Zeki Yalniz, Hervé Jégou, Kan Chen, Manohar Paluri, and Dhruv Mahajan. Billion-scale semi- supervised learning for image classification. arXiv preprint arXiv:1905.00546, 2019.
Tokens-to-token vit: Training vision transformers from scratch on imagenet. Li Yuan, Yunpeng Chen, Tao Wang, Weihao Yu, Yujun Shi, Zi-Hang Jiang, E H Francis, Jiashi Tay, Shuicheng Feng, Yan, Proceedings of the IEEE/CVF International Conference on Computer Vision. the IEEE/CVF International Conference on Computer VisionLi Yuan, Yunpeng Chen, Tao Wang, Weihao Yu, Yujun Shi, Zi-Hang Jiang, Francis EH Tay, Jiashi Feng, and Shuicheng Yan. Tokens-to-token vit: Training vision transformers from scratch on imagenet. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 558-567, 2021a.
Lu Yuan, Dongdong Chen, Yi-Ling Chen, Noel Codella, Xiyang Dai, Jianfeng Gao, Houdong Hu, Xuedong Huang, Boxin Li, Chunyuan Li, arXiv:2111.11432A new foundation model for computer vision. arXiv preprintLu Yuan, Dongdong Chen, Yi-Ling Chen, Noel Codella, Xiyang Dai, Jianfeng Gao, Houdong Hu, Xuedong Huang, Boxin Li, Chunyuan Li, et al. Florence: A new foundation model for computer vision. arXiv preprint arXiv:2111.11432, 2021b.
Taskonomy: Disentangling task transfer learning. Alexander Amir R Zamir, William Sax, Leonidas J Shen, Jitendra Guibas, Silvio Malik, Savarese, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionAmir R Zamir, Alexander Sax, William Shen, Leonidas J Guibas, Jitendra Malik, and Silvio Savarese. Taskonomy: Disentangling task transfer learning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3712-3722, 2018.
Hao Zhang, Feng Li, Shilong Liu, Lei Zhang, Hang Su, Jun Zhu, M Lionel, Heung-Yeung Ni, Shum, Dino, arXiv:2203.03605Detr with improved denoising anchor boxes for end-to-end object detection. arXiv preprintHao Zhang, Feng Li, Shilong Liu, Lei Zhang, Hang Su, Jun Zhu, Lionel M Ni, and Heung-Yeung Shum. Dino: Detr with improved denoising anchor boxes for end-to-end object detection. arXiv preprint arXiv:2203.03605, 2022a.
Bamboo: Building mega-scale vision dataset continually with human-machine synergy. Yuanhan Zhang, Qinghong Sun, Yichun Zhou, Zexin He, Zhenfei Yin, Kun Wang, Lu Sheng, Yu Qiao, Jing Shao, Ziwei Liu, Yuanhan Zhang, Qinghong Sun, Yichun Zhou, Zexin He, Zhenfei Yin, Kun Wang, Lu Sheng, Yu Qiao, Jing Shao, and Ziwei Liu. Bamboo: Building mega-scale vision dataset continually with human-machine synergy, 2022b.
Places: A 10 million image database for scene recognition. Bolei Zhou, Agata Lapedriza, Aditya Khosla, Aude Oliva, Antonio Torralba, IEEE Transactions on Pattern Analysis and Machine Intelligence. Bolei Zhou, Agata Lapedriza, Aditya Khosla, Aude Oliva, and Antonio Torralba. Places: A 10 million image database for scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017a.
Scene parsing through ade20k dataset. Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso, Antonio Torralba, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionBolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Scene parsing through ade20k dataset. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 633-641, 2017b.
| [
"https://github.com/megvii-research/RevCol"
]
|
[
"Self-consistent equation for torsion arising as a consequence of the Dirac sea quantum fluctuations in external classical electromagnetic and gravitational fields",
"Self-consistent equation for torsion arising as a consequence of the Dirac sea quantum fluctuations in external classical electromagnetic and gravitational fields"
]
| [
"S N Vergeles \nLandau Institute for Theoretical Physics\nRussian Academy of Sciences\nMoscow region142432ChernogolovkaRussia\n\nDepartment of Theoretical Physics\nMoscow Institute of Physics and Technology\nMoskow region141707DolgoprudnyjRussia\n"
]
| [
"Landau Institute for Theoretical Physics\nRussian Academy of Sciences\nMoscow region142432ChernogolovkaRussia",
"Department of Theoretical Physics\nMoscow Institute of Physics and Technology\nMoskow region141707DolgoprudnyjRussia"
]
| []
| The quantum fluctuations of the Dirac field in external classical gravitational and electromagnetic fields are studied. A self-consistent equation for torsion is calculated, which is obtained using one-loop fermion diagrams. | 10.1088/1361-6382/ac7e14 | [
"https://arxiv.org/pdf/2203.03625v2.pdf"
]
| 247,315,525 | 2203.03625 | 4192a043a833e5dda3db6dcc118c6455797eba3a |
Self-consistent equation for torsion arising as a consequence of the Dirac sea quantum fluctuations in external classical electromagnetic and gravitational fields
20 Jul 2022
S N Vergeles
Landau Institute for Theoretical Physics
Russian Academy of Sciences
Moscow region142432ChernogolovkaRussia
Department of Theoretical Physics
Moscow Institute of Physics and Technology
Moskow region141707DolgoprudnyjRussia
Self-consistent equation for torsion arising as a consequence of the Dirac sea quantum fluctuations in external classical electromagnetic and gravitational fields
PACS numbers: 04.62.+v
The quantum fluctuations of the Dirac field in external classical gravitational and electromagnetic fields are studied. A self-consistent equation for torsion is calculated, which is obtained using one-loop fermion diagrams.
I. INTRODUCTION
In this paper, we study the theory of gravity, minimally coupled to the massive Dirac field, formulated in the Cartan-Palatini form. The Dirac field is considered to be quantized, while the gravitational (and electromagnetic) fields are classical. Since the gauge fields are classical, the mean values of bilinear forms constructed from the Dirac fields are exhausted by one-loop diagrams. The torsion tensor appears as one such mean value. This tensor appears as the fermion part of one of the equations of motion (see the next Section, Eq. (2.4)):
$$e^\mu_b e^\nu_c\left(D_\mu e^a_\nu - D_\nu e^a_\mu\right) = \frac{il_P^2}{4}\,\bar\Psi\left(\gamma^a\sigma_{bc} + \sigma_{bc}\gamma^a\right)\Psi. \tag{1.1}$$
By definition, the left-hand side of the last equation is the torsion tensor $T^a_{\;bc}$. In this work, the mean value of the right-hand side of Eq. (1.1) is calculated explicitly. Only divergent contributions (in momentum space) are taken into account. Therefore, the result can depend only on local geometric quantities (the curvature and torsion tensors and their covariant derivatives). As a result of these calculations, the right-hand side of Eq. (1.1) turns out to be a local function of the curvature and torsion tensors and their covariant derivatives. If we assume that the curvature tensor is given, then in this way a self-consistent equation for the torsion tensor arises. This equation is one of the equations of motion in the studied model.
The self-consistent equation for the torsion tensor, Eq. (3.16), may be interesting for the following reasons. Even a superficial study of this equation shows that in the present cosmological epoch the torsion tensor generated by the Dirac field must be zero. This fact is consistent with experimental data. However, near the Big Bang, conditions may occur under which the torsion tensor turns out to be nonzero. To study the cosmological consequences arising from this, separate studies are needed.
In this paper, the question of the contribution to the right-hand side of Eq. (1.1) from the Weyl fermion fields (neutrinos) remains open. This contribution must be calculated since Weyl fermions exist. However, the corresponding computational procedure must be somewhat modified, since the Weyl fields are either massless or their masses are extremely small.
There is a good reason to consider gravitational fields as classical. The quantum theory of gravity is a nonrenormalizable theory. In particular, this means that quantum fluctuations of gravitational dynamical variables (tetrads and connections) are large on ultra-small scales of the order of the Planck length [1]. But on scales much larger than the Planck scale, these fluctuations decrease rapidly (according to a power law). Therefore, we will assume that when considering physics on scales that are much larger than Planck's, fluctuations of the gravitational degrees of freedom are insignificant, that is, these degrees of freedom are described by classical fields. That is why the gravitational fields $e^a_\mu$ and $\omega^{ab}_\mu$ are considered as classical. However, the quantum fluctuations of the Dirac field are important for wavelengths shorter than the Compton wavelength of the Dirac particle.
Recently, the problems of the dynamics of the Dirac and Weyl spin current in gravitational and electromagnetic fields have been actively studied. For example, in the work [2] the master equations governing the interaction between gravitational fields and the gauge-invariant spin current are derived. Note that the canonical spin current (2.6) is proportional to the torsion tensor, and therefore such a spin current in an external electromagnetic field is considered here. In the paper [3] a variant of the spin current constructed with the help of the spin operator in the Pauli-Lubanski form is studied. In that paper, it is shown that the vacuum expectation value of such a spin in an external electromagnetic field is nonzero. It is shown below that the same vacuum expectation value of the spin current (2.6) is zero. We refer the reader for the interesting results and a significant number of citations on this topic to the review [4]. The one-loop calculation of the effective action arising due to integration over the Dirac field in the same model (2.1) is contained in [5].
We also point to the review [6], entitled "General relativity with spin and torsion", which may be useful in getting acquainted with the problem under study here.
This article is structured as follows. Section II defines the model under study and introduces notation. Section III presents the results of calculations and writes out a self-consistent equation for the torsion tensor. Then comes the Conclusion.
II. DEFINITION OF THE MODEL AND TECHNICAL MEANS
Let us write out the action of gravity coupled with a massive Dirac field in external classical electromagnetic and gravitational fields:
$$\mathcal{A} = \mathcal{A}_g + \mathcal{A}_\Psi, \qquad \mathcal{A}_g = -\frac{1}{4l_P^2}\int \varepsilon_{abcd}\,\varepsilon^{\mu\nu\lambda\rho} R^{ab}_{\mu\nu}\, e^c_\lambda e^d_\rho\, \mathrm{d}^4x,$$
$$R^{ab}_{\mu\nu} = \partial_\mu\omega^{ab}_\nu - \partial_\nu\omega^{ab}_\mu + \omega^a_{\;c\mu}\omega^{cb}_\nu - \omega^a_{\;c\nu}\omega^{cb}_\mu = -R^{ba}_{\mu\nu},$$
$$\mathcal{A}_\Psi = \int \mathrm{d}^4x\,\sqrt{-g}\left\{\frac{i}{2}\,\tilde e^\mu_a\left(\bar\Psi\gamma^a D_\mu\Psi - \overline{D_\mu\Psi}\,\gamma^a\Psi\right) - m\bar\Psi\Psi\right\}. \tag{2.1}$$
Here
$$D_\mu\Psi = \left(\partial_\mu - ieA_\mu + \omega_\mu\right)\Psi, \qquad \omega_\mu \equiv \frac{1}{2}\,\omega_{ab\mu}\sigma^{ab}, \qquad \sigma^{ab} = \frac{1}{4}\left[\gamma^a, \gamma^b\right]. \tag{2.2}$$
The vector fields $\{\tilde e^\mu_a\}$, $a = 0, 1, 2, 3$, form a local orthonormal basis, so that
$$g_{\mu\nu}\,\tilde e^\mu_a \tilde e^\nu_b = \eta_{ab} = \mathrm{diag}(1, -1, -1, -1), \tag{2.3}$$
and $\tilde e^\mu_a e^b_\mu = \delta^b_a$. The Levi-Civita symbols are equal to unity if their indices are ordered as (0123). We regard the lattice theory of gravity [1] as a regularization of the continuum theory (2.1). Thus, divergent integrals in the calculation of fermionic loops should be cut off at lattice (or Planck) scales.
The equation $\delta\mathcal{A}/\delta\omega^{ab}_\mu = 0$ gives the definition of torsion:
$$T^a_{\mu\nu} \equiv D_\mu e^a_\nu - D_\nu e^a_\mu = \frac{il_P^2}{4}\,\bar\Psi\left(\gamma^a\sigma^{bc} + \sigma^{bc}\gamma^a\right)\Psi\, e_{b\mu} e_{c\nu}. \tag{2.4}$$
Or, equivalently,
$$T^a_{\;bc} = \frac{l_P^2}{4}\,\varepsilon^a_{\;bcd}\,\bar\Psi\gamma^5\gamma^d\Psi. \tag{2.5}$$
In the process of transforming the right-hand side of Eq. (2.4) we used the equality
$$\gamma^a\sigma^{bc} + \sigma^{bc}\gamma^a = -i\,\varepsilon^{abcd}\gamma^5\gamma_d.$$
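Substituting this identity into (2.4) and contracting with $\tilde e^\mu_b\,\tilde e^\nu_c$ gives (2.5) in one line:
$$T^a_{\;bc} = \frac{il_P^2}{4}\left(-i\,\varepsilon^a_{\;bcd}\right)\bar\Psi\gamma^5\gamma^d\Psi = \frac{l_P^2}{4}\,\varepsilon^a_{\;bcd}\,\bar\Psi\gamma^5\gamma^d\Psi.$$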
Variation of the action $\mathcal{A}_\Psi$ with respect to the connection $\omega^{bc}_\mu$ determines the expression for the Dirac spin current $S^a_{\;bc}$:
$$\delta_\omega\mathcal{A}_\Psi \equiv -\frac{1}{2}\int \mathrm{d}^4x\,\sqrt{-g}\; S^\mu_{\;bc}\,\delta\omega^{bc}_\mu, \qquad S^a_{\;bc} = -\frac{1}{2}\,\varepsilon^a_{\;bcd}\,\bar\Psi\gamma^5\gamma^d\Psi. \tag{2.6}$$
Comparing (2.5) and (2.6), we find:
T a bc = − l 2 P 2 S a bc . (2.7)
Thus, the torsion tensor and the Dirac spin current are proportional in the considered model. Hereinafter, $\langle\ldots\rangle$ means averaging over the fermionic vacuum at fixed external classical electromagnetic and gravitational fields.
The Dirac causal propagator is usually denoted as $S^c(x, y)$. However, in order to avoid misunderstandings and confusion with the lower Latin indices of numerous tensors that are present in the formulas along with the causal propagator, we will everywhere denote the causal propagator without an index: $S(x, y)$. Thus, for $y^0 = x^0 + 0$ we have (see Eq. (2.5))
$$\langle\bar\Psi_\beta(x+0)\,\Psi_\alpha(x)\rangle = -iS(x, x+0)_{\alpha\beta}, \qquad \langle T_{abc}\rangle = -\frac{il_P^2}{4}\,\varepsilon_{abcd}\,\operatorname{tr}\gamma^5\gamma^d S(x, x+0). \tag{2.8}$$
Since geometric quantities are assumed to be classical, we always assume $\langle T_{abc}\rangle = T_{abc}$. Further, to simplify calculations, we use normal coordinates $x^\mu$ centered at the point $p$, so that $x^\mu(p) = 0$ and in the vicinity of this point
$$e^a_\mu(x) = \delta^a_\mu + \frac{1}{2}T^a_{\;\nu\mu}(p)\,x^\nu + \left\{\frac{1}{6}R^a_{\;\nu\lambda\mu} + \frac{1}{6}T^a_{\;\nu c}T^c_{\;\lambda\mu} + \frac{1}{3}T^a_{\;\nu\mu;\lambda}\right\}(p)\,x^\nu x^\lambda$$
$$+ \left\{\frac{1}{24}\left(R^c_{\;\nu\lambda\mu}T^a_{\;\rho c} + R^a_{\;\rho\nu c}T^c_{\;\lambda\mu} + T^a_{\;\rho c}T^c_{\;\nu d}T^d_{\;\lambda\mu}\right) + \frac{1}{12}R^a_{\;\rho\nu\mu;\lambda} + \frac{1}{12}T^a_{\;\rho c}T^c_{\;\nu\mu;\lambda} + \frac{1}{8}T^a_{\;\rho c;\lambda}T^c_{\;\nu\mu} + \frac{1}{8}T^a_{\;\rho\mu;\nu;\lambda}\right\}(p)\,x^\nu x^\lambda x^\rho, \tag{2.9}$$
$$\omega_{ab\mu}(x) = \frac{1}{2}R_{ab\nu\mu}(p)\,x^\nu + \left\{\frac{1}{6}R_{ab\nu f}T^f_{\;\lambda\mu} + \frac{1}{3}R_{ab\nu\mu;\lambda}\right\}(p)\,x^\nu x^\lambda. \tag{2.10}$$
Using the formula (2.9) we get:
$$\tilde e^\mu_a(x) = \delta^\mu_a + \delta\tilde e^\mu_a(x), \qquad \delta\tilde e^\mu_a(x) = -\frac{1}{2}T^\mu_{\;\nu a}(p)\,x^\nu + E^\mu_{\;\nu\lambda a}\,x^\nu x^\lambda + E^\mu_{\;\nu\lambda\rho a}\,x^\nu x^\lambda x^\rho,$$
$$E^\mu_{\;\nu\lambda a} = \left\{-\frac{1}{6}R^\mu_{\;\nu\lambda a} + \frac{1}{12}T^\mu_{\;\nu c}T^c_{\;\lambda a} - \frac{1}{3}T^\mu_{\;\nu a;\lambda}\right\}(p),$$
$$E^\mu_{\;\nu\lambda\rho a} = \left\{\frac{1}{24}R^c_{\;\nu\lambda a}T^\mu_{\;\rho c} + \frac{1}{24}R^\mu_{\;\nu\lambda c}T^c_{\;\rho a} - \frac{1}{12}R^\mu_{\;\nu\lambda a;\rho} + \frac{1}{12}T^\mu_{\;\nu c}T^c_{\;\lambda a;\rho} + \frac{1}{24}T^\mu_{\;\nu c;\rho}T^c_{\;\lambda a} - \frac{1}{8}T^\mu_{\;\nu a;\lambda;\rho}\right\}(p). \tag{2.11}$$
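As a first-order consistency check of the sign in $\delta\tilde e^\mu_a$, the duality condition $\tilde e^\mu_a e^a_\nu = \delta^\mu_\nu$ is preserved by the expansions (2.9) and (2.11):
$$\left(\delta^\mu_a - \tfrac{1}{2}T^\mu_{\;\lambda a}x^\lambda\right)\left(\delta^a_\nu + \tfrac{1}{2}T^a_{\;\lambda\nu}x^\lambda\right) = \delta^\mu_\nu + O(x^2).$$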
In the quantities (2.10) and (2.11), the terms of a higher degree relative to the coordinates $x^\mu$ will not be needed further. Note that at the point $p$ Latin indices $a, b, \ldots$ and Greek indices $\mu, \nu, \ldots$ are indistinguishable. Further, the argument $(p)$ for geometric quantities is omitted, since this does not lead to misunderstandings. We will calculate the Dirac causal propagator in (2.8) using perturbation theory. Thus [7]
$$S(0, y) = \left\{i\tilde e^\mu_a\gamma^a D_\mu - m\right\}^{-1}_{0,y} = \left\{(i\gamma^\mu\partial_\mu - m) + V\right\}^{-1}_{0,y} = S^{(0)}(-y) - \int \mathrm{d}^4x\, S^{(0)}(-x)\,V(x)\,S^{(0)}(x-y) + \ldots,$$
$$S^{(0)}(x) = \int \frac{\mathrm{d}^4k}{(2\pi)^4}\, e^{-ikx}\,\frac{\gamma^\mu k_\mu + m}{k^2 - m^2 + i0}, \tag{2.12}$$
where (in the absence of an electromagnetic field)
$$V = i\,\delta\tilde e^\mu_a\,\gamma^a\partial_\mu + i\gamma^\mu\omega_\mu + i\,\delta\tilde e^\mu_a\,\gamma^a\omega_\mu = V^{(0)} + V^{(1)} + V^{(2)}. \tag{2.13}$$
On the right-hand side of the last equality, $V^{(s)}$, $s = 0, 1, 2$, denotes the contribution to the perturbation operator of degree $s$ relative to the coordinates $x^\mu$. The contributions of the powers $s > 2$ are not interesting here, since they do not lead to divergent corrections. This implies that the expansions (2.10) and (2.11) relative to $x^\mu$ are sufficient, since the divergent integrals saturate at $x^\mu \to 0$. Let us write out all $V^{(s)}$, taking into account that the degree of the operator $\partial_\mu$ is equal to $(-1)$. Using (2.10), (2.11) and (2.13) we find:
$$V^{(0)}(x) = -\frac{i}{2}\,\gamma^a T^\mu_{\;\nu a}\, x^\nu\partial_\mu, \tag{2.14}$$
$$V^{(1)} = i\,E^\mu_{\;\nu\lambda a}\, x^\nu x^\lambda\,\gamma^a\partial_\mu + \frac{i}{4}\,R_{ab\mu\nu}\, x^\mu\gamma^\nu\sigma^{ab}, \tag{2.15}$$
$$V^{(2)} = i\,E^\mu_{\;\nu\lambda\rho a}\, x^\nu x^\lambda x^\rho\,\gamma^a\partial_\mu + \frac{i}{2}\left(-\frac{1}{12}R_{bc\nu f}T^f_{\;\lambda a} + \frac{1}{3}R_{bc\nu a;\lambda}\right) x^\nu x^\lambda\,\gamma^a\sigma^{bc}. \tag{2.16}$$
III. CALCULATIONS
Obviously, the term of degree zero relative to $V$ on the right-hand side of Eq. (2.8) is equal to zero. Indeed, we have $\operatorname{tr}\gamma^5\gamma^d S^{(0)}(-y) \equiv 0$.
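Explicitly, with the free propagator (2.12):
$$\operatorname{tr}\gamma^5\gamma^d S^{(0)}(-y) = \int\frac{\mathrm{d}^4k}{(2\pi)^4}\,e^{iky}\,\frac{k_\mu\operatorname{tr}\left(\gamma^5\gamma^d\gamma^\mu\right) + m\operatorname{tr}\left(\gamma^5\gamma^d\right)}{k^2 - m^2 + i0} = 0,$$
since a trace containing $\gamma^5$ is nonzero only when at least four other $\gamma$-matrices stand under it.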
A. The contribution of the electromagnetic field to the torsion tensor

Let us first calculate the contribution to the torsion from the electromagnetic field in the first order and without taking into account gravity. In this case $V = e\gamma^\mu A_\mu$. According to (2.8) and (2.12), this contribution is
$$\delta^{(1)}_A\langle T_{abc}\rangle = \frac{i}{4}\,e\,l_P^2\,\varepsilon_{abcd}\operatorname{tr}\left(\gamma^5\gamma^d\gamma^\nu\gamma^\mu\gamma^\lambda\right)\times \int \mathrm{d}^4x\,\left[\partial_\nu D^{(0)}(-x)\right]\left[\partial_\lambda D^{(0)}(x)\right] A_\mu(x) \equiv 0, \tag{3.1}$$
since $D^{(0)}(-x) = D^{(0)}(x)$. Here $D^{(0)}(x)$ is the causal propagator of a free boson field. Comparison of Eqs. (2.7) and (3.1) shows that the mean of the spin current in the canonical representation (2.6) in an external electromagnetic field is equal to zero in the first order in the field. Note an interesting fact: a similar mean of the spin current, constructed using the representation of the spin operator in the Pauli-Lubanski form, is not equal to zero in an external electromagnetic field [3]. Consequently, these two representations of spin current differ fundamentally in quantum field theory.
B. Self-consistent equation for the torsion tensor
Everywhere below, we assume that the electromagnetic field is zero.

1. Let us calculate the contribution to the right-hand side of Eq. (2.8) in the first order in $V$.

a) Contribution from $V^{(0)}$:
$$\delta_{V^{(0)}}\langle T_{abc}\rangle = -\frac{3}{4}\,l_P^2 \int_E \frac{\mathrm{d}^4k}{(2\pi)^4}\,\frac{k_E^2}{(k_E^2 + m^2)^2}\cdot T_{abc}(p), \qquad k_E^2 = (k^1)^2 + (k^2)^2 + (k^3)^2 + (k^4)^2. \tag{3.2}$$
Hereinafter, the Wick rotation is used to bring the integrals to a convenient form. The quadratically divergent integral in (3.2) must be cut off at a scale of the order of the Planck scale. This means that the maximum possible momentum is of the order
$$k_{\max} \sim \frac{2\pi}{l_P}. \tag{3.3}$$
Therefore we have
$$(C_{V^{(0)}})^2 \equiv \frac{3}{4}\,l_P^2 \int_E^{k_{\max}} \frac{\mathrm{d}^4k}{(2\pi)^4}\,\frac{k_E^2}{(k_E^2 + m^2)^2} \sim 1. \tag{3.4}$$
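For orientation, the leading behavior of this integral for $k_{\max} \gg m$ (a rough estimate, using $\mathrm{d}^4k = 2\pi^2 k_E^3\,\mathrm{d}k_E$ for the angular integration):
$$(C_{V^{(0)}})^2 \approx \frac{3\,l_P^2}{4}\cdot\frac{1}{8\pi^2}\int_0^{k_{\max}}\frac{k_E^5\,\mathrm{d}k_E}{(k_E^2+m^2)^2} \approx \frac{3\,l_P^2\,k_{\max}^2}{64\pi^2} = \frac{3}{16} \approx 0.2 \quad\text{for}\quad k_{\max} = \frac{2\pi}{l_P},$$
indeed of order one, with the precise value depending on the details of the cutoff.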
b) Contribution from $V^{(1)}$.

It is easy to see that this contribution is zero. Indeed, such a contribution is proportional to integrals of the form
$$\int_E \frac{\mathrm{d}^4k}{(2\pi)^4}\,\frac{k^\mu}{(k_E^2 + m^2)^m} = 0, \qquad \int_E \frac{\mathrm{d}^4k}{(2\pi)^4}\,\frac{k^\mu k^\nu k^\lambda}{(k_E^2 + m^2)^{m+1}} = 0, \tag{3.5}$$
which are equal to zero after integration over the angles. Thus
$$\delta^{(1)}_{V^{(1)}}\langle T_{abc}\rangle = 0. \tag{3.6}$$
Further, we take into account only logarithmically divergent contributions, which are obtained by taking into account the potential $V^{(2)}$ in the first order, as well as $V^{(0)}\otimes V^{(1)}$ in the second and $V^{(0)}\otimes V^{(0)}\otimes V^{(0)}$ in the third order in $V$. In this case, taking into account (3.3), we have
$$\int_E^{k_{\max}} \frac{\mathrm{d}^4k}{(2\pi)^4}\,\frac{1}{(k_E^2 + m^2)^2} = \frac{1}{8\pi^2}\ln\frac{2\pi}{l_P m}, \qquad \ln\frac{2\pi}{l_P m} = \ln\frac{2\pi\hbar}{l_P mc} \sim 50 \ \text{for the electron mass}. \tag{3.7}$$
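As a numeric cross-check, with $l_P \approx 1.6\times10^{-35}\,\mathrm{m}$ and the reduced electron Compton wavelength $\hbar/(m_e c) \approx 3.9\times10^{-13}\,\mathrm{m}$:
$$\ln\frac{2\pi\hbar}{l_P m_e c} = \ln\frac{2\pi\times 3.9\times10^{-13}\,\mathrm{m}}{1.6\times10^{-35}\,\mathrm{m}} \approx \ln\left(1.5\times10^{23}\right) \approx 53,$$
consistent with the quoted $\sim 50$.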
Note that the estimate (3.7) is valid in the present epoch. We adhere to the version that the theory is regularized with the help of a lattice. This means that in the era close to the moment of the Big Bang, the volume of momentum space was larger in the same proportion as the volume of space decreased. For this reason, near the Big Bang, we also have $k_{\max} \to \infty$, and the estimate (3.7) turns out to be much larger.
The self-consistent equation for torsion is written in terms of the 4-vector $t^a$, which in the case under consideration is equivalent to the torsion tensor according to the equality
$$T_{abc} = \varepsilon_{abcd}\, t^d. \tag{3.8}$$
In the process of calculations, the Bianchi identities are used, which have the following form in the presence of torsion:
$$R_{a[bcd]} = T_{a[bc;d]} + T_{af[b}T^f_{\;cd]}, \tag{3.9}$$
$$R_{ab[cd;f]} = -R_{abe[f}T^e_{\;cd]}. \tag{3.10}$$
Everywhere we use the standard notation for any multi-index quantity $\kappa$:
$$\kappa^{\ldots}_{\ldots[abc]} \equiv \left(\kappa^{\ldots}_{\ldots abc} + \kappa^{\ldots}_{\ldots bca} + \kappa^{\ldots}_{\ldots cab}\right).$$
It is easy to check that, due to the representation (3.8), the second term on the right-hand side of the equality (3.9) vanishes identically: $T_{af[b}T^f_{\;cd]} \equiv 0$. With (3.9) we get:
$$R_{abcd} - R_{cdab} = \frac{1}{2}\left(T_{bcd;a} - T_{cda;b} - T_{dab;c} + T_{abc;d}\right). \tag{3.11}$$
The Ricci tensor by definition:
$$R_{ab} \equiv \eta^{cd}R_{cadb} = \eta^{cd}R_{acbd}, \qquad R_{ab} - R_{ba} = T^c_{\;ba;c}. \tag{3.12}$$

c) As a result of cumbersome calculations, we get:
$$\delta_{V^{(2)}}\, t^a = \frac{l_P^2}{96\pi^2}\ln\frac{2\pi}{l_P m}\left\{3\,(t^a)^{;b}_{\;\;;b} - 3\left(t^b_{\;;b}\right)^{;a} + \frac{1}{2}\,R\cdot t^a - 2R^a_{\;b}t^b + \varepsilon^{abcd}\,t_b t_{c;d}\right\}. \tag{3.13}$$

2. Contribution from $V\otimes V$:
$$\delta_{V\otimes V}\, t^a = \frac{l_P^2}{96\pi^2}\ln\frac{2\pi}{l_P m}\left\{-\frac{1}{2}\,R\cdot t^a - R^a_{\;b}t^b - \frac{3}{2}\,t^2\cdot t^a + 6\,\varepsilon^{abcd}\,t_b t_{c;d}\right\}. \tag{3.14}$$

3. Contribution from $V\otimes V\otimes V$:
$$\delta_{V\otimes V\otimes V}\, t^a = \frac{l_P^2}{96\pi^2}\ln\frac{2\pi}{l_P m}\cdot\frac{3}{2}\,t^2\cdot t^a. \tag{3.15}$$

4. Putting together the contributions (2.8), (3.2), (3.6), (3.13), (3.14) and (3.15), we arrive at a self-consistent equation for the torsion tensor:
$$-\left(1 + (C_{V^{(0)}})^2\right)t^a + \frac{l_P^2}{96\pi^2}\ln\frac{2\pi}{l_P m}\left\{3\,(t^a)^{;b}_{\;\;;b} - 3\left(t^b_{\;;b}\right)^{;a} - 3R^a_{\;b}t^b + 7\,\varepsilon^{abcd}\,t_b t_{c;d}\right\} = 0. \tag{3.16}$$
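A quick bookkeeping check of (3.16) against (3.13)-(3.15): the curvature-scalar terms cancel ($+\frac{1}{2}R - \frac{1}{2}R = 0$), the Ricci terms add up to $-3R^a_{\;b}t^b$ ($-2 - 1$), the cubic terms cancel ($-\frac{3}{2}t^2 t^a + \frac{3}{2}t^2 t^a = 0$), and the $\varepsilon$-terms add up to $7\,\varepsilon^{abcd}t_b t_{c;d}$ ($1 + 6$).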
It is useful to rewrite Eq. (1.1) in the following form:
$$e^\mu_b e^\nu_c\left(D_\mu e_{a\nu} - D_\nu e_{a\mu}\right) = T_{abc} \equiv \varepsilon_{abcd}\, t^d. \tag{3.17}$$
Considerations about the possibility of the existence of non-zero solutions of Eq. (3.16) are given in the next Section.
IV. DISCUSSION
It would be interesting to vacuum-average the bilinear form of the Dirac fields in the Einstein equation $\delta\mathcal{A}/\delta e^a_\mu = 0$ in a similar way. Thus, the Einstein equation would arise in which the contribution of the Dirac field to the matter energy-momentum tensor would be expressed in terms of the curvature and torsion tensors. In deriving this equation, there would be additional physical difficulties: it would be necessary to make a subtraction from the cosmological constant, which diverges as the fourth power of the cutoff momentum and is due to the Dirac sea. This renormalization should lead to a very small cosmological constant. There was no such difficulty in deriving Eq. (3.16). Another essential difference between the Einstein equation thus obtained and Eq. (3.16) is as follows. The energy-momentum tensor of real particles and antiparticles does not vanish if the curvature and torsion tensors are equal to zero (outside the mass shell). Therefore, in the presence of real particles and antiparticles, the curvature tensor cannot vanish due to the Einstein equation (on the mass shell). The statement remains valid for a zero torsion tensor. On the contrary, Eq. (3.16) allows zeroing of the torsion tensor.
Let us assume that we have the explicitly derived Einstein equation. In this case, we have a closed system of equations for the variables $t^a$, $e^a_\mu$ and $\omega^{ab}_\mu$: the Einstein equation and Eqs. (3.16), (3.17). To this end, it is necessary to express all the geometric quantities in the Einstein equation in terms of the indicated variables. However, in this paper, the Einstein equation (in the specified context) is not studied.
A step in this direction was taken in [5]: the topological correction to the Hilbert-Einstein-Cartan action was calculated, which is contained in the integral over the Dirac field in the one-loop approximation. However, this problem remains unresolved in its entirety at present.
According to the equation (3.16), the torsion field cannot propagate in space-time like the field of scalar (or vector) particles. This statement is not a new result; it can be found in the excellent review of Hehl et al. [6]. To clarify this statement, consider the Klein-Gordon-Fock equation for a scalar field in Minkowski space-time,
$$\partial_b\partial^b\varphi + \mu^2\varphi = 0, \tag{4.1}$$
and an elementary consequence of this equation. In the case of a plane monochromatic wave, $\varphi \propto \exp(-ikx)$, Eq. (4.1) has a nonzero solution only if
$$k^0 = \pm\sqrt{\mu^2 + \mathbf{k}^2}. \tag{4.2}$$
In the case of a scalar particle, we have $\mu^2 > 0$. Therefore, scalar particles can propagate in space-time. However, in Eq. (3.16) the role of the "squared mass" is played by the constant
$$\mu^2 = -32\pi^2 l_P^{-2}\left(\ln\frac{2\pi}{l_P m}\right)^{-1}\left(1 + (C_{V^{(0)}})^2\right) < 0. \tag{4.3}$$
Indeed, consider the case $t \to 0$ and $R^a_{\;bcd} \to 0$. In this case, one can choose a gauge such that $\omega_\mu \to 0$ (see (2.10)), so that $D_\mu \to \partial_\mu$. As a result, we can omit the last two terms in Eq. (3.16), and this equation breaks down into the following two equations:
$$\partial_b\partial^b t^a - 32\pi^2 l_P^{-2}\left(\ln\frac{2\pi}{l_P m}\right)^{-1}\left(1 + (C_{V^{(0)}})^2\right) t^a = 0, \tag{4.4}$$
and $\partial_b t^b = 0$. Comparison of Eqs. (4.1) and (4.4) leads to the equality (4.3). But then for $\mathbf{k}^2 \ll |\mu^2|$ we have $k^0 = -i|\mu|$, and the wave decays exponentially in time: $t^a \propto \exp(-|\mu| x^0)$. Thus, only homogeneous stationary solutions or those close to them can be nonzero. Such a situation (with an inverted mass square) takes place in Landau's theory of second-order phase transitions below the critical temperature.
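For scale, a rough numeric estimate, taking $\ln(2\pi/(l_P m)) \sim 50$ from (3.7) and $1 + (C_{V^{(0)}})^2 \sim 2$:
$$|\mu| \approx \left(\frac{32\pi^2\cdot 2}{50}\right)^{1/2} l_P^{-1} \approx 3.6\, l_P^{-1},$$
so the decay time $\sim 1/|\mu|$ is of the order of the Planck time: long-wavelength torsion perturbations die out essentially instantaneously.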
There is also a significant difference between Eq. (3.16) and the corresponding equation in Landau's theory: in Eq. (3.16) there is no cubic term, but there is a quadratic term containing the first derivative. Obviously, Eq. (3.16) always has a zero solution for the torsion tensor. Finding nonzero solutions to this equation will be the subject of further research. Here we only point out the possible existence of a certain solution. Let $t^a \to 0$ and the gradient terms be unimportant, and we will expand the solution in this small quantity. Then in the equations (3.9) and (3.10) the right-hand sides can be set equal to zero in the leading approximation, and the curvature tensor can be taken in the form
$$R^{ab}_{\;\;cd} = -H^2\left(\delta^a_c\delta^b_d - \delta^a_d\delta^b_c\right)$$
(curvature of de Sitter space). Then $R^a_{\;b} = -3H^2\delta^a_b$, and in the case $t^a_{\;;b} = 0$ Eq. (3.16) reduces, up to an overall nonzero factor, to
$$\left(\mu^2 + 3H^2\right) t^a = 0, \tag{4.5}$$
where $\mu^2$ is given in (4.3). The equation (4.5) can have a non-zero solution if the parenthesis in that equation vanishes. This effect is achieved at a certain, sufficiently large value of the parameter $H$. This superficial consideration shows that if there are no conditions for the existence of the torsion tensor at the present epoch, then such conditions could take place near the Big Bang point.
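Numerically, under the same rough assumptions ($\ln \sim 50$, $1 + (C_{V^{(0)}})^2 \sim 2$), the parenthesis in (4.5) vanishes at
$$H^2 = \frac{|\mu^2|}{3} = \frac{32\pi^2}{3\,l_P^2}\left(\ln\frac{2\pi}{l_P m}\right)^{-1}\left(1 + (C_{V^{(0)}})^2\right) \approx \frac{4.2}{l_P^2}, \qquad H \approx \frac{2}{l_P},$$
i.e. at a Planck-scale de Sitter curvature, in line with the conclusion that such conditions could only occur near the Big Bang.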
Finally, we point out that, according to (2.5), we have the relation
$$J^{5a} = \frac{4}{l_P^2}\, t^a, \qquad J^{5a} \equiv \bar\Psi\gamma^5\gamma^a\Psi. \tag{4.6}$$
Equation (4.6) shows that if the torsion tensor is not equal to zero, then the axial current is also not equal to zero. The physical consequences following from this are not clear to the author.
Acknowledgments I thank Prof. G.E. Volovik for drawing my attention to the problem, as well as for useful advice in the process of work. This work was carried out as a part of the State Program 0033-2019-0005.
[1] S. Vergeles, Classical and Quantum Gravity 38, 085022 (2021).
[2] A. de Camargo, R. Sobreiro, and V. Otoya, arXiv preprint arXiv:2110.09363 (2021).
[3] C.-S. Chu and C.-H. Leung, Physical Review Letters 127, 111601 (2021).
[4] S. S. Cranganore, Physical Review D 104, 124022 (2021).
[5] J. Nascimento, A. Y. Petrov, and P. Porfírio, Physical Review D 105, 044053 (2022).
[6] F. W. Hehl, P. Von der Heyde, G. D. Kerlick, and J. M. Nester, Reviews of Modern Physics 48, 393 (1976).
| []
|
[
"PAMELA results on the cosmic-ray antiproton flux",
"PAMELA results on the cosmic-ray antiproton flux",
"PAMELA results on the cosmic-ray antiproton flux",
"PAMELA results on the cosmic-ray antiproton flux"
]
| [
"O Adriani \nDepartment of Physics\nUniversity of Florence\nI-50019Sesto Fiorentino, FlorenceItaly\n\nINFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly\n",
"G C Barbarino \nDepartment of Physics\nUniversity of Naples \"Federico II\"\nI-80126NaplesItaly\n\nINFN\nSezione di Naples\nI-80126NaplesItaly\n",
"G A Bazilevskaya \nLebedev Physical Institute\nRU-119991MoscowRussia\n",
"R Bellotti \nDepartment of Physics\nUniversity of Bari\nI-70126BariItaly\n\nINFN\nSezione di BariI-70126BariItaly\n",
"M Boezio \nINFN\nSezione di Trieste\nI-34149TriesteItaly\n",
"E A Bogomolov \nIoffe Physical Technical Institute\nRU-194021St. PetersburgRussia\n",
"L Bonechi \nDepartment of Physics\nUniversity of Florence\nI-50019Sesto Fiorentino, FlorenceItaly\n\nINFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly\n",
"M Bongi \nINFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly\n",
"V Bonvicini \nINFN\nSezione di Trieste\nI-34149TriesteItaly\n",
"S Borisov \nINFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly\n\nDepartment of Physics\nUniversity of Rome \"Tor Vergata\"\nI-00133RomeItaly\n\nMoscow Engineering and Physics Institute\nRU-11540MoscowRussia\n",
"S Bottai \nINFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly\n",
"A Bruno \nDepartment of Physics\nUniversity of Bari\nI-70126BariItaly\n\nINFN\nSezione di BariI-70126BariItaly\n",
"F Cafagna \nINFN\nSezione di BariI-70126BariItaly\n",
"D Campana \nINFN\nSezione di Naples\nI-80126NaplesItaly\n",
"R Carbone \nINFN\nSezione di Naples\nI-80126NaplesItaly\n\nDepartment of Physics\nUniversity of Rome \"Tor Vergata\"\nI-00133RomeItaly\n",
"P Carlson \nDepartment of Physics\nKTH\nOskar Klein Centre for Cosmoparticle Physics\nAlbaNova University Centre\nSE-10691StockholmSweden\n",
"M Casolino \nINFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly\n",
"G Castellini ",
"L Consiglio \nINFN\nSezione di Naples\nI-80126NaplesItaly\n",
"M P De Pascale \nINFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly\n\nDepartment of Physics\nUniversity of Rome \"Tor Vergata\"\nI-00133RomeItaly\n",
"C De Santis \nINFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly\n",
"N De Simone \nINFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly\n\nDepartment of Physics\nUniversity of Rome \"Tor Vergata\"\nI-00133RomeItaly\n",
"V Di Felice \nINFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly\n\nDepartment of Physics\nUniversity of Rome \"Tor Vergata\"\nI-00133RomeItaly\n",
"A M Galper \nMoscow Engineering and Physics Institute\nRU-11540MoscowRussia\n",
"W Gillard \nDepartment of Physics\nKTH\nOskar Klein Centre for Cosmoparticle Physics\nAlbaNova University Centre\nSE-10691StockholmSweden\n",
"L Grishantseva \nMoscow Engineering and Physics Institute\nRU-11540MoscowRussia\n",
"P Hofverberg \nDepartment of Physics\nKTH\nOskar Klein Centre for Cosmoparticle Physics\nAlbaNova University Centre\nSE-10691StockholmSweden\n",
"G Jerse \nINFN\nSezione di Trieste\nI-34149TriesteItaly\n",
"A V Karelin \nMoscow Engineering and Physics Institute\nRU-11540MoscowRussia\n",
"S V Koldashov \nMoscow Engineering and Physics Institute\nRU-11540MoscowRussia\n",
"S Y Krutkov \nIoffe Physical Technical Institute\nRU-194021St. PetersburgRussia\n",
"A N Kvashnin \nLebedev Physical Institute\nRU-119991MoscowRussia\n",
"A Leonov \nMoscow Engineering and Physics Institute\nRU-11540MoscowRussia\n",
"V Malvezzi \nINFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly\n",
"L Marcelli \nINFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly\n",
"A G Mayorov \nMoscow Engineering and Physics Institute\nRU-11540MoscowRussia\n",
"W Menn ",
"V V Mikhailov \nMoscow Engineering and Physics Institute\nRU-11540MoscowRussia\n",
"E Mocchiutti \nINFN\nSezione di Trieste\nI-34149TriesteItaly\n",
"A Monaco \nDepartment of Physics\nUniversity of Bari\nI-70126BariItaly\n\nINFN\nSezione di BariI-70126BariItaly\n",
"N Mori \nINFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly\n",
"N Nikonov \nIoffe Physical Technical Institute\nRU-194021St. PetersburgRussia\n\nINFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly\n\nDepartment of Physics\nUniversity of Rome \"Tor Vergata\"\nI-00133RomeItaly\n",
"G Osteria \nINFN\nSezione di Naples\nI-80126NaplesItaly\n",
"P Papini \nINFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly\n",
"M Pearce \nDepartment of Physics\nKTH\nOskar Klein Centre for Cosmoparticle Physics\nAlbaNova University Centre\nSE-10691StockholmSweden\n",
"P Picozza \nINFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly\n\nDepartment of Physics\nUniversity of Rome \"Tor Vergata\"\nI-00133RomeItaly\n",
"C Pizzolotto \nINFN\nSezione di Trieste\nI-34149TriesteItaly\n",
"M Ricci ",
"S B Ricciarini \nINFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly\n",
"L Rossetto \nDepartment of Physics\nKTH\nOskar Klein Centre for Cosmoparticle Physics\nAlbaNova University Centre\nSE-10691StockholmSweden\n",
"M Simon ",
"R Sparvoli \nINFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly\n\nDepartment of Physics\nUniversity of Rome \"Tor Vergata\"\nI-00133RomeItaly\n",
"P Spillantini \nDepartment of Physics\nUniversity of Florence\nI-50019Sesto Fiorentino, FlorenceItaly\n\nINFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly\n",
"Y I Stozhkov \nLebedev Physical Institute\nRU-119991MoscowRussia\n",
"A Vacchi \nINFN\nSezione di Trieste\nI-34149TriesteItaly\n",
"E Vannuccini \nINFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly\n",
"G Vasilyev \nIoffe Physical Technical Institute\nRU-194021St. PetersburgRussia\n",
"S A Voronov \nMoscow Engineering and Physics Institute\nRU-11540MoscowRussia\n",
"J Wu \nDepartment of Physics\nKTH\nOskar Klein Centre for Cosmoparticle Physics\nAlbaNova University Centre\nSE-10691StockholmSweden\n",
"Y T Yurkin \nMoscow Engineering and Physics Institute\nRU-11540MoscowRussia\n",
"G Zampa \nINFN\nSezione di Trieste\nI-34149TriesteItaly\n",
"N Zampa \nINFN\nSezione di Trieste\nI-34149TriesteItaly\n",
"V G Zverev \nMoscow Engineering and Physics Institute\nRU-11540MoscowRussia\n",
"O Adriani \nDepartment of Physics\nUniversity of Florence\nI-50019Sesto Fiorentino, FlorenceItaly\n\nINFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly\n",
"G C Barbarino \nDepartment of Physics\nUniversity of Naples \"Federico II\"\nI-80126NaplesItaly\n\nINFN\nSezione di Naples\nI-80126NaplesItaly\n",
"G A Bazilevskaya \nLebedev Physical Institute\nRU-119991MoscowRussia\n",
"R Bellotti \nDepartment of Physics\nUniversity of Bari\nI-70126BariItaly\n\nINFN\nSezione di BariI-70126BariItaly\n",
"M Boezio \nINFN\nSezione di Trieste\nI-34149TriesteItaly\n",
"E A Bogomolov \nIoffe Physical Technical Institute\nRU-194021St. PetersburgRussia\n",
"L Bonechi \nDepartment of Physics\nUniversity of Florence\nI-50019Sesto Fiorentino, FlorenceItaly\n\nINFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly\n",
"M Bongi \nINFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly\n",
"V Bonvicini \nINFN\nSezione di Trieste\nI-34149TriesteItaly\n",
"S Borisov \nINFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly\n\nDepartment of Physics\nUniversity of Rome \"Tor Vergata\"\nI-00133RomeItaly\n\nMoscow Engineering and Physics Institute\nRU-11540MoscowRussia\n",
"S Bottai \nINFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly\n",
"A Bruno \nDepartment of Physics\nUniversity of Bari\nI-70126BariItaly\n\nINFN\nSezione di BariI-70126BariItaly\n",
"F Cafagna \nINFN\nSezione di BariI-70126BariItaly\n",
"D Campana \nINFN\nSezione di Naples\nI-80126NaplesItaly\n",
"R Carbone \nINFN\nSezione di Naples\nI-80126NaplesItaly\n\nDepartment of Physics\nUniversity of Rome \"Tor Vergata\"\nI-00133RomeItaly\n",
"P Carlson \nDepartment of Physics\nKTH\nOskar Klein Centre for Cosmoparticle Physics\nAlbaNova University Centre\nSE-10691StockholmSweden\n",
"M Casolino \nINFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly\n",
"G Castellini ",
"L Consiglio \nINFN\nSezione di Naples\nI-80126NaplesItaly\n",
"M P De Pascale \nINFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly\n\nDepartment of Physics\nUniversity of Rome \"Tor Vergata\"\nI-00133RomeItaly\n",
"C De Santis \nINFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly\n",
"N De Simone \nINFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly\n\nDepartment of Physics\nUniversity of Rome \"Tor Vergata\"\nI-00133RomeItaly\n",
"V Di Felice \nINFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly\n\nDepartment of Physics\nUniversity of Rome \"Tor Vergata\"\nI-00133RomeItaly\n",
"A M Galper \nMoscow Engineering and Physics Institute\nRU-11540MoscowRussia\n",
"W Gillard \nDepartment of Physics\nKTH\nOskar Klein Centre for Cosmoparticle Physics\nAlbaNova University Centre\nSE-10691StockholmSweden\n",
"L Grishantseva \nMoscow Engineering and Physics Institute\nRU-11540MoscowRussia\n",
"P Hofverberg \nDepartment of Physics\nKTH\nOskar Klein Centre for Cosmoparticle Physics\nAlbaNova University Centre\nSE-10691StockholmSweden\n",
"G Jerse \nINFN\nSezione di Trieste\nI-34149TriesteItaly\n",
"A V Karelin \nMoscow Engineering and Physics Institute\nRU-11540MoscowRussia\n",
"S V Koldashov \nMoscow Engineering and Physics Institute\nRU-11540MoscowRussia\n",
"S Y Krutkov \nIoffe Physical Technical Institute\nRU-194021St. PetersburgRussia\n",
"A N Kvashnin \nLebedev Physical Institute\nRU-119991MoscowRussia\n",
"A Leonov \nMoscow Engineering and Physics Institute\nRU-11540MoscowRussia\n",
"V Malvezzi \nINFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly\n",
"L Marcelli \nINFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly\n",
"A G Mayorov \nMoscow Engineering and Physics Institute\nRU-11540MoscowRussia\n",
"W Menn ",
"V V Mikhailov \nMoscow Engineering and Physics Institute\nRU-11540MoscowRussia\n",
"E Mocchiutti \nINFN\nSezione di Trieste\nI-34149TriesteItaly\n",
"A Monaco \nDepartment of Physics\nUniversity of Bari\nI-70126BariItaly\n\nINFN\nSezione di BariI-70126BariItaly\n",
"N Mori \nINFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly\n",
"N Nikonov \nIoffe Physical Technical Institute\nRU-194021St. PetersburgRussia\n\nINFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly\n\nDepartment of Physics\nUniversity of Rome \"Tor Vergata\"\nI-00133RomeItaly\n",
"G Osteria \nINFN\nSezione di Naples\nI-80126NaplesItaly\n",
"P Papini \nINFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly\n",
"M Pearce \nDepartment of Physics\nKTH\nOskar Klein Centre for Cosmoparticle Physics\nAlbaNova University Centre\nSE-10691StockholmSweden\n",
"P Picozza \nINFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly\n\nDepartment of Physics\nUniversity of Rome \"Tor Vergata\"\nI-00133RomeItaly\n",
"C Pizzolotto \nINFN\nSezione di Trieste\nI-34149TriesteItaly\n",
"M Ricci ",
"S B Ricciarini \nINFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly\n",
"L Rossetto \nDepartment of Physics\nKTH\nOskar Klein Centre for Cosmoparticle Physics\nAlbaNova University Centre\nSE-10691StockholmSweden\n",
"M Simon ",
"R Sparvoli \nINFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly\n\nDepartment of Physics\nUniversity of Rome \"Tor Vergata\"\nI-00133RomeItaly\n",
"P Spillantini \nDepartment of Physics\nUniversity of Florence\nI-50019Sesto Fiorentino, FlorenceItaly\n\nINFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly\n",
"Y I Stozhkov \nLebedev Physical Institute\nRU-119991MoscowRussia\n",
"A Vacchi \nINFN\nSezione di Trieste\nI-34149TriesteItaly\n",
"E Vannuccini \nINFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly\n",
"G Vasilyev \nIoffe Physical Technical Institute\nRU-194021St. PetersburgRussia\n",
"S A Voronov \nMoscow Engineering and Physics Institute\nRU-11540MoscowRussia\n",
"J Wu \nDepartment of Physics\nKTH\nOskar Klein Centre for Cosmoparticle Physics\nAlbaNova University Centre\nSE-10691StockholmSweden\n",
"Y T Yurkin \nMoscow Engineering and Physics Institute\nRU-11540MoscowRussia\n",
"G Zampa \nINFN\nSezione di Trieste\nI-34149TriesteItaly\n",
"N Zampa \nINFN\nSezione di Trieste\nI-34149TriesteItaly\n",
"V G Zverev \nMoscow Engineering and Physics Institute\nRU-11540MoscowRussia\n"
]
| [
"Department of Physics\nUniversity of Florence\nI-50019Sesto Fiorentino, FlorenceItaly",
"INFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly",
"Department of Physics\nUniversity of Naples \"Federico II\"\nI-80126NaplesItaly",
"INFN\nSezione di Naples\nI-80126NaplesItaly",
"Lebedev Physical Institute\nRU-119991MoscowRussia",
"Department of Physics\nUniversity of Bari\nI-70126BariItaly",
"INFN\nSezione di BariI-70126BariItaly",
"INFN\nSezione di Trieste\nI-34149TriesteItaly",
"Ioffe Physical Technical Institute\nRU-194021St. PetersburgRussia",
"Department of Physics\nUniversity of Florence\nI-50019Sesto Fiorentino, FlorenceItaly",
"INFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly",
"INFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly",
"INFN\nSezione di Trieste\nI-34149TriesteItaly",
"INFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly",
"Department of Physics\nUniversity of Rome \"Tor Vergata\"\nI-00133RomeItaly",
"Moscow Engineering and Physics Institute\nRU-11540MoscowRussia",
"INFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly",
"Department of Physics\nUniversity of Bari\nI-70126BariItaly",
"INFN\nSezione di BariI-70126BariItaly",
"INFN\nSezione di BariI-70126BariItaly",
"INFN\nSezione di Naples\nI-80126NaplesItaly",
"INFN\nSezione di Naples\nI-80126NaplesItaly",
"Department of Physics\nUniversity of Rome \"Tor Vergata\"\nI-00133RomeItaly",
"Department of Physics\nKTH\nOskar Klein Centre for Cosmoparticle Physics\nAlbaNova University Centre\nSE-10691StockholmSweden",
"INFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly",
"INFN\nSezione di Naples\nI-80126NaplesItaly",
"INFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly",
"Department of Physics\nUniversity of Rome \"Tor Vergata\"\nI-00133RomeItaly",
"INFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly",
"INFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly",
"Department of Physics\nUniversity of Rome \"Tor Vergata\"\nI-00133RomeItaly",
"INFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly",
"Department of Physics\nUniversity of Rome \"Tor Vergata\"\nI-00133RomeItaly",
"Moscow Engineering and Physics Institute\nRU-11540MoscowRussia",
"Department of Physics\nKTH\nOskar Klein Centre for Cosmoparticle Physics\nAlbaNova University Centre\nSE-10691StockholmSweden",
"Moscow Engineering and Physics Institute\nRU-11540MoscowRussia",
"Department of Physics\nKTH\nOskar Klein Centre for Cosmoparticle Physics\nAlbaNova University Centre\nSE-10691StockholmSweden",
"INFN\nSezione di Trieste\nI-34149TriesteItaly",
"Moscow Engineering and Physics Institute\nRU-11540MoscowRussia",
"Moscow Engineering and Physics Institute\nRU-11540MoscowRussia",
"Ioffe Physical Technical Institute\nRU-194021St. PetersburgRussia",
"Lebedev Physical Institute\nRU-119991MoscowRussia",
"Moscow Engineering and Physics Institute\nRU-11540MoscowRussia",
"INFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly",
"INFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly",
"Moscow Engineering and Physics Institute\nRU-11540MoscowRussia",
"Moscow Engineering and Physics Institute\nRU-11540MoscowRussia",
"INFN\nSezione di Trieste\nI-34149TriesteItaly",
"Department of Physics\nUniversity of Bari\nI-70126BariItaly",
"INFN\nSezione di BariI-70126BariItaly",
"INFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly",
"Ioffe Physical Technical Institute\nRU-194021St. PetersburgRussia",
"INFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly",
"Department of Physics\nUniversity of Rome \"Tor Vergata\"\nI-00133RomeItaly",
"INFN\nSezione di Naples\nI-80126NaplesItaly",
"INFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly",
"Department of Physics\nKTH\nOskar Klein Centre for Cosmoparticle Physics\nAlbaNova University Centre\nSE-10691StockholmSweden",
"INFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly",
"Department of Physics\nUniversity of Rome \"Tor Vergata\"\nI-00133RomeItaly",
"INFN\nSezione di Trieste\nI-34149TriesteItaly",
"INFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly",
"Department of Physics\nKTH\nOskar Klein Centre for Cosmoparticle Physics\nAlbaNova University Centre\nSE-10691StockholmSweden",
"INFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly",
"Department of Physics\nUniversity of Rome \"Tor Vergata\"\nI-00133RomeItaly",
"Department of Physics\nUniversity of Florence\nI-50019Sesto Fiorentino, FlorenceItaly",
"INFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly",
"Lebedev Physical Institute\nRU-119991MoscowRussia",
"INFN\nSezione di Trieste\nI-34149TriesteItaly",
"INFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly",
"Ioffe Physical Technical Institute\nRU-194021St. PetersburgRussia",
"Moscow Engineering and Physics Institute\nRU-11540MoscowRussia",
"Department of Physics\nKTH\nOskar Klein Centre for Cosmoparticle Physics\nAlbaNova University Centre\nSE-10691StockholmSweden",
"Moscow Engineering and Physics Institute\nRU-11540MoscowRussia",
"INFN\nSezione di Trieste\nI-34149TriesteItaly",
"INFN\nSezione di Trieste\nI-34149TriesteItaly",
"Moscow Engineering and Physics Institute\nRU-11540MoscowRussia",
"Department of Physics\nUniversity of Florence\nI-50019Sesto Fiorentino, FlorenceItaly",
"INFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly",
"Department of Physics\nUniversity of Naples \"Federico II\"\nI-80126NaplesItaly",
"INFN\nSezione di Naples\nI-80126NaplesItaly",
"Lebedev Physical Institute\nRU-119991MoscowRussia",
"Department of Physics\nUniversity of Bari\nI-70126BariItaly",
"INFN\nSezione di BariI-70126BariItaly",
"INFN\nSezione di Trieste\nI-34149TriesteItaly",
"Ioffe Physical Technical Institute\nRU-194021St. PetersburgRussia",
"Department of Physics\nUniversity of Florence\nI-50019Sesto Fiorentino, FlorenceItaly",
"INFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly",
"INFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly",
"INFN\nSezione di Trieste\nI-34149TriesteItaly",
"INFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly",
"Department of Physics\nUniversity of Rome \"Tor Vergata\"\nI-00133RomeItaly",
"Moscow Engineering and Physics Institute\nRU-11540MoscowRussia",
"INFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly",
"Department of Physics\nUniversity of Bari\nI-70126BariItaly",
"INFN\nSezione di BariI-70126BariItaly",
"INFN\nSezione di BariI-70126BariItaly",
"INFN\nSezione di Naples\nI-80126NaplesItaly",
"INFN\nSezione di Naples\nI-80126NaplesItaly",
"Department of Physics\nUniversity of Rome \"Tor Vergata\"\nI-00133RomeItaly",
"Department of Physics\nKTH\nOskar Klein Centre for Cosmoparticle Physics\nAlbaNova University Centre\nSE-10691StockholmSweden",
"INFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly",
"INFN\nSezione di Naples\nI-80126NaplesItaly",
"INFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly",
"Department of Physics\nUniversity of Rome \"Tor Vergata\"\nI-00133RomeItaly",
"INFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly",
"INFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly",
"Department of Physics\nUniversity of Rome \"Tor Vergata\"\nI-00133RomeItaly",
"INFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly",
"Department of Physics\nUniversity of Rome \"Tor Vergata\"\nI-00133RomeItaly",
"Moscow Engineering and Physics Institute\nRU-11540MoscowRussia",
"Department of Physics\nKTH\nOskar Klein Centre for Cosmoparticle Physics\nAlbaNova University Centre\nSE-10691StockholmSweden",
"Moscow Engineering and Physics Institute\nRU-11540MoscowRussia",
"Department of Physics\nKTH\nOskar Klein Centre for Cosmoparticle Physics\nAlbaNova University Centre\nSE-10691StockholmSweden",
"INFN\nSezione di Trieste\nI-34149TriesteItaly",
"Moscow Engineering and Physics Institute\nRU-11540MoscowRussia",
"Moscow Engineering and Physics Institute\nRU-11540MoscowRussia",
"Ioffe Physical Technical Institute\nRU-194021St. PetersburgRussia",
"Lebedev Physical Institute\nRU-119991MoscowRussia",
"Moscow Engineering and Physics Institute\nRU-11540MoscowRussia",
"INFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly",
"INFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly",
"Moscow Engineering and Physics Institute\nRU-11540MoscowRussia",
"Moscow Engineering and Physics Institute\nRU-11540MoscowRussia",
"INFN\nSezione di Trieste\nI-34149TriesteItaly",
"Department of Physics\nUniversity of Bari\nI-70126BariItaly",
"INFN\nSezione di BariI-70126BariItaly",
"INFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly",
"Ioffe Physical Technical Institute\nRU-194021St. PetersburgRussia",
"INFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly",
"Department of Physics\nUniversity of Rome \"Tor Vergata\"\nI-00133RomeItaly",
"INFN\nSezione di Naples\nI-80126NaplesItaly",
"INFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly",
"Department of Physics\nKTH\nOskar Klein Centre for Cosmoparticle Physics\nAlbaNova University Centre\nSE-10691StockholmSweden",
"INFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly",
"Department of Physics\nUniversity of Rome \"Tor Vergata\"\nI-00133RomeItaly",
"INFN\nSezione di Trieste\nI-34149TriesteItaly",
"INFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly",
"Department of Physics\nKTH\nOskar Klein Centre for Cosmoparticle Physics\nAlbaNova University Centre\nSE-10691StockholmSweden",
"INFN\nSezione di Rome \"Tor Vergata\"\nI-00133RomeItaly",
"Department of Physics\nUniversity of Rome \"Tor Vergata\"\nI-00133RomeItaly",
"Department of Physics\nUniversity of Florence\nI-50019Sesto Fiorentino, FlorenceItaly",
"INFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly",
"Lebedev Physical Institute\nRU-119991MoscowRussia",
"INFN\nSezione di Trieste\nI-34149TriesteItaly",
"INFN\nSezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly",
"Ioffe Physical Technical Institute\nRU-194021St. PetersburgRussia",
"Moscow Engineering and Physics Institute\nRU-11540MoscowRussia",
"Department of Physics\nKTH\nOskar Klein Centre for Cosmoparticle Physics\nAlbaNova University Centre\nSE-10691StockholmSweden",
"Moscow Engineering and Physics Institute\nRU-11540MoscowRussia",
"INFN\nSezione di Trieste\nI-34149TriesteItaly",
"INFN\nSezione di Trieste\nI-34149TriesteItaly",
"Moscow Engineering and Physics Institute\nRU-11540MoscowRussia"
]
| []
| The satellite-borne experiment PAMELA has been used to make a new measurement of the cosmic-ray antiproton flux and the antiproton-to-proton flux ratio which extends previously published measurements down to 60 MeV and up to 180 GeV in kinetic energy. During 850 days of data acquisition approximately 1500 antiprotons were observed. The measurements are consistent with purely secondary production of antiprotons in the galaxy. More precise secondary production models are required for a complete interpretation of the results. PACS numbers: 96.50.sb, 95.35.+d, 95.55.Vj | 10.1103/physrevlett.105.121101 | [
"https://arxiv.org/pdf/1007.0821v1.pdf"
]
| 7,603,143 | 1007.0821 | d7e6e5df322ad6d72ccb6b715f8963e267cdc432 |
PAMELA results on the cosmic-ray antiproton flux
6 Jul 2010
O Adriani
Department of Physics
University of Florence
I-50019Sesto Fiorentino, FlorenceItaly
INFN
Sezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly
G C Barbarino
Department of Physics
University of Naples "Federico II"
I-80126NaplesItaly
INFN
Sezione di Naples
I-80126NaplesItaly
G A Bazilevskaya
Lebedev Physical Institute
RU-119991MoscowRussia
R Bellotti
Department of Physics
University of Bari
I-70126BariItaly
INFN
Sezione di BariI-70126BariItaly
M Boezio
INFN
Sezione di Trieste
I-34149TriesteItaly
E A Bogomolov
Ioffe Physical Technical Institute
RU-194021St. PetersburgRussia
L Bonechi
Department of Physics
University of Florence
I-50019Sesto Fiorentino, FlorenceItaly
INFN
Sezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly
M Bongi
INFN
Sezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly
V Bonvicini
INFN
Sezione di Trieste
I-34149TriesteItaly
S Borisov
INFN
Sezione di Rome "Tor Vergata"
I-00133RomeItaly
Department of Physics
University of Rome "Tor Vergata"
I-00133RomeItaly
Moscow Engineering and Physics Institute
RU-11540MoscowRussia
S Bottai
INFN
Sezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly
A Bruno
Department of Physics
University of Bari
I-70126BariItaly
INFN
Sezione di BariI-70126BariItaly
F Cafagna
INFN
Sezione di BariI-70126BariItaly
D Campana
INFN
Sezione di Naples
I-80126NaplesItaly
R Carbone
INFN
Sezione di Naples
I-80126NaplesItaly
Department of Physics
University of Rome "Tor Vergata"
I-00133RomeItaly
P Carlson
Department of Physics
KTH
Oskar Klein Centre for Cosmoparticle Physics
AlbaNova University Centre
SE-10691StockholmSweden
M Casolino
INFN
Sezione di Rome "Tor Vergata"
I-00133RomeItaly
G Castellini
L Consiglio
INFN
Sezione di Naples
I-80126NaplesItaly
M P De Pascale
INFN
Sezione di Rome "Tor Vergata"
I-00133RomeItaly
Department of Physics
University of Rome "Tor Vergata"
I-00133RomeItaly
C De Santis
INFN
Sezione di Rome "Tor Vergata"
I-00133RomeItaly
N De Simone
INFN
Sezione di Rome "Tor Vergata"
I-00133RomeItaly
Department of Physics
University of Rome "Tor Vergata"
I-00133RomeItaly
V Di Felice
INFN
Sezione di Rome "Tor Vergata"
I-00133RomeItaly
Department of Physics
University of Rome "Tor Vergata"
I-00133RomeItaly
A M Galper
Moscow Engineering and Physics Institute
RU-11540MoscowRussia
W Gillard
Department of Physics
KTH
Oskar Klein Centre for Cosmoparticle Physics
AlbaNova University Centre
SE-10691StockholmSweden
L Grishantseva
Moscow Engineering and Physics Institute
RU-11540MoscowRussia
P Hofverberg
Department of Physics
KTH
Oskar Klein Centre for Cosmoparticle Physics
AlbaNova University Centre
SE-10691StockholmSweden
G Jerse
INFN
Sezione di Trieste
I-34149TriesteItaly
A V Karelin
Moscow Engineering and Physics Institute
RU-11540MoscowRussia
S V Koldashov
Moscow Engineering and Physics Institute
RU-11540MoscowRussia
S Y Krutkov
Ioffe Physical Technical Institute
RU-194021St. PetersburgRussia
A N Kvashnin
Lebedev Physical Institute
RU-119991MoscowRussia
A Leonov
Moscow Engineering and Physics Institute
RU-11540MoscowRussia
V Malvezzi
INFN
Sezione di Rome "Tor Vergata"
I-00133RomeItaly
L Marcelli
INFN
Sezione di Rome "Tor Vergata"
I-00133RomeItaly
A G Mayorov
Moscow Engineering and Physics Institute
RU-11540MoscowRussia
W Menn
V V Mikhailov
Moscow Engineering and Physics Institute
RU-11540MoscowRussia
E Mocchiutti
INFN
Sezione di Trieste
I-34149TriesteItaly
A Monaco
Department of Physics
University of Bari
I-70126BariItaly
INFN
Sezione di BariI-70126BariItaly
N Mori
INFN
Sezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly
N Nikonov
Ioffe Physical Technical Institute
RU-194021St. PetersburgRussia
INFN
Sezione di Rome "Tor Vergata"
I-00133RomeItaly
Department of Physics
University of Rome "Tor Vergata"
I-00133RomeItaly
G Osteria
INFN
Sezione di Naples
I-80126NaplesItaly
P Papini
INFN
Sezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly
M Pearce
Department of Physics
KTH
Oskar Klein Centre for Cosmoparticle Physics
AlbaNova University Centre
SE-10691StockholmSweden
P Picozza
INFN
Sezione di Rome "Tor Vergata"
I-00133RomeItaly
Department of Physics
University of Rome "Tor Vergata"
I-00133RomeItaly
C Pizzolotto
INFN
Sezione di Trieste
I-34149TriesteItaly
M Ricci
S B Ricciarini
INFN
Sezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly
L Rossetto
Department of Physics
KTH
Oskar Klein Centre for Cosmoparticle Physics
AlbaNova University Centre
SE-10691StockholmSweden
M Simon
R Sparvoli
INFN
Sezione di Rome "Tor Vergata"
I-00133RomeItaly
Department of Physics
University of Rome "Tor Vergata"
I-00133RomeItaly
P Spillantini
Department of Physics
University of Florence
I-50019Sesto Fiorentino, FlorenceItaly
INFN
Sezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly
Y I Stozhkov
Lebedev Physical Institute
RU-119991MoscowRussia
A Vacchi
INFN
Sezione di Trieste
I-34149TriesteItaly
E Vannuccini
INFN
Sezione di FlorenceI-50019Sesto Fiorentino, FlorenceItaly
G Vasilyev
Ioffe Physical Technical Institute
RU-194021St. PetersburgRussia
S A Voronov
Moscow Engineering and Physics Institute
RU-11540MoscowRussia
J Wu
Department of Physics
KTH
Oskar Klein Centre for Cosmoparticle Physics
AlbaNova University Centre
SE-10691StockholmSweden
Y T Yurkin
Moscow Engineering and Physics Institute
RU-11540MoscowRussia
G Zampa
INFN
Sezione di Trieste
I-34149TriesteItaly
N Zampa
INFN
Sezione di Trieste
I-34149TriesteItaly
V G Zverev
Moscow Engineering and Physics Institute
RU-11540MoscowRussia
PAMELA results on the cosmic-ray antiproton flux
6 Jul 2010. arXiv:1007.0821v1 [astro-ph.HE]. (14) IFAC, I-50019 Sesto Fiorentino, Florence, Italy.
The satellite-borne experiment PAMELA has been used to make a new measurement of the cosmic-ray antiproton flux and the antiproton-to-proton flux ratio which extends previously published measurements down to 60 MeV and up to 180 GeV in kinetic energy. During 850 days of data acquisition approximately 1500 antiprotons were observed. The measurements are consistent with purely secondary production of antiprotons in the galaxy. More precise secondary production models are required for a complete interpretation of the results. PACS numbers: 96.50.sb, 95.35.+d, 95.55.Vj
Antiprotons and positrons are a small but not negligible component of the cosmic radiation. They can be produced in the interactions between cosmic-ray nuclei and the interstellar matter. Detailed measurements of the cosmic-ray antiproton energy spectrum therefore provide important information concerning the origin and propagation of cosmic-rays. Exotic sources of primary antiprotons such as the annihilation of dark matter particles [1][2][3] and the evaporation of primordial black holes [4,5] can also be probed. The theoretical energy spectrum of secondary antiprotons has a distinct peak around 2 GeV and rapidly decreases towards lower energies due to the kinematic constraints on the antiproton production. At higher energies the spectrum is slightly steeper than that of the parent protons (e.g. see [6]), which results in a slight decrease of the antiproton-to-proton flux ratio.
Since July 2006, PAMELA (a Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics) has been measuring the antiparticle component of the cosmic radiation. A previous PAMELA measurement of the antiproton-to-proton flux ratio between 1.5 and 100 GeV [7] was found to follow the expectation from secondary production calculations. However, the positron fraction [8, 9] measured in the same energy range showed a clear deviation from secondary production models. In order to explain these results both astrophysical objects (e.g. pulsars) and dark matter have been proposed as positron sources (e.g. [10]). A contribution from pulsars would naturally increase the positron and electron abundances without affecting the antiproton component. Other astrophysical models [11] have been proposed to explain the PAMELA positron results but produce an increase in the antiproton component at very high energies (≥ 100 GeV). A dark matter contribution may require pure leptonic annihilation channels, e.g. [12], or the introduction of a new dark sector of forces, e.g. [13]. In [14] it is noted that any signal in the antiproton energy spectrum may be hidden due to incomplete modelling of secondary production and cosmic-ray propagation.
A detailed measurement of the antiproton energy spectrum over a large energy range is therefore of great interest.
The PAMELA experiment [10, 15] comprises (from top to bottom): a time of flight system, a magnetic spectrometer with silicon tracker planes, an anticoincidence system, an electromagnetic imaging calorimeter, a shower tail catcher scintillator and a neutron detector. These components are housed inside a pressurized container attached to the Russian Resurs-DK1 satellite, which was launched on June 15th 2006. The orbit is elliptical and semi-polar, with an inclination of 70.0° and an altitude varying between 350 km and 610 km. We report on the cosmic-ray antiproton flux over the widest energy range ever achieved: 60 MeV to 180 GeV. We also confirm and extend the previously published PAMELA antiproton-to-proton flux ratio measurement [7] to the same energy range. Data were acquired from July 2006 to December 2008 (850 days), corresponding to more than 10^9 triggers.
Triggered events were selected for analysis if the reconstructed rigidity exceeded the vertical geomagnetic cut-off (estimated using the satellite orbital information) by a factor of 1.3.
Downward-going charge-one particles were selected using the time-of-flight and spectrometer data. Time-of-flight information was also used to select low velocity (anti)protons while electrons were rejected using the electromagnetic calorimeter information, as described in [7]. The remaining electron contamination was estimated to be negligible while contamination from locally produced pions was found to be about 10% between 1 and 3 GV/c and negligible at lower and higher rigidities [7,16].
The highest energy at which antiprotons can be unambiguously measured by PAMELA is determined by the contamination of "spillover" protons which are reconstructed with an incorrect sign of curvature either due to the finite spectrometer resolution or scattering in the spectrometer planes. To reduce this contamination, strict requirements were applied on the quality of the tracks reconstructed in the spectrometer. For example, tracks accompanied by δ-ray emission were discarded to avoid poorly reconstructed coordinates on the silicon planes of the spectrometer. For each track the maximum detectable rigidity (MDR) was evaluated on an event-by-event basis by propagating the estimated coordinate errors and taking into account the track topology. The MDR was required to be 6 times larger than the measured rigidity. This allowed the antiproton measurement to be extended up to 180 GV/c with acceptable contamination from spillover protons. The contamination was estimated using the GPAMELA detector simulation which is based on the GEANT3 package [17]. The simulation contains an accurate representation of the geometry and performance of the PAMELA detectors. For the spectrometer [18] the measured noise of each silicon plane and performance variations over the duration of the measurement were accounted for. The simulation code was validated by comparing the distributions of several significant variables (e.g. coordinate residuals, χ² and the covariance matrix from the track fitting) with those obtained from real data. The high-energy region of the deflection distribution was studied before applying the MDR selection and agreement within 20% was found between data and simulation. This difference was taken as a systematic uncertainty on the spillover contamination which was estimated to be ≃ 30% for the rigidity interval 100-180 GV/c. The efficiencies were carefully studied using both experimental and simulated data [16, 19, 20]. The time dependence of the detector performance (and therefore also efficiency) was studied using proton samples collected during 2-month-long periods. The average global selection efficiency was measured to be ≃ 30%. The number of (anti)protons rejected by the selection criteria due to interactions and energy loss within the detector systems was estimated using the simulation. The number of antiprotons lost due to this selection is energy dependent and varies from ≃ 10% below 1 GeV to ≃ 6% above 50 GeV. The antiproton flux was obtained by considering the geometrical factor (estimated both analytically and with simulations) and the total live time which is provided by an on-board clock that times the periods during which the apparatus is waiting for a trigger.
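To make the two rigidity-based cuts concrete, the sketch below applies them to hypothetical event records; the Event structure, field names and numeric values are illustrative assumptions, not part of the actual PAMELA analysis code.

```python
# Illustrative sketch of the selection cuts described above; the Event
# record and its fields are hypothetical stand-ins for the real analysis.
from dataclasses import dataclass

@dataclass
class Event:
    rigidity: float         # reconstructed rigidity (GV/c)
    vertical_cutoff: float  # vertical geomagnetic cut-off at the orbit position (GV/c)
    mdr: float              # maximum detectable rigidity of the track (GV/c)

def passes_selection(ev: Event) -> bool:
    # keep events whose rigidity exceeds the geomagnetic cut-off by a factor of 1.3
    above_cutoff = ev.rigidity > 1.3 * ev.vertical_cutoff
    # require MDR > 6 x rigidity to suppress "spillover" protons
    spillover_safe = ev.mdr > 6.0 * ev.rigidity
    return above_cutoff and spillover_safe

events = [Event(2.0, 1.2, 20.0), Event(5.0, 4.5, 12.0)]
selected = [ev for ev in events if passes_selection(ev)]  # only the first event survives
```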
The energy-binned antiproton fluxes and antiproton-to-proton flux ratios are given in Table I. The spectrometer resolution has not been unfolded and a systematic uncertainty is included to account for this. Contamination from pions and spillover protons has been subtracted from the results. The first and second errors in the table represent the statistical and systematic uncertainties, respectively. The total systematic uncertainty was obtained by quadratically summing the various systematic errors considered: acceptance, contamination, efficiency estimation, energy losses, interactions and spectrum unfolding. Figure 1 shows the antiproton energy spectrum and Figure 2 shows the antiproton-to-proton flux ratio measured by PAMELA along with other recent experimental data [21-25, 28] and theoretical calculations assuming pure secondary production of antiprotons during the propagation of cosmic rays in the galaxy. The curves were calculated for solar minimum, which is appropriate for the PAMELA data taking period, using the force field approximation [29], [37].
The PAMELA results reproduce the expected peak around 2 GeV in the antiproton flux and are in overall agreement with pure secondary calculations. The experimental uncertainties are smaller than the spread in the different theoretical curves and, therefore, provide important constraints on parameters relevant for secondary production calculations.
For example, the antiproton flux bands from Donato et al. [26] presented in Figure 1 show uncertainties on the propagation parameters (dotted lines) and antiproton production cross-sections (dashed lines) and indicate larger uncertainties than those present in the PAMELA measurements. Figure 3 shows the PAMELA antiproton-to-proton flux ratio compared with a calculation [14] (dashed line) including both a primary antiproton component from the annihilation of 180 GeV wino-like neutralinos and secondary antiprotons. This model, based on the non-thermal production of dark matter in the early universe, was proposed to explain the high-energy rise in the PAMELA positron fraction [8]. As shown by the dashed line in Figure 3, a reasonable choice of GALPROP [31] propagation parameters (dashed-dotted line) allows a good description of PAMELA antiproton data with the inclusion of the wino-annihilation signal. Given current uncertainties on propagation parameters, this primary component cannot be ruled out. It has also been suggested that the PAMELA positron data can be explained without invoking a primary component. This is possible if secondary production takes place in the same region where cosmic rays are being accelerated [11]. An increase in the antiproton [32] and secondary nuclei [33] abundances is also predicted in this model. The solid line in Figure 3 shows the prediction for the high-energy antiproton-to-proton flux ratio. While this theoretical prediction is in good agreement with the PAMELA data, in this energy region it does not differ significantly from the expectation for standard secondary production models. Comparisons with experimental secondary cosmic-ray nuclei data are needed along with higher energy antiproton measurements. New data on the boron-to-carbon ratio measured by PAMELA will soon become available, while the antiproton spectrum is likely to be probed at higher energies by the AMS-02 experiment [34], which will soon be placed on the International Space Station.
We have measured the antiproton energy spectrum and the antiproton-to-proton flux ratio over the most extended energy range ever achieved and with no atmospheric overburden. Our results are consistent with pure secondary production of antiprotons during the propagation of cosmic rays in the galaxy. We note that the quality of our data surpasses the current precision of the theoretical modeling of the cosmic-ray acceleration and propagation mechanisms. Improved models are needed to allow the full significance of these experimental results to be understood.
We acknowledge support from The Italian Space Agency (ASI), Deutsches Zentrum für Luft- und Raumfahrt (DLR), The Swedish National Space Board, The Swedish Research Council, The Russian Space Agency (Roscosmos) and The Russian Foundation for Basic Research.
FIG. 1: The antiproton energy spectrum at the top of the payload obtained in this work compared with contemporary measurements [21-25] and theoretical calculations for a pure secondary production of antiprotons during the propagation of cosmic rays in the galaxy. The dotted and dashed lines indicate the upper and lower limits calculated by Donato et al. [26] for different diffusion models, including uncertainties on propagation parameters and antiproton production cross-sections, respectively. The solid line shows the calculation by Ptuskin et al. [27] for the case of a Plain Diffusion model.
FIG. 2: The antiproton-to-proton flux ratio at the top of the payload obtained in this work compared with contemporary measurements [21-24, 28] and theoretical calculations for a pure secondary production of antiprotons during the propagation of cosmic rays in the galaxy. The dashed lines show the upper and lower limits calculated by Simon et al. [6] for the Leaky Box Model, while the dotted lines show the limits from Donato et al. [30] for a Diffusion Reacceleration with Convection model. The solid line shows the calculation by Ptuskin et al. [27] for the case of a Plain Diffusion model.
FIG. 3: The antiproton-to-proton flux ratio at the top of the payload obtained in this work compared with theoretical calculations. The dotted lines show the upper and lower limits calculated for a pure secondary production of antiprotons during the propagation of cosmic rays in the galaxy by Donato et al. [30] for a Diffusion Reacceleration with Convection model. The dashed line is a calculation by Kane et al. [14] including both a primary antiproton component from annihilation of 180 GeV wino-like neutralinos and secondary antiprotons (dashed-dotted line for the secondary component). The solid line shows the calculation by Blasi and Serpico [32] for secondary antiprotons including an additional antiproton component produced and accelerated at cosmic-ray sources.
TABLE I: Summary of antiproton results. Antiproton fluxes (×10^−3 particles/(m² sr s GeV)) and antiproton-to-proton flux ratios (×10^−5). The upper limits are 90% confidence levels. The first and second errors represent the statistical and systematic uncertainties, respectively.
Rigidity at the spectrometer (GV/c) | Mean kinetic energy at top of payload (GeV) | Observed number of p̄ events | p̄ flux at top of payload | p̄/p at top of payload
0.35 – 0.50 | 0.09 | 0 | < 6.4 | < 0.73
0.50 – 1.01 | 0.28 | 7 | 6.7 ± 2.7 ± 0.2 | 0.48 ± 0.18 ± 0.01
1.01 – 1.34 | 0.56 | 15 | 15.3 +7.5/−3.7 ± 0.9 | 0.99 +0.31/−0.26 ± 0.07
1.34 – 1.63 | 0.81 | 19 | 17.2 +7.4/−3.9 ± 1.1 | 1.33 +0.38/−0.33 ± 0.10
1.63 – 1.93 | 1.07 | 32 | 21.4 +6.8/−3.9 ± 1.3 | 2.04 ± 0.44 ± 0.15
1.93 – 2.23 | 1.34 | 39 | 24.5 +7.2/−4.3 ± 1.5 | 2.78 ± 0.54 ± 0.20
2.23 – 2.58 | 1.61 | 49 | 20.5 ± 3.2 ± 1.2 | 3.43 ± 0.49 ± 0.24
2.58 – 2.99 | 2.03 | 78 | 27.1 ± 3.3 ± 1.6 | 5.44 ± 0.62 ± 0.39
2.99 – 3.45 | 2.42 | 79 | 21.9 ± 2.6 ± 1.3 | 6.10 ± 0.68 ± 0.43
3.45 – 3.99 | 2.90 | 96 | 22.7 ± 2.5 ± 1.3 | 7.78 ± 0.79 ± 0.55
3.99 – 4.62 | 3.47 | 103 | 17.8 ± 1.9 ± 1.0 | 9.15 ± 0.89 ± 0.65
4.62 – 5.36 | 4.14 | 109 | 15.7 ± 1.6 ± 0.9 | 10.7 ± 1.0 ± 0.8
5.36 – 6.23 | 4.93 | 110 | 11.1 ± 1.1 ± 0.7 | 12.0 ± 1.1 ± 0.9
6.2 – 7.3 | 5.9 | 106 | 8.31 ± 0.86 ± 0.49 | 12.5 ± 1.2 ± 0.9
7.3 – 8.5 | 7.0 | 87 | 5.56 ± 0.64 ± 0.33 | 12.2 ± 1.3 ± 0.9
8.5 – 10.1 | 8.4 | 98 | 5.16 ± 0.57 ± 0.30 | 15.6 ± 1.6 ± 1.1
10.1 – 12.0 | 10.1 | 108 | 3.70 ± 0.38 ± 0.22 | 20.8 ± 1.9 ± 1.5
12.0 – 14.6 | 12.3 | 82 | 2.12 ± 0.26 ± 0.12 | 16.1 ± 1.8 ± 1.1
14.6 – 18.1 | 15.3 | 64 | 1.39 ± 0.19 ± 0.08 | 20.7 ± 2.4 ± 1.5
18.1 – 23.3 | 19.6 | 56 | 0.67 ± 0.10 ± 0.04 | 17.4 ± 2.2 ± 1.2
23.3 – 31.7 | 26.2 | 42 | 0.251 ± 0.041 ± 0.015 | 17.1 ± 2.5 ± 1.2
31.7 – 48.5 | 38.0 | 36 | 0.127 ± 0.023 ± 0.007 | 18.3 ± 3.0 ± 1.3
48.5 – 100.0 | 67.4 | 22 | 0.0228 ± 0.0072 ± 0.0008 | 17.7 ± 4.8 ± 0.8
100.0 – 180.0 | 128.9 | 3 | 0.0036 +0.0057/−0.0020 ± 0.0002 | 14 +16/−10 ± 1
* On leave from School of Mathematics and Physics, China University of Geosciences, CN-430074 Wuhan, China.
[1] G. Jungman, M. Kamionkowski, and K. Griest, Phys. Rep. 267, 195 (1996).
[2] L. Bergström, Rep. Prog. Phys. 63, 793 (2000).
[3] G. Bertone, D. Hooper, and J. Silk, Phys. Rep. 405, 279 (2005).
[4] S. Hawking, Nature 248, 30 (1974).
[5] P. Kiraly et al., Nature 293, 120 (1981).
[6] M. Simon, A. Molnar, and S. Roesler, Astrophys. J. 499, 250 (1998).
[7] O. Adriani et al., Phys. Rev. Lett. 102, 051101 (2009).
[8] O. Adriani et al., Nature 458, 607 (2009).
[9] O. Adriani et al., to appear in Astropart. Phys., arXiv:1001.3522v1.
[10] M. Boezio et al., New J. Phys. 11, 105023 (2009).
[11] P. Blasi, Phys. Rev. Lett. 103, 051104 (2009).
[12] M. Cirelli, M. Kadastik, M. Raidal, and A. Strumia, Nucl. Phys. B 813, 1 (2008).
[13] I. Cholis, G. Dobler, D. P. Finkbeiner, L. Goodenough, and N. Weiner, Phys. Rev. D 80, 123518 (2009).
[14] G. Kane, R. Lu, and S. Watson, Phys. Lett. B 681, 151 (2009).
[15] P. Picozza et al., Astropart. Phys. 27, 296 (2007).
[16] A. Bruno, Ph.D. thesis, University of Bari, Bari, Italy (2008), http://pamela.roma2.infn.it/.
[17] R. Brun et al., Detector description and simulation tool, CERN program library (1994), version 3.21.
[18] S. Straulino et al., Nucl. Instrum. Meth. A 556, 100 (2006).
[19] P. Hofverberg, Ph.D. thesis, Royal Institute of Technology (KTH), Stockholm, Sweden (2008), http://pamela.roma2.infn.it/.
[20] J. Wu, Licentiate thesis, Royal Institute of Technology (KTH), Stockholm, Sweden (2010).
[21] M. Boezio et al., Astrophys. J. 487, 415 (1997).
[22] M. Boezio et al., Astrophys. J. 561, 787 (2001).
[23] Y. Asaoka et al., Phys. Rev. Lett. 88, 051101 (2002).
[24] K. Abe et al., Phys. Lett. B 670, 103 (2008).
[25] M. Aguilar et al., Phys. Rep. 366, 331 (2002).
[26] F. Donato et al., Astrophys. J. 563, 172 (2001).
[27] V. S. Ptuskin et al., Astrophys. J. 642, 902 (2006).
[28] A. S. Beach et al., Phys. Rev. Lett. 87, 271101 (2001).
[29] L. J. Gleeson and W. I. Axford, Astrophys. J. 154, 1011 (1968).
[30] F. Donato, D. Maurin, P. Brun, T. Delahaye, and P. Salati, Phys. Rev. Lett. 102, 071301 (2009).
[31] A. W. Strong and I. V. Moskalenko, Astrophys. J. 509, 212 (1998).
[32] P. Blasi and P. D. Serpico, Phys. Rev. Lett. 103, 081103 (2009).
[33] P. Mertsch and S. Sarkar, Phys. Rev. Lett. 103, 081104 (2009).
[34] R. Battiston et al., in Proc. 29th Int. Cosmic Ray Conf. (Pune) (2005), vol. 10, p. 151.
[35] J. W. Bieber et al., Phys. Rev. Lett. 83, 674 (1999).
[36] U. W. Langner and M. S. Potgieter, Adv. Sp. Res. 34, 144 (2004).
[37] While more precise models of solar modulation, accounting for effects such as sign-charge dependence of the modulation, exist (e.g. [35, 36]), the force field model is a simple approach that provides a reasonably good approximation of the solar modulation above 1-2 GeV.
| []
|
[
"SPENSER: Towards a NeuroEvolutionary Approach for Convolutional Spiking Neural Networks",
"SPENSER: Towards a NeuroEvolutionary Approach for Convolutional Spiking Neural Networks"
]
| [
"Henrique Branquinho [email protected] \nUniversity of Coimbra\nCISUC\nDEICoimbraPortugal\n",
"Nuno Lourenço \nUniversity of Coimbra\nCISUC\nDEICoimbraPortugal\n",
"Ernesto Costa [email protected] \nUniversity of Coimbra\nCISUC\nDEICoimbraPortugal\n"
]
| [
"University of Coimbra\nCISUC\nDEICoimbraPortugal",
"University of Coimbra\nCISUC\nDEICoimbraPortugal",
"University of Coimbra\nCISUC\nDEICoimbraPortugal"
]
| []
| Spiking Neural Networks (SNNs) have attracted recent interest due to their energy efficiency and biological plausibility. However, the performance of SNNs still lags behind traditional Artificial Neural Networks (ANNs), as there is no consensus on the best learning algorithm for SNNs. Best-performing SNNs are based on ANN to SNN conversion or learning with spike-based backpropagation through surrogate gradients. The focus of recent research has been on developing and testing different learning strategies, with hand-tailored architectures and parameter tuning. Neuroevolution (NE), has proven successful as a way to automatically design ANNs and tune parameters, but its applications to SNNs are still at an early stage. DENSER is a NE framework for the automatic design and parametrization of ANNs, based on the principles of Genetic Algorithms (GA) and Structured Grammatical Evolution (SGE). In this paper, we propose SPENSER, a NE framework for SNN generation based on DENSER, for image classification on the MNIST and Fashion-MNIST datasets. SPENSER generates competitive performing networks with a test accuracy of 99.42% and 91.65% respectively. | 10.1145/3583133.3596399 | [
"https://export.arxiv.org/pdf/2305.10987v1.pdf"
]
| 258,762,578 | 2305.10987 | d3cbd607f82d8624f0f4e73a16dd753e7f076370 |
SPENSER: Towards a NeuroEvolutionary Approach for Convolutional Spiking Neural Networks
Henrique Branquinho [email protected]
University of Coimbra
CISUC
DEICoimbraPortugal
Nuno Lourenço
University of Coimbra
CISUC
DEICoimbraPortugal
Ernesto Costa [email protected]
University of Coimbra
CISUC
DEICoimbraPortugal
SPENSER: Towards a NeuroEvolutionary Approach for Convolutional Spiking Neural Networks
10.1145/3583133.3596399. CCS Concepts: • Computing methodologies → Neural networks; Computer vision. • Theory of computation → Evolutionary algorithms; Grammars and context-free languages. Keywords: spiking neural networks, neuroevolution, DENSER, computer vision.
Spiking Neural Networks (SNNs) have attracted recent interest due to their energy efficiency and biological plausibility. However, the performance of SNNs still lags behind traditional Artificial Neural Networks (ANNs), as there is no consensus on the best learning algorithm for SNNs. Best-performing SNNs are based on ANN to SNN conversion or learning with spike-based backpropagation through surrogate gradients. The focus of recent research has been on developing and testing different learning strategies, with hand-tailored architectures and parameter tuning. Neuroevolution (NE), has proven successful as a way to automatically design ANNs and tune parameters, but its applications to SNNs are still at an early stage. DENSER is a NE framework for the automatic design and parametrization of ANNs, based on the principles of Genetic Algorithms (GA) and Structured Grammatical Evolution (SGE). In this paper, we propose SPENSER, a NE framework for SNN generation based on DENSER, for image classification on the MNIST and Fashion-MNIST datasets. SPENSER generates competitive performing networks with a test accuracy of 99.42% and 91.65% respectively.
Artificial Neural Networks (ANNs) have seen tremendous progress over the last decade, allowing for the development of high-performing models for computer vision, speech recognition, and natural language processing [23]. However, the success of ANNs is highly dependent not only on the availability of annotated data but mostly on computationally powerful hardware such as Graphical Processing Units (GPU). This hardware dependency foresees an unsustainable future for Artificial Intelligence (AI), as the state-of-the-art models have millions of floating point parameters and require large pipelines of power-hungry hardware for training, resulting in large carbon footprints [35].
Spiking Neural Networks (SNNs), often called the third generation of neural networks, are biologically inspired neural network models built with spiking neurons, where information is encoded in discrete binary events over time called action potentials or spikes [31]. SNNs are innately sparse and highly parallelizable, which favors processing speed and energetic efficiency. Albeit still lagging behind ANNs in terms of performance, SNNs show great promise for the future of biologically plausible and sustainable AI. Current bottlenecks in SNN research include the lack of an established learning strategy, such as error backpropagation in ANNs, due to the non-differentiability of the spiking neuron's activation function, and high sensitivity to parameter tuning. Due to this, the focus of recent research, especially regarding image classification problems, has been on developing and testing different learning strategies, with hand-tailored architectures and parameter tuning usually based on successful ANN models [34]. However, it is unclear if these ANN architectures are suited for SNNs as well.
Evolutionary computation (EC) methods are known to be an effective optimization tool [3], and their application to the optimization of ANNs, known as neuroevolution (NE), has proven successful both as a learning strategy as well as a way to automatically design networks and tune parameters [4]. DENSER [1,2] is a NE framework for the automatic design and parametrization of ANNs, based on the principles of Genetic Algorithms (GA) and Structured Grammatical Evolution (SGE). DENSER has attained impressive results on several benchmark problems, and due to its grammar-based engine, can easily be generalized to a multitude of domains.
In this paper, we propose SPENSER (SPiking Evolutionary Network StructurEd Representation), a NE framework for evolving Convolutional Spiking Neural Networks (CSNN) based on DENSER. This paper is a preliminary experimental study to validate SPENSER for image classification problems. In this study, we evolved the architecture and parameters of SNNs with SPENSER on the MNIST [24] and Fashion-MNIST [42] public datasets, using a fixed learning strategy (Backpropagation Through Time and surrogate gradients).
To the best of our knowledge, this is the first work focusing on evolving SNNs trained with BPTT for image classification, including not only different architectures but also different neuronal dynamics and optimizers in the search space. The main contribution of this paper is the preliminary validation of neuroevolution through SPENSER in the automatic generation of competitively performing CSNNs. The main focus of the paper is on the performance of the generated networks in terms of accuracy.
The remainder of this paper is structured as follows: Section 2 provides a review of important concepts regarding SNNs; Section 3 covers related work regarding evolutionary approaches for SNNs; Section 4 describes SPENSER; Section 5 describes the experimental setup; Section 6 analyses the experimental results, covering the evolutionary search and the testing performance of the generated models; Section 7 provides some final remarks and suggested guidelines for future research.
SPIKING NEURAL NETWORKS
Spiking Neural Networks (SNNs) are a class of neural network models built with spiking neurons where information is encoded in the timing and frequency of discrete events called spikes (or action potentials) over time [31]. Spiking neurons can be characterized by a membrane potential (U) and an activation threshold (U_thresh). The weighted sum of inputs of the neuron increases the membrane potential over time. When the membrane potential reaches its activation threshold, a spike is generated (fired) and propagated to subsequent connections. In a feed-forward network, inputs are presented to the network in the form of spike trains (timed sequences of spikes) over T time steps, during which time spikes are accumulated and propagated throughout the network up to the output neurons.
There are a number of spiking neuron models that vary in biological plausibility and computational cost, such as the more realistic and computationally expensive Hodgkin-Huxley [13], to the more simplistic and computationally lighter models such as the Izhikevich [17], Integrate-and-Fire (IF) [22] and Leaky Integrate-and-Fire (LIF) [9]. We refer to Long and Fang [26] for an in-depth review of existing spiking neuron models and their behaviour.
The LIF neuron is the most commonly used in the literature due to its simplicity and low computational cost. The LIF neuron can be modelled as a simple parallel Resistor-Capacitor (RC) circuit with a "leaky" resistor:
$C \frac{dU(t)}{dt} = -G \left( U(t) - U_{rest} \right) + I(t)$   (1)
In Eq. 1, C is a capacitor, G is the "leaky" resistor (conductor), U_rest is the resting potential and I(t) is the current source (synaptic input) that charges up the capacitor to increase the membrane potential U(t). Solving this differential equation through the Euler method (demonstration in [11]), we can calculate a neuron's membrane potential at a given timestep as:
$U[t] = \beta U[t-1] + W X[t] - S[t-1] U_{thresh}$   (2)
In Eq. 2, β is the decay rate of the membrane potential, X[t] is the input vector (corresponding to I(t)), W is the vector of input weights, and S[t] is the activation function. The activation function can be defined as follows:
$S[t] = \begin{cases} 1, & \text{if } U[t] > U_{thresh} \\ 0, & \text{otherwise} \end{cases}$   (3)
A LIF neuron's membrane potential naturally decays to its resting state over time if no input is received (β U[t−1]). The potential increases when a spike is received from incoming connections, proportionally to the connection's weight (W X[t]). When the membrane potential U surpasses the activation threshold U_thresh, a spike is emitted and propagated to outgoing connections and the membrane's potential resets (−S[t−1] U_thresh). Resetting the membrane's potential can be done either by subtraction, as is done in the presented example, where U_thresh is subtracted at the onset of a spike; or to zero, where the membrane potential is set to 0 after a spike. A refractory period is usually taken into account, where a neuron's potential remains at rest after spiking in spite of incoming spikes. The decay rate β and the threshold U_thresh can be static or trainable.
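As a minimal sketch, the update rule of Eqs. (2)-(3) can be written in a few lines of PyTorch, here with reset by subtraction and illustrative values for β and U_thresh:

```python
import torch

def lif_step(x, mem, w, beta=0.9, u_thresh=1.0):
    """One discrete LIF time step (Eqs. 2-3) with reset by subtraction.

    x: input spikes at time t; mem: membrane potential U[t-1];
    w: input weight matrix; beta and u_thresh are illustrative defaults.
    """
    spk = (mem > u_thresh).float()             # S[t-1]: fire if the threshold was crossed
    mem = beta * mem + x @ w - spk * u_thresh  # decay + weighted input - reset by subtraction
    return spk, mem

# Example: 100 input synapses onto 10 LIF neurons over 25 time steps.
w = 0.1 * torch.rand(100, 10)
mem = torch.zeros(10)
for t in range(25):
    x = torch.bernoulli(torch.full((100,), 0.3))  # random rate-coded input spikes
    spk, mem = lif_step(x, mem, w)
```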
Existing frameworks such as snntorch [11] allow for the development of SNNs by integration of spiking neuron layers in standard ANN architectures such as Convolutional Neural Networks, by simply replacing the activation layer with a spiking neuron layer.
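For instance, in snntorch a convolutional block becomes spiking by placing an snn.Leaky layer where an ANN would use, e.g., a ReLU; the layer sizes and neuron parameters below are illustrative choices, not those evolved by SPENSER:

```python
import torch.nn as nn
import snntorch as snn
from snntorch import surrogate

# A convolutional block whose activation is a layer of LIF neurons.
conv_block = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1),
    nn.MaxPool2d(2),
    snn.Leaky(beta=0.9, threshold=1.0,
              spike_grad=surrogate.atan(),  # surrogate gradient for the backward pass
              reset_mechanism="subtract",   # reset by subtraction, as in Eq. (2)
              init_hidden=True),            # the layer keeps its own membrane state
)
```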
Information Coding
Spiking systems rely on discrete events to propagate information, so the question arises as to how this information is encoded. We focus on two encoding strategies: rate coding and temporal coding. In rate coding, information is encoded in the frequency of firing rates. This is the case in the communication between photoreceptor cells and the visual cortex, where brighter inputs generate higher frequency firing rates as opposed to darker inputs and respectively lower frequency firing rates [14]. ANNs rely on rate coding of information, as each neuron's output is meant to represent an average firing rate. In temporal coding, information is encoded in the precise timing of spikes. A photoreceptor system with temporal coding would encode a bright input as an early spike and a dark input as a last spike. When considering the output of an SNN for a classification task, the predicted class would either be: the one with the highest firing frequency, using rate coding; the one that fires first, using temporal coding.
Temporal coding is advantageous in terms of speed and power consumption, as fewer spikes are needed to convey information, resulting in more sparse events which translate to fewer memory accesses and computation. On the other hand, rate coding is advantageous in terms of error tolerance, as the timing constraint is relaxed to the overall firing rate, and promoting learning, as the absence of spikes can lead to the "dead neuron" problem, where no learning takes place as there is no spike in the forward pass. Increased spiking activity prevents the "dead neuron" problem.
Learning
Learning in SNNs remains one of the biggest challenges in the community due to the non-differentiability of the activation function of spiking neurons (Eq. 3), which does not allow for the direct transposition of the error backpropagation algorithm.
Commonly used learning strategies include unsupervised learning through Spike-Timing-Dependent Plasticity (STDP) [8], offline conversion from trained ANNs to SNNs (also known as shadow training) [7,36], and supervised learning through backpropagation either using spike times [5] or adaptations of the activation function to a continuous-valued function [15,16,25,33,38]. In this work, we focus on the latter, by training SNNs using backpropagation through time (BPTT) and surrogate gradients.
BPTT is an application of the backpropagation algorithm to the unrolled computational graph over time, usually applied to Recurrent Neural Networks (RNNs) [41]. In order to bypass the non-differentiability of the spiking neuron's activation function, one can use surrogate gradients, by approximating the activation function with continuous functions centered at the activation threshold during the backward pass of backpropagation [33].
In this experimental study, we considered two surrogate gradient functions available in snntorch [11]:
• Fast-Sigmoid:
$S \approx \frac{U}{1 + k|U|}$   (4)
• ATan (shifted arc-tan function):
$S \approx \frac{1}{\pi} \arctan\left( \pi U \frac{\alpha}{2} \right)$   (5)
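In practice, a surrogate gradient is implemented as a custom autograd function: the forward pass keeps the hard threshold of Eq. (3), while the backward pass substitutes the derivative of the smooth approximation, here 1/(k|U| + 1)² for the fast-sigmoid of Eq. (4). The sketch below mirrors what snntorch.surrogate provides ready-made; the slope value is an illustrative choice.

```python
import torch

class FastSigmoidSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, fast-sigmoid gradient in the backward pass."""
    k = 25.0  # slope of the surrogate (illustrative)

    @staticmethod
    def forward(ctx, mem_shifted):
        # mem_shifted = U - U_thresh; emit a spike when the membrane crosses the threshold
        ctx.save_for_backward(mem_shifted)
        return (mem_shifted > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (mem_shifted,) = ctx.saved_tensors
        # derivative of U / (1 + k|U|), evaluated around the threshold crossing
        return grad_output / (FastSigmoidSpike.k * mem_shifted.abs() + 1.0) ** 2
```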
Regarding the loss function, there are a number of choices available depending on the output encoding of the network (rate vs temporal), which calculate the loss based either on spikes or on membrane potential. For this experimental study, we considered rate encoding for inputs and outputs, and as such chose the Mean Square Error Spike Count Loss (adapted from [38]). The spike counts of both correct and incorrect classes are specified as targets as a proportion of the total number of time steps (for example, the correct class should fire 80% of the time and the incorrect classes should only fire 10%). The target firing rates are not required to sum to 100%. After a complete forward pass, the mean square error between the actual ($\sum_{t=0}^{T} S_c[t]$) and target ($\hat{S}_c$) spike counts of each class is calculated and summed together (Eq. 6).
$\mathcal{L} = \frac{1}{C} \sum_{c=0}^{C-1} \left( \sum_{t=0}^{T} S_c[t] - \hat{S}_c \right)^2$   (6)
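A direct implementation of Eq. (6) could look as follows; snntorch ships a comparable ready-made loss (snntorch.functional.mse_count_loss) with configurable correct/incorrect target rates, so this sketch is only illustrative:

```python
import torch

def mse_spike_count_loss(spk_rec, targets, correct_rate=1.0, incorrect_rate=0.0):
    """Mean square error between actual and target spike counts (Eq. 6).

    spk_rec: output spikes, shape [num_steps, batch, num_classes];
    targets: ground-truth class indices, shape [batch].
    """
    num_steps = spk_rec.size(0)
    counts = spk_rec.sum(dim=0)  # actual spike count per class over all time steps
    # target counts are proportions of the total number of time steps
    target_counts = torch.full_like(counts, incorrect_rate * num_steps)
    target_counts.scatter_(1, targets.unsqueeze(1), correct_rate * num_steps)
    return ((counts - target_counts) ** 2).mean()  # averaged over classes and batch
```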
RELATED WORK
Recent works blending EC and SNNs are mostly focused on evolving a network's weights, using evolutionary approaches as a learning strategy [20,29,30]. Schuman et al. [37] proposed Evolutionary Optimization for Neuromorphic Systems, aiming to train spiking neural networks for classification and control tasks, to train under hardware constraints, to evolve a reservoir for a liquid state machine, and to evolve smaller networks using multi-objective optimization. However, they focus on simple machine learning classification tasks and scalability is unclear. Elbrecht and Schuman [10] used HyperNeat [40] to evolve SNNs focusing on the same classification tasks. Grammatical Evolution (GE) has also been used previously by López-Vázquez et al. [30] to evolve SNNs for simple classification tasks.
The current state of the art in the automatic design of CSNN architectures are the works of Kim et al. [19] and AutoSNN by Na et al. [32]. Both works focus on Neural Architecture Search (NAS), with an evolutionary search component implemented in AutoSNN, and attain state-of-the-art performances in the CIFAR-10, CIFAR-100 [21], and TinyImageNet datasets. However, both works fix the generated networks' hyperparameters such as LIF neuron parameters and learning optimizer. Our work differs from these works by incorporating these properties in the search space.
SPENSER
SPENSER (SPiking Evolutionary Network StructurEd Representation) is a general-purpose evolutionary-based framework for the automatic design of SNNs, based on DENSER [1,2], combining the principles of Genetic Algorithms (GA) [39] and Dynamical Structured Grammatical Evolution (DSGE) [27,28]. SPENSER works on a two-level basis, separating the GA and the DSGE level, which allows for the modeling of the overall network structure at the GA level while leaving the network layer's specifications for the DSGE (Figure 1). The use of a grammar is what makes SPENSER a general-purpose framework, as one solely needs to change the grammar to handle different network and layer types, problems and parameters range.
The GA level encodes the macrostructure representing the sequence of evolutionary units that form the network. Each unit corresponds to a nonterminal from the grammar that is later expanded through DSGE. With this representation, we can encode not only the network's layers as evolutionary units but also the optimizer and data augmentation. Furthermore, by assigning each evolutionary unit to a grammar nonterminal, we can encode prior knowledge and bound the overall network architecture.
The DSGE level is responsible for the specification of each layer's type and parameters, working independently from the GA level. DSGE represents an individual's genotype as a set of expansion choices for each expansion rule in the grammar. Starting from a nonterminal unit from the GA level, DSGE follows the expansions set in the individual's genotype until all symbols in the phenotype are terminals. Rules for the layer types and parameters are represented as a Context-Free Grammar (CFG), making it easier to adapt the framework to different types of networks, layers and problem domains.
An example encoding to build CSNNs could be defined by Grammar 1 and the following GA macro structure:
[(features, 1, 10), (classification, 1, 3), (learning, 1, 1), (output, 1, 1)]
The numbers in each macro unit represent the minimum and maximum number of units that can be incorporated into the network. With this example, the features block encodes layers for feature extraction, and therefore we can generate networks with convolutional and pooling layers, followed by 1 to 3 fully-connected layers from the classification units. The activation layers are restricted to LIF nodes with different surrogate gradient options. The learning unit represents the optimizer used for learning and its parameters. The output unit encodes the network's output layer. Numeric parameters are defined by their type, the number of parameters to generate, and the range of possible values. Regarding variation operators, SPENSER relies on mutations on both levels. At the GA level, individuals can be mutated by adding, replicating, or removing genes, i.e. layers. At the DSGE level, mutation changes the layers' parameters by grammatical mutation (replacing grammatical expansions), integer mutation (replacing an integer parameter with a uniformly generated random one), and float mutation (modifying a float parameter through Gaussian perturbation). SPENSER follows a (1 + λ) evolutionary strategy where the parent individual for the next generation is chosen by highest fitness and mutated to generate the offspring. This evolutionary strategy was chosen due to the computational demands of the network training process, which limits the population size in regard to execution time. An illustrative grammar fragment in this style is sketched below.
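Grammar 1 itself is not reproduced here; the hypothetical fragment below only illustrates the style of such a CFG, with nonterminals for the macro units and bracketed numeric parameters giving name, type, count and range (all production and parameter names are assumptions for illustration):

```
<features>       ::= <convolution> <activation> | <convolution> <pooling> <activation> | <dropout>
<classification> ::= <fully-connected> <activation> | <dropout>
<activation>     ::= layer:LIF [beta,float,1,0.3,0.99] [threshold,float,1,0.5,2.0]
<learning>       ::= <adam> | <sgd> | <rmsprop>
```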
5 EXPERIMENTAL SETUP
For this experimental study, we evolved and tested networks on the MNIST [24] and Fashion-MNIST [42] datasets, available through the Torchvision library of PyTorch. All images were converted to grayscale and their original size was kept (28x28). In order to apply SNNs to these datasets, the images were converted to spike trains using rate coding. The pixel values are normalized between 0 and 1, and each pixel value is used as a probability in a Binomial distribution, which is then sampled from to generate spike trains of length T time steps. No data augmentation was used. We considered different time steps for each dataset according to their complexity.
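The rate coding step can be sketched in a few lines of PyTorch; each normalized pixel is treated as a per-time-step firing probability (snntorch's spikegen.rate provides equivalent functionality):

import torch

def rate_encode(images, num_steps):
    """images: (batch, 1, 28, 28) tensor with values already normalized to [0, 1]."""
    probs = images.unsqueeze(0).expand(num_steps, *images.shape)
    return torch.bernoulli(probs)   # spike train of shape (T, batch, 1, 28, 28)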
Datasets were split into three subsets: EvoTrain, Fitness, and Test. The Test split is the one provided by Torchvision. The EvoTrain and Fitness splits are a 70/30 split of the original Train split. Each independent run generates different EvoTrain and Fitness splits. Table 1 summarises the chosen time steps and the number of samples per split for each dataset. As this is a preliminary study to validate SPENSER, we settled on one-pass training of individuals as a trade-off between speed and accuracy. During the evolutionary search, individuals are trained on the EvoTrain split for 1 epoch and tested against the Fitness split for fitness assignment. After the evolutionary search is complete, the best individual is further trained for 50 epochs on the entire Train set, and tested against the Test set for accuracy assessment.
We used snntorch [11] to assemble, train and evaluate SNNs based on rate coding. Individuals are trained using BPTT and the chosen loss function was the Mean Square Error Spike Count described in Section 2.2, with a target spiking proportion of 100% for the correct class and 0% for the incorrect class. The predicted class for a given instance is calculated based on the highest spike count of the output neurons. Accuracy is used as the fitness metric during the evolutionary search and as the final performance assessment of the best found individuals.
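For illustration, a hand-written sketch (not an evolved individual) of how such a network can be assembled and scored with snntorch; all layer sizes and neuron parameters below are hypothetical:

import torch
import torch.nn as nn
import snntorch as snn
from snntorch import surrogate
from snntorch import functional as SF

class TinyCSNN(nn.Module):
    def __init__(self, beta=0.9, threshold=1.0):
        super().__init__()
        grad = surrogate.atan()   # ATan surrogate gradient
        self.conv = nn.Conv2d(1, 8, 3, padding=1)
        self.lif1 = snn.Leaky(beta=beta, threshold=threshold, spike_grad=grad)
        self.fc = nn.Linear(8 * 28 * 28, 10)
        self.lif2 = snn.Leaky(beta=beta, threshold=threshold, spike_grad=grad)

    def forward(self, spike_train):             # spike_train: (T, batch, 1, 28, 28)
        mem1 = self.lif1.init_leaky()
        mem2 = self.lif2.init_leaky()
        out_spikes = []
        for x in spike_train:                    # unroll over time (BPTT)
            spk1, mem1 = self.lif1(self.conv(x), mem1)
            spk2, mem2 = self.lif2(self.fc(spk1.flatten(1)), mem2)
            out_spikes.append(spk2)
        return torch.stack(out_spikes)           # (T, batch, 10)

# Spike-count loss with 100%/0% target rates; prediction by highest spike count:
loss_fn = SF.mse_count_loss(correct_rate=1.0, incorrect_rate=0.0)
# spk_out = TinyCSNN()(spike_train); loss = loss_fn(spk_out, targets)
# predicted class = spk_out.sum(dim=0).argmax(dim=1)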
The macro structure of individuals for the GA level was set as:

[(<features>, 1, 6), (<classification>, 1, 4), (<output>, 1, 1), (<learning>, 1, 1)]
Because we are dealing with an image recognition problem, we defined a grammar that contains primitives allowing for the construction of CSNNs, as shown in Grammar 2. Following is a brief description of the grammar. <features> units can be expanded to either Convolutional + Activation, Convolutional + Pooling + Activation, or Dropout layers. Convolutional layers are defined by the number of filters, filter shape, stride, padding, and bias. Pooling layers are defined by the pooling type (max or average) and the kernel size. <classification> units can be expanded to either Fully-Connected + Activation or Dropout layers. Fully-Connected layers are defined by the number of units. Dropout layers are defined by the dropout rate. The <output> unit is set as a Fully-Connected + Activation layer where the number of units is fixed to the number of classes. Activation layers are currently limited to LIF neurons. LIF neurons are defined by the decay rate (beta), the activation threshold, and the reset mechanism (subtraction or zero). Furthermore, they are also defined by the surrogate gradient function, which in this case can be either the ATan or the Fast-Sigmoid functions described in Section 2.2. The <learning> unit encodes the optimizer and can be expanded to either Stochastic Gradient Descent, Adam, or RMSProp. We increased the probability of choosing feature extraction layers over dropout for <features> units (Grammar 2, line 1).

Regarding SPENSER's main hyper-parameters, we followed the recommendations of [1,2], summarised in Table 2. The table is divided in two parts: i) evolutionary parameters, specifying the evolutionary engine properties such as the number of generations, number of parents (μ), number of offspring (λ), mutation rates, and fitness function; ii) training parameters, specifying the overall learning parameters fixed for all networks.
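As a rough illustration of how decoded phenotype tokens of this grammar could be turned into PyTorch modules, consider the following hypothetical builder; it is a sketch under assumed key names, not SPENSER's actual mapping, and padding handling is omitted for brevity:

import torch.nn as nn

def build_layer(spec):
    """Map one decoded phenotype layer specification onto a PyTorch module."""
    kind = spec["layer"]
    if kind == "conv":
        return nn.LazyConv2d(spec["num-filters"], spec["filter-shape"],
                             stride=spec["stride"], bias=spec["bias"])
    if kind == "fc":
        return nn.LazyLinear(spec["num-units"], bias=spec["bias"])
    if kind == "dropout":
        return nn.Dropout(spec["rate"])
    if kind == "pool-max":
        return nn.MaxPool2d(spec["kernel-size"])
    if kind == "pool-avg":
        return nn.AvgPool2d(spec["kernel-size"])
    raise ValueError(f"unknown layer type: {kind}")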
All the code, configuration files, grammar, and execution instructions for these experiments are publicly available on GitHub 1 .
1 https://github.com/henriquejsb/spenser

6 EXPERIMENTAL RESULTS

6.1 Evolutionary Search

The evolutionary results are promising and show that SPENSER is able to generate increasingly better-performing individuals. Figures 2 and 3 display the evolution of the best fitness and the average fitness of the population across 200 generations, and a violin plot of the fitness of the best found individuals, in the MNIST and Fashion-MNIST datasets respectively. The more notable aspects of the evolutionary search are the constant increase in best fitness and the diminishing variance over generations (Fig. 2(a), 3(a)). These aspects showcase SPENSER's ability to uncover new and better individuals, and its consistency over different runs in generating better-performing individuals. Furthermore, the average fitness of the population also increases, particularly in the Fashion-MNIST dataset (Fig. 3(b)), which demonstrates SPENSER's stability, as a random search would yield a constant average fitness.

In order to understand if there are any notably better design choices for CSNNs, we summarized the characteristics of the best individuals (both from MNIST and Fashion-MNIST) in Table 3. The most interesting result is the total absence of Average Pooling layers; Kim et al. [19] had also observed during their NAS that Average Pooling is not preferred for SNNs and degrades performance. Furthermore, it is interesting to notice that the ATan surrogate gradient is preferred over the Fast-Sigmoid. The choice of Adam as the preferred optimizer is not surprising, as it usually is the best-performing optimizer of the three.
6.2 Test Results
After evolving for 200 generations, the best individuals were trained further for another 50 epochs (totaling 51 epochs) and evaluated on the Test set. Violin plots of the test accuracy on the MNIST and Fashion-MNIST datasets are displayed in Fig. 4. Test results of the different runs show small variations, showcasing SPENSER's robustness in generating high-performing networks. We compared the best attained test accuracy with other works that also trained hand-tailored networks through spike-based backpropagation. A comparison of test results is presented in Tab. 4. Albeit not surpassing the state of the art, networks generated by SPENSER are head-to-head with the best-performing networks in the literature.

In order to validate our choice of one-epoch training for fitness assessment, we also trained the best networks found in the first generation of each run for another 50 epochs and tested their performance on the Test set. Fig. 5 displays violin plots for the test accuracy of the best individuals from generation 1 and generation 200. It is clear that the networks' performance is dependent on the architecture rather than on the number of training epochs, and that the networks evolved by SPENSER perform better than random initialization. We hypothesize that a big limitation in this experimental study was the choice of the loss function's parameters, as it does not follow the literature's recommendations [34]. By setting the target firing rate of incorrect classes to 0%, we might be suppressing output activity which is important to distinguish between closely distanced inputs. Furthermore, this experimental setup is sluggish, as training with BPTT is slower than in traditional ANNs and highly memory-intensive. Kim et al. [19] have achieved impressive results without training the generated networks during the search phase, by estimating their future performance based on spike activation patterns across different data samples, and we believe this might be an important improvement to our framework. With faster experiments, we can focus on increasing diversity and coverage of the search space, so that SPENSER can yield better individuals.
7 FINAL REMARKS
In this paper we propose SPENSER, a NE framework to automatically design CSNNs. SPENSER is able to generate competitively performing networks for image classification at the level of the state of the art, without human parametrization of the network's architecture and parameters. SPENSER generated networks with competitive results, attaining 99.42% accuracy on the MNIST [24] and 91.65% accuracy on the Fashion-MNIST [42] datasets. Current limitations lie in the execution time, due to the computationally intensive BPTT learning algorithm and the memory requirements. Furthermore, we believe the configuration of the loss function played a role in suppressing output activity and potentially decreasing accuracy.
Future Work
In the future, we plan on:
• Experiment with different loss functions / encode the loss function as an evolvable macro parameter;
• Perform a more in-depth study of the preferred choices during evolution and observable patterns in the best-performing individuals. This could be relevant in uncovering novel optimal architectures and parameters;
• Experiment with different learning algorithms.
• Implement skip connections and back connections.
• Apply regularisation methods to prevent vanishing and exploding gradients.
<fully-connected> ::= layer : dense [num-units, int, 1, 16, 128]
<activation> ::= layer : LIF <surrogate-gradient>
<surrogate-gradient> ::= <ATan> | <FastSigmoid>
<output> ::= layer : dense num-units : 10 <activation>
<learning> ::= <Adam> | <SGD>
...
Grammar 1: Example of a Convolutional Spiking Neural Network grammar.
Figure 1: Individual generation by SPENSER. The first line represents the GA level, where the macrostructure of the network is defined (this individual has 2 features units and 2 classification units). The second line represents the specification of a classification unit through DSGE. Each number in the DSGE level represents the index of the chosen expansion rule for the current non-terminal. The last line is the resulting phenotype of the layer in question [1].
<features> ::= <aux-convolution> | <aux-convolution> | <aux-convolution> | <dropout>
<aux-convolution> ::= <convolution> <pooling> <activation>
<activation> ::= layer : act <beta> <threshold> <surr-grad> <reset-mechanism>
<reset-mechanism> ::= reset : subtract | reset : zero
<beta> ::= [beta, float, 1, 0, 1] <beta-trainable>
<threshold> ::= [threshold, float, 1, 0.5, 1.5] <threshold-trainable>
<beta-trainable> ::= beta-trainable : True | beta-trainable : False
<threshold-trainable> ::= threshold-trainable : True | threshold-trainable : False
<surr-grad> ::= surr-grad : atan | surr-grad : fast-sigmoid
<pooling> ::= <pool-type> [kernel-size, int, 1, 2, 4] | layer : no-op
<pool-type> ::= layer : pool-avg | layer : pool-max
<classification> ::= <fully-connected> <activation> | <dropout>
<convolution> ::= layer : conv [num-filters, int, 1, 32, 128] [filter-shape, int, 1, 2, 4] [stride, int, 1, 1, 3] <padding> <bias>
<padding> ::= padding : same | padding : valid
<dropout> ::= layer : dropout [rate, float, 1, 0, 0.5]
<fully-connected> ::= layer : fc [num-units, int, 1, 32, 256] <bias>
<bias> ::= bias : True | bias : False
<output> ::= <fully-last> <activation>
<fully-last> ::= layer : fc num-units : 10 bias : True
<learning> ::= <gradient-descent> | <rmsprop> | <adam>
<gradient-descent> ::= learning : gradient-descent [lr, float, 1, 0.0001, 0.1] [momentum, float, 1, 0.68, 0.99] [decay, float, 1, 0.000001, 0.001] <nesterov>
<nesterov> ::= nesterov : True | nesterov : False
<adam> ::= learning : adam [lr, float, 1, 0.0001, 0.1] [beta1, float, 1, 0.5, 0.9999] [beta2, float, 1, 0.5, 0.9999] [decay, float, 1, 0.000001, 0.001] <amsgrad>
<amsgrad> ::= amsgrad : True | amsgrad : False
<rmsprop> ::= learning : rmsprop [lr, float, 1, 0.0001, 0.1] [rho, float, 1, 0.5, 1] [decay, float, 1, 0.000001, 0.001]
Grammar 2: Convolutional Spiking Neural Network grammar.
Figure 2: Evolutionary analysis of SPENSER on the MNIST dataset over 200 generations: (a) best fitness, (b) average fitness, (c) fitness of the best found individuals. The results are averaged over 5 runs.
Figure 3: Evolutionary analysis of SPENSER on the Fashion-MNIST dataset over 200 generations: (a) best fitness, (b) average fitness, (c) fitness of the best found individuals. The results are averaged over 5 runs.
Figure 4: Violin plots of the Test accuracy of the best individuals after further training for 50 epochs.
Figure 5: Test accuracy on Fashion-MNIST for the best individuals from Generation 1 and Generation 200, after 50 epochs of training.
Table 1: Time steps and number of samples per split for each dataset (MNIST and Fashion-MNIST). EvoTrain and Fitness are a 70/30 split of the original Train split.

          Time Steps (T)   EvoTrain   Fitness   Test
MNIST     10               42000      18000     10000
F-MNIST   25               42000      18000     10000
Table 2: Hyper-parameters for SPENSER.

Evolutionary Parameter    Value
Number of runs            5
Number of Generations     200
μ (#Parents)              1
λ (#Offspring)            10
Add Layer Rate            25%
Duplicate Layer Rate      15%
Remove Layer Rate         25%
Layer DSGE Rate           15%
Learning DSGE Rate        30%
Gaussian Perturbations    N(0, 0.15)
Fitness Function          Accuracy

Training Parameters       Value
Number of epochs          1
Batch Size                64
Loss Function             Mean Square Error Spike Count
Correct Rate              1.0
Incorrect Rate            0.0
Table 3: Network characteristics (percentage) for the best 10 individuals from MNIST and Fashion-MNIST.

Layer Types
  Convolutional     35%
  Average Pooling   0%
  Max Pooling       19%
  Dropout           11%
  Fully-Connected   35%
Reset Mechanism
  Subtract          63%
  Zero              37%
Optimizers
  Adam              70%
  SGD               20%
  RMSProp           10%
Surrogate Gradients
  ATan              76%
  Fast-Sigmoid      24%
Table 4: Test accuracy comparison of the state of the art and our work.

                    MNIST    Fashion-MNIST
Zhang et al. [43]   99.62%   90.13%
Cheng et al. [6]    99.50%   92.07%
Fang et al. [12]    99.72%   94.38%
Jiang et al. [18]   99.61%   94.35%
SPENSER (ours)      99.42%   91.65%
ACKNOWLEDGMENTS
This research was supported by the Portuguese Recovery and Resilience Plan (PRR) through project C645008882-00000055, Center for Responsible AI, and by the FCT - Foundation for Science and Technology, I.P./MCTES through national funds (PIDDAC), within the scope of CISUC R&D Unit - UIDB/00326/2020 or project code UIDP/00326/2020. The first author is partially funded by FCT - Foundation for Science and Technology, Portugal, under the grant 2022.11314.BD.
REFERENCES
[1] Filipe Assunção, Nuno Lourenço, Penousal Machado, and Bernardete Ribeiro. 2019. DENSER: deep evolutionary network structured representation. Genetic Programming and Evolvable Machines 20, 1 (2019), 5-35.
[2] Filipe Assunção, Nuno Lourenço, Bernardete Ribeiro, and Penousal Machado. 2021. Fast-DENSER: Fast Deep Evolutionary Network Structured Representation. SoftwareX 14 (2021), 100694.
[3] Thomas Bäck and Hans-Paul Schwefel. 1993. An overview of evolutionary algorithms for parameter optimization. Evolutionary Computation 1, 1 (1993), 1-23.
[4] Alejandro Baldominos, Yago Saez, and Pedro Isasi. 2020. On the automated, evolutionary design of neural networks: past, present, and future. Neural Computing and Applications 32 (2020), 519-545.
[5] Sander M Bohte, Joost N Kok, and Han La Poutre. 2002. Error-backpropagation in temporally encoded networks of spiking neurons. Neurocomputing 48, 1-4 (2002), 17-37.
[6] Xiang Cheng, Yunzhe Hao, Jiaming Xu, and Bo Xu. 2021. LISNN: improving spiking neural networks with lateral interactions for robust object recognition. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence. 1519-1525.
[7] Shikuang Deng and Shi Gu. 2021. Optimal Conversion of Conventional Artificial Neural Networks to Spiking Neural Networks. In International Conference on Learning Representations.
[8] Peter U Diehl and Matthew Cook. 2015. Unsupervised learning of digit recognition using spike-timing-dependent plasticity. Frontiers in Computational Neuroscience 9 (2015), 99.
[9] Sangya Dutta, Vinay Kumar, Aditya Shukla, Nihar R Mohapatra, and Udayan Ganguly. 2017. Leaky integrate and fire neuron by charge-discharge dynamics in floating-body MOSFET. Scientific Reports 7, 1 (2017), 8257.
[10] Daniel Elbrecht and Catherine Schuman. 2020. Neuroevolution of Spiking Neural Networks Using Compositional Pattern Producing Networks. In International Conference on Neuromorphic Systems 2020 (ICONS 2020). Association for Computing Machinery, New York, NY, USA, 1-5.
[11] Jason K. Eshraghian, Max Ward, Emre Neftci, Xinxin Wang, Gregor Lenz, Girish Dwivedi, Mohammed Bennamoun, Doo Seok Jeong, and Wei D. Lu. 2022. Training Spiking Neural Networks Using Lessons From Deep Learning. arXiv:2109.12894 [cs].
[12] Wei Fang, Zhaofei Yu, Yanqi Chen, Timothée Masquelier, Tiejun Huang, and Yonghong Tian. 2021. Incorporating learnable membrane time constant to enhance learning of spiking neural networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 2661-2671.
[13] A. L. Hodgkin and A. F. Huxley. 1952. A quantitative description of membrane current and its application to conduction and excitation in nerve. The Journal of Physiology 117, 4 (Aug. 1952), 500-544.
[14] David H Hubel and Torsten N Wiesel. 1962. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. The Journal of Physiology 160, 1 (1962), 106.
[15] Dongsung Huh and Terrence J Sejnowski. 2018. Gradient descent for spiking neural networks. Advances in Neural Information Processing Systems 31 (2018).
[16] Eric Hunsberger and Chris Eliasmith. 2015. Spiking deep networks with LIF neurons. arXiv preprint arXiv:1510.08829 (2015).
[17] Eugene M Izhikevich. 2003. Simple model of spiking neurons. IEEE Transactions on Neural Networks 14, 6 (2003), 1569-1572.
[18] Chunming Jiang and Yilei Zhang. 2023. KLIF: An optimized spiking neuron unit for tuning surrogate gradient slope and membrane potential. arXiv preprint arXiv:2302.09238 (2023).
[19] Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, and Priyadarshini Panda. 2022. Neural Architecture Search for Spiking Neural Networks. https://doi.org/10.48550/arXiv.2201.10355 arXiv:2201.10355 [cs, eess].
[20] Katarzyna Kozdon and Peter Bentley. 2018. The Evolution of Training Parameters for Spiking Neural Networks with Hebbian Learning. In ALIFE 2018: The 2018 Conference on Artificial Life. MIT Press, 276-283.
[21] Alex Krizhevsky, Geoffrey Hinton, et al. 2009. Learning multiple layers of features from tiny images. (2009).
[22] Louis Lapicque. 1907. Recherches quantitatives sur l'excitation electrique des nerfs traitee comme une polarization. Journal de physiologie et de pathologie générale 9 (1907), 620-635.
[23] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. Nature 521, 7553 (2015), 436-444.
[24] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. 1998. Gradient-based learning applied to document recognition. Proc. IEEE 86, 11 (1998), 2278-2324.
[25] Eimantas Ledinauskas, Julius Ruseckas, Alfonsas Juršėnas, and Giedrius Buračas. 2020. Training deep spiking neural networks. arXiv preprint arXiv:2006.04436 (2020).
[26] Lyle Long and Guoliang Fang. 2010. A Review of Biologically Plausible Neuron Models for Spiking Neural Networks. In AIAA Infotech@Aerospace 2010. American Institute of Aeronautics and Astronautics, Atlanta, Georgia.
[27] Nuno Lourenço, Filipe Assunção, Francisco B Pereira, Ernesto Costa, and Penousal Machado. 2018. Structured grammatical evolution: a dynamic approach. In Handbook of Grammatical Evolution. Springer, 137-161.
[28] Nuno Lourenço, Francisco B Pereira, and Ernesto Costa. 2015. SGE: a structured representation for grammatical evolution. In International Conference on Artificial Evolution (Evolution Artificielle). Springer, 136-148.
[29] Sen Lu and Abhronil Sengupta. 2022. Neuroevolution Guided Hybrid Spiking Neural Network Training. Frontiers in Neuroscience 16 (April 2022), 838523.
[30] G. López-Vázquez, M. Ornelas-Rodriguez, A. Espinal, J. A. Soria-Alcaraz, A. Rojas-Domínguez, H. J. Puga-Soberanes, J. M. Carpio, and H. Rostro-Gonzalez. 2019. Evolutionary Spiking Neural Networks for Solving Supervised Classification Problems. Computational Intelligence and Neuroscience 2019 (March 2019), e4182639. Publisher: Hindawi.
[31] Wolfgang Maass. 1997. Networks of spiking neurons: The third generation of neural network models. Neural Networks 10, 9 (Dec. 1997), 1659-1671.
[32] Byunggook Na, Jisoo Mok, Seongsik Park, Dongjin Lee, Hyeokjun Choe, and Sungroh Yoon. 2022. AutoSNN: Towards Energy-Efficient Spiking Neural Networks. In Proceedings of the 39th International Conference on Machine Learning. PMLR, 16253-16269.
[33] Emre O Neftci, Hesham Mostafa, and Friedemann Zenke. 2019. Surrogate gradient learning in spiking neural networks: Bringing the power of gradient-based optimization to spiking neural networks. IEEE Signal Processing Magazine 36, 6 (2019), 51-63.
[34] Joao D Nunes, Marcelo Carvalho, Diogo Carneiro, and Jaime S Cardoso. 2022. Spiking neural networks: A survey. IEEE Access 10 (2022), 60738-60764.
[35] David Patterson, Joseph Gonzalez, Quoc Le, Chen Liang, Lluis-Miquel Munguia, Daniel Rothchild, David So, Maud Texier, and Jeff Dean. 2021. Carbon emissions and large neural network training. arXiv preprint arXiv:2104.10350 (2021).
[36] Bodo Rueckauer, Iulia-Alexandra Lungu, Yuhuang Hu, Michael Pfeiffer, and Shih-Chii Liu. 2017. Conversion of continuous-valued deep networks to efficient event-driven networks for image classification. Frontiers in Neuroscience 11 (2017), 682.
[37] Catherine D Schuman, J Parker Mitchell, Robert M Patton, Thomas E Potok, and James S Plank. 2020. Evolutionary optimization for neuromorphic systems. In Proceedings of the Neuro-inspired Computational Elements Workshop. 1-9.
[38] Sumit B Shrestha and Garrick Orchard. 2018. SLAYER: Spike layer error reassignment in time. Advances in Neural Information Processing Systems 31 (2018).
[39] S.N. Sivanandam and S. N. Deepa. 2007. Introduction to Genetic Algorithms.
[40] Kenneth O Stanley, David B D'Ambrosio, and Jason Gauci. 2009. A hypercube-based encoding for evolving large-scale neural networks. Artificial Life 15, 2 (2009), 185-212.
[41] Paul J Werbos. 1990. Backpropagation through time: what it does and how to do it. Proc. IEEE 78, 10 (1990), 1550-1560.
[42] Han Xiao, Kashif Rasul, and Roland Vollgraf. 2017. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747 (2017).
[43] Wenrui Zhang and Peng Li. 2019. Spike-train level backpropagation for training deep recurrent spiking neural networks. Advances in Neural Information Processing Systems 32 (2019).
| [
"https://github.com/henriquejsb/spenser"
]
|
[
"Learning Counterfactually Invariant Predictors",
"Learning Counterfactually Invariant Predictors"
]
| [
"Francesco Quinzan ",
"Cecilia †1 \nKTH Royal Institute of Technology\n2 Helmholtz AIMunich\n",
"Casolo ",
"Krikamol Muandet \nCISPA-Helmholtz Center for Information Security † , ‡ equal contribution\n\n",
"Yucen Luo \nMax Planck Institute for Intelligent Systems\n\n",
"Niki Kilbertus \nTechnical University of Munich\n\n"
]
| [
"KTH Royal Institute of Technology\n2 Helmholtz AIMunich",
"CISPA-Helmholtz Center for Information Security † , ‡ equal contribution\n",
"Max Planck Institute for Intelligent Systems\n",
"Technical University of Munich\n"
]
| []
| Counterfactual invariance has proven an essential property for predictors that are fair, robust, and generalizable in the real world. We propose a general definition of counterfactual invariance and provide simple graphical criteria that yield a sufficient condition for a predictor to be counterfactually invariant in terms of (conditional independence in) the observational distribution. Any predictor that satisfies our criterion is provably counterfactually invariant. In order to learn such predictors, we propose a model-agnostic framework, called Counterfactual Invariance Prediction (CIP), based on a kernel-based conditional dependence measure called the Hilbert-Schmidt Conditional Independence Criterion (HSCIC). Our experimental results demonstrate the effectiveness of CIP in enforcing counterfactual invariance across various types of data, including tabular, high-dimensional, and real-world datasets. * Part of this work was done while Francesco Quinzan visited the

Structural causal models. We start with basic definitions and refer to Pearl [2000] for more details.

Definition 2.1 (Structural causal model (SCM)). A structural causal model is a tuple (U, V, F, P_U) such that U is a set of background variables that are exogenous to the model; V is a set of observable (endogenous) variables; F = {f_V}_{V∈V} is a set of functions determining each observable via V = f_V(pa(V), U_V); P_U is a probability distribution over the domain of U. Further, the subsets pa(V) ⊆ V \ {V} are chosen such that the graph G over V, where the edge V' → V is in G if and only if V' ∈ pa(V), is a directed acyclic graph (DAG).

We always denote with Y ⊂ V the outcome (or prediction target), and with Ŷ a predictor for that target. The predictor Ŷ is not strictly part of the SCM because we get to tune f_Ŷ. Since it takes inputs from V, we often treat it as an observed variable in the SCM. As such, | 10.48550/arxiv.2207.09768 | [
"https://export.arxiv.org/pdf/2207.09768v2.pdf"
]
| 250,698,907 | 2207.09768 | 4010e28d7beb71b25b271a39e1105f150c7284f9 |
Learning Counterfactually Invariant Predictors
Francesco Quinzan
Cecilia Casolo †1
KTH Royal Institute of Technology
2 Helmholtz AI, Munich
Krikamol Muandet
CISPA-Helmholtz Center for Information Security † , ‡ equal contribution
Yucen Luo
Max Planck Institute for Intelligent Systems
Niki Kilbertus
Technical University of Munich
Learning Counterfactually Invariant Predictors
Counterfactual invariance has proven an essential property for predictors that are fair, robust, and generalizable in the real world. We propose a general definition of counterfactual invariance and provide simple graphical criteria that yield a sufficient condition for a predictor to be counterfactually invariant in terms of (conditional independence in) the observational distribution. Any predictor that satisfies our criterion is provably counterfactually invariant. In order to learn such predictors, we propose a model-agnostic framework, called Counterfactual Invariance Prediction (CIP), based on a kernel-based conditional dependence measure called Hilbert-Schmidt Conditional Independence Criterion (HSCIC). Our experimental results demonstrate the effectiveness of CIP in enforcing counterfactual invariance across various types of data including tabular, high-dimensional, and real-world dataset. * Part of this work was done while Francesco Quinzan visited the Structural causal models. We start with basic definitions and refer to Pearl [2000] for more details.Definition 2.1 (Structural causal model (SCM)). A structural causal model is a tuple (U, V, F, P U ) such that U is a set of background variables that are exogenous to the model; V is a set of observable (endogenous) variables;; P U is a probability distribution over the domain of U. Further, the subsets pa(V ) ⊆ V \ {V } are chosen such that the graph G over V where the edge V → V is in G if and only if V ∈ pa(V ) is a directed acyclic graph (DAG).We always denote with Y ⊂ V the outcome (or prediction target), and withŶ a predictor for that target. The predictorŶ is not strictly part of the SCM because we get to tune fŶ. Since it takes inputs from V, we often treat it as an observed variable in the SCM. As such,
Introduction and Related Work
Invariance, or equivariance to certain transformations of data, has proven essential in numerous applications of machine learning (ML), since it can lead to better generalization capabilities [Arjovsky et al., 2019, Bloem-Reddy and Teh, 2020, Chen et al., 2020]. For instance, in image recognition, predictions ought to remain unchanged under scaling, translation, or rotation of the input image. Data augmentation is an early heuristic developed to promote this kind of invariance, which has become indispensable for training models like deep neural networks (DNNs) [Shorten and Khoshgoftaar, 2019, Xie et al., 2020]. Well-known examples of certain types of "invariance by design" include convolutional neural networks (CNNs) for translation invariance [Krizhevsky et al., 2012], group equivariant CNNs for other group transformations [Cohen and Welling, 2016], recurrent neural networks (RNNs) and transformers for sequential data [Vaswani et al., 2017], DeepSet [Zaheer et al., 2017] for sets, and graph neural networks (GNNs) for different types of geometric structures [Battaglia et al., 2018].
Many real-world applications in modern ML, however, call for an arguably stronger notion of invariance based on causality, called counterfactual invariance. This case has been made for image classification, algorithmic fairness [Hardt et al., 2016, Mitchell et al., 2021], robustness [Bühlmann, 2020], and out-of-distribution generalization [Lu et al., 2021]. These applications require predictors to exhibit invariance with respect to hypothetical manipulations of the data generating process (DGP) [Arjovsky et al., 2019, Bühlmann, 2020, Heinze-Deml et al., 2018, Peters et al., 2016, Rojas-Carulla et al., 2018]. In image classification, for instance, we seek a model that "would have made the same prediction, if the object position had been different with everything else being equal". Similarly, in algorithmic fairness, Kilbertus et al. [2017], Kusner et al. [2017] introduce notions of interventional and counterfactual fairness, based on certain invariances in the DGP of the causal relationships between observed variables. Counterfactual invariance has the advantage that it incorporates structural knowledge of the DGP. However, enforcing counterfactual invariance is challenging in practice, because it is typically untestable in real-world observational settings unless strong prior knowledge of the DGP is available.
Inspired by problems in natural language processing (NLP), Veitch et al. [2021] analyze two specific causal graphs (dubbed causal and anticausal) with the goal of "stress-testing" models for spurious correlations. They develop necessary, but not sufficient, criteria to achieve counterfactual invariance in these two settings based only on the observational distribution. These criteria are enforced in practice for discrete conditioning variables via distribution matching using the maximum mean discrepancy (MMD). Our work differs in that we provide graphical criteria for any given causal graph and develop a sufficient (potentially not necessary) criterion for counterfactual invariance, again based on the observational distribution only. Hence, unlike Veitch et al. [2021], our approach guarantees counterfactual invariance. Depending on the assumed causal graph, this can come at the cost of requiring certain variables to be observed. Finally, we propose a model-agnostic learning framework, called Counterfactual Invariance Prediction (CIP), based on a kernel-based conditional dependence measure called Hilbert-Schmidt Conditional Independence Criterion (HSCIC) [Park and Muandet, 2020]. CIP thus allows for mixed categorical and continuous multivariate variables.
In another related yet orthogonal work, Mouli and Ribeiro [2022] develop out-of-distribution classifiers assuming that the test distribution is different from the training set in terms of symmetry transformations, expressed as equivalence relations. Finding relevant transformations that actually affect the label can then be viewed as a causal discovery task. Their "asymmetry learning" method thus performs a causal model search to ultimately learn a counterfactually invariant representation of the input with respect to these symmetry transformations that can be used for out-of-distribution classification under certain identifiability conditions. Finally, in concurrent work, Pogodin et al. [2022] propose an efficient regularizer for learning features of input data that allow for estimating a target while being conditionally independent of a distractor given the target. Since CIP ultimately enforces conditional independence (Theorem 3.2), we believe it could further benefit from leveraging the efficiency and convergence properties of their technique.
Figure 1: (a) An example for Theorem 3.2. Any predictor Ŷ such that Ŷ ⊥ ⊥ A | Z with Z = X ∪ S is counterfactually invariant in A with respect to X. (b)-(c) Causal and anti-causal structure as in Veitch et al. [2021]. The variable X is decomposed into three parts: X⊥A is the part of X that is not causally influenced by A, X⊥Y is the part that does not causally influence Y, and X∧ is the remaining part that is both influenced by A and influences Y. (d) Assumed causal structure for the synthetic experiments (see Section 4.1). The precise corresponding generative random variables are described in Section F. (e) Assumed causal structure for the UCI Adult dataset, where A consists of the protected attributes gender and age.

As such, it also "derives its randomness from the exogenous variables", i.e., is defined on the same probability space. Each SCM implies a unique observational distribution over V, but it also entails interventional distributions [Pearl, 2000]. Given a variable A ∈ V, an intervention A ← a amounts to replacing f_A in F with the constant function setting A to a. This yields a new SCM, which induces the interventional distribution under the intervention A ← a. Similarly, we can intervene on multiple variables V ⊇ A ← a. We then write Y*_a for the outcome in the intervened SCM, also called the potential outcome. Note that the interventional distribution P_{Y*_a}(y) differs in general from the conditional distribution P_{Y|A}(y | a). 1 This is typically the case when Y and A have a shared parent, i.e., when they are confounded. We can also condition on a set of variables W ⊆ V in the (observational distribution of the) original SCM before performing an intervention, which we denote by P_{Y*_a|W}(y | w). This is a counterfactual distribution: "Given that we have observed W = w, what would Y have been had we set A ← a, instead of the value A has actually taken?" Note that the sets A and W need not be disjoint.
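As a concrete illustration of these notions, the following toy numpy sketch implements a three-variable SCM with entirely hypothetical structural equations; interventions replace an assignment, while potential outcomes reuse the same exogenous draws:

import numpy as np

rng = np.random.default_rng(0)

def scm(u, a_intervention=None):
    """Toy SCM over (A, X, Y); an intervention replaces the assignment of A."""
    u_a, u_x, u_y = u
    a = u_a if a_intervention is None else np.full_like(u_a, a_intervention)
    x = 0.5 * a + u_x        # X := f_X(A, U_X)
    y = x - a + u_y          # Y := f_Y(X, A, U_Y)
    return a, x, y

u = rng.normal(size=(3, 1000))             # shared exogenous variables U
a, x, y = scm(u)                           # observational sample
_, _, y_star = scm(u, a_intervention=1.0)  # potential outcome Y*_{a=1}, same U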
Graph terminology. Consider a path π (a sequence of distinct adjacent nodes) in a DAG G. A set of nodes S is said to block π if π contains a triple of consecutive nodes A, B, C such that one of the following holds:
(i) A → B → C or A ← B ← C or A ← B → C and B ∈ S;
(ii) A → B ← C and neither B nor any descendant of B is in S. Further, we call π a causal path between sets of nodes A, B, when it is a directed path from a node in A to a node in B. A causal path π is a proper causal path if it only intersects A at the first node in π. Finally, we denote with G A the graph obtained by removing from G all incoming arrows into nodes in A. We now define the notion of valid adjustment sets [Shpitser et al., 2010, Def. 5], which our graphical criterion for counterfactual invariance relies on.
Definition 2.2. Let G be a causal graph and let X, Y be disjoint (sets of) nodes in G. A set of nodes S is a valid adjustment set for (X, Y) if: (i) no element in S is a descendant in G_X of any node W ∉ X which lies on a proper causal path from X to Y; (ii) S blocks all non-causal paths from X to Y in G.
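In practice, blocking conditions of this kind reduce to d-separation tests. A minimal sketch using networkx on a hypothetical graph in the spirit of Fig. 1(a) (the function is named is_d_separator in recent networkx releases, d_separated in older ones):

import networkx as nx

# Hypothetical graph: A -> X -> Y with a confounder S of A and Y.
G = nx.DiGraph([("A", "X"), ("X", "Y"), ("S", "A"), ("S", "Y")])
# Z = {X, S} blocks the chain A -> X -> Y and the fork A <- S -> Y:
print(nx.d_separated(G, {"A"}, {"Y"}, {"X", "S"}))   # True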
Kernel mean embeddings and conditional measures. Our method heavily relies on kernel mean embeddings (KMEs). We now highlight the main concepts pertaining KMEs and refer the reader to Berlinet and Thomas-Agnan [2011], Muandet et al. [2017], Schölkopf et al. [2002], Smola et al. [2007] for more details. Fix a measurable space Y with respect to a σ-algebra F Y , and consider a probability measure P on the space (Y , F Y ). Let H be a reproducing kernel Hilbert space (RKHS) with a bounded kernel k Y : Y × Y → R, i.e., k Y is such that sup y∈Y k(y, y) < ∞. The kernel mean embedding µ P of P is defined as the expected value of the function k( · , y) with respect to y, i.e., µ P := E [k( · , y)]. The definition of KMEs can be extended to conditional distributions , Grünewälder et al., 2012. Consider two random variables Y, Z, and denote with (Ω Y , F Y ) and (Ω Z , F Z ) the respective measurable spaces. These random variables induce a probability measure P Y,Z in the product space
Ω Y × Ω Z . Let H Y be a RKHS with a bounded kernel k Y (·, ·) on Ω Y . We define the KME of a conditional distribution P Y|Z (· | z) via µ Y|Z=z := E [k Y ( · , y) | Z = z].
Here, the expected value is taken over y. KMEs of conditional measures can be estimated from samples [Grünewälder et al., 2012].
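A minimal numpy sketch of this sample-based estimator, assuming a Gaussian kernel on Z; the weights w(z) = (K_Z + nλI)^{-1} k_Z(·, z) come from kernel ridge regression:

import numpy as np

def gaussian_kernel(a, b, length_scale=0.1):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def cme_weights(z_train, z_query, lam=0.01):
    """Columns hold the weights w(z) per query point; mu_{Y|Z=z} is then
    approximated by sum_i w_i(z) k_Y(., y_i)."""
    n = len(z_train)
    K = gaussian_kernel(z_train, z_train)
    return np.linalg.solve(K + n * lam * np.eye(n), gaussian_kernel(z_train, z_query))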
Counterfactual Invariance Prediction (CIP)
Sufficient Criterion for Counterfactual Invariance
We start with our definition of counterfactual invariance.

Definition 3.1 (Counterfactual invariance). A predictor Ŷ is counterfactually invariant in A with respect to W if P_{Ŷ*_a|W}(y | w) = P_{Ŷ*_{a'}|W}(y | w) almost surely, for all a, a' in the domain of A and all w in the domain of W. 2

A counterfactually invariant predictor can be viewed as robust to changes of A in the sense that the (conditional) post-interventional distribution of Ŷ does not change for different values of the intervention. We now discuss some properties of our Definition 3.1 in comparison to other notions of counterfactual invariance. First, we can condition on observations W, which allows us to model true counterfactuals, including the abduction step where we condition on observed evidence. For example, enforcing counterfactual fairness requires modeling true counterfactuals [Kusner et al., 2017]. This sets our definition apart, for example, from Veitch et al. [2021, Def. 1.1], who require Ŷ*_a = Ŷ*_{a'} almost surely for all a, a' in the domain of A. While this condition appears stronger by enforcing equality of random variables instead of equality of distributions, in practice Veitch et al. [2021] also enforce equality of distributions (via MMD). Moreover, since Ŷ*_a, Ŷ*_{a'} are (deterministic) functions of the same exogenous (unobserved) random variables, distributional equality is a natural choice for counterfactual invariance. Mouli and Ribeiro [2022, Def. 1] instead define counterfactually invariant representations of some data as being invariant under a family of pre-specified symmetry transformations of the data (based on equivalence relations).
Next, we establish a graphical criterion to express counterfactual invariance as conditional independence in the observational distribution of an SCM, rendering it estimable from observational data. Crucially, we provide sufficient conditions for counterfactual invariance.
Theorem 3.2. Let G be a causal graph, A, W be two (not necessarily disjoint) sets of nodes in G, such that (A ∪ W) ∩ Y = ∅, let S be a valid adjustment set for (A ∪ W, Y), and define Z := (S ∪ W) \ A. Then, in all SCMs compatible with G, if a predictor Ŷ satisfies Ŷ ⊥ ⊥ A | Z, then Ŷ is counterfactually invariant in A with respect to W.
2 With an abuse of notation, if W = ∅ then the requirement of conditional counterfactual invariance becomes P_{Ŷ*_a}(y) = P_{Ŷ*_{a'}}(y) almost surely, for all a, a' in the domain of A.
We illustrate Theorem 3.2 with an example in Fig. 1(a). Note that the conditioning set Z in Theorem 3.2 depends on A, W, and the given valid adjustment set S. In particular, the conditioning set Z need not itself be a valid adjustment set. The proof is deferred to Section A. Our key observation is that the set Z in Theorem 3.2 acts as a d-separator for certain random variables in a graph that allows reasoning about dependencies among preand post-interventional random variables. This graph simplifies the counterfactual graph by Shpitser and Pearl [2008] and generalizes the augmented graph structure described in Theorem 1 by Shpitser and Pearl [2009]. We can then combine the Markov property with covariate adjustment to prove our claim. Crucially, our proof does not rely on the identification of the counterfactual distributions (e.g., via the do-calculus [Pearl, 2000]).
Theorem 3.2 provides a sufficient condition for the predictor Ŷ to be counterfactually invariant (Definition 3.1) in terms of the conditional independence Ŷ ⊥ ⊥ A | Z. In the following, we will develop an operator denoted by HSCIC(Ŷ, A | Z) that is efficiently estimable from observational data, differentiable, serves as a measure of conditional dependence, and is zero if and only if Ŷ ⊥ ⊥ A | Z. We can then use this operator as a model-agnostic objective to train counterfactually invariant predictors. Some background is required.
HSCIC for Conditional Independence
Consider two random variables Y and A, and denote with (Ω_Y, F_Y) and (Ω_A, F_A) the respective measurable spaces. Suppose that we are given two RKHSs H_Y, H_A over the support of Y and A respectively. The tensor product space H_Y ⊗ H_A is defined as the space of functions of the form (f ⊗ g)(y, a) := f(y)g(a), for all f ∈ H_Y and g ∈ H_A. The tensor product space yields a natural RKHS structure, with kernel k defined by k(y ⊗ a, y' ⊗ a') := k_Y(y, y') k_A(a, a'). We refer to Szabó and Sriperumbudur [2017] for more details on tensor product spaces.
Definition 3.3 (HSCIC). For (sets of) random variables Y, A, Z, the HSCIC between Y and A given Z is defined as the real-valued random variable HSCIC(Y, A | Z) = H_{Y,A|Z} ∘ Z, where H_{Y,A|Z} is a real-valued deterministic function, defined as H_{Y,A|Z}(z) := ‖µ_{Y,A|Z=z} − µ_{Y|Z=z} ⊗ µ_{A|Z=z}‖, with ‖·‖ the norm induced by the inner product of the tensor product space H_Y ⊗ H_A.
Our Definition 3.3 is heavily motivated by, but differs slightly from Park and Muandet [2020, Def. 5.3], which relies on the Bochner conditional expected value. While it is functionally equivalent (with the same implementation, see Eq. (2)), ours has the benefit of bypassing some technical assumptions required by Park and Muandet [2020] (see Section C-Section D for details). The HSCIC has the following important property.
Theorem 3.4 (Theorem 5.4 by Park and Muandet [2020]). If the kernel k of H_Y ⊗ H_A is characteristic 3 , then HSCIC(Y, A | Z) = 0 almost surely if and only if Y ⊥ ⊥ A | Z.
A proof is in Section B. We remark that "most interesting" kernels such as the Gaussian and Laplacian kernels are characteristic. Furthermore, if kernels are translation-invariant and characteristic, then their tensor product is also a characteristic kernel [Szabó and Sriperumbudur, 2017]. Hence, this natural assumption is non-restrictive in practice. Combining Theorems 3.2 and 3.4, we can now use HSCIC to reliably achieve counterfactual invariance.
Corollary 3.5. Consider an SCM with graph G and fix two (not necessarily disjoint) sets of nodes A, W. Let Z be a set of nodes as in Theorem 3.2. Then, any predictor Ŷ that satisfies HSCIC(Ŷ, A | Z) = 0 almost surely is counterfactually invariant in A with respect to W.
Learning Counterfactually Invariant Predictors
Corollary 3.5 justifies our proposed objective, namely to minimize the following loss
L_CIP(Ŷ) = L(Ŷ) + γ · HSCIC(Ŷ, A | Z),   (1)
where L(Ŷ) is a task-dependent loss function (e.g., cross-entropy for classification, or mean squared error for regression) and γ ≥ 0 is a parameter that regulates the trade-off between predictive performance and counterfactual invariance. It is important to note that the second term in Eq. (1) does not act as a regularizer in that it aims at overcoming an ill-posedness of the problem, e.g., multiple models with equal training loss L. Instead, it is an additional, and potentially conflicting objective. As a result, γ does not need to decay to zero as the sample size increases and it is impossible to select an "optimal value" based on data alone. In practice, we can adopt a two-stage approach: (i) learn a collection of CIP models on the Pareto frontier using different values of γ; (ii) choose one of these models depending on the preferred trade-off between predictive performance and counterfactual invariance for the task at hand.
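A minimal sketch of the resulting training objective, assuming a regression task and an `hscic` estimator such as the one sketched after Eq. (2) below:

import torch

def cip_loss(model, x, a, z, y, gamma, hscic):
    """Predictive MSE plus the HSCIC penalty of Eq. (1)."""
    y_hat = model(x)
    return torch.mean((y_hat - y) ** 2) + gamma * hscic(y_hat, a, z)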
Compared to other possible dependence measures one could use in Eq. (1) to enforce conditional independence, we consider HSCIC in this work because it does not require parametric assumptions on the underlying probability distributions, and it is applicable to any data type, as long as we can define positive definite kernels on them. With HSCIC, for example, we do not require A, W, Z or Y to be binary or categorical, nor do they need to be scalar, a major improvement over existing methods [Chiappa, 2019, Xu et al., 2020].
Estimating the HSCIC from samples.
Given n samples {(ŷ_i, a_i, z_i)}_{i=1}^n, denote with K̂_Ŷ the kernel matrix with entries [K̂_Ŷ]_{i,j} := k_Y(ŷ_i, ŷ_j), and let K̂_A be the kernel matrix for A. We estimate H_{Ŷ,A|Z} ≡ H_{Ŷ,A|Z}(·) as

Ĥ²_{Ŷ,A|Z} = ŵ_{Ŷ,A|Z}^⊤ (K̂_Ŷ ⊙ K̂_A) ŵ_{Ŷ,A|Z} − 2 [ (ŵ_{Ŷ|Z}^⊤ K̂_Ŷ) ⊙ (ŵ_{A|Z}^⊤ K̂_A) ] ŵ_{Ŷ,A|Z} + (ŵ_{Ŷ|Z}^⊤ K̂_Ŷ ŵ_{Ŷ|Z}) (ŵ_{A|Z}^⊤ K̂_A ŵ_{A|Z}),   (2)

where ⊙ denotes element-wise multiplication. The functions ŵ_{Ŷ|Z} ≡ ŵ_{Ŷ|Z}(·), ŵ_{A|Z} ≡ ŵ_{A|Z}(·), and ŵ_{Ŷ,A|Z} ≡ ŵ_{Ŷ,A|Z}(·) are found via kernel ridge regression. Caponnetto and Vito [2007] provide the convergence rates of the estimand Ĥ²_{Ŷ,A|Z} under mild conditions. In practice, computing the HSCIC approximation by the formula in Eq. (2) can be computationally expensive. To speed it up, we can use random Fourier features to approximate the matrices K̂_Ŷ and K̂_A [Avron et al., 2017, Rahimi and Recht, 2007]. We emphasize that Eq. (2) allows us to consistently estimate the HSCIC from observational i.i.d. samples, without prior knowledge of the counterfactual distributions.
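A sketch of this estimator in PyTorch. Assuming the same kernel on Z and the same ridge penalty for all three regressions, the three weight functions in Eq. (2) coincide, so a single solve suffices; the kernel hyper-parameters below are simply the ones reported in Section 4.1:

import torch

def gaussian_gram(x, length_scale=0.1):
    x = x.reshape(len(x), -1).float()
    return torch.exp(-0.5 * torch.cdist(x, x) ** 2 / length_scale ** 2)

def hscic(y_hat, a, z, lam=0.01):
    n = y_hat.shape[0]
    Ky, Ka, Kz = gaussian_gram(y_hat), gaussian_gram(a), gaussian_gram(z)
    # Kernel ridge regression weights; column j holds w(z_j).
    W = torch.linalg.solve(Kz + n * lam * torch.eye(n), Kz)
    KyW, KaW = Ky @ W, Ka @ W
    term1 = (W * ((Ky * Ka) @ W)).sum(dim=0)             # w^T (K_Y ⊙ K_A) w
    term2 = (W * (KyW * KaW)).sum(dim=0)                 # w^T [(K_Y w) ⊙ (K_A w)]
    term3 = (W * KyW).sum(dim=0) * (W * KaW).sum(dim=0)  # (w^T K_Y w)(w^T K_A w)
    return (term1 - 2 * term2 + term3).mean()            # average H^2(z_j) over the batch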
Measuring counterfactual invariance. Besides predictive performance, e.g., mean squared error (MSE) for regression or accuracy for classification, our key metric of interest is the level of counterfactual invariance achieved by the predictor Ŷ. Such a measure must capture how the distribution of Ŷ*_a changes for different values of a across all conditioning values w. We quantify this in a single scalar, which we call the Variance of CounterFactuals (VCF):
VCF(Ŷ) = E_{w∼P_W} [ var_{a'∼P_A} ( E_{Ŷ*_{a'}|W=w}[ŷ | w] ) ].   (3)
That is, we look at how the average outcome varies with the interventional value a' at conditioning value w and average this variance over w. The outer expectation is zero if and only if the variance term is zero almost surely. Hence, VCF(Ŷ) = 0 almost surely is equivalent to counterfactual invariance.

Figure 2: (Left) Trade-off between accuracy and counterfactual invariance. We observe that the VCF decreases as the MSE increases. Vertical bars denote standard errors over 10 different random seeds. (Right) Correspondence between the HSCIC and the VCF, for increasing γ. Again, vertical bars denote standard errors over 10 different random seeds.
To estimate VCF in practice, we pick d datapoints (w_i)_{i=1}^d from the observed data, and for each compute the counterfactual outcomes Ŷ*_a | w_i for k different values of a drawn from the observational distribution. The inner expectation is simply the deterministic predictor output. We use empirical variances with k examples for each of the d chosen datapoints, and the empirical mean of the d variances for the outer expectation. Crucially, VCF requires access to ground-truth counterfactual distributions, which by their very nature are unavailable in practice (neither for training nor at test time). Hence, we can only assess VCF, as a direct measure of counterfactual invariance, in synthetic scenarios. Our experiments demonstrate that HSCIC (estimable from the observed data) empirically serves as a proxy for VCF.
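A sketch of this estimation procedure, assuming oracle access to a hypothetical counterfactual_inputs(w, a) generator, which is only available in synthetic settings:

import torch

def estimate_vcf(model, W_points, A_values, counterfactual_inputs):
    variances = []
    for w in W_points:                                # d conditioning points
        preds = torch.stack([model(counterfactual_inputs(w, a)) for a in A_values])
        variances.append(preds.var(dim=0).mean())     # variance over k interventions
    return torch.stack(variances).mean()              # outer expectation over w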
Applications of CIP
We briefly outline potential applications of counterfactual invariance, which we will subsequently study empirically.
Robustness. Counterfactual invariance serves as a strong notion of robustness in highdimensional settings such as image classification: "Would the truck have been classified correctly had it been winter in this exact situation instead of summer ?" For concrete demonstration, we will use the dSprites dataset [Matthey et al., 2017], which consists of relatively simple, yet high-dimensional, square black and white images of different shapes (squares, ellipses, etc.), sizes, and orientations (rotation) in different xy-positions.
Counterfactual fairness. The popular definition of counterfactual fairness [Kusner et al., 2017] is captured informally by the following question after receiving a consequential decision: "Would I have gotten the same outcome had my gender, race, or age been different with all else being equal?". Again, we denote the outcome by Y ⊂ V and so-called protected attributes such as gender, race, or age (protected under anti-discrimination laws [Barocas and Selbst, 2016]) by A ⊆ V \ Y. Collecting all remaining observed covariates into W := V \ Y, the following definition of counterfactual fairness by Kusner et al. [2017] is an example of our counterfactual invariance: A predictor Ŷ is counterfactually fair with respect to A if under any context W = w and A = a, it holds that P_{Ŷ*_a|W,A}(y | w, a) = P_{Ŷ*_{a'}|W,A}(y | w, a), for all y and for any value a' attainable by A.
Text classification. Veitch et al. [2021] motivate the importance of counterfactual invariance in text classification tasks. Specifically, they consider the causal and anti-causal structures depicted in Veitch et al. [2021, Fig. 1], which we replicate in Fig. 1(b,c). In these two specific settings, they prove that if Ŷ*_a = Ŷ*_{a'} almost surely, then Ŷ ⊥ ⊥ A and Ŷ ⊥ ⊥ A | Y, respectively. Even though achieving these (conditional) independencies does not imply counterfactual invariance, they propose to enforce these necessary conditions as a potential way to obtain (approximate) counterfactual invariance in practice. To apply our sufficient criterion to their settings, an unconfoundedness assumption is required.
Corollary 3.6. Under the causal and anti-causal graph as in Fig. 1(b,c), suppose that A and Y are not confounded. If Ŷ ⊥ ⊥ A, it holds that P_{Ŷ*_a}(y) = P_{Ŷ*_{a'}}(y) almost surely, for all a, a' in the domain of A.
In our empirical comparison in Section F.6, we show that CIP performs similarly to Veitch et al. [2021] even if this unconfoundedness assumption is violated.

Table 1: MSE, HSCIC, and VCF for increasing dimension of A, on synthetic datasets as in Section F.3. All other variables are one-dimensional.

            dimA=10                                      dimA=20
            MSE           HSCIC         VCF              MSE            HSCIC         VCF
γ = 0       0.05 ± 0.01   0.51 ± 0.08   0.55 ± 0.05      0.28 ± 0.01    4.70 ± 0.28   0.23 ± 0.01
γ = 0.5     0.07 ± 0.02   0.49 ± 0.08   0.54 ± 0.06      0.56 ± 0.02    4.55 ± 0.30   0.21 ± 0.02
γ = 1       0.13 ± 0.04   0.48 ± 0.08   0.52 ± 0.07      0.74 ± 0.02    4.42 ± 0.20   0.18 ± 0.02
γ = 10      1.60 ± 0.50   0.38 ± 0.06   0.29 ± 0.04      7.34 ± 0.07    3.83 ± 0.38   0.10 ± 0.01
γ = 50      4.00 ± 0.58   0.34 ± 0.05   0.17 ± 0.02      13.68 ± 0.17   3.72 ± 0.15   0.06 ± 0.01
γ = 100     4.89 ± 0.42   0.33 ± 0.05   0.12 ± 0.02      15.87 ± 0.20   3.70 ± 0.11   0.04 ± 0.01
γ = 500     6.73 ± 0.61   0.33 ± 0.05   0.03 ± 0.01      21.25 ± 0.26   3.70 ± 0.15   0.01 ± 0.01
γ = 1000    7.24 ± 0.69   0.32 ± 0.05   0.01 ± 0.01      23.33 ± 0.29   3.69 ± 0.11   0.01 ± 0.01
Experiments
In this section, we aim to demonstrate the effectiveness of the proposed method in enforcing counterfactual invariance across various types of data, including tabular, high-dimensional, and real-world datasets. We compare CIP with established baselines to showcase its competitive results in preserving counterfactual invariance.
Synthetic Experiments
We begin our empirical assessment of HSCIC by generating various synthetic datasets following the causal graph in Fig. 1(d). The datasets are composed of four sets of observed continuous variables: (i) the prediction target Y, (ii) the variable(s) A in which we want to be counterfactually invariant, (iii) covariates X that mediate effects from A on Y, and (iv) confounding variables S. The goal is to learn a predictor Ŷ that is counterfactually invariant in A with respect to W := A ∪ X ∪ S. Following the notation of Theorem 3.2, we have Z = X ∪ S. We consider various synthetic datasets for this case, which mainly differ in the dimension of the observed variables and their correlations. All datasets are described in detail in Section F.
Model choices and parameters. For all synthetic experiments, we train fully connected neural networks (MLPs) with the MSE loss L_mse(Ŷ) as the predictive loss L in Eq. (1) for continuous outcomes Y. We generate 10k samples from the observational distribution in each setting and use an 80 to 20 train-test split. All metrics reported are on the test set. We perform hyper-parameter tuning for the MLP hyperparameters based on a random strategy (see Section F for details). The HSCIC(Ŷ, A | Z) term is computed as in Eq. (2) using a Gaussian kernel with amplitude 1.0 and length scale 0.1. The regularization parameter λ for the ridge regression is set to λ = 0.01. We set d = 1000 and k = 500 in the estimation of VCF.
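For concreteness, the following is a minimal numpy sketch of the empirical HSCIC estimate implied by Eq. (2): conditional kernel mean embeddings are estimated via kernel ridge regression with the Gaussian-kernel and regularization settings stated above. The function names are ours, not from an existing library.

```python
import numpy as np

def gaussian_kernel(u, v, amplitude=1.0, length_scale=0.1):
    # u: (n, d), v: (m, d) -> (n, m) Gram matrix
    sq = ((u[:, None, :] - v[None, :, :]) ** 2).sum(-1)
    return amplitude * np.exp(-0.5 * sq / length_scale ** 2)

def hscic(y, a, z, lam=0.01):
    """Mean over the sample of ||mu_{Y,A|Z=z_j} - mu_{Y|Z=z_j} (x) mu_{A|Z=z_j}||^2,
    with conditional kernel mean embeddings estimated by kernel ridge regression.
    y, a, z are arrays of shape (n, d_y), (n, d_a), (n, d_z)."""
    n = len(z)
    K_y, K_a, K_z = (gaussian_kernel(v, v) for v in (y, a, z))
    # Column j of W holds the ridge-regression weights w(z_j).
    W = np.linalg.solve(K_z + n * lam * np.eye(n), K_z)
    vals = []
    for j in range(n):
        w = W[:, j]
        t1 = w @ (K_y * K_a) @ w            # <mu_YA, mu_YA>
        t2 = w @ ((K_y @ w) * (K_a @ w))    # <mu_YA, mu_Y (x) mu_A>
        t3 = (w @ K_y @ w) * (w @ K_a @ w)  # <mu_Y (x) mu_A, mu_Y (x) mu_A>
        vals.append(t1 - 2 * t2 + t3)
    return float(np.mean(vals))
```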
Model performance. We first perform a set of experiments to study the effect of the HSCIC, and to highlight the trade-off between accuracy and counterfactual invariance. For this set of experiments, we generate a dataset as described in Section F.1. Fig. 2 (top) shows the values attained by the VCF and MSE for increasing γ, demonstrating the expected trade-off in raw predictive performance and enforcing counterfactual invariance. Finally, Fig. 2 (bottom) highlights the usefulness of HSCIC as a measure of counterfactual invariance, being in strong agreement with VCF (see discussion after Eq. (3)).
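To make the trade-off concrete in code, the sketch below shows one training step under the objective L = L_mse(Ŷ) + γ · HSCIC(Ŷ, A | Z) of Eq. (1), using a differentiable torch variant of the estimator sketched above; the exact input handling of the predictor here is an assumption of ours.

```python
import torch

def gaussian_gram(u, length_scale=0.1):
    # u: (n, d) tensor -> (n, n) Gaussian Gram matrix
    sq = torch.cdist(u, u) ** 2
    return torch.exp(-0.5 * sq / length_scale ** 2)

def hscic_penalty(y_hat, a, z, lam=0.01):
    # Differentiable, vectorized version of the numpy estimator above.
    n = z.shape[0]
    K_y, K_a, K_z = gaussian_gram(y_hat), gaussian_gram(a), gaussian_gram(z)
    W = torch.linalg.solve(K_z + n * lam * torch.eye(n, device=z.device), K_z)
    KyW, KaW = K_y @ W, K_a @ W
    t1 = (W * ((K_y * K_a) @ W)).sum(0)
    t2 = (W * KyW * KaW).sum(0)
    t3 = (W * KyW).sum(0) * (W * KaW).sum(0)
    return (t1 - 2 * t2 + t3).mean()

def train_step(model, optimizer, x, a, z, y, gamma):
    optimizer.zero_grad()
    y_hat = model(torch.cat([x, a, z], dim=1))  # assumed input layout
    loss = torch.mean((y_hat - y) ** 2) + gamma * hscic_penalty(y_hat, a, z)
    loss.backward()
    optimizer.step()
    return loss.item()
```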
Comparison with baselines. We compare CIP against baselines in different simulated settings. Since counterfactually invariant training has not received much attention yet, our choice of baselines for experimental comparison is highly limited. Corollary 3.6, together with the fact that Veitch et al. [2021] in practice only enforce distributional equality, implies that CIP subsumes theirs in the causal setting they have proposed. We benchmarked CIP against Veitch et al. [2021] in the specific causal and anti-causal settings of Fig. 1(b-c) in Section F.6, showing that our method performs on par with theirs. Since counterfactual fairness is a special case of counterfactual invariance, we also compare against two methods proposed by Kusner et al. [2017] (in applicable settings). We compare to the Level 1 approach (only use non-descendants of A as inputs to Ŷ) and the Level 2 approach (assume an additive noise model and, in addition to non-descendants, only use the residuals of descendants of A after regression on A as inputs to Ŷ) of Kusner et al. [2017]. We refer to these two baselines as CF1 and CF2, respectively; a sketch of the corresponding feature construction follows below. In Fig. 3, the results for a non-additive noise model data-generating mechanism are shown. For a suitable choice of γ, CIP outperforms the baseline CF2 in both MSE and VCF simultaneously. While CF1 satisfies counterfactual invariance perfectly by construction (VCF = 0), its MSE is generally higher in comparison to other possible choices of the parameter γ that still achieve high levels of counterfactual invariance. Our method makes it possible to flexibly trade predictive performance for counterfactual invariance via a single tuning knob γ, and Pareto-dominates existing methods. In Section F.2, the results in another simulated setting are presented.
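The following sketch illustrates the CF1 and CF2 feature constructions in a setting with the causal ordering Z → A → X → Y, assuming for simplicity a linear regressor for the CF2 residual step (the choice of regressor here is ours, not Kusner et al.'s).

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def cf1_features(Z):
    # CF1 (Level 1): only non-descendants of A enter the predictor (here Z).
    return Z

def cf2_features(Z, A, X):
    # CF2 (Level 2): under an additive noise model, the residual of the
    # descendant X after regressing it on A acts as a proxy for X's noise
    # term, and is appended to the non-descendants. Z, A, X: (n, d) arrays.
    resid = X - LinearRegression().fit(A, X).predict(A)
    return np.hstack([Z, resid])
```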
Multi-dimensional variables. We perform a third set of experiments to assess HSCIC's performance in higher dimensions. We consider simulated datasets (described in Section F.3), where we increase the dimension of A, leaving the rest of the variables unchanged. The results in Table 1 for different trade-off parameters γ and different dimensions of A demonstrate that HSCIC can handle multi-dimensional variables while maintaining performance, as counterfactual invariance is approached when γ increases.
High-dimensional Image Experiments
We consider the image classification task on the dSprites dataset [Matthey et al., 2017]. Since this dataset is fully synthetic and labelled, we consider a causal model as depicted in Fig. 1(f). The full structural equations are provided in Section F.4, where we assume a causal graph over the determining factors of the image and essentially look up the corresponding image in the simulated dataset. This experiment is particularly challenging due to the mixed categorical and continuous variables in C (shape, y-pos) and X (color, orientation), and continuous A (x-pos). Our goal is to learn a predictor Ŷ that is counterfactually invariant in the x-position with respect to all other observed variables. Following Theorem 3.2, we seek to achieve Ŷ ⊥⊥ x-pos | {shape, y-pos, scale} via the HSCIC operator. To accommodate the mixed input types, Ŷ puts an MLP on top of features extracted from the images via convolutional layers, concatenated with features extracted from the remaining inputs via an MLP. Fig. 4 demonstrates that HSCIC achieves improved VCF as γ increases up to a certain point while affecting MSE, an inevitable trade-off. Experimental results for higher values of the trade-off parameter γ can be found in Section F.4.
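A sketch of such a mixed-input predictor is shown below, assuming 64×64 single-channel dSprites images; the layer sizes here are illustrative only (see Table 5 for the CNN hyperparameters actually used).

```python
import torch
import torch.nn as nn

class MixedInputPredictor(nn.Module):
    def __init__(self, n_tabular):
        super().__init__()
        # Convolutional branch for the 64x64 single-channel image.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.MaxPool2d(3, stride=2),
            nn.Conv2d(16, 64, kernel_size=5, stride=1, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),  # -> 64 * 4 * 4 features
        )
        # MLP branch for the remaining tabular inputs.
        self.tabular = nn.Sequential(
            nn.Linear(n_tabular, 16), nn.ReLU(), nn.Linear(16, 8), nn.ReLU(),
        )
        # Head on the concatenated features.
        self.head = nn.Sequential(
            nn.Linear(64 * 4 * 4 + 8, 16), nn.ReLU(), nn.Linear(16, 1),
        )

    def forward(self, image, covariates):
        feats = torch.cat([self.cnn(image), self.tabular(covariates)], dim=1)
        return self.head(feats)
```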
Fairness with Continuous Protected Attributes
We then apply CIP to the widely-used UCI Adult dataset [Kohavi and Becker, 1996]. The goal is to predict whether an individual's income is above a certain threshold based on demographic information, including protected attributes. We follow Chiappa [2019], Nabi and Shpitser [2018], where a subset of variables is selected from the dataset and a causal structure is assumed as in Fig. 1(e) (see Section F.5 and Fig. 11 for details). We choose gender (considered binary in this dataset) and age (considered continuous) as the protected attributes A. We denote the marital status, level of education, occupation, working hours per week, and work class jointly by X and combine the remaining observed attributes in C. Our aim is to learn a predictor Ŷ that is counterfactually invariant in A with respect to W = C ∪ X. We remark that achieving fairness for continuous or even mixed categorical and continuous protected attributes is an ongoing area of research (even for non-causal fairness notions) [Chiappa and Pacchiano, 2021, Mary et al., 2019], but is directly supported by HSCIC.
We use an MLP with binary cross-entropy loss for Ŷ. Since this experiment is based on real data, the true counterfactual distribution cannot be known. Hence, we follow Chiappa and Pacchiano [2021] and estimate a possible true SCM by inferring the posterior distribution over the unobserved variables using variational autoencoders [Kingma and Welling, 2014]. Figure 5 (left) highlights once more that the HSCIC operator is in agreement with the VCF, again trading off accuracy. Figure 5 (right) presents the counterfactual distribution (i.e., Eq. (3) before taking the outer expectation) for one seed for different trade-off parameters. It shows that CIP achieves more counterfactually fair outcome distributions (more mass of the VCF distribution near zero) than an unconstrained classifier (γ = 0).
Discussion and Future Work
We developed a method to learn counterfactually invariant predictors Ŷ, i.e., predictors that remain invariant under changes of certain covariates (conditioned on observed evidence). First, we presented a novel sufficient graphical criterion to characterize counterfactual invariance and reduce it to conditional independence in the observational distribution. Our method (CIP) does not require identifiability of the counterfactual distribution. We then built on kernel mean embeddings and the Hilbert-Schmidt Conditional Independence Criterion to devise an efficiently estimable, model-agnostic objective to practically train counterfactually invariant predictors. This choice allowed us to deal with mixed continuous/categorical, multi-dimensional variables. We demonstrated the efficacy of CIP in regression and classification tasks involving simulation studies, high-dimensional images, and in a fairness application on tabular data, where it outperforms existing baselines.
The main limitation of our work, shared by all studies in this domain, is the assumption that the causal graph is known. Another limitation is that our methodology is applicable only when our graphical criterion is satisfied, requiring a certain set of variables to be observed (albeit unobserved confounders are not generally excluded). From an ethics perspective, the increased robustness of counterfactually invariant predictors, or the societal benefits of counterfactually fair ones, is certainly desirable. However, this presupposes that the often untestable assumptions are valid. Overall, the causal methodology should not be applied lightly, especially in high-stakes and consequential decisions. A critical analysis of the broader context or systemic factors may hold more promise for societal benefits than a well-crafted algorithmic predictor.
An important direction for future work is to assess the sensitivity of CIP to misspecifications of the causal graph or insufficient knowledge of the required blocking set. Lastly, our graphical criterion and KME-based objective can also be useful for causal representation learning, where one aims to isolate causally relevant, autonomous factors underlying the data-generating process of high-dimensional data.
A Proof of Theorem 3.2

A.1 Overview of the proof techniques

We restate the main theorem for completeness.

Theorem 3.2. Let G be a causal graph, A, W be two (not necessarily disjoint) sets of nodes in G, such that (A ∪ W) ∩ Y = ∅, let S be a valid adjustment set for (A ∪ W, Y), and define Z := (S ∪ W) \ A. Then, in all SCMs compatible with G, if a predictor Ŷ satisfies Ŷ ⊥⊥ A | Z, then Ŷ is counterfactually invariant in A with respect to W.

Our proof technique generalizes the work of Shpitser and Pearl [2009]. To understand the proof technique, note that conditional counterfactual distributions of the form P_{Y*_a | W}(y | w) involve quantities from two different worlds. The variables W belong to the pre-interventional world, and the interventional variable Y*_a belongs to the world after performing the intervention A ← a. Hence, we study the identification of conditional counterfactual distributions using a diagram that embeds the causal relationships between the pre- and the post-interventional world. After defining this diagram, we prove that some conditional measures in this new model provide an estimate for P_{Y*_a | W}(y | w). We then combine this result with the properties of Z to prove the desired result.
A.2 Identifiability of counterfactual distributions
In this section, we discuss a well-known criterion for the identifiability of conditional distributions, which we will then use to prove Theorem 3.2. To this end, we use the notions of a blocked path and valid adjustment set, which we restate for clarity.
Definition A.1. Consider a path π of a causal graph G. A set of nodes Z blocks π if π contains a triple of consecutive nodes connected in one of the following ways: N_i → Z → N_j or N_i ← Z → N_j, with N_i, N_j ∉ Z and Z ∈ Z; or N_i → M ← N_j, where neither M nor any descendant of M is in Z.
Using this definition, we define the concept of a valid adjustment set.
Definition 2.2. Let G be a causal graph and let X, Y be disjoint (sets of) nodes in G. A set of nodes S is a valid adjustment set for (X, Y) if (i) no element in S is a descendant in G_{\overline{X}} of any node W ∉ X which lies on a proper causal path from X to Y, and (ii) S blocks all non-causal paths from X to Y in G.
Definition 2.2 is a useful graphical criterion for the identifiability of counterfactual distributions. In fact, following Corollary 1 by Shpitser et al. [2010], if S satisfies the adjustment criterion relative to (A, Y), then it holds

P_{Y*_a}(y) = ∫ P_{Y | A,S}(y | a, s) dP_S.
Furthermore, this identifiability criterion is complete. That is, for any graph G and any set of nodes S that does not fulfill the valid adjustment criterion with respect to (A, Y), there exists a model compatible with G for which the above identity does not hold.
A.3 d-separation and conditional independence
In this section, we discuss a well-known criterion for conditional independence, which we will then use to prove Theorem 3.2. We use the notion of a blocked path, as in Definition A.1, and the concept of d-separation as follows.
Definition A.2 (d-Separation). Consider a causal graph G. Two sets of nodes X and Y of G are said to be d-separated by a third set S if every path from any node of X to any node of Y is blocked by S.
We use the notation X ⊥⊥_G Y | S to indicate that X and Y are d-separated by S in G. We use Definition A.2 as a graphical criterion for conditional independence [Pearl, 2000].
Lemma A.3 (Markov Property). Consider a causal graph G, and suppose that two sets of nodes X and Y of G are d-separated by S. Then, X is independent of Y given S in any model induced by the graph G.
The Markov property is also referred to as the d-separation property.
A.4 A graphical characterization of counterfactual distributions
We study the relationships between the pre-interventional model corresponding to a causal diagram G and the post-interventional model, inducing a diagram G a after an intervention A ← a. A natural way to study this relationship is to use the counterfactual graph [Shpitser and Pearl, 2008]. However, the construction of the counterfactual graph is rather intricate. For our purposes it is sufficient to consider a simpler construction, generalizing the work by Shpitser and Pearl [2009].
Consider an SCM with causal graph G, and fix a set of observed random variables of interest W. Denote with de(A) all descendants of A in G. Furthermore, for each node N of G, denote with an(N) the set of all its ancestral variables. We define the corresponding graph G_{A∪W} in the following steps (a code sketch of the construction follows the list):
1. Define G_{A∪W} to be the same graph as G.

2. For each node N ∈ A ∪ W, add a new duplicate node N' to G_{A∪W}.

3. For each node N ∈ A ∪ W and for each ancestral variable P ∈ an(N) \ (A ∪ W) such that P ∈ de(A ∪ W), add a new duplicate node P' to G_{A∪W}.

4. For each duplicate node N' and for each parent P ∈ pa(N), if a duplicate node P' was added in steps 2-3, then add an edge P' → N'; otherwise add an edge P → N'.

5. For each duplicate node N', add an edge U_N → N'.
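The following is a code sketch of steps 1-5, assuming the graph is given as a networkx DiGraph; the naming convention for duplicates (N') and latents (U_N) is ours.

```python
import networkx as nx

def augment_graph(G, AW):
    """Build G_{A ∪ W} from a DiGraph G by duplicating the nodes in AW
    (steps 1-5); duplicates are named "<node>'" and latents "U_<node>"."""
    Gp = G.copy()                                                  # step 1
    desc = set().union(*(nx.descendants(G, n) for n in AW)) if AW else set()
    to_copy = set(AW)                                              # step 2
    for n in AW:                                                   # step 3
        for p in nx.ancestors(G, n):
            if p not in AW and p in desc:
                to_copy.add(p)
    dup = {n: f"{n}'" for n in to_copy}
    Gp.add_nodes_from(dup.values())
    for n in to_copy:                                              # step 4
        for p in G.predecessors(n):
            Gp.add_edge(dup.get(p, p), dup[n])
        Gp.add_edge(f"U_{n}", dup[n])                              # step 5
    return Gp

# e.g., for the graph of Fig. 6 with Z -> A -> X, Z -> X, and W = {A, X}:
G = nx.DiGraph([("Z", "A"), ("A", "X"), ("Z", "X")])
print(augment_graph(G, {"A", "X"}).edges)
```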
An illustration of this graph is presented in Fig. 6. We denote with H the set of duplicate nodes that were added to G_{A∪W}. We can naturally define structural equations for the new variables N' as
N' = f_N(pa(N'), U_N),

with f_N the structural equation for N in the original model, and pa(N') the parents of N' in the newly defined graph G_{A∪W}. Note that each random variable N' is a copy of the corresponding N, in the sense that N' = N almost surely. Importantly, the following lemma holds.
Lemma A.4. Suppose that a set of nodes S satisfies the adjustment criterion relative to (A ∪ W, Y) in G. Then, S satisfies the adjustment criterion relative to (A' ∪ W', Y) in G_{A∪W}.

Proof. We prove the claim by showing that all non-causal paths in G_{A∪W} from A' ∪ W' to Y are blocked by S. Indeed, if S satisfies the adjustment criterion relative to (A ∪ W, Y) in G, then condition (i) of the adjustment criterion (Definition 2.2) relative to (A' ∪ W', Y) in G_{A∪W} is satisfied. Let π be any such non-causal path in G_{A∪W} from A' ∪ W' to Y. If π does not cross any duplicate node, then it is blocked by S. Otherwise, without loss of generality, we can decompose π into three paths, which we refer to as π_1, π_2, and π_3. The path π_1 starts from a node in A' ∪ W', and it terminates in H. The path π_2 only contains nodes in H, and the path π_3 starts from a node of H, and it terminates in Y. The paths π_1 and π_3 necessarily contain segments of the form N' ← P or N ← U_N → N', with N' ∈ H, P and N nodes of G, and U_N a latent variable. By construction, no node N' ∈ H belongs to the adjustment set S. Hence, the path π contains a triple of consecutive nodes N_i → M ← N_j in which neither the central node M nor any of its descendants is included in S. Hence, the path π is blocked.
We further prove the following lemma.
Lemma A.5 (Following Theorem 4 by Shpitser et al. [2010]). Define the sets X = W \ A and X' = W' \ A'. Suppose that a set of nodes S satisfies the adjustment criterion relative to (A ∪ W, Y) in G. Then, it holds Y*_{a,x} ⊥⊥ A', X' | S for any intervention A, X ← a, x.

Proof. By Lemma A.4, S satisfies the adjustment criterion relative to (A' ∪ W', Y) in G_{A∪W}. Equivalently, S satisfies the adjustment criterion relative to (A' ∪ X', Y) in G_{A∪W}. Hence, by the sufficiency of the adjustment criterion (Theorem 4 by Shpitser et al. [2010]), it holds Y ⊥⊥ A', X' | S in the graph (G_{A∪W})_{a,x}, which is obtained from G_{A∪W} by performing an intervention A, X ← a, x. By definition, the random variables A' and X' in (G_{A∪W})_{a,x} are copies of the pre-interventional variables A, X. It follows that Y ⊥⊥ A', X' | S in the graph (G_{A∪W})_{a,x} or, equivalently, that Y*_{a,x} ⊥⊥ A', X' | S, as claimed.
A.5 Proof of Theorem 3.2
We can identify conditional counterfactual distributions in G by identifying distributions on G_{A∪W}. We can combine this observation with the notion of a valid adjustment set to derive a closed formula for the identification of the distributions of interest.
Proof of Theorem 3.2. Following the notation of Lemma A.5, define the sets X = W \ A, X' = W' \ A', and let G_{A∪W} be the augmented graph obtained by adding duplicate nodes. Note that, using this notation, the assumption that Y ⊥⊥ A | Z can be written as Y ⊥⊥ A | X, S. Denote with P' the induced measure on G_{A∪W}. Suppose that it holds

P_{Y*_{a',x} | A,X}(y | a, x) = ∫ P_{Y | A,X,S}(y | a', x, s) dP_{S | A,X}(s | a, x)    (5)

for any intervention A ← a', and for any possible value w attained by W. Assuming that Eq. (5) holds, we have that

P_{Y*_{a',x} | A,X}(y | a, x)
  = ∫ P_{Y | A,X,S}(y | a', x, s) dP_{S | A,X}(s | a, x)    (assuming Eq. (5))
  = ∫ P_{Y | A,X,S}(y | a, x, s) dP_{S | A,X}(s | a, x)     (Y ⊥⊥ A | X, S)
  = P_{Y*_{a,x} | A,X}(y | a, x).                           (assuming Eq. (5))    (6)

To conclude, define the set T = A \ W. It follows that

P_{Y*_{a',x} | W}(y | w)
  = ∫ P_{Y*_{a',x} | A,X}(y | a, x) dP_{T | W}(t | w)    (by conditioning)
  = ∫ P_{Y*_{a,x} | A,X}(y | a, x) dP_{T | W}(t | w)     (by Eq. (6))
  = P_{Y*_{a,x} | W}(y | w).                             (by unconditioning)

Since X ⊆ W, from the equalities above it holds

P_{Y*_{a'} | W}(y | w) = P_{Y*_{a',x} | W}(y | w) = P_{Y*_{a,x} | W}(y | w) = P_{Y*_a | W}(y | w),

as claimed. The proof of Theorem 3.2 thus boils down to proving Eq. (5). To this end, we use the valid adjustment property of S. Note that by Lemma A.5 it holds Y*_{a',x} ⊥⊥ A, X | S. Hence,

P_{Y*_{a',x} | A,X}(y | a, x)
  = ∫ P_{Y*_{a',x} | A,X,S}(y | a, x, s) dP_{S | A,X}(s | a, x)    (by conditioning)
  = ∫ P_{Y*_{a',x} | S}(y | s) dP_{S | A,X}(s | a, x)              (Y*_{a',x} ⊥⊥ A, X | S)
  = ∫ P_{Y | A,X,S}(y | a', x, s) dP_{S | A,X}(s | a, x),          (by Lemma A.4)

and Eq. (5) follows.
B Proof of Theorem 3.4
We prove that the HSCIC can be used to promote conditional independence, using a similar technique to that of Park and Muandet [2020]. The following theorem holds.
Theorem 3.4 (Theorem 5.4 by Park and Muandet [2020]). If the kernel k of H_Y ⊗ H_A is characteristic, then HSCIC(Y, A | Z) = 0 almost surely if and only if Y ⊥⊥ A | Z.
Proof. By definition, we can write HSCIC(Y, A | Z) = H_{Y,A|Z} ∘ Z, where H_{Y,A|Z} is a real-valued deterministic function. Hence, the HSCIC is a real-valued random variable, defined over the same domain Ω_Z as the random variable Z.
We first prove that if HSCIC(Y, A | Z) = 0 almost surely, then it holds Y ⊥⊥ A | Z.
To this end, consider an event Ω' ⊆ Ω_Z that occurs almost surely, and such that it holds (H_{Y,A|Z} ∘ Z)(ω) = 0 for all ω ∈ Ω'. Fix a sample ω ∈ Ω', and consider the corresponding value z_ω = Z(ω) in the support of Z. It holds

∫ k(y ⊗ a, ·) dP_{Y,A|Z=z_ω} = µ_{Y,A|Z=z_ω}    (by definition)
  = µ_{Y|Z=z_ω} ⊗ µ_{A|Z=z_ω}    (since ω ∈ Ω')
  = ∫ k_Y(y, ·) dP_{Y|Z=z_ω} ⊗ ∫ k_A(a, ·) dP_{A|Z=z_ω}    (by definition)
  = ∫ k_Y(y, ·) ⊗ k_A(a, ·) d(P_{Y|Z=z_ω} ⊗ P_{A|Z=z_ω}),    (by Fubini's theorem)

with k_Y and k_A the kernels of H_Y and H_A respectively. Since the kernel k of the tensor product space H_Y ⊗ H_A is characteristic, the kernels k_Y and k_A are also characteristic. Hence, it holds P_{Y,A|Z=z_ω} = P_{Y|Z=z_ω} ⊗ P_{A|Z=z_ω} for all ω ∈ Ω'. Since the event Ω' occurs almost surely, P_{Y,A|Z=z_ω} = P_{Y|Z=z_ω} ⊗ P_{A|Z=z_ω} almost surely, that is, Y ⊥⊥ A | Z.
Assume now that Y ⊥⊥ A | Z. By definition there exists an event Ω' ⊆ Ω_Z such that P_{Y,A|Z=z_ω} = P_{Y|Z=z_ω} ⊗ P_{A|Z=z_ω} for all samples ω ∈ Ω', with z_ω = Z(ω). It holds

µ_{Y,A|Z=z_ω} = ∫ k(y ⊗ a, ·) dP_{Y,A|Z=z_ω}    (by definition)
  = ∫ k(y ⊗ a, ·) d(P_{Y|Z=z_ω} ⊗ P_{A|Z=z_ω})    (since ω ∈ Ω')
  = ∫ k_Y(y, ·) k_A(a, ·) d(P_{Y|Z=z_ω} ⊗ P_{A|Z=z_ω})    (by definition of k)
  = ∫ k_Y(y, ·) dP_{Y|Z=z_ω} ⊗ ∫ k_A(a, ·) dP_{A|Z=z_ω}    (by Fubini's theorem)
  = µ_{Y|Z=z_ω} ⊗ µ_{A|Z=z_ω}.    (by definition)
The claim follows.
C Conditional kernel mean embeddings and the HSCIC
The notion of conditional kernel mean embeddings has already been studied in the literature. We show that, under stronger assumptions, our definition is equivalent to the definition by Park and Muandet [2020].
C.1 Conditional kernel mean embeddings and conditional independence
We show that, under stronger assumptions, the HSCIC can be defined using the Bochner conditional expected value. The Bochner conditional expected value is defined as follows.
Definition C.1. Fix two random variables Y, Z taking values in a Banach space H, and denote with (Ω, F, P) their joint probability space. Then, the Bochner conditional expectation of Y given Z is any H-valued random variable X such that

∫_E Y dP = ∫_E X dP

for all E ∈ σ(Z) ⊆ F, with σ(Z) the σ-algebra generated by Z. We denote with E[Y | Z] the Bochner expected value. Any random variable X as above is a version of E[Y | Z].
The existence and almost sure uniqueness of the conditional expectation are shown in Dinculeanu [2000]. Given an RKHS H with kernel k over the support of Y, Park and Muandet [2020] define the corresponding conditional kernel mean embedding as

µ_{Y|Z} := E[k(·, y) | Z].
Note that, according to this definition, µ_{Y|Z} is an H-valued random variable, not a single point of H. Park and Muandet [2020] use this notion to define the HSCIC as follows.

Definition C.2 (The HSCIC according to Park and Muandet [2020]). Consider (sets of) random variables Y, A, Z, and consider two RKHS H_Y, H_A over the support of Y and A respectively. The HSCIC between Y and A given Z is defined as the real-valued random variable

ω ↦ ‖µ_{Y,A|Z}(ω) − µ_{Y|Z}(ω) ⊗ µ_{A|Z}(ω)‖,

for all samples ω in the domain Ω_Z of Z. Here, ‖·‖ is the norm induced by the inner product of the tensor product space H_Y ⊗ H_A.
We show that, under more restrictive assumptions, Definition C.2 can be used to promote conditional independence. To this end, we use the notion of a regular version.
Definition C.3 (Regular Version, following Definition 2.4 by Çınlar [2011]). Consider two random variables Y, Z, and consider the induced measurable spaces (Ω_Y, F_Y) and (Ω_Z, F_Z). A regular version Q for P_{Y|Z} is a mapping Q : Ω_Z × F_Y → [0, +∞], (ω, y) ↦ Q_ω(y), such that: (i) the map ω ↦ Q_ω(y) is F_Z-measurable for all y; (ii) the map y ↦ Q_ω(y) is a measure on (Ω_Y, F_Y) for all ω; (iii) the function Q_ω(y) is a version of E[1_{{Y ∈ y}} | Z].
The following theorem shows that the random variable as in Definition C.2 can be used to promote conditional independence.
Theorem C.4 (Theorem 5.4 by Park and Muandet [2020]). With the notation introduced above, suppose that the kernel k of the tensor product space H_Y ⊗ H_A is characteristic. Furthermore, suppose that P_{Y,A|Z} admits a regular version. Then, ‖µ_{Y,A|Z}(ω) − µ_{Y|Z}(ω) ⊗ µ_{A|Z}(ω)‖ = 0 almost surely if and only if Y ⊥⊥ A | Z.
Note that the assumption of the existence of a regular version is essential in Theorem C.4. In this work, HSCIC is not used for conditional independence testing but as a conditional independence measure.
C.2 Equivalence with our approach
Under the existence of a regular version, conditional kernel mean embeddings can be defined using the Bochner conditional expected value. To this end, we use the following theorem.
Theorem C.5 (Following Proposition 2.5 by Çınlar [2011]). Following the notation introduced in Definition C.3, suppose that P_{Y|Z}(· | Z) admits a regular version Q_ω(y). Consider a kernel k over the support of Y. Then, the mapping

ω ↦ ∫ k(·, y) dQ_ω(y)

is a version of E[k(·, y) | Z].
As a consequence of Theorem C.5, we prove the following result.
Lemma C.6. Fix two random variables Y, Z. Suppose that P_{Y|Z} admits a regular version. Denote with Ω_Z the domain of Z. Then, there exists a subset Ω ⊆ Ω_Z that occurs almost surely, such that µ_{Y|Z}(ω) = µ_{Y|Z=Z(ω)} for all ω ∈ Ω. Here, µ_{Y|Z=Z(ω)} is the embedding of conditional measures as in Section 2.
Proof. Let Q_ω(y) be a regular version of P_{Y|Z}. Without loss of generality we may assume that it holds P_{Y|Z}(y | {Z = Z(ω)}) = Q_ω(y). By Theorem C.5 there exists an event Ω ⊆ Ω_Z that occurs almost surely such that

µ_{Y|Z}(ω) = E[k(y, ·) | Z](ω) = ∫ k(y, ·) dQ_ω(y),    (7)

for all ω ∈ Ω. Then, for all ω ∈ Ω it holds

µ_{Y|Z}(ω) = ∫ k(y, ·) dQ_ω(y)    (it follows from Eq. (7))
  = ∫ k(y, ·) dP_{Y|Z}(y | {Z = Z(ω)})    (Q_ω(y) = P_{Y|Z}(y | {Z = Z(ω)}))
  = µ_{Y|{Z=Z(ω)}},    (by definition as in Section 2)

as claimed.
As a consequence of Lemma C.6, we can prove that the definition of the HSCIC by Park and Muandet [2020] is equivalent to ours. The following corollary holds.
Corollary C.7. Consider (sets of) random variables Y, A, Z, and consider two RKHS H_Y, H_A over the support of Y and A respectively. Suppose that P_{Y,A|Z}(· | Z) admits a regular version. Then, there exists a set Ω ⊆ Ω_Z that occurs almost surely, such that

‖µ_{Y,A|Z}(ω) − µ_{Y|Z}(ω) ⊗ µ_{A|Z}(ω)‖ = (H_{Y,A|Z} ∘ Z)(ω).

Here, H_{Y,A|Z} is a real-valued deterministic function, defined as

H_{Y,A|Z}(z) := ‖µ_{Y,A|Z=z} − µ_{Y|Z=z} ⊗ µ_{A|Z=z}‖,

and ‖·‖ is the norm induced by the inner product of the tensor product space H_Y ⊗ H_A.
We remark that the assumption of the existence of a regular version is essential in Corollary C.7.
D The cross-covariance operator
In this section, we show that, under more restrictive assumptions, our definition of conditional KMEs is equivalent to the definition based on the cross-covariance operator.
The definition of KMEs based on the cross-covariance operator requires the use of the following well-known result.
Lemma D.1. Fix two RKHS H_X and H_Z, and let {φ_i}_{i=1}^∞ and {ψ_j}_{j=1}^∞ be orthonormal bases of H_X and H_Z respectively. Denote with HS(H_X, H_Z) the set of Hilbert-Schmidt operators between H_X and H_Z. There is an isometric isomorphism between the tensor product space H_X ⊗ H_Z and HS(H_X, H_Z), given by the map

T : Σ_{i=1}^∞ Σ_{j=1}^∞ c_{i,j} φ_i ⊗ ψ_j ↦ Σ_{i=1}^∞ Σ_{j=1}^∞ c_{i,j} ⟨·, φ_i⟩_{H_X} ψ_j.
For a proof of this result see, e.g., Park and Muandet [2020]. This lemma allows us to define the cross-covariance operator between two random variables, using the operator T.
Definition D.2 (Cross-Covariance Operator). Consider two random variables X, Z. Consider the corresponding mean embeddings µ_{X,Z}, µ_X and µ_Z, as defined in Section 3. The cross-covariance operator is defined as Σ_{X,Z} := T(µ_{X,Z} − µ_X ⊗ µ_Z). Here, T is the isometric isomorphism as in Lemma D.1.
It is well known that the cross-covariance operator can be decomposed into the covariance of the marginals and the correlation. That is, there exists a unique bounded operator Λ_{Y,Z} such that

Σ_{Y,Z} = Σ_{Y,Y}^{1/2} ∘ Λ_{Y,Z} ∘ Σ_{Z,Z}^{1/2}.
Using this notation, we define the normalized conditional cross-covariance operator. Given three random variables Y, A, Z and corresponding kernel mean embeddings, this operator is defined as

Λ_{Y,A|Z} := Λ_{Y,A} − Λ_{Y,Z} ∘ Λ_{Z,A}.    (8)
This operator was introduced by Fukumizu et al. [2007]. The normalized conditional cross-covariance operator can be used to promote statistical independence, as shown in the following theorem.
Theorem D.3 (Theorem 3 by Fukumizu et al. [2007]). Following the notation introduced above, define the random variable Ä := (A, Z). Let P_Z be the distribution of the random variable Z, and denote with L²(P_Z) the space of square-integrable functions with respect to P_Z. Suppose that the tensor product kernel k_Y ⊗ k_A ⊗ k_Z is characteristic. Furthermore, suppose that H_Z + R is dense in L²(P_Z). Then, it holds

Λ_{Y,Ä|Z} = 0 if and only if Y ⊥⊥ A | Z.

Here, Λ_{Y,Ä|Z} is an operator defined as in Eq. (8).
By Theorem D.3, the operator Λ_{Y,Ä|Z} can also be used to promote conditional independence. However, CIP is more straightforward, since it requires fewer assumptions. In fact, Theorem D.3 requires embedding the variable Z in an RKHS. In contrast, CIP only requires the embedding of the variables Y and A.
E Random Fourier Features
Random Fourier features are an approach to scaling up kernel methods for shift-invariant kernels [Rahimi and Recht, 2007]. Recall that a shift-invariant kernel is a kernel of the form k(z, z') = h_k(z − z'), with h_k a positive definite function.
Fourier features are defined via the following well-known theorem.
Theorem E.1 (Bochner's Theorem). For every shift-invariant kernel of the form k(z, z') = h_k(z − z') with h_k(0) = 1, there exists a probability density function P_k(η) such that

k(z, z') = ∫ e^{−2πi η^T (z − z')} dP_k.
Since both the kernel k and the probability distribution P_k are real-valued functions, the integrand in Theorem E.1 can be replaced by the function cos(η^T (z − z')), and we obtain the following formula

k(z, z') = ∫ cos(η^T (z − z')) dP_k = E[cos(η^T (z − z'))],    (9)

where the expected value is taken with respect to the distribution P_k(η). This equation allows us to approximate the kernel k(z, z') via the empirical mean over points η_1, ..., η_l sampled independently according to P_k. In fact, it is possible to prove exponentially fast convergence of the empirical estimate for E[cos(η^T (z − z'))], as shown in the following theorem.
Theorem E.2 (Uniform Convergence of Fourier Features, Claim 1 by Rahimi and Recht [2007]). Following the notation introduced above, fix any compact subset Ω in the domain of k, and consider points η_1, ..., η_l sampled independently according to the distribution P_k. Define the function

k̂(z, z') := (1/l) Σ_{j=1}^{l} cos(η_j^T (z − z')),

for all z, z' ∈ Ω. Then, it holds

P( sup_{z,z'} |k̂(z, z') − k(z, z')| ≥ ε ) ≤ 2^8 (σ_k diam(Ω) / ε)^2 exp(−ε² l / (4(d + 1))).

Here σ_k² is the second moment of the Fourier transform of the kernel k, and d is the dimension of the arrays z and z'.
By Theorem E.2, the estimated kernel k̂ is a good approximation of the true kernel k on the set Ω.
Similarly, we can approximate the kernel matrix using random Fourier features. Following the notation introduced above, define the function

ζ_{k,l}(z) := (1/√l) (cos(η_1^T z), ..., cos(η_l^T z)),    (10)

with η_1, ..., η_l sampled independently according to the distribution P_k.
We can approximate the kernel matrix using the functions defined as in Eq. (10). Consider n samples z_1, ..., z_n, and denote with Z the n × l matrix whose i-th row is given by ζ_{k,l}(z_i). Similarly, denote with Z* the l × n matrix whose i-th column is given by ζ*_{k,l}(z_i). Then, we can approximate the kernel matrix as K̂_Z ≈ ZZ*.

We can also use this approximation to compute the kernel ridge regression parameters as in Section 3, using the formula

ŵ_{Y|Z}(·) ≈ (ZZ* + nλI)^{−1} [k_Z(·, z_1), ..., k_Z(·, z_n)]^T.

Avron et al. [2017] argue that the approximate kernel ridge regression, as defined above, is an accurate estimate of the true solution. Their argument is based on proving that the matrix ZZ* + nλI is a good approximation of K̂_Z + nλI. The notion of good approximation is clarified by the following definition.
Definition E.3. Fix two Hermitian matrices A and B of the same size. We say that the matrix A is a γ-spectral approximation of the matrix B if it holds (1 − γ)B ⪯ A ⪯ (1 + γ)B. Here, the symbol ⪯ means that A − (1 − γ)B is positive semi-definite, and that (1 + γ)B − A is positive semi-definite. Avron et al. [2017] prove that ZZ* + nλI is a γ-approximation of K̂_Z + nλI if the number of samples η_1, ..., η_l is sufficiently large.
Theorem E.4 (Theorem 7 by Avron et al. [2017]). Fix a constant γ ≤ 1/2. Consider n samples z_1, ..., z_n, and denote with K̂_Z the corresponding kernel matrix. Suppose that it holds ‖K̂_Z‖_2 ≥ nλ for a constant λ > 0. Fix η_1, ..., η_l samples with

l ≥ (8 / (3γ²λ)) ln(16 tr_λ(K̂_Z) / γ).

Then, the matrix ZZ* + nλI is a γ-approximation of K̂_Z + nλI with probability at least 1 − γ, for all γ ∈ (0, 1). Here, tr_λ(K̂_Z) is defined as the trace of the matrix K̂_Z(K̂_Z + nλI)^{−1}.
We conclude this section by illustrating the use of random Fourier features to approximate a simple Gaussian kernel. Suppose that we are given a kernel of the form

k(z, z') := exp(−(1/(2σ)) ‖z − z'‖²₂).

Then, k(z, z') can be estimated as in Theorem E.2, with η_1, ..., η_l ∼ N(0, Σ), where Σ := σ^{−1} I, with I the identity matrix. The functions ζ_{k,l}(z) can be defined accordingly.
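As a concrete illustration, the following numpy sketch approximates this Gaussian kernel with random Fourier features. Note that we use the standard real-valued cosine-sine feature pair rather than the cosine-only form of Eq. (10), so that the feature inner products are unbiased estimates of the kernel.

```python
import numpy as np

def rff_features(Z, l=1000, sigma=1.0, seed=0):
    """Random Fourier features for k(z, z') = exp(-||z - z'||^2 / (2 sigma))."""
    rng = np.random.default_rng(seed)
    d = Z.shape[1]
    eta = rng.normal(0.0, np.sqrt(1.0 / sigma), size=(l, d))  # eta ~ N(0, sigma^{-1} I)
    proj = Z @ eta.T
    return np.hstack([np.cos(proj), np.sin(proj)]) / np.sqrt(l)

# Quick check against the exact kernel (error shrinks as l grows):
rng = np.random.default_rng(1)
Z = rng.normal(size=(6, 3))
K_exact = np.exp(-0.5 * ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1))
Phi = rff_features(Z, l=5000)
print(np.abs(Phi @ Phi.T - K_exact).max())
```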
F Experiment settings
Additional information on the experiments is now provided.
F.1 Dataset for model performance with the use of the HSCIC
The data-generating mechanism corresponding to the results in Fig. 2 is the following:

Z ∼ N(0, 1)
A = Z² + ε_A
X = exp(−½A²) · sin(2A) + 2Z + (1/5)ε_X
Y = ½ exp(−XZ) · sin(2XZ) + 5A + (1/5)ε_Y,

where ε_A ∼ N(0, 1) and ε_Y, ε_X i.i.d. ∼ N(0, 0.1). In this first experiment, Fig. 2 shows the results of feed-forward neural networks consisting of 8 hidden layers with 20 nodes each, connected with a rectified linear activation function (ReLU) and a linear final layer. A mini-batch size of 256 and the Adam optimizer with a learning rate of 10⁻³ for 300 epochs were used.
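A numpy sketch of this data-generating mechanism is given below; reading N(0, 0.1) as a variance of 0.1, and the reconstruction of the term 2Z + (1/5)ε_X from the extracted equations, are assumptions of ours.

```python
import numpy as np

def generate_f1(n=10_000, seed=0):
    rng = np.random.default_rng(seed)
    eps = lambda var: rng.normal(0.0, np.sqrt(var), n)  # N(0, var), var assumed to be variance
    Z = rng.normal(0.0, 1.0, n)
    A = Z ** 2 + eps(1.0)
    X = np.exp(-0.5 * A ** 2) * np.sin(2 * A) + 2 * Z + 0.2 * eps(0.1)
    Y = 0.5 * np.exp(-X * Z) * np.sin(2 * X * Z) + 5 * A + 0.2 * eps(0.1)
    return Z, A, X, Y
```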
F.2 Datasets and results for comparison with baselines
The comparison of our method CIP with CF1 and CF2 is done on different simulated datasets. These will be referred to as Scenario 1 and Scenario 2. The data-generating mechanism corresponding to the results in Fig. 3 (Scenario 1) is the following:

Z ∼ N(0, 1)
A = exp(½Z²) · sin(2Z) + ε_A
X = exp(½A²) · ε_X + 2Z
Y = ½ exp(−XZ) · sin(2XZ) + 5A + (1/5)ε_Y,

where ε_A, ε_X i.i.d. ∼ N(0, 1) and ε_Y ∼ N(0, 0.1). The data-generating mechanism for Scenario 2 is the following:

Z ∼ N(0, 1)
A = exp(½Z²) · sin(2Z) + ε_A
X = ½Z + A · ε_X
Y = sin(Z) + A + X + (1/5)ε_Y,

where ε_A, ε_X i.i.d. ∼ N(0, 1) and ε_Y ∼ N(0, 0.1).

Fig. 7 shows the performance of CIP against baselines CF1 and CF2 in Scenario 2. In Table 2, the results of MSE, HSCIC and VCF are presented. In this table, both Scenario 1 and Scenario 2 were considered. The results shown in Fig. 3, Fig. 7 and Table 2 are the average and standard deviation resulting from 9 random seed runs. For CIP, the same hyperparameters as in the previous setting are used.

Figure 7: Results of MSE, HSCIC operator and VCF in comparison with CF1 and CF2 for Scenario 2. The plot shows the results for 10 different seeds, along with the mean and standard deviations. CF2 is Pareto-dominated by the VCF-MSE frontier; we can hence pick a γ value to outperform CF2 in both accuracy and counterfactual invariance simultaneously.
The MLPs implemented in CF1 and CF2, i.e., the one used for the prediction of Ŷ and the one used for the prediction of the X residuals in CF2, are all designed with similar architecture and training method. The MLP models consist of 8 hidden layers with 20 nodes each, connected with a rectified linear activation function (ReLU) and a linear final layer. During training, a mini-batch size of 64 and the Adam optimizer with a learning rate of 10⁻³ for 200 epochs were used.
F.3 Datasets and results for multi-dimensional variables experiments
The data-generating mechanisms for the multi-dimensional settings of Table 1 are now shown. Given dimA = D₁ ≥ 2, the datasets were generated from:

Z ∼ N(0, 1)
A_i = Z² + ε_A^i for i ∈ {1, ..., D₁}
X = exp(−½A₁) + Σ_{i=1}^{D₁} A_i · sin(Z) + 0.1 · ε_X
Y = exp(−½A₂) · Σ_{i=1}^{D₁} A_i + XZ + 0.1 · ε_Y,

where ε_X, ε_Y i.i.d. ∼ N(0, 0.1) and ε_A^1, ..., ε_A^{D₁} i.i.d. ∼ N(0, 1). In this experiment, the chosen mini-batch size is 512 and the same hyperparameters are used as in the previous settings. The neural network architecture is trained for 800 epochs. Table 3 presents additional results when the dimension of A is 5. Fig. 8 and Fig. 9 present the results corresponding to 4 random seeds with different values of the trade-off parameter γ for dimA among {5, 10, 20}. In all of the box plots, it is evident that there exists a trade-off between the accuracy and counterfactual invariance of the predictor. As the value of γ increases, there is a consistent trend of increasing counterfactual invariance (as evidenced by the decrease in the VCF metric). Similarly to the previous box-plot visualizations, the boxes represent the interquartile range (IQR), the horizontal line is the median, and whiskers show the minimum and maximum values, excluding the outliers (determined as a function of the interquartile range). Outliers are represented in the plot as dots.
F.4 High-dimensional image dataset
The simulation procedure for the results shown in Section 4.2 is the following.
shape ∼ P(shape), y-pos ∼ P(y-pos), color ∼ P(color), orientation ∼ P(orientation)
x-pos = round(x), where x ∼ N(shape + y-pos, 1)
scale = round((x-pos/24 + y-pos/24) · shape + ε_S)
Y = e^{shape} · x-pos + scale² · sin(y-pos) + ε_Y,

where ε_S ∼ N(0, 1) and ε_Y ∼ N(0, 0.01). The data has been generated via a matching procedure on the original dSprites dataset. In Table 5, the hyperparameters of the layers of the convolutional neural network are presented. Each of the convolutional groups also has a ReLU activation function and a dropout layer. Two MLP architectures have been used. The former takes as input the observed tabular features. It is composed of two hidden layers of 16 and 8 nodes respectively, connected with ReLU activation functions and dropout layers. The latter takes as input the concatenated outcomes of the CNN and the other MLP. It consists of three hidden layers of 8, 8 and 16 nodes, respectively. In Figure 10, the results are presented for higher values of γ, with a specific emphasis on the interplay between accuracy and counterfactual invariance. The means and standard deviations corresponding to 8 seeds can be found in Table 4. As evidenced by the results for γ = 500, there is a clear trade-off between these two factors, with a notable loss in accuracy leading to a significant improvement in counterfactual invariance, as indicated by the low VCF metric.
F.5 Fairness with continuous protected attributes
The pre-processing of the UCI Adult dataset was based upon the work of Chiappa and Pacchiano [2021]. Referring to the causal graph in Fig. 11, a variational autoencoder [Kingma and Welling, 2014] was trained for each of the unobserved variables H_m, H_l and H_r. The prior distribution of these latent variables is assumed to be standard Gaussian. The posterior distributions P(H_m | V), P(H_r | V), P(H_l | V) are modeled as 10-dimensional Gaussian distributions, whose means and variances are the outputs of the encoder.
The encoder architecture consists of a hidden layer of 20 nodes with hyperbolic tangent activation functions, followed by a linear layer. The decoders have two linear layers with a hyperbolic tangent activation function. The training loss of the variational autoencoder consists of a reconstruction term (mean-squared error for continuous variables and cross-entropy loss for binary ones) and the Kullback-Leibler divergence between the posterior and the prior distribution of the latent variables. For training, we used the Adam optimizer with a learning rate of 10⁻², 100 epochs, and mini-batch size 128.
The predictor Ŷ is the output of a feed-forward neural network consisting of a hidden layer with a hyperbolic tangent activation function and a linear final layer. In the training we used the Adam optimizer with learning rate 10⁻³, mini-batch size 128, and trained for 100 epochs. The choice of the network architecture is based on the work of Chiappa and Pacchiano [2021].
Figure 11: Assumed causal graph for the Adult dataset, as in Chiappa and Pacchiano [2021]. The variables H_m, H_l, H_r are unobserved, and jointly trained with the predictor Ŷ.

The estimation of counterfactual outcomes is based on a Monte Carlo approach. Given a data point, 500 values of the unobserved variables are sampled from the estimated posterior distribution. Given an interventional value for A, a counterfactual outcome is estimated for each of the sampled unobserved values. The final counterfactual outcome is estimated as the average of these counterfactual predictions. In this experimental setting, we have k = 100 and d = 1000.
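A sketch of this abduction-action-prediction loop is shown below, assuming an encoder that returns the Gaussian posterior parameters of the latents and a downstream prediction function; all names here are ours.

```python
import torch

def counterfactual_outcome(encoder, predict, obs, a_star, n_samples=500):
    """Monte Carlo counterfactual: sample latents H from the VAE posterior
    q(H | obs), intervene A <- a_star, and average the resulting predictions."""
    mu, log_std = encoder(obs)                          # posterior parameters of H
    preds = []
    for _ in range(n_samples):
        h = mu + log_std.exp() * torch.randn_like(mu)   # abduction: posterior sample
        preds.append(predict(h, a_star, obs))           # action + prediction
    return torch.stack(preds).mean(dim=0)
```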
In the causal graph presented in Fig. 11, A includes the variables age and gender, C includes nationality and race, M the marital status, L the level of education, R the set of working class, occupation, and hours per week, and Y the income class. Compared to Chiappa and Pacchiano [2021], we include the race variable in the dataset as part of the baseline features C. The loss function is the same as Eq. (1), but the binary cross-entropy loss (L_BCE) is used instead of the mean-squared error loss:
L_CIP(Ŷ) = L_BCE(Ŷ) + γ · HSCIC(Ŷ, {Age, Gender} | Z),    (11)

where the set S = {Race, Nationality} blocks all the non-causal paths from W ∪ A to Y and Z = (S ∪ W) \ A. In this example we have W = C ∪ M ∪ L ∪ R. The results in Fig. 5 (center, right) refer to one run with conditioning set S = {Race, Nationality}. The results in Fig. 5 (left) are the average and standard deviation of four random seeds.
F.6 Baseline Experiments
We provide an experimental comparison against the method by Veitch et al. [2021]. To this end, we consider data-generating mechanisms for the causal and anti-causal structures of Fig. 1(b,c). The data-generating mechanism of the anti-causal structure is the following:
Z ∼ N(0, 1)
A = (1/5) sin(Z) + ε_A
Y = (1/10) sin(Z) + ε_Y
X = A + Y + (1/10)ε_X,

where ε_Y, ε_A i.i.d. ∼ N(0, 0.1) and ε_X ∼ N(0, 1). We compare our method (CIP) against the method by Veitch et al. [2021] using different values for the trade-off parameter γ. In Fig. 1(b-c) the causal and anti-causal graphical settings proposed by Veitch et al. [2021] are presented. In both of these settings there is an unobserved confounder Z between A and Y. The graphical assumptions outlined in Theorem 3.2 are not met in the graphical structures under examination, as the confounding path is not blocked by an observed variable (Z is unobserved). In light of this, our implementation assumes that there is no unobserved confounder. In the graphical structure of Fig. 1(b), CIP enforces HSCIC(Ŷ, A | X) to become small, gradually enforcing Ŷ ⊥⊥ A | X. Differently, Veitch et al. [2021] enforce HSIC(Ŷ, A) as independence criterion. HSIC is the Hilbert-Schmidt Independence Criterion, which is commonly used to promote independence (see, e.g., Fukumizu et al. [2007], Gretton et al. [2005]). In the anti-causal graphical setting presented in Fig. 1(c), the objective term used in CIP is again HSCIC(Ŷ, A | X), while in the method of Veitch et al. [2021] it is HSCIC(Ŷ, A | Y). In Table 6, the results of accuracy, HSCIC(Ŷ, A | X, Z) and VCF are presented.
In the experiments, the predictor Ŷ is a feed-forward neural network consisting of 8 hidden layers with 20 nodes each, connected with a rectified linear activation function (ReLU) and a linear final layer. A mini-batch size of 256 and the Adam optimizer with a learning rate of 10⁻⁴ for 500 epochs were used.
F.7 Comparison Heuristic Methods Experiments
We provide an experimental comparison of the proposed method (CIP) with some heuristic methods, specifically data-augmentation-based methods. We consider the same data-generating procedure and causal structure as presented in Section F. The heuristic methods considered are data augmentation and causal-based data augmentation. In the former, data augmentation is performed by generating N = 50 samples for every data point by sampling new values of A as a_1, ..., a_N i.i.d. ∼ P_A and leaving Z, X, Y unchanged. Differently, in the latter causal-based data augmentation method, we also take into account the causal structure given by the known DAG (a code sketch of this augmentation follows Table 7). Indeed, when manipulating the variable A, its descendants (in this example X) will also change. In this experiment, a predictor for X as X̂ = f_θ(A, Z) is trained on 80% of the original dataset. In the data augmentation mechanism, for every data point {a, x, z, y}, N = 50 samples are generated by sampling new values of A as a_1, ..., a_N i.i.d. ∼ P_A, estimating the values of X as x_1 = f_θ(a_1, z), ..., x_N = f_θ(a_N, z), while leaving the values of Z and Y unchanged. Heuristic methods such as data-augmentation methods are not theoretically guaranteed to provide counterfactually invariant predictors. The results of an empirical comparison are shown in Table 7. These theoretical insights are supported by the experimental results, as the VCF metric measuring counterfactual invariance is lower in both of the two settings of CIP (γ = 1/2 and γ = 1). A dataset of n = 3000 is used, along with k = 500 and d = 500. The architectures for predicting X and Y are feed-forward neural networks consisting of 8 hidden layers with 20 nodes each, connected with a rectified linear activation function (ReLU) and a linear final layer. A mini-batch size of 256 and the Adam optimizer with a learning rate of 10⁻³ for 100 epochs were used.

Table 7: Results of MSE and VCF (all times 10² for readability) on synthetic data of CIP with trade-off parameters γ = 0.5 and γ = 1 and the heuristic methods data augmentation and causal-based data augmentation.

                                  VCF ×10²      MSE ×10²
data augmentation                 3.12 ± 0.16   0.003 ± 0.001
causal-based data augmentation    3.04 ± 0.16   0.013 ± 0.012
CIP (γ = 0.5)                     1.550 ± 0.13  0.044 ± 0.022
CIP (γ = 1.0)                     0.95 ± 0.19   0.19 ± 0.072
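For illustration, the following sketch implements the causal-based augmentation just described, with a small sklearn MLP as an arbitrary choice for f_θ on our part.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def causal_augment(Z, A, X, Y, n_new=50, seed=0):
    """Causal-based augmentation: resample A, propagate it through a fitted
    model X_hat = f_theta(A, Z), keep Z and Y fixed. Z, A, X, Y: 1-d arrays."""
    rng = np.random.default_rng(seed)
    f_theta = MLPRegressor(hidden_layer_sizes=(20, 20, 20), max_iter=500)
    f_theta.fit(np.column_stack([A, Z]), X)
    Z_out, A_out, X_out, Y_out = [], [], [], []
    for z, y in zip(Z, Y):
        a_new = rng.choice(A, size=n_new)                 # resample from empirical P_A
        x_new = f_theta.predict(np.column_stack([a_new, np.full(n_new, z)]))
        Z_out.append(np.full(n_new, z)); A_out.append(a_new)
        X_out.append(x_new); Y_out.append(np.full(n_new, y))
    return tuple(np.concatenate(v) for v in (Z_out, A_out, X_out, Y_out))
```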
Figure 1(f): Causal structure for the constructed high-dimensional dSprites ground truth, where A = {Pos.X}, U = {Scale}, C = {Shape, Pos.Y}, X = {Color, Orientation}, and Y = {Outcome}.
Figure 4: Results of MSE, HSCIC operator and VCF for the dSprites image dataset experiment. The HSCIC operator decreases steadily with higher values of γ. Similarly, a necessary increase of MSE can be observed. For both γ = 1 and γ = 10 an overall decrease of VCF is observed compared to the unconstrained setting. Boxes represent the interquartile range (IQR), the horizontal line is the median, and whiskers show the minimum and maximum values, excluding outliers. Outliers are represented as dots. The results correspond to 12 seeds.

Figure 5: (Left) Results on accuracy (%), HSCIC ×10² and VCF ×10², showing a strong decrease in VCF as γ increases at the cost of only a moderate drop in accuracy. (Right) Distribution of VCF values (unnormalized) for different choices of γ for one seed. We observe less variance and more mass near zero for γ ≥ 0. Notably, for γ = 10 we have a substantial increase in counterfactual invariance, as evidenced by the values in the table.
Figure 6: (a) A causal graph G, which embeds information for the random variables of the model in the pre-interventional world. (b) The corresponding graph G_{A∪W} for the set W = {A, X}. The variables A' and X' are copies of A and X respectively. (c) The post-interventional graph (G_{A∪W})_a. By construction, any intervention of the form A ← a does not affect the group W' = {A', X'}.
Figure 8: Results of MSE, HSCIC operator and VCF for the multi-dimensional variable experiment with dimA = 10.

Figure 9: Results of MSE, HSCIC operator and VCF for the multi-dimensional variable experiment with dimA = 20.

Figure 10: Results of MSE, HSCIC operator and VCF for the dSprites image dataset experiment. The HSCIC operator decreases with higher values of γ. Similarly, a necessary increase of MSE can be observed. A decrease of VCF is observed compared to the unconstrained setting.
Definition 3.1 (Counterfactual invariance). Let A, W be (not necessarily disjoint) sets of nodes in a given SCM. A predictor Ŷ is counterfactually invariant in A w.r.t. W if P_{Ŷ*_a | W}(y | w) = P_{Ŷ*_{a'} | W}(y | w) almost surely, for all a, a' in the domain of A and all w in the domain of W.
Figure 3: Performance of CIP against baselines CF1 and CF2 on a synthetic dataset (see Section F.2). Notably, the HSCIC-MSE frontier traced out for different values of the trade-off parameter (which is available in purely observational settings) can guide the desired choice of γ, as it closely mimics the VCF-MSE frontier. CF2 is Pareto-dominated by this frontier, i.e., we can pick γ to outperform CF2 in both MSE and VCF simultaneously.
Table 2: Performance of the HSCIC against baselines CF1 and CF2 on two synthetic datasets. Notably, for γ within [2, 5] in Scenario 1 CIP outperforms CF2 in MSE and VCF simultaneously. Similarly, in Scenario 2 this holds for γ within [2, 3].

           Scenario 1                                     Scenario 2
           MSE ×10³       HSCIC ×10³    VCF ×10³          MSE ×10³        HSCIC ×10²   VCF ×10²
γ = 0.0    0.01 ± 0.00    35.22 ± 0.87  30.37 ± 0.94      0.01 ± 0.01     4.12 ± 0.05  13.39 ± 1.35
γ = 0.1    0.05 ± 0.01    34.54 ± 0.85  29.32 ± 1.74      0.04 ± 0.01     4.10 ± 0.06  13.34 ± 1.41
γ = 0.2    0.24 ± 0.07    33.50 ± 1.10  27.67 ± 0.88      0.10 ± 0.02     4.07 ± 0.06  12.67 ± 0.68
γ = 0.3    0.61 ± 0.08    32.01 ± 1.11  25.93 ± 1.88      0.21 ± 0.04     4.03 ± 0.06  12.64 ± 0.72
γ = 0.4    1.28 ± 0.10    30.36 ± 1.13  24.20 ± 1.72      0.49 ± 0.05     4.01 ± 0.07  12.44 ± 1.20
γ = 0.5    2.36 ± 0.25    28.13 ± 1.10  20.32 ± 2.08      0.59 ± 0.13     3.97 ± 0.09  12.44 ± 0.73
γ = 0.6    3.70 ± 0.20    25.69 ± 0.78  20.01 ± 2.60      0.84 ± 0.15     3.90 ± 0.07  12.12 ± 0.80
γ = 0.7    5.10 ± 0.26    23.56 ± 0.62  18.96 ± 2.44      1.24 ± 0.27     3.87 ± 0.08  12.09 ± 0.74
γ = 0.8    6.39 ± 0.30    21.86 ± 0.75  18.08 ± 3.01      1.73 ± 0.35     3.81 ± 0.08  11.93 ± 0.70
γ = 0.9    7.72 ± 0.60    22.00 ± 0.83  16.57 ± 3.22      2.21 ± 0.46     3.76 ± 0.08  11.90 ± 0.70
γ = 1.0    9.11 ± 0.60    18.87 ± 0.81  14.58 ± 1.62      2.96 ± 0.42     3.69 ± 0.08  11.28 ± 1.30
γ = 2.0    17.29 ± 0.92   13.05 ± 0.43  4.03 ± 1.67       14.09 ± 1.91    2.90 ± 0.10  10.22 ± 0.73
γ = 3.0    20.73 ± 0.77   11.60 ± 0.30  1.46 ± 1.11       25.42 ± 1.62    2.42 ± 0.11  8.29 ± 0.67
γ = 4.0    22.27 ± 0.99   11.17 ± 0.32  0.76 ± 0.24       33.80 ± 4.52    2.20 ± 0.05  7.25 ± 0.86
γ = 5.0    23.17 ± 0.98   10.94 ± 0.30  0.50 ± 0.22       39.16 ± 5.15    2.09 ± 0.10  7.27 ± 1.59
γ = 7.0    24.48 ± 1.07   10.70 ± 0.32  0.46 ± 0.13       49.90 ± 3.67    1.90 ± 0.11  5.89 ± 0.86
γ = 10     25.40 ± 1.09   10.58 ± 0.32  0.24 ± 0.08       56.49 ± 3.88    1.82 ± 0.07  5.79 ± 1.27
γ = 50     28.70 ± 1.13   10.37 ± 0.32  0.13 ± 0.09       98.23 ± 4.53    1.61 ± 0.03  3.39 ± 1.03
γ = 100    29.54 ± 1.27   10.36 ± 0.32  0.01 ± 0.01       114.3 ± 6.67    1.58 ± 0.03  2.46 ± 0.50
CF1        25.50 ± 0.98   14.68 ± 0.05  0                 125.81 ± 5.64   2.98 ± 0.05  0
CF2        23.39 ± 1.39   16.57 ± 0.10  6.45 ± 4.32       28.71 ± 2.38    3.16 ± 0.05  10.96 ± 1.56
Table 3: MSE, HSCIC, VCF for dimA = 5. All other variables are one-dimensional.

            MSE ×10³       HSCIC ×10²   VCF ×10²
γ = 0.0     0.15 ± 0.01    0.63 ± 0.05  1.42 ± 0.09
γ = 0.5     0.17 ± 0.04    0.61 ± 0.05  1.36 ± 0.08
γ = 1.0     0.17 ± 0.04    0.59 ± 0.07  1.30 ± 0.09
γ = 10.0    1.74 ± 0.20    0.49 ± 0.05  0.98 ± 0.09
γ = 50.0    5.41 ± 0.41    0.45 ± 0.06  0.77 ± 0.08
γ = 100.0   7.23 ± 0.42    0.43 ± 0.06  0.69 ± 0.07
γ = 500.0   12.63 ± 1.11   0.43 ± 0.10  0.29 ± 0.11
γ = 1000.0  14.62 ± 1.84   0.42 ± 0.11  0.25 ± 0.13
Table 4: Results of MSE, HSCIC and VCF for the dSprites image dataset experiment. The results present the mean and standard deviation for 8 seeds.

          MSE ×10¹      HSCIC ×10³     VCF ×10²
γ = 0     6.07 ± 0.26   35.79 ± 0.31   3.15 ± 0.43
γ = 1     6.15 ± 0.23   35.55 ± 0.24   2.98 ± 0.35
γ = 10    6.24 ± 0.17   35.44 ± 0.25   2.80 ± 0.44
γ = 100   6.57 ± 0.27   35.17 ± 0.13   2.77 ± 0.30
γ = 500   8.95 ± 0.64   35.13 ± 0.18   1.82 ± 0.33
Table 5: Architecture of the convolutional neural network used for the image dataset, as described in Section F.4.

layer        # filters  kernel size  stride size  padding size
convolution  16         5            2            2
max pooling  1          3            2            0
convolution  64         5            1            2
max pooling  1          1            2            0
convolution  64         5            1            2
max pooling  1          2            1            0
convolution  16         5            1            3
max pooling  1          2            2            0
Table 6: Results of the MSE, HSCIC, VCF of CIP and the baseline [Veitch et al., 2021] applied to the causal and anti-causal structures in Fig. 1(b-c). Although the graphical assumptions are not satisfied, CIP shows an overall decrease of HSCIC and VCF in both graphical structures, performing on par with the baseline Veitch et al. [2021] in terms of accuracy and counterfactual invariance.

Causal structure (Fig. 1(b)):
CIP                    MSE ×10²      HSCIC ×10²    VCF
γ = 0.5                4.48 ± 0.31   3.60 ± 0.21   0.19 ± 0.02
γ = 1.0                5.00 ± 0.36   3.43 ± 0.12   0.17 ± 0.01
Veitch et al. [2021]   MSE ×10²      HSCIC ×10³    VCF
γ = 0.5                4.50 ± 0.40   4.54 ± 0.15   0.19 ± 0.02
γ = 1.0                5.45 ± 0.41   4.42 ± 0.13   0.18 ± 0.02

Anti-causal structure (Fig. 1(c)):
CIP                    MSE ×10²      HSCIC ×10³    VCF
γ = 0.5                1.16 ± 0.01   3.22 ± 0.16   1.49 ± 0.16
γ = 1.0                1.37 ± 0.02   3.20 ± 0.16   1.28 ± 0.19
Veitch et al. [2021]   MSE ×10²      HSCIC ×10³    VCF
γ = 0.5                1.16 ± 0.01   3.22 ± 0.16   1.49 ± 0.16
γ = 1.0                1.37 ± 0.02   3.20 ± 0.16   1.28 ± 0.19
We use P for distributions (common in the kernel literature) and the notation Y*_a instead of Y | do(a) for conciseness.
The tensor product kernel k is characteristic if the mapping P_{Y,A} ↦ E_{y,a}[k(·, y ⊗ a)] is injective.
Acknowledgement

This work was supported by the German Federal Ministry of Education and Research (BMBF): Tübingen AI Center, FKZ: 01IS18039B. Cecilia Casolo is supported by the DAAD programme Konrad Zuse Schools of Excellence in Artificial Intelligence, sponsored by the Federal Ministry of Education and Research.
References

Arjovsky, M., Bottou, L., Gulrajani, I., and Lopez-Paz, D. (2019). Invariant risk minimization. arXiv preprint arXiv:1907.02893.
Avron, H., Kapralov, M., Musco, C., Musco, C., Velingker, A., and Zandieh, A. (2017). Random Fourier features for kernel ridge regression: Approximation bounds and statistical guarantees. In International Conference on Machine Learning, volume 70, pages 253-262.
Barocas, S. and Selbst, A. D. (2016). Big data's disparate impact. California Law Review, 104.
Battaglia, P. W., Hamrick, J. B., Bapst, V., Sanchez-Gonzalez, A., Zambaldi, V., Malinowski, M., Tacchetti, A., Raposo, D., Santoro, A., Faulkner, R., et al. (2018). Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261.
Berlinet, A. and Thomas-Agnan, C. (2011). Reproducing Kernel Hilbert Spaces in Probability and Statistics. Springer Science & Business Media.
Bloem-Reddy, B. and Teh, Y. W. (2020). Probabilistic symmetries and invariant neural networks. Journal of Machine Learning Research, 21(90):1-61.
Bühlmann, P. (2020). Invariance, causality and robustness. Statistical Science, 35(3):404-426.
Caponnetto, A. and De Vito, E. (2007). Optimal rates for the regularized least-squares algorithm. Foundations of Computational Mathematics, 7(3):331-368.
Chen, T., Kornblith, S., Norouzi, M., and Hinton, G. (2020). A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, pages 1597-1607. PMLR.
Chiappa, S. (2019). Path-specific counterfactual fairness. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7801-7808.
Chiappa, S. and Pacchiano, A. (2021). Fairness with continuous optimal transport. arXiv preprint arXiv:2101.02084.
Çinlar, E. (2011). Probability and Stochastics, volume 261. Springer.
Cohen, T. and Welling, M. (2016). Group equivariant convolutional networks. In International Conference on Machine Learning, pages 2990-2999. PMLR.
Dinculeanu, N. (2000). Vector Integration and Stochastic Integration in Banach Spaces, volume 48. John Wiley & Sons.
Fukumizu, K., Gretton, A., Sun, X., and Schölkopf, B. (2007). Kernel measures of conditional dependence. In Advances in Neural Information Processing Systems, volume 20.
Fukumizu, K., Song, L., and Gretton, A. (2013). Kernel Bayes' rule: Bayesian inference with positive definite kernels. Journal of Machine Learning Research, 14(1):3753-3783.
Gretton, A., Bousquet, O., Smola, A. J., and Schölkopf, B. (2005). Measuring statistical dependence with Hilbert-Schmidt norms. In Algorithmic Learning Theory, volume 3734, pages 63-77.
Grünewälder, S., Lever, G., Gretton, A., Baldassarre, L., Patterson, S., and Pontil, M. (2012). Conditional mean embeddings as regressors. In International Conference on Machine Learning.
Hardt, M., Price, E., and Srebro, N. (2016). Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems, volume 29.
Matthey, L., Higgins, I., Hassabis, D., and Lerchner, A. (2017). dSprites: Disentanglement testing sprites dataset. https://github.com/deepmind/dsprites-dataset/.
Mitchell, S., Potash, E., Barocas, S., D'Amour, A., and Lum, K. (2021). Algorithmic fairness: Choices, assumptions, and definitions. Annual Review of Statistics and Its Application, 8:141-163.
Mouli, S. C. and Ribeiro, B. (2022). Asymmetry learning for counterfactually-invariant classification in OOD tasks. In International Conference on Learning Representations.
Muandet, K., Fukumizu, K., Sriperumbudur, B., and Schölkopf, B. (2017). Kernel mean embedding of distributions: A review and beyond. Foundations and Trends in Machine Learning, 10(1-2):1-141.
Nabi, R. and Shpitser, I. (2018). Fair inference on outcomes. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.
Park, J. and Muandet, K. (2020). A measure-theoretic approach to kernel conditional mean embeddings. In Advances in Neural Information Processing Systems, pages 21247-21259.
Pearl, J. (2000). Causality: Models, Reasoning and Inference. Cambridge University Press.
Peters, J., Bühlmann, P., and Meinshausen, N. (2016). Causal inference by using invariant prediction: identification and confidence intervals. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 78(5):947-1012.
Pogodin, R., Deka, N., Li, Y., Sutherland, D. J., Veitch, V., and Gretton, A. (2022). Efficient conditionally invariant representation learning. arXiv preprint arXiv:2212.08645.
Rahimi, A. and Recht, B. (2007). Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems, pages 1177-1184.
Rojas-Carulla, M., Schölkopf, B., Turner, R., and Peters, J. (2018). Invariant models for causal transfer learning. Journal of Machine Learning Research, 19(1):1309-1342.
Schölkopf, B., Smola, A. J., Bach, F., et al. (2002). Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press.
Shorten, C. and Khoshgoftaar, T. M. (2019). A survey on image data augmentation for deep learning. Journal of Big Data, 6(1):1-48.
Shpitser, I. and Pearl, J. (2008). Complete identification methods for the causal hierarchy. Journal of Machine Learning Research, 9:1941-1979.
Shpitser, I. and Pearl, J. (2009). Effects of treatment on the treated: Identification and generalization. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 514-521.
Shpitser, I., VanderWeele, T. J., and Robins, J. M. (2010). On the validity of covariate adjustment for estimating causal effects. In Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence, pages 527-536.
Smola, A., Gretton, A., Song, L., and Schölkopf, B. (2007). A Hilbert space embedding for distributions. In International Conference on Algorithmic Learning Theory, pages 13-31. Springer.
Song, L., Fukumizu, K., and Gretton, A. (2013). Kernel embeddings of conditional distributions: A unified kernel framework for nonparametric inference in graphical models. IEEE Signal Processing Magazine, 30(4):98-111.
Song, L., Huang, J., Smola, A. J., and Fukumizu, K. (2009). Hilbert space embeddings of conditional distributions with applications to dynamical systems. In International Conference on Machine Learning, volume 382, pages 961-968.
Szabó, Z. and Sriperumbudur, B. K. (2017). Characteristic and universal tensor product kernels. Journal of Machine Learning Research, 18(233):1-29.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Information Processing Systems, volume 30.
Veitch, V., D'Amour, A., Yadlowsky, S., and Eisenstein, J. (2021). Counterfactual invariance to spurious correlations in text classification. In Advances in Neural Information Processing Systems, pages 16196-16208.
Xie, Q., Dai, Z., Hovy, E., Luong, T., and Le, Q. (2020). Unsupervised data augmentation for consistency training. In Advances in Neural Information Processing Systems, volume 33, pages 6256-6268.
Xu, R., Cui, P., Kuang, K., Li, B., Zhou, L., Shen, Z., and Cui, W. (2020). Algorithmic decision making with conditional fairness. In Proceedings of the Conference on Knowledge Discovery and Data Mining, pages 2125-2135.
Zaheer, M., Kottur, S., Ravanbakhsh, S., Poczos, B., Salakhutdinov, R. R., and Smola, A. J. (2017). Deep sets. In Advances in Neural Information Processing Systems, volume 30.